-}
-```
-
-It’s that simple. We just passed `props` as an argument to a plain JavaScript function and returned, _umm, well, what was that? That _ `_
{props.name}
_` _thing!_ It’s JSX (JavaScript Extended). We will learn more about it in a later section.
-
-This above function will render the following HTML in the browser.
-
-```
-
-
- rajat
-
-```
-
-
-> Read the section below about JSX, where I have explained how did we get this HTML from our JSX code.
-
-How can you use this functional component in your React app? Glad you asked! It’s as simple as the following.
-
-```
-
-```
-
-The attribute `name` in the above code becomes `props.name` inside our `Hello`component. The attribute `age` becomes `props.age` and so on.
-
-> Remember! You can nest one React component inside other React components.
-
-Let’s use this `Hello` component in our codepen playground. Replace the `div`inside `ReactDOM.render()` with our `Hello` component, as follows, and see the changes in the bottom window.
-
-```
-function Hello(props) {
- return
{props.name}
-}
-
-ReactDOM.render(, document.getElementById('root'));
-```
-
-
-> But what if your component has some internal state. For instance, like the following counter component, which has an internal count variable, which changes on + and — key presses.
-
-A React component with an internal state
-
-#### b) Class-based component
-
-The class-based component has an additional property `state` , which you can use to hold a component’s private data. We can rewrite our `Hello` component using class notation as follows. Since these components have a state, these are also known as Stateful components.
-
-```
-class Counter extends React.Component {
- // this method should be present in your component
- render() {
- return (
-
- {this.props.name}
-
- );
- }
-}
-```
-
-We extend `React.Component` class of React library to make class-based components in React. Learn more about JavaScript classes [here][5].
-
-The `render()` method must be present in your class as React looks for this method in order to know what UI it should render on screen.
-
-To use this sort of internal state, we first have to initialize the `state` object in the constructor of the component class, in the following way.
-
-```
-class Counter extends React.Component {
- constructor() {
- super();
-
- // define the internal state of the component
- this.state = {name: 'rajat'}
- }
-
- render() {
- return (
-
- {this.state.name}
-
- );
- }
-}
-
-// Usage:
-// In your react app:
-```
-
-Similarly, the `props` can be accessed inside our class-based component using `this.props` object.
-
-To set the state, you use `React.Component`'s `setState()`. We will see an example of this, in the last part of this tutorial.
-
-> Tip: Never call `setState()` inside `render()` function, as `setState()` causes component to re-render and this will result in endless loop.
-
-
-
-A class-based component has an optional property “state”.
-
- _Apart from _ `_state_` _, a class-based component has some life-cycle methods like _ `_componentWillMount()._` _ These you can use to do stuff, like initializing the _ `_state_` _and all but that is out of the scope of this post._
-
-### JSX
-
-JSX is a short form of _JavaScript Extended_ and it is a way to write `React`components. Using JSX, you get the full power of JavaScript inside XML like tags.
-
-You put JavaScript expressions inside `{}`. The following are some valid JSX examples.
-
- ```
-
-
- ;
-
-
-
- ```
-
-The way it works is you write JSX to describe what your UI should look like. A [transpiler][6] like `Babel` converts that code into a bunch of `React.createElement()` calls. The React library then uses those `React.createElement()` calls to construct a tree-like structure of DOM elements. In case of React for Web or Native views in case of React Native. It keeps it in the memory.
-
-React then calculates how it can effectively mimic this tree in the memory of the UI displayed to the user. This process is known as [reconciliation][7]. After that calculation is done, React makes the changes to the actual UI on the screen.
-
- ** 此处有Canvas,请手动处理 **
-
-
-How React converts your JSX into a tree which describes your app’s UI
-
-You can use [Babel’s online REPL][8] to see what React actually outputs when you write some JSX.
-
-
-
-Use Babel REPL to transform JSX into plain JavaScript
-
-> Since JSX is just a syntactic sugar over plain `React.createElement()` calls, React can be used without JSX.
-
-Now we have every concept in place, so we are well positioned to write a `counter` component that we saw earlier as a GIF.
-
-The code is as follows and I hope that you already know how to render that in our playground.
-
-```
-class Counter extends React.Component {
- constructor(props) {
- super(props);
-
- this.state = {count: this.props.start || 0}
-
- // the following bindings are necessary to make `this` work in the callback
- this.inc = this.inc.bind(this);
- this.dec = this.dec.bind(this);
- }
-
- inc() {
- this.setState({
- count: this.state.count + 1
- });
- }
-
- dec() {
- this.setState({
- count: this.state.count - 1
- });
- }
-
- render() {
- return (
-
-
-
-
{this.state.count}
-
- );
- }
-}
-```
-
-The following are some salient points about the above code.
-
-1. JSX uses `camelCasing` hence `button`'s attribute is `onClick`, not `onclick`, as we use in HTML.
-
-2. Binding is necessary for `this` to work on callbacks. See line #8 and 9 in the code above.
-
-The final interactive code is located [here][9].
-
-With that, we’ve reached the conclusion of our React crash course. I hope I have shed some light on how React works and how you can use React to build bigger apps, using smaller and reusable components.
-
-* * *
-
-If you have any queries or doubts, hit me up on Twitter [@rajat1saxena][10] or write to me at [rajat@raynstudios.com][11].
-
-* * *
-
-#### Please recommend this post, if you liked it and share it with your network. Follow me for more tech related posts and consider subscribing to my channel [Rayn Studios][12] on YouTube. Thanks a lot.
-
---------------------------------------------------------------------------------
-
-via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners-guide-c45c93f5a923
-
-作者:[Rajat Saxena ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://medium.freecodecamp.org/@rajat1saxena
-[1]:https://kivenaa.com/
-[2]:https://play.google.com/store/apps/details?id=com.pollenchat.android
-[3]:https://facebook.github.io/react-native/
-[4]:https://codepen.io/raynesax/pen/MrNmBM
-[5]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes
-[6]:https://en.wikipedia.org/wiki/Source-to-source_compiler
-[7]:https://reactjs.org/docs/reconciliation.html
-[8]:https://babeljs.io/repl
-[9]:https://codepen.io/raynesax/pen/QaROqK
-[10]:https://twitter.com/rajat1saxena
-[11]:mailto:rajat@raynstudios.com
-[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw
\ No newline at end of file
diff --git a/sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md b/sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md
similarity index 99%
rename from sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md
rename to sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md
index d7ef058106..5f409956f7 100644
--- a/sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md
+++ b/sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md
@@ -1,5 +1,3 @@
-Translating by shipsw
-
Python ChatOps libraries: Opsdroid and Errbot
======
diff --git a/sources/tech/20180412 A Desktop GUI Application For NPM.md b/sources/tech/20180412 A Desktop GUI Application For NPM.md
deleted file mode 100644
index 4eabc40672..0000000000
--- a/sources/tech/20180412 A Desktop GUI Application For NPM.md
+++ /dev/null
@@ -1,147 +0,0 @@
-A Desktop GUI Application For NPM
-======
-
-
-
-NPM, short for **N** ode **P** ackage **M** anager, is a command line package manager for installing NodeJS packages, or modules. We already have have published a guide that described how to [**manage NodeJS packages using NPM**][1]. As you may noticed, managing NodeJS packages or modules using Npm is not a big deal. However, if you’re not compatible with CLI-way, there is a desktop GUI application named **NDM** which can be used for managing NodeJS applications/modules. NDM, stands for **N** PM **D** esktop **M** anager, is a free, open source graphical front-end for NPM that allows us to install, update, remove NodeJS packages via a simple graphical window.
-
-In this brief tutorial, we are going to learn about Ndm in Linux.
-
-### Install NDM
-
-NDM is available in AUR, so you can install it using any AUR helpers on Arch Linux and its derivatives like Antergos and Manjaro Linux.
-
-Using [**Pacaur**][2]:
-```
-$ pacaur -S ndm
-
-```
-
-Using [**Packer**][3]:
-```
-$ packer -S ndm
-
-```
-
-Using [**Trizen**][4]:
-```
-$ trizen -S ndm
-
-```
-
-Using [**Yay**][5]:
-```
-$ yay -S ndm
-
-```
-
-Using [**Yaourt**][6]:
-```
-$ yaourt -S ndm
-
-```
-
-On RHEL based systems like CentOS, run the following command to install NDM.
-```
-$ echo "[fury] name=ndm repository baseurl=https://repo.fury.io/720kb/ enabled=1 gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update &&
-
-```
-
-On Debian, Ubuntu, Linux Mint:
-```
-$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm
-
-```
-
-NDM can also be installed using **Linuxbrew**. First, install Linuxbrew as described in the following link.
-
-After installing Linuxbrew, you can install NDM using the following commands:
-```
-$ brew update
-
-$ brew install ndm
-
-```
-
-On other Linux distributions, go to the [**NDM releases page**][7], download the latest version, compile and install it yourself.
-
-### NDM Usage
-
-Launch NDM wither from the Menu or using application launcher. This is how NDM’s default interface looks like.
-
-![][9]
-
-From here, you can install NodeJS packages/modules either locally or globally.
-
-**Install NodeJS packages locally**
-
-To install a package locally, first choose project directory by clicking on the **“Add projects”** button from the Home screen and select the directory where you want to keep your project files. For example, I have chosen a directory named **“demo”** as my project directory.
-
-Click on the project directory (i.e **demo** ) and then, click **Add packages** button.
-
-![][10]
-
-Type the package name you want to install and hit the **Install** button.
-
-![][11]
-
-Once installed, the packages will be listed under the project’s directory. Simply click on the directory to view the list of installed packages locally.
-
-![][12]
-
-Similarly, you can create separate project directories and install NodeJS modules in them. To view the list of installed modules on a project, click on the project directory, and you will the packages on the right side.
-
-**Install NodeJS packages globally**
-
-To install NodeJS packages globally, click on the **Globals** button on the left from the main interface. Then, click “Add packages” button, type the name of the package and hit “Install” button.
-
-**Manage packages**
-
-Click on any installed packages and you will see various options on the top, such as
-
- 1. Version (to view the installed version),
- 2. Latest (to install latest available version),
- 3. Update (to update the currently selected package),
- 4. Uninstall (to remove the selected package) etc.
-
-
-
-![][13]
-
-NDM has two more options namely **“Update npm”** which is used to update the node package manager to latest available version, and **Doctor** that runs a set of checks to ensure that your npm installation has what it needs to manage your packages/modules.
-
-### Conclusion
-
-NDM makes the process of installing, updating, removing NodeJS packages easier! You don’t need to memorize the commands to perform those tasks. NDM lets us to do them all with a few mouse clicks via simple graphical window. For those who are lazy to type commands, NDM is perfect companion to manage NodeJS packages.
-
-Cheers!
-
-**Resource:**
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
-[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
-[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
-[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
-[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
-[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
-[7]:https://github.com/720kb/ndm/releases
-[8]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png
-[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png
-[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png
-[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png
-[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png
diff --git a/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md b/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md
index 761138908d..50d68ad445 100644
--- a/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md
+++ b/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md
@@ -1,5 +1,3 @@
-translated by cyleft
-
How to Enable Click to Minimize On Ubuntu
============================================================
diff --git a/sources/tech/20180531 How to create shortcuts in vi.md b/sources/tech/20180531 How to create shortcuts in vi.md
deleted file mode 100644
index ba856e745a..0000000000
--- a/sources/tech/20180531 How to create shortcuts in vi.md
+++ /dev/null
@@ -1,131 +0,0 @@
-【sd886393认领翻译中】How to create shortcuts in vi
-======
-
-
-
-Learning the [vi text editor][1] takes some effort, but experienced vi users know that after a while, using basic commands becomes second nature. It's a form of what is known as muscle memory, which in this case might well be called finger memory.
-
-After you get a grasp of the main approach and basic commands, you can make editing with vi even more powerful and streamlined by using its customization options to create shortcuts. I hope that the techniques described below will facilitate your writing, programming, and data manipulation.
-
-Before proceeding, I'd like to thank Chris Hermansen (who recruited me to write this article) for checking my draft with [Vim][2], as I use another version of vi. I'm also grateful for Chris's helpful suggestions, which I incorporated here.
-
-First, let's review some conventions. I'll use to designate pressing the RETURN or ENTER key, and for the space bar. CTRL-x indicates simultaneously pressing the Control key and the x key (whatever x happens to be).
-
-Set up your own command abbreviations with the `map` command. My first example involves the `write` command, used to save the current state of the file you're working on:
-```
-:w
-
-```
-
-This is only three keystrokes, but since I do it so frequently, I'd rather use only one. The key I've chosen for this purpose is the comma, which is not part of the standard vi command set. The command to set this up is:
-```
-:map , :wCTRL-v
-
-```
-
-The CTRL-v is essential since without it the would signal the end of the map, and we want to include the as part of the mapped comma. In general, CTRL-v is used to enter the keystroke (or control character) that follows rather than being interpreted literally.
-
-In the above map, the part on the right will display on the screen as `:w^M`. The caret (`^`) indicates a control character, in this case CTRL-m, which is the system's form of .
-
-So far so good—sort of. If I write my current file about a dozen times while creating and/or editing it, this map could result in a savings of 2 x 12 keystrokes. But that doesn't account for the keystrokes needed to set up the map, which in the above example is 11 (counting CTRL-v and the shifted character `:` as one stroke each). Even with a net savings, it would be a bother to set up the map each time you start a vi session.
-
-Fortunately, there's a way to put maps and other abbreviations in a startup file that vi reads each time it is invoked: the `.exrc` file, or in Vim, the `.vimrc` file. Simply create this file in your home directory with a list of maps, one per line—without the colon—and the abbreviation is defined for all subsequent vi sessions until you delete or change it.
-
-Before going on to a variation of the `map` command and another type of abbreviation method, here are a few more examples of maps that I've found useful for streamlining my text editing:
-```
- Displays as
-
-
-
-:map X :xCTRL-v :x^M
-
-
-
-or
-
-
-
-:map X ,:qCTRL-v ,:q^M
-
-```
-
-The above equivalent maps write and quit (exit) the file. The `:x` is the standard vi command for this, and the second version illustrates that a previously defined map may be used in a subsequent map.
-```
-:map v :e :e
-
-```
-
-The above starts the command to move to another file while remaining within vi; when using this, just follow the "v" with a filename, followed by .
-```
-:map CTRL-vCTRL-e :e#CTRL-v :e #^M
-
-```
-
-The `#` here is the standard vi symbol for "the alternate file," which means the filename last used, so this shortcut is handy for switching back and forth between two files. Here's an example of how I use this:
-```
-map CTRL-vCTRL-r :!spell %>err &CTRL-v :!spell %>err&^M
-
-```
-
-(Note: The first CTRL-v in both examples above is not needed in some versions of vi.) The `:!` is a way to run an external (non-vi) command. In this case (`spell`), `%` is the vi symbol denoting the current file, the `>` redirects the output of the spell-check to a file called `err`, and the `&` says to run this in the background so I can continue editing while `spell` completes its task. I can then type `verr` (using my previous shortcut, `v`, followed by `err`) to go the file of potential errors flagged by the `spell` command, then back to the file I'm working on with CTRL-e. After running the spell-check the first time, I can use CTRL-r repeatedly and return to the `err` file with just CTRL-e.
-
-A variation of the `map` command may be used to abbreviate text strings while inputting. For example,
-```
-:map! CTRL-o \fI
-
-:map! CTRL-k \fP
-
-```
-
-This will allow you to use CTRL-o as a shortcut for entering the `groff` command to italicize the word that follows, and this will allow you to use CTRL-k for the `groff` command reverts to the previous font.
-
-Here are two other examples of this technique:
-```
-:map! rh rhinoceros
-
-:map! hi hippopotamus
-
-```
-
-The above may instead be accomplished using the `ab` command, as follows (if you're trying these out in order, first use `unmap! rh` and `umap! hi`):
-```
-:ab rh rhinoceros
-
-:ab hi hippopotamus
-
-```
-
-In the `map!` method above, the abbreviation immediately expands to the defined word when typed (in Vim), whereas with the `ab` method, the expansion occurs when the abbreviation is followed by a space or punctuation mark (in both Vim and my version of vi, where the expansion also works like this for the `map!` method).
-
-To reverse any `map`, `map!`, or `ab` within a vi session, use `:unmap`, `:unmap!`, or `:unab`.
-
-In my version of vi, undefined letters that are good candidates for mapping include g, K, q, v, V, and Z; undefined control characters are CTRL-a, CTRL-c, CTRL-k, CTRL-n, CTRL-o, CTRL-p, and CTRL-x; some other undefined characters are `#` and `*`. You can also redefine characters that have meaning in vi but that you consider obscure and of little use; for example, the X that I chose for two examples in this article is a built-in vi command to delete the character to the immediate left of the current character (easily accomplished by the two-key command `hx`).
-
-Finally, the commands
-```
-:map
-
-:map!
-
-:ab
-
-```
-
-will show all the currently defined mappings and abbreviations.
-
-I hope that all of these tips will help you customize vi and make it easier and more efficient to use.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/5/shortcuts-vi-text-editor
-
-作者:[Dan Sonnenschein][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/dannyman
-[1]:http://ex-vi.sourceforge.net/
-[2]:https://www.vim.org/
diff --git a/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md
index e548213483..d2c50b6029 100644
--- a/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md
+++ b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md
@@ -1,3 +1,4 @@
+Translating by qhwdw
Complete Sed Command Guide [Explained with Practical Examples]
======
In a previous article, I showed the [basic usage of Sed][1], the stream editor, on a practical use case. Today, be prepared to gain more insight about Sed as we will take an in-depth tour of the sed execution model. This will be also an opportunity to make an exhaustive review of all Sed commands and to dive into their details and subtleties. So, if you are ready, launch a terminal, [download the test files][2] and sit comfortably before your keyboard: we will start our exploration right now!
diff --git a/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md
deleted file mode 100644
index dd8c3cdb13..0000000000
--- a/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md
+++ /dev/null
@@ -1,320 +0,0 @@
-Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server
-======
-
-
-
-This step by step tutorial walk you through how to install **Oracle VirtualBox** on Ubuntu 18.04 LTS headless server. And, this guide also describes how to manage the VirtualBox headless instances using **phpVirtualBox** , a web-based front-end tool for VirtualBox. The steps described below might also work on Debian, and other Ubuntu derivatives such as Linux Mint. Let us get started.
-
-### Prerequisites
-
-Before installing Oracle VirtualBox, we need to do the following prerequisites in our Ubuntu 18.04 LTS server.
-
-First of all, update the Ubuntu server by running the following commands one by one.
-```
-$ sudo apt update
-
-$ sudo apt upgrade
-
-$ sudo apt dist-upgrade
-
-```
-
-Next, install the following necessary packages:
-```
-$ sudo apt install build-essential dkms unzip wget
-
-```
-
-After installing all updates and necessary prerequisites, restart the Ubuntu server.
-```
-$ sudo reboot
-
-```
-
-### Install Oracle VirtualBox on Ubuntu 18.04 LTS server
-
-Add Oracle VirtualBox official repository. To do so, edit **/etc/apt/sources.list** file:
-```
-$ sudo nano /etc/apt/sources.list
-
-```
-
-Add the following lines.
-
-Here, I will be using Ubuntu 18.04 LTS, so I have added the following repository.
-```
-deb http://download.virtualbox.org/virtualbox/debian bionic contrib
-
-```
-
-![][2]
-
-Replace the word **‘bionic’** with your Ubuntu distribution’s code name, such as ‘xenial’, ‘vivid’, ‘utopic’, ‘trusty’, ‘raring’, ‘quantal’, ‘precise’, ‘lucid’, ‘jessie’, ‘wheezy’, or ‘squeeze**‘.**
-
-Then, run the following command to add the Oracle public key:
-```
-$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
-
-```
-
-For VirtualBox older versions, add the following key:
-```
-$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
-
-```
-
-Next, update the software sources using command:
-```
-$ sudo apt update
-
-```
-
-Finally, install latest Oracle VirtualBox latest version using command:
-```
-$ sudo apt install virtualbox-5.2
-
-```
-
-### Adding users to VirtualBox group
-
-We need to create and add our system user to the **vboxusers** group. You can either create a separate user and assign it to vboxusers group or use the existing user. I don’t want to create a new user, so I added my existing user to this group. Please note that if you use a separate user for virtualbox, you must log out and log in to that particular user and do the rest of the steps.
-
-I am going to use my username named **sk** , so, I ran the following command to add it to the vboxusers group.
-```
-$ sudo usermod -aG vboxusers sk
-
-```
-
-Now, run the following command to check if virtualbox kernel modules are loaded or not.
-```
-$ sudo systemctl status vboxdrv
-
-```
-
-![][3]
-
-As you can see in the above screenshot, the vboxdrv module is loaded and running!
-
-For older Ubuntu versions, run:
-```
-$ sudo /etc/init.d/vboxdrv status
-
-```
-
-If the virtualbox module doesn’t start, run the following command to start it.
-```
-$ sudo /etc/init.d/vboxdrv setup
-
-```
-
-Great! We have successfully installed VirtualBox and started virtualbox module. Now, let us go ahead and install Oracle VirtualBox extension pack.
-
-### Install VirtualBox Extension pack
-
-The VirtualBox Extension pack provides the following functionalities to the VirtualBox guests.
-
- * The virtual USB 2.0 (EHCI) device
- * VirtualBox Remote Desktop Protocol (VRDP) support
- * Host webcam passthrough
- * Intel PXE boot ROM
- * Experimental support for PCI passthrough on Linux hosts
-
-
-
-Download the latest Extension pack for VirtualBox 5.2.x from [**here**][4].
-```
-$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
-
-```
-
-Install Extension pack using command:
-```
-$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
-
-```
-
-Congratulations! We have successfully installed Oracle VirtualBox with extension pack in Ubuntu 16.04 LTS server. It is time to deploy virtual machines. Refer the [**virtualbox official guide**][5] to start creating and managing virtual machines in command line.
-
-Not everyone is command line expert. Some of you might want to create and use virtual machines graphically. No worries! Here is where **phpVirtualBox** comes in handy!!
-
-### About phpVirtualBox
-
-**phpVirtualBox** is a free, web-based front-end to Oracle VirtualBox. It is written using PHP language. Using phpVirtualBox, we can easily create, delete, manage and administer virtual machines via a web browser from any remote system on the network.
-
-### Install phpVirtualBox in Ubuntu 18.04 LTS
-
-Since it is a web-based tool, we need to install Apache web server, PHP and some php modules.
-
-To do so, run:
-```
-$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml
-
-```
-
-Then, Download the phpVirtualBox 5.2.x version from the [**releases page**][6]. Please note that we have installed VirtualBox 5.2, so we must install phpVirtualBox version 5.2 as well.
-
-To download it, run:
-```
-$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip
-
-```
-
-Extract the downloaded archive with command:
-```
-$ unzip 5.2-0.zip
-
-```
-
-This command will extract the contents of 5.2.0.zip file into a folder named “phpvirtualbox-5.2-0”. Now, copy or move the contents of this folder to your apache web server root folder.
-```
-$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox
-
-```
-
-Assign the proper permissions to the phpvirtualbox folder.
-```
-$ sudo chmod 777 /var/www/html/phpvirtualbox/
-
-```
-
-Next, let us configure phpVirtualBox.
-
-Copy the sample config file as shown below.
-```
-$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php
-
-```
-
-Edit phpVirtualBox **config.php** file:
-```
-$ sudo nano /var/www/html/phpvirtualbox/config.php
-
-```
-
-Find the following lines and replace the username and password with your system user (The same username that we used in “Adding users to VirtualBox group” section).
-
-In my case, my Ubuntu system username is **sk** , and its password is **ubuntu**.
-```
-var $username = 'sk';
-var $password = 'ubuntu';
-
-```
-
-![][7]
-
-Save and close the file.
-
-Next, create a new file called **/etc/default/virtualbox** :
-```
-$ sudo nano /etc/default/virtualbox
-
-```
-
-Add the following line. Replace ‘sk’ with your own username.
-```
-VBOXWEB_USER=sk
-
-```
-
-Finally, Reboot your system or simply restart the following services to complete the configuration.
-```
-$ sudo systemctl restart vboxweb-service
-
-$ sudo systemctl restart vboxdrv
-
-$ sudo systemctl restart apache2
-
-```
-
-### Adjust firewall to allow Apache web server
-
-By default, the apache web browser can’t be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow the http and https traffic via UFW by following the below steps.
-
-First, let us view which applications have installed a profile using command:
-```
-$ sudo ufw app list
-Available applications:
-Apache
-Apache Full
-Apache Secure
-OpenSSH
-
-```
-
-As you can see, Apache and OpenSSH applications have installed UFW profiles.
-
-If you look into the **“Apache Full”** profile, you will see that it enables traffic to the ports **80** and **443** :
-```
-$ sudo ufw app info "Apache Full"
-Profile: Apache Full
-Title: Web Server (HTTP,HTTPS)
-Description: Apache v2 is the next generation of the omnipresent Apache web
-server.
-
-Ports:
-80,443/tcp
-
-```
-
-Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile:
-```
-$ sudo ufw allow in "Apache Full"
-Rules updated
-Rules updated (v6)
-
-```
-
-If you want to allow https traffic, but only http (80) traffic, run:
-```
-$ sudo ufw app info "Apache"
-
-```
-
-### Access phpVirtualBox Web console
-
-Now, go to any remote system that has graphical web browser.
-
-In the address bar, type: ****.
-
-In my case, I navigated to this link – ****
-
-You should see the following screen. Enter the phpVirtualBox administrative user credentials.
-
-The default username and phpVirtualBox is **admin** / **admin**.
-
-![][8]
-
-Congratulations! You will now be greeted with phpVirtualBox dashboard.
-
-![][9]
-
-Now, start creating your VMs and manage them from phpvirtualbox dashboard. As I mentioned earlier, You can access the phpVirtualBox from any system in the same network. All you need is a web browser and the username and password of phpVirtualBox.
-
-If you haven’t enabled virtualization support in the BISO of host system (not the guest), phpVirtualBox allows you to create 32-bit guests only. To install 64-bit guest systems, you must enable virtualization in your host system’s BIOS. Look for an option that is something like “virtualization” or “hypervisor” in your bios and make sure it is enabled.
-
-That’s it. Hope this helps. If you find this guide useful, please share it on your social networks and support us.
-
-More good stuffs to come. Stay tuned!
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/Add-VirtualBox-repository.png
-[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/vboxdrv-service.png
-[4]:https://www.virtualbox.org/wiki/Downloads
-[5]:http://www.virtualbox.org/manual/ch08.html
-[6]:https://github.com/phpvirtualbox/phpvirtualbox/releases
-[7]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-config.png
-[8]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-1.png
-[9]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-2.png
diff --git a/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md
deleted file mode 100644
index a85a637830..0000000000
--- a/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md
+++ /dev/null
@@ -1,332 +0,0 @@
-Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS
-======
-
-
-
-We already have covered [**setting up Oracle VirtualBox on Ubuntu 18.04**][1] headless server. In this tutorial, we will be discussing how to setup headless virtualization server using **KVM** and how to manage the guest machines from a remote client. As you may know already, KVM ( **K** ernel-based **v** irtual **m** achine) is an open source, full virtualization for Linux. Using KVM, we can easily turn any Linux server in to a complete virtualization environment in minutes and deploy different kind of VMs such as GNU/Linux, *BSD, Windows etc.
-
-### Setup Headless Virtualization Server Using KVM
-
-I tested this guide on Ubuntu 18.04 LTS server, however this tutorial will work on other Linux distributions such as Debian, CentOS, RHEL and Scientific Linux. This method will be perfectly suitable for those who wants to setup a simple virtualization environment in a Linux server that doesn’t have any graphical environment.
-
-For the purpose of this guide, I will be using two systems.
-
-**KVM virtualization server:**
-
- * **Host OS** – Ubuntu 18.04 LTS minimal server (No GUI)
- * **IP Address of Host OS** : 192.168.225.22/24
- * **Guest OS** (Which we are going to host on Ubuntu 18.04) : Ubuntu 16.04 LTS server
-
-
-
-**Remote desktop client :**
-
- * **OS** – Arch Linux
-
-
-
-### Install KVM
-
-First, let us check if our system supports hardware virtualization. To do so, run the following command from the Terminal:
-```
-$ egrep -c '(vmx|svm)' /proc/cpuinfo
-
-```
-
-If the result is **zero (0)** , the system doesn’t support hardware virtualization or the virtualization is disabled in the Bios. Go to your bios and check for the virtualization option and enable it.
-
-if the result is **1** or **more** , the system will support hardware virtualization. However, you still need to enable the virtualization option in Bios before running the above commands.
-
-Alternatively, you can use the following command to verify it. You need to install kvm first as described below, in order to use this command.
-```
-$ kvm-ok
-
-```
-
-**Sample output:**
-```
-INFO: /dev/kvm exists
-KVM acceleration can be used
-
-```
-
-If you got the following error instead, you still can run guest machines in KVM, but the performance will be very poor.
-```
-INFO: Your CPU does not support KVM extensions
-INFO: For more detailed results, you should run this as root
-HINT: sudo /usr/sbin/kvm-ok
-
-```
-
-Also, there are other ways to find out if your CPU supports Virtualization or not. Refer the following guide for more details.
-
-Next, Install KVM and other required packages to setup a virtualization environment in Linux.
-
-On Ubuntu and other DEB based systems, run:
-```
-$ sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker
-
-```
-
-Once KVM installed, start libvertd service (If it is not started already):
-```
-$ sudo systemctl enable libvirtd
-
-$ sudo systemctl start libvirtd
-
-```
-
-### Create Virtual machines
-
-All virtual machine files and other related files will be stored under **/var/lib/libvirt/**. The default path of ISO images is **/var/lib/libvirt/boot/**.
-
-First, let us see if there is any virtual machines. To view the list of available virtual machines, run:
-```
-$ sudo virsh list --all
-
-```
-
-**Sample output:**
-```
-Id Name State
-----------------------------------------------------
-
-```
-
-![][3]
-
-As you see above, there is no virtual machine available right now.
-
-Now, let us crate one.
-
-For example, let us create Ubuntu 16.04 Virtual machine with 512 MB RAM, 1 CPU core, 8 GB Hdd.
-```
-$ sudo virt-install --name Ubuntu-16.04 --ram=512 --vcpus=1 --cpu host --hvm --disk path=/var/lib/libvirt/images/ubuntu-16.04-vm1,size=8 --cdrom /var/lib/libvirt/boot/ubuntu-16.04-server-amd64.iso --graphics vnc
-
-```
-
-Please make sure you have Ubuntu 16.04 ISO image in path **/var/lib/libvirt/boot/** or any other path you have given in the above command.
-
-**Sample output:**
-```
-WARNING Graphics requested but DISPLAY is not set. Not running virt-viewer.
-WARNING No console to launch for the guest, defaulting to --wait -1
-
-Starting install...
-Creating domain... | 0 B 00:00:01
-Domain installation still in progress. Waiting for installation to complete.
-Domain has shutdown. Continuing.
-Domain creation completed.
-Restarting guest.
-
-```
-
-![][4]
-
-Let us break down the above command and see what each option do.
-
- * **–name** : This option defines the name of the virtual name. In our case, the name of VM is **Ubuntu-16.04**.
- * **–ram=512** : Allocates 512MB RAM to the VM.
- * **–vcpus=1** : Indicates the number of CPU cores in the VM.
- * **–cpu host** : Optimizes the CPU properties for the VM by exposing the host’s CPU’s configuration to the guest.
- * **–hvm** : Request the full hardware virtualization.
- * **–disk path** : The location to save VM’s hdd and it’s size. In our example, I have allocated 8GB hdd size.
- * **–cdrom** : The location of installer ISO image. Please note that you must have the actual ISO image in this location.
- * **–graphics vnc** : Allows VNC access to the VM from a remote client.
-
-
-
-### Access Virtual machines using VNC client
-
-Now, go to the remote Desktop system. SSH to the Ubuntu server(Virtualization server) as shown below.
-
-Here, **sk** is my Ubuntu server’s user name and **192.168.225.22** is its IP address.
-
-Run the following command to find out the VNC port number. We need this to access the Vm from a remote system.
-```
-$ sudo virsh dumpxml Ubuntu-16.04 | grep vnc
-
-```
-
-**Sample output:**
-```
-
-
-```
-
-![][5]
-
-Note down the port number **5900**. Install any VNC client application. For this guide, I will be using TigerVnc. TigerVNC is available in the Arch Linux default repositories. To install it on Arch based systems, run:
-```
-$ sudo pacman -S tigervnc
-
-```
-
-Type the following SSH port forwarding command from your remote client system that has VNC client application installed.
-
-Again, **192.168.225.22** is my Ubuntu server’s (virtualization server) IP address.
-
-Then, open the VNC client from your Arch Linux (client).
-
-Type **localhost:5900** in the VNC server field and click **Connect** button.
-
-![][6]
-
-Then start installing the Ubuntu VM as the way you do in the physical system.
-
-![][7]
-
-![][8]
-
-Similarly, you can setup as many as virtual machines depending upon server hardware specifications.
-
-Alternatively, you can use **virt-viewer** utility in order to install operating system in the guest machines. virt-viewer is available in the most Linux distribution’s default repositories. After installing virt-viewer, run the following command to establish VNC access to the VM.
-```
-$ sudo virt-viewer --connect=qemu+ssh://192.168.225.22/system --name Ubuntu-16.04
-
-```
-
-### Manage virtual machines
-
-Managing VMs from the command-line using virsh management user interface is very interesting and fun. The commands are very easy to remember. Let us see some examples.
-
-To view the list of running VMs, run:
-```
-$ sudo virsh list
-
-```
-
-Or,
-```
-$ sudo virsh list --all
-
-```
-
-**Sample output:**
-```
- Id Name State
-----------------------------------------------------
- 2 Ubuntu-16.04 running
-
-```
-
-![][9]
-
-To start a VM, run:
-```
-$ sudo virsh start Ubuntu-16.04
-
-```
-
-Alternatively, you can use the VM id to start it.
-
-![][10]
-
-As you see in the above output, Ubuntu 16.04 virtual machine’s Id is 2. So, in order to start it, just specify its Id like below.
-```
-$ sudo virsh start 2
-
-```
-
-To restart a VM, run:
-```
-$ sudo virsh reboot Ubuntu-16.04
-
-```
-
-**Sample output:**
-```
-Domain Ubuntu-16.04 is being rebooted
-
-```
-
-![][11]
-
-To pause a running VM, run:
-```
-$ sudo virsh suspend Ubuntu-16.04
-
-```
-
-**Sample output:**
-```
-Domain Ubuntu-16.04 suspended
-
-```
-
-To resume the suspended VM, run:
-```
-$ sudo virsh resume Ubuntu-16.04
-
-```
-
-**Sample output:**
-```
-Domain Ubuntu-16.04 resumed
-
-```
-
-To shutdown a VM, run:
-```
-$ sudo virsh shutdown Ubuntu-16.04
-
-```
-
-**Sample output:**
-```
-Domain Ubuntu-16.04 is being shutdown
-
-```
-
-To completely remove a VM, run:
-```
-$ sudo virsh undefine Ubuntu-16.04
-
-$ sudo virsh destroy Ubuntu-16.04
-
-```
-
-**Sample output:**
-```
-Domain Ubuntu-16.04 destroyed
-
-```
-
-![][12]
-
-For more options, I recommend you to look into the man pages.
-```
-$ man virsh
-
-```
-
-That’s all for now folks. Start playing with your new virtualization environment. KVM virtualization will be opt for research & development and testing purposes, but not limited to. If you have sufficient hardware, you can use it for large production environments. Have fun and don’t forget to leave your valuable comments in the comment section below.
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ubuntu/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/
-[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_001.png
-[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_008-1.png
-[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_002.png
-[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/VNC-Viewer-Connection-Details_005.png
-[7]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_006.png
-[8]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_007.png
-[9]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-1.png
-[10]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-2.png
-[11]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_011-1.png
-[12]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_012.png
diff --git a/sources/tech/20180715 Why is Python so slow.md b/sources/tech/20180715 Why is Python so slow.md
new file mode 100644
index 0000000000..931d32a4b2
--- /dev/null
+++ b/sources/tech/20180715 Why is Python so slow.md
@@ -0,0 +1,205 @@
+Why is Python so slow?
+============================================================
+
+Python is booming in popularity. It is used in DevOps, Data Science, Web Development and Security.
+
+It does not, however, win any medals for speed.
+
+
+
+
+> How does Java compare in terms of speed to C or C++ or C# or Python? The answer depends greatly on the type of application you’re running. No benchmark is perfect, but The Computer Language Benchmarks Game is [a good starting point][5].
+
+I’ve been referring to the Computer Language Benchmarks Game for over a decade; compared with other languages like Java, C#, Go, JavaScript, C++, Python is [one of the slowest][6]. This includes [JIT][7] (C#, Java) and [AOT][8] (C, C++) compilers, as well as interpreted languages like JavaScript.
+
+ _NB: When I say “Python”, I’m talking about the reference implementation of the language, CPython. I will refer to other runtimes in this article._
+
+> I want to answer this question: When Python completes a comparable application 2–10x slower than another language, _why is it slow_ and can’t we _make it faster_ ?
+
+Here are the top theories:
+
+* “ _It’s the GIL (Global Interpreter Lock)_ ”
+
+* “ _It’s because its interpreted and not compiled_ ”
+
+* “ _It’s because its a dynamically typed language_ ”
+
+Which one of these reasons has the biggest impact on performance?
+
+### “It’s the GIL”
+
+Modern computers come with CPU’s that have multiple cores, and sometimes multiple processors. In order to utilise all this extra processing power, the Operating System defines a low-level structure called a thread, where a process (e.g. Chrome Browser) can spawn multiple threads and have instructions for the system inside. That way if one process is particularly CPU-intensive, that load can be shared across the cores and this effectively makes most applications complete tasks faster.
+
+My Chrome Browser, as I’m writing this article, has 44 threads open. Keep in mind that the structure and API of threading are different between POSIX-based (e.g. Mac OS and Linux) and Windows OS. The operating system also handles the scheduling of threads.
+
+IF you haven’t done multi-threaded programming before, a concept you’ll need to quickly become familiar with locks. Unlike a single-threaded process, you need to ensure that when changing variables in memory, multiple threads don’t try and access/change the same memory address at the same time.
+
+When CPython creates variables, it allocates the memory and then counts how many references to that variable exist, this is a concept known as reference counting. If the number of references is 0, then it frees that piece of memory from the system. This is why creating a “temporary” variable within say, the scope of a for loop, doesn’t blow up the memory consumption of your application.
+
+The challenge then becomes when variables are shared within multiple threads, how CPython locks the reference count. There is a “global interpreter lock” that carefully controls thread execution. The interpreter can only execute one operation at a time, regardless of how many threads it has.
+
+#### What does this mean to the performance of Python application?
+
+If you have a single-threaded, single interpreter application. It will make no difference to the speed. Removing the GIL would have no impact on the performance of your code.
+
+If you wanted to implement concurrency within a single interpreter (Python process) by using threading, and your threads were IO intensive (e.g. Network IO or Disk IO), you would see the consequences of GIL-contention.
+
+
+From David Beazley’s GIL visualised post [http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html][1]
+
+If you have a web-application (e.g. Django) and you’re using WSGI, then each request to your web-app is a separate Python interpreter, so there is only 1 lock _per_ request. Because the Python interpreter is slow to start, some WSGI implementations have a “Daemon Mode” [which keep Python process(es) on the go for you.][9]
+
+#### What about other Python runtimes?
+
+[PyPy has a GIL][10] and it is typically >3x faster than CPython.
+
+[Jython does not have a GIL][11] because a Python thread in Jython is represented by a Java thread and benefits from the JVM memory-management system.
+
+#### How does JavaScript do this?
+
+Well, firstly all Javascript engines [use mark-and-sweep Garbage Collection][12]. As stated, the primary need for the GIL is CPython’s memory-management algorithm.
+
+JavaScript does not have a GIL, but it’s also single-threaded so it doesn’t require one. JavaScript’s event-loop and Promise/Callback pattern are how asynchronous-programming is achieved in place of concurrency. Python has a similar thing with the asyncio event-loop.
+
+### “It’s because its an interpreted language”
+
+I hear this a lot and I find it a gross-simplification of the way CPython actually works. If at a terminal you wrote `python myscript.py` then CPython would start a long sequence of reading, lexing, parsing, compiling, interpreting and executing that code.
+
+If you’re interested in how that process works, I’ve written about it before:
+
+[Modifying the Python language in 6 minutes
+This week I raised my first pull-request to the CPython core project, which was declined :-( but as to not completely…hackernoon.com][13][][14]
+
+An important point in that process is the creation of a `.pyc` file, at the compiler stage, the bytecode sequence is written to a file inside `__pycache__/`on Python 3 or in the same directory in Python 2\. This doesn’t just apply to your script, but all of the code you imported, including 3rd party modules.
+
+So most of the time (unless you write code which you only ever run once?), Python is interpreting bytecode and executing it locally. Compare that with Java and C#.NET:
+
+> Java compiles to an “Intermediate Language” and the Java Virtual Machine reads the bytecode and just-in-time compiles it to machine code. The .NET CIL is the same, the .NET Common-Language-Runtime, CLR, uses just-in-time compilation to machine code.
+
+So, why is Python so much slower than both Java and C# in the benchmarks if they all use a virtual machine and some sort of Bytecode? Firstly, .NET and Java are JIT-Compiled.
+
+JIT or Just-in-time compilation requires an intermediate language to allow the code to be split into chunks (or frames). Ahead of time (AOT) compilers are designed to ensure that the CPU can understand every line in the code before any interaction takes place.
+
+The JIT itself does not make the execution any faster, because it is still executing the same bytecode sequences. However, JIT enables optimizations to be made at runtime. A good JIT optimizer will see which parts of the application are being executed a lot, call these “hot spots”. It will then make optimizations to those bits of code, by replacing them with more efficient versions.
+
+This means that when your application does the same thing again and again, it can be significantly faster. Also, keep in mind that Java and C# are strongly-typed languages so the optimiser can make many more assumptions about the code.
+
+PyPy has a JIT and as mentioned in the previous section, is significantly faster than CPython. This performance benchmark article goes into more detail —
+
+[Which is the fastest version of Python?
+Of course, “it depends”, but what does it depend on and how can you assess which is the fastest version of Python for…hackernoon.com][15][][16]
+
+#### So why doesn’t CPython use a JIT?
+
+There are downsides to JITs: one of those is startup time. CPython startup time is already comparatively slow, PyPy is 2–3x slower to start than CPython. The Java Virtual Machine is notoriously slow to boot. The .NET CLR gets around this by starting at system-startup, but the developers of the CLR also develop the Operating System on which the CLR runs.
+
+If you have a single Python process running for a long time, with code that can be optimized because it contains “hot spots”, then a JIT makes a lot of sense.
+
+However, CPython is a general-purpose implementation. So if you were developing command-line applications using Python, having to wait for a JIT to start every time the CLI was called would be horribly slow.
+
+CPython has to try and serve as many use cases as possible. There was the possibility of [plugging a JIT into CPython][17] but this project has largely stalled.
+
+> If you want the benefits of a JIT and you have a workload that suits it, use PyPy.
+
+### “It’s because its a dynamically typed language”
+
+In a “Statically-Typed” language, you have to specify the type of a variable when it is declared. Those would include C, C++, Java, C#, Go.
+
+In a dynamically-typed language, there are still the concept of types, but the type of a variable is dynamic.
+
+```
+a = 1
+a = "foo"
+```
+
+In this toy-example, Python creates a second variable with the same name and a type of `str` and deallocates the memory created for the first instance of `a`
+
+Statically-typed languages aren’t designed as such to make your life hard, they are designed that way because of the way the CPU operates. If everything eventually needs to equate to a simple binary operation, you have to convert objects and types down to a low-level data structure.
+
+Python does this for you, you just never see it, nor do you need to care.
+
+Not having to declare the type isn’t what makes Python slow, the design of the Python language enables you to make almost anything dynamic. You can replace the methods on objects at runtime, you can monkey-patch low-level system calls to a value declared at runtime. Almost anything is possible.
+
+It’s this design that makes it incredibly hard to optimise Python.
+
+To illustrate my point, I’m going to use a syscall tracing tool that works in Mac OS called Dtrace. CPython distributions do not come with DTrace builtin, so you have to recompile CPython. I’m using 3.6.6 for my demo
+
+```
+wget https://github.com/python/cpython/archive/v3.6.6.zip
+unzip v3.6.6.zip
+cd v3.6.6
+./configure --with-dtrace
+make
+```
+
+Now `python.exe` will have Dtrace tracers throughout the code. [Paul Ross wrote an awesome Lightning Talk on Dtrace][19]. You can [download DTrace starter files][20] for Python to measure function calls, execution time, CPU time, syscalls, all sorts of fun. e.g.
+
+`sudo dtrace -s toolkit/.d -c ‘../cpython/python.exe script.py’`
+
+The `py_callflow` tracer shows all the function calls in your application
+
+
+
+
+So, does Python’s dynamic typing make it slow?
+
+* Comparing and converting types is costly, every time a variable is read, written to or referenced the type is checked
+
+* It is hard to optimise a language that is so dynamic. The reason many alternatives to Python are so much faster is that they make compromises to flexibility in the name of performance
+
+* Looking at [Cython][2], which combines C-Static Types and Python to optimise code where the types are known[ can provide ][3]an 84x performanceimprovement.
+
+### Conclusion
+
+> Python is primarily slow because of its dynamic nature and versatility. It can be used as a tool for all sorts of problems, where more optimised and faster alternatives are probably available.
+
+There are, however, ways of optimising your Python applications by leveraging async, understanding the profiling tools, and consider using multiple-interpreters.
+
+For applications where startup time is unimportant and the code would benefit a JIT, consider PyPy.
+
+For parts of your code where performance is critical and you have more statically-typed variables, consider using [Cython][4].
+
+#### Further reading
+
+Jake VDP’s excellent article (although slightly dated) [https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/][21]
+
+Dave Beazley’s talk on the GIL [http://www.dabeaz.com/python/GIL.pdf][22]
+
+All about JIT compilers [https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/][23]
+
+--------------------------------------------------------------------------------
+
+via: https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b
+
+作者:[Anthony Shaw][a]
+选题:[oska874][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://hackernoon.com/@anthonypjshaw?source=post_header_lockup
+[b]:https://github.com/oska874
+[1]:http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html
+[2]:http://cython.org/
+[3]:http://notes-on-cython.readthedocs.io/en/latest/std_dev.html
+[4]:http://cython.org/
+[5]:http://algs4.cs.princeton.edu/faq/
+[6]:https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/python.html
+[7]:https://en.wikipedia.org/wiki/Just-in-time_compilation
+[8]:https://en.wikipedia.org/wiki/Ahead-of-time_compilation
+[9]:https://www.slideshare.net/GrahamDumpleton/secrets-of-a-wsgi-master
+[10]:http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why
+[11]:http://www.jython.org/jythonbook/en/1.0/Concurrency.html#no-global-interpreter-lock
+[12]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management
+[13]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14
+[14]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14
+[15]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
+[16]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
+[17]:https://www.slideshare.net/AnthonyShaw5/pyjion-a-jit-extension-system-for-cpython
+[18]:https://github.com/python/cpython/archive/v3.6.6.zip
+[19]:https://github.com/paulross/dtrace-py#the-lightning-talk
+[20]:https://github.com/paulross/dtrace-py/tree/master/toolkit
+[21]:https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/
+[22]:http://www.dabeaz.com/python/GIL.pdf
+[23]:https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/
diff --git a/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md b/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md
deleted file mode 100644
index 919182ba1f..0000000000
--- a/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md
+++ /dev/null
@@ -1,988 +0,0 @@
-75 Most Used Essential Linux Applications of 2018
-======
-
-**2018** has been an awesome year for a lot of applications, especially those that are both free and open source. And while various Linux distributions come with a number of default apps, users are free to take them out and use any of the free or paid alternatives of their choice.
-
-Today, we bring you a [list of Linux applications][3] that have been able to make it to users’ Linux installations almost all the time despite the butt-load of other alternatives.
-
-To simply put, any app on this list is among the most used in its category, and if you haven’t already tried it out you are probably missing out. Enjoy!
-
-### Backup Tools
-
-#### Rsync
-
-[Rsync][4] is an open source bandwidth-friendly utility tool for performing swift incremental file transfers and it is available for free.
-```
-$ rsync [OPTION...] SRC... [DEST]
-
-```
-
-To know more examples and usage, read our article “[10 Practical Examples of Rsync Command][5]” to learn more about it.
-
-#### Timeshift
-
-[Timeshift][6] provides users with the ability to protect their system by taking incremental snapshots which can be reverted to at a different date – similar to the function of Time Machine in Mac OS and System restore in Windows.
-
-
-
-### BitTorrent Client
-
-
-
-#### Deluge
-
-[Deluge][7] is a beautiful cross-platform BitTorrent client that aims to perfect the **μTorrent** experience and make it available to users for free.
-
-Install **Deluge** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:deluge-team/ppa
-$ sudo apt-get update
-$ sudo apt-get install deluge
-
-```
-
-#### qBittorent
-
-[qBittorent][8] is an open source BitTorrent protocol client that aims to provide a free alternative to torrent apps like μTorrent.
-
-Install **qBittorent** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable
-$ sudo apt-get update
-$ sudo apt-get install qbittorrent
-
-```
-
-#### Transmission
-
-[Transmission][9] is also a BitTorrent client with awesome functionalities and a major focus on speed and ease of use. It comes preinstalled with many Linux distros.
-
-Install **Transmission** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:transmissionbt/ppa
-$ sudo apt-get update
-$ sudo apt-get install transmission-gtk transmission-cli transmission-common transmission-daemon
-
-```
-
-### Cloud Storage
-
-
-
-#### Dropbox
-
-The [Dropbox][10] team rebranded their cloud service earlier this year to provide an even better performance and app integration for their clients. It starts with 2GB of storage for free.
-
-Install **Dropbox** on **Ubuntu** and **Debian** , using following commands.
-```
-$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | tar xzf - [On 32-Bit]
-$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf - [On 64-Bit]
-$ ~/.dropbox-dist/dropboxd
-
-```
-
-#### Google Drive
-
-[Google Drive][11] is Google’s cloud service solution and my guess is that it needs no introduction. Just like with **Dropbox** , you can sync files across all your connected devices. It starts with 15GB of storage for free and this includes Gmail, Google photos, Maps, etc.
-
-Check out: [5 Google Drive Clients for Linux][12]
-
-#### Mega
-
-[Mega][13] stands out from the rest because apart from being extremely security-conscious, it gives free users 50GB to do as they wish! Its end-to-end encryption ensures that they can’t access your data, and if you forget your recovery key, you too wouldn’t be able to.
-
-[**Download MEGA Cloud Storage for Ubuntu][14]
-
-### Commandline Editors
-
-
-
-#### Vim
-
-[Vim][15] is an open source clone of vi text editor developed to be customizable and able to work with any type of text.
-
-Install **Vim** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:jonathonf/vim
-$ sudo apt update
-$ sudo apt install vim
-
-```
-
-#### Emacs
-
-[Emacs][16] refers to a set of highly configurable text editors. The most popular variant, GNU Emacs, is written in Lisp and C to be self-documenting, extensible, and customizable.
-
-Install **Emacs** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:kelleyk/emacs
-$ sudo apt update
-$ sudo apt install emacs25
-
-```
-
-#### Nano
-
-[Nano][17] is a feature-rich CLI text editor for power users and it has the ability to work with different terminals, among other functionalities.
-
-Install **Nano** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:n-muench/programs-ppa
-$ sudo apt-get update
-$ sudo apt-get install nano
-
-```
-
-### Download Manager
-
-
-
-#### Aria2
-
-[Aria2][18] is an open source lightweight multi-source and multi-protocol command line-based downloader with support for Metalinks, torrents, HTTP/HTTPS, SFTP, etc.
-
-Install **Aria2** on **Ubuntu** and **Debian** , using following command.
-```
-$ sudo apt-get install aria2
-
-```
-
-#### uGet
-
-[uGet][19] has earned its title as the **#1** open source download manager for Linux distros and it features the ability to handle any downloading task you can throw at it including using multiple connections, using queues, categories, etc.
-
-Install **uGet** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:plushuang-tw/uget-stable
-$ sudo apt update
-$ sudo apt install uget
-
-```
-
-#### XDM
-
-[XDM][20], **Xtreme Download Manager** is an open source downloader written in Java. Like any good download manager, it can work with queues, torrents, browsers, and it also includes a video grabber and a smart scheduler.
-
-Install **XDM** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:noobslab/apps
-$ sudo apt-get update
-$ sudo apt-get install xdman
-
-```
-
-### Email Clients
-
-
-
-#### Thunderbird
-
-[Thunderbird][21] is among the most popular email applications. It is free, open source, customizable, feature-rich, and above all, easy to install.
-
-Install **Thunderbird** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:ubuntu-mozilla-security/ppa
-$ sudo apt-get update
-$ sudo apt-get install thunderbird
-
-```
-
-#### Geary
-
-[Geary][22] is an open source email client based on WebKitGTK+. It is free, open-source, feature-rich, and adopted by the GNOME project.
-
-Install **Geary** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:geary-team/releases
-$ sudo apt-get update
-$ sudo apt-get install geary
-
-```
-
-#### Evolution
-
-[Evolution][23] is a free and open source email client for managing emails, meeting schedules, reminders, and contacts.
-
-Install **Evolution** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:gnome3-team/gnome3-staging
-$ sudo apt-get update
-$ sudo apt-get install evolution
-
-```
-
-### Finance Software
-
-
-
-#### GnuCash
-
-[GnuCash][24] is a free, cross-platform, and open source software for financial accounting tasks for personal and small to mid-size businesses.
-
-Install **GnuCash** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu $(lsb_release -sc)-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list'
-$ sudo apt-get update
-$ sudo apt-get install gnucash
-
-```
-
-#### KMyMoney
-
-[KMyMoney][25] is a finance manager software that provides all important features found in the commercially-available, personal finance managers.
-
-Install **KMyMoney** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:claydoh/kmymoney2-kde4
-$ sudo apt-get update
-$ sudo apt-get install kmymoney
-
-```
-
-### IDE Editors
-
-
-
-#### Eclipse IDE
-
-[Eclipse][26] is the most widely used Java IDE containing a base workspace and an impossible-to-overemphasize configurable plug-in system for personalizing its coding environment.
-
-For installation, read our article “[How to Install Eclipse Oxygen IDE in Debian and Ubuntu][27]”
-
-#### Netbeans IDE
-
-A fan-favourite, [Netbeans][28] enables users to easily build applications for mobile, desktop, and web platforms using Java, PHP, HTML5, JavaScript, and C/C++, among other languages.
-
-For installation, read our article “[How to Install Netbeans Oxygen IDE in Debian and Ubuntu][29]”
-
-#### Brackets
-
-[Brackets][30] is an advanced text editor developed by Adobe to feature visual tools, preprocessor support, and a design-focused user flow for web development. In the hands of an expert, it can serve as an IDE in its own right.
-
-Install **Brackets** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:webupd8team/brackets
-$ sudo apt-get update
-$ sudo apt-get install brackets
-
-```
-
-#### Atom IDE
-
-[Atom IDE][31] is a more robust version of Atom text editor achieved by adding a number of extensions and libraries to boost its performance and functionalities. It is, in a sense, Atom on steroids.
-
-Install **Atom** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install snapd
-$ sudo snap install atom --classic
-
-```
-
-#### Light Table
-
-[Light Table][32] is a self-proclaimed next-generation IDE developed to offer awesome features like data value flow stats and coding collaboration.
-
-Install **Light Table** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:dr-akulavich/lighttable
-$ sudo apt-get update
-$ sudo apt-get install lighttable-installer
-
-```
-
-#### Visual Studio Code
-
-[Visual Studio Code][33] is a source code editor created by Microsoft to offer users the best-advanced features in a text editor including syntax highlighting, code completion, debugging, performance statistics and graphs, etc.
-
-[**Download Visual Studio Code for Ubuntu][34]
-
-### Instant Messaging
-
-
-
-#### Pidgin
-
-[Pidgin][35] is an open source instant messaging app that supports virtually all chatting platforms and can have its abilities extended using extensions.
-
-Install **Pidgin** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:jonathonf/backports
-$ sudo apt-get update
-$ sudo apt-get install pidgin
-
-```
-
-#### Skype
-
-[Skype][36] needs no introduction and its awesomeness is available for any interested Linux user.
-
-Install **Skype** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt install snapd
-$ sudo snap install skype --classic
-
-```
-
-#### Empathy
-
-[Empathy][37] is a messaging app with support for voice, video chat, text, and file transfers over multiple several protocols. It also allows you to add other service accounts to it and interface with all of them through it.
-
-Install **Empathy** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install empathy
-
-```
-
-### Linux Antivirus
-
-#### ClamAV/ClamTk
-
-[ClamAV][38] is an open source and cross-platform command line antivirus app for detecting Trojans, viruses, and other malicious codes. [ClamTk][39] is its GUI front-end.
-
-Install **ClamAV/ClamTk** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install clamav
-$ sudo apt-get install clamtk
-
-```
-
-### Linux Desktop Environments
-
-#### Cinnamon
-
-[Cinnamon][40] is a free and open-source derivative of **GNOME3** and it follows the traditional desktop metaphor conventions.
-
-Install **Cinnamon** desktop on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:embrosyn/cinnamon
-$ sudo apt update
-$ sudo apt install cinnamon-desktop-environment lightdm
-
-```
-
-#### Mate
-
-The [Mate][41] Desktop Environment is a derivative and continuation of **GNOME2** developed to offer an attractive UI on Linux using traditional metaphors.
-
-Install **Mate** desktop on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt install tasksel
-$ sudo apt update
-$ sudo tasksel install ubuntu-mate-desktop
-
-```
-
-#### GNOME
-
-[GNOME][42] is a Desktop Environment comprised of several free and open-source applications and can run on any Linux distro and on most BSD derivatives.
-
-Install **Gnome** desktop on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt install tasksel
-$ sudo apt update
-$ sudo tasksel install ubuntu-desktop
-
-```
-
-#### KDE
-
-[KDE][43] is developed by the KDE community to provide users with a graphical solution to interfacing with their system and performing several computing tasks.
-
-Install **KDE** desktop on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt install tasksel
-$ sudo apt update
-$ sudo tasksel install kubuntu-desktop
-
-```
-
-### Linux Maintenance Tools
-
-#### GNOME Tweak Tool
-
-The [GNOME Tweak Tool][44] is the most popular tool for customizing and tweaking GNOME3 and GNOME Shell settings.
-
-Install **GNOME Tweak Tool** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt install gnome-tweak-tool
-
-```
-
-#### Stacer
-
-[Stacer][45] is a free, open-source app for monitoring and optimizing Linux systems.
-
-Install **Stacer** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:oguzhaninan/stacer
-$ sudo apt-get update
-$ sudo apt-get install stacer
-
-```
-
-#### BleachBit
-
-[BleachBit][46] is a free disk space cleaner that also works as a privacy manager and system optimizer.
-
-[**Download BleachBit for Ubuntu][47]
-
-### Linux Terminals
-
-#### GNOME Terminal
-
-[GNOME Terminal][48] is GNOME’s default terminal emulator.
-
-Install **Gnome Terminal** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install gnome-terminal
-
-```
-
-#### Konsole
-
-[Konsole][49] is a terminal emulator for KDE.
-
-Install **Konsole** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install konsole
-
-```
-
-#### Terminator
-
-[Terminator][50] is a feature-rich GNOME Terminal-based terminal app built with a focus on arranging terminals, among other functions.
-
-Install **Terminator** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install terminator
-
-```
-
-#### Guake
-
-[Guake][51] is a lightweight drop-down terminal for the GNOME Desktop Environment.
-
-Install **Guake** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install guake
-
-```
-
-### Multimedia Editors
-
-#### Ardour
-
-[Ardour][52] is a beautiful Digital Audio Workstation (DAW) for recording, editing, and mixing audio professionally.
-
-Install **Ardour** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:dobey/audiotools
-$ sudo apt-get update
-$ sudo apt-get install ardour
-
-```
-
-#### Audacity
-
-[Audacity][53] is an easy-to-use cross-platform and open source multi-track audio editor and recorder; arguably the most famous of them all.
-
-Install **Audacity** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity
-$ sudo apt-get update
-$ sudo apt-get install audacity
-
-```
-
-#### GIMP
-
-[GIMP][54] is the most popular open source Photoshop alternative and it is for a reason. It features various customization options, 3rd-party plugins, and a helpful user community.
-
-Install **Gimp** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:otto-kesselgulasch/gimp
-$ sudo apt update
-$ sudo apt install gimp
-
-```
-
-#### Krita
-
-[Krita][55] is an open source painting app that can also serve as an image manipulating tool and it features a beautiful UI with a reliable performance.
-
-Install **Krita** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:kritalime/ppa
-$ sudo apt update
-$ sudo apt install krita
-
-```
-
-#### Lightworks
-
-[Lightworks][56] is a powerful, flexible, and beautiful tool for editing videos professionally. It comes feature-packed with hundreds of amazing effects and presets that allow it to handle any editing task that you throw at it and it has 25 years of experience to back up its claims.
-
-[**Download Lightworks for Ubuntu][57]
-
-#### OpenShot
-
-[OpenShot][58] is an award-winning free and open source video editor known for its excellent performance and powerful capabilities.
-
-Install **Openshot** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:openshot.developers/ppa
-$ sudo apt update
-$ sudo apt install openshot-qt
-
-```
-
-#### PiTiV
-
-[Pitivi][59] is a beautiful video editor that features a beautiful code base, awesome community, is easy to use, and allows for hassle-free collaboration.
-
-Install **PiTiV** on **Ubuntu** and **Debian** , using following commands.
-```
-$ flatpak install --user https://flathub.org/repo/appstream/org.pitivi.Pitivi.flatpakref
-$ flatpak install --user http://flatpak.pitivi.org/pitivi.flatpakref
-$ flatpak run org.pitivi.Pitivi//stable
-
-```
-
-### Music Players
-
-#### Rhythmbox
-
-[Rhythmbox][60] posses the ability to perform all music tasks you throw at it and has so far proved to be a reliable music player that it ships with Ubuntu.
-
-Install **Rhythmbox** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:fossfreedom/rhythmbox
-$ sudo apt-get update
-$ sudo apt-get install rhythmbox
-
-```
-
-#### Lollypop
-
-[Lollypop][61] is a beautiful, relatively new, open source music player featuring a number of advanced options like online radio, scrubbing support and party mode. Yet, it manages to keep everything simple and easy to manage.
-
-Install **Lollypop** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:gnumdk/lollypop
-$ sudo apt-get update
-$ sudo apt-get install lollypop
-
-```
-
-#### Amarok
-
-[Amarok][62] is a robust music player with an intuitive UI and tons of advanced features bundled into a single unit. It also allows users to discover new music based on their genre preferences.
-
-Install **Amarok** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install amarok
-
-```
-
-#### Clementine
-
-[Clementine][63] is an Amarok-inspired music player that also features a straight-forward UI, advanced control features, and the ability to let users search for and discover new music.
-
-Install **Clementine** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:me-davidsansome/clementine
-$ sudo apt-get update
-$ sudo apt-get install clementine
-
-```
-
-#### Cmus
-
-[Cmus][64] is arguably the most efficient CLI music player, Cmus is fast and reliable, and its functionality can be increased using extensions.
-
-Install **Cmus** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:jmuc/cmus
-$ sudo apt-get update
-$ sudo apt-get install cmus
-
-```
-
-### Office Suites
-
-#### Calligra Suite
-
-The [Calligra Suite][65] provides users with a set of 8 applications which cover working with office, management, and graphics tasks.
-
-Install **Calligra Suite** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install calligra
-
-```
-
-#### LibreOffice
-
-[LibreOffice][66] the most actively developed office suite in the open source community, LibreOffice is known for its reliability and its functions can be increased using extensions.
-
-Install **LibreOffice** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:libreoffice/ppa
-$ sudo apt update
-$ sudo apt install libreoffice
-
-```
-
-#### WPS Office
-
-[WPS Office][67] is a beautiful office suite alternative with a more modern UI.
-
-[**Download WPS Office for Ubuntu][68]
-
-### Screenshot Tools
-
-#### Shutter
-
-[Shutter][69] allows users to take screenshots of their desktop and then edit them using filters and other effects coupled with the option to upload and share them online.
-
-Install **Shutter** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository -y ppa:shutter/ppa
-$ sudo apt update
-$ sudo apt install shutter
-
-```
-
-#### Kazam
-
-[Kazam][70] screencaster captures screen content to output a video and audio file supported by any video player with VP8/WebM and PulseAudio support.
-
-Install **Kazam** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:kazam-team/unstable-series
-$ sudo apt update
-$ sudo apt install kazam python3-cairo python3-xlib
-
-```
-
-#### Gnome Screenshot
-
-[Gnome Screenshot][71] was once bundled with Gnome utilities but is now a standalone app. It can be used to take screencasts in a format that is easily shareable.
-
-Install **Gnome Screenshot** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install gnome-screenshot
-
-```
-
-### Screen Recorders
-
-#### SimpleScreenRecorder
-
-[SimpleScreenRecorder][72] was created to be better than the screen-recording apps available at the time of its creation and has now turned into one of the most efficient and easy-to-use screen recorders for Linux distros.
-
-Install **SimpleScreenRecorder** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:maarten-baert/simplescreenrecorder
-$ sudo apt-get update
-$ sudo apt-get install simplescreenrecorder
-
-```
-
-#### recordMyDesktop
-
-[recordMyDesktop][73] is an open source session recorder that is also capable of recording desktop session audio.
-
-Install **recordMyDesktop** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install gtk-recordmydesktop
-
-```
-
-### Text Editors
-
-#### Atom
-
-[Atom][74] is a modern and customizable text editor created and maintained by GitHub. It is ready for use right out of the box and can have its functionality enhanced and its UI customized using extensions and themes.
-
-Install **Atom** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install snapd
-$ sudo snap install atom --classic
-
-```
-
-#### Sublime Text
-
-[Sublime Text][75] is easily among the most awesome text editors to date. It is customizable, lightweight (even when bulldozed with a lot of data files and extensions), flexible, and remains free to use forever.
-
-Install **Sublime Text** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install snapd
-$ sudo snap install sublime-text
-
-```
-
-#### Geany
-
-[Geany][76] is a memory-friendly text editor with basic IDE features designed to exhibit shot load times and extensible functions using libraries.
-
-Install **Geany** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install geany
-
-```
-
-#### Gedit
-
-[Gedit][77] is famous for its simplicity and it comes preinstalled with many Linux distros because of its function as an excellent general purpose text editor.
-
-Install **Gedit** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install gedit
-
-```
-
-### To-Do List Apps
-
-#### Evernote
-
-[Evernote][78] is a cloud-based note-taking productivity app designed to work perfectly with different types of notes including to-do lists and reminders.
-
-There is no any official evernote app for Linux, so check out other third party [6 Evernote Alternative Clients for Linux][79].
-
-#### Everdo
-
-[Everdo][78] is a beautiful, security-conscious, low-friction Getting-Things-Done app productivity app for handling to-dos and other note types. If Evernote comes off to you in an unpleasant way, Everdo is a perfect alternative.
-
-[**Download Everdo for Ubuntu][80]
-
-#### Taskwarrior
-
-[Taskwarrior][81] is an open source and cross-platform command line app for managing tasks. It is famous for its speed and distraction-free environment.
-
-Install **Taskwarrior** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install taskwarrior
-
-```
-
-### Video Players
-
-#### Banshee
-
-[Banshee][82] is an open source multi-format-supporting media player that was first developed in 2005 and has only been getting better since.
-
-Install **Banshee** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:banshee-team/ppa
-$ sudo apt-get update
-$ sudo apt-get install banshee
-
-```
-
-#### VLC
-
-[VLC][83] is my favourite video player and it’s so awesome that it can play almost any audio and video format you throw at it. You can also use it to play internet radio, record desktop sessions, and stream movies online.
-
-Install **VLC** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:videolan/stable-daily
-$ sudo apt-get update
-$ sudo apt-get install vlc
-
-```
-
-#### Kodi
-
-[Kodi][84] is among the world’s most famous media players and it comes as a full-fledged media centre app for playing all things media whether locally or remotely.
-
-Install **Kodi** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install software-properties-common
-$ sudo add-apt-repository ppa:team-xbmc/ppa
-$ sudo apt-get update
-$ sudo apt-get install kodi
-
-```
-
-#### SMPlayer
-
-[SMPlayer][85] is a GUI for the award-winning **MPlayer** and it is capable of handling all popular media formats; coupled with the ability to stream from YouTube, Chromcast, and download subtitles.
-
-Install **SMPlayer** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:rvm/smplayer
-$ sudo apt-get update
-$ sudo apt-get install smplayer
-
-```
-
-### Virtualization Tools
-
-#### VirtualBox
-
-[VirtualBox][86] is an open source app created for general-purpose OS virtualization and it can be run on servers, desktops, and embedded systems.
-
-Install **VirtualBox** on **Ubuntu** and **Debian** , using following commands.
-```
-$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
-$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
-$ sudo apt-get update
-$ sudo apt-get install virtualbox-5.2
-$ virtualbox
-
-```
-
-#### VMWare
-
-[VMware][87] is a digital workspace that provides platform virtualization and cloud computing services to customers and is reportedly the first to successfully virtualize x86 architecture systems. One of its products, VMware workstations allows users to run multiple OSes in a virtual memory.
-
-For installation, read our article “[How to Install VMware Workstation Pro on Ubuntu][88]“.
-
-### Web Browsers
-
-#### Chrome
-
-[Google Chrome][89] is undoubtedly the most popular browser. Known for its speed, simplicity, security, and beauty following Google’s Material Design trend, Chrome is a browser that web developers cannot do without. It is also free to use and open source.
-
-Install **Google Chrome** on **Ubuntu** and **Debian** , using following commands.
-```
-$ wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
-$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
-$ sudo apt-get update
-$ sudo apt-get install google-chrome-stable
-
-```
-
-#### Firefox
-
-[Firefox Quantum][90] is a beautiful, speed, task-ready, and customizable browser capable of any browsing task that you throw at it. It is also free, open source, and packed with developer-friendly tools that are easy for even beginners to get up and running with.
-
-Install **Firefox Quantum** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:mozillateam/firefox-next
-$ sudo apt update && sudo apt upgrade
-$ sudo apt install firefox
-
-```
-
-#### Vivaldi
-
-[Vivaldi][91] is a free and open source Chrome-based project that aims to perfect Chrome’s features with a couple of more feature additions. It is known for its colourful panels, memory-friendly performance, and flexibility.
-
-[**Download Vivaldi for Ubuntu][91]
-
-That concludes our list for today. Did I skip a famous title? Tell me about it in the comments section below.
-
-Don’t forget to share this post and to subscribe to our newsletter to get the latest publications from FossMint.
-
-
---------------------------------------------------------------------------------
-
-via: https://www.fossmint.com/most-used-linux-applications/
-
-作者:[Martins D. Okoi][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.fossmint.com/author/dillivine/
-[1]:https://plus.google.com/share?url=https://www.fossmint.com/most-used-linux-applications/ (Share on Google+)
-[2]:https://www.linkedin.com/shareArticle?mini=true&url=https://www.fossmint.com/most-used-linux-applications/ (Share on LinkedIn)
-[3]:https://www.fossmint.com/awesome-linux-software/
-[4]:https://rsync.samba.org/
-[5]:https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
-[6]:https://github.com/teejee2008/timeshift
-[7]:https://deluge-torrent.org/
-[8]:https://www.qbittorrent.org/
-[9]:https://transmissionbt.com/
-[10]:https://www.dropbox.com/
-[11]:https://www.google.com/drive/
-[12]:https://www.fossmint.com/best-google-drive-clients-for-linux/
-[13]:https://mega.nz/
-[14]:https://mega.nz/sync!linux
-[15]:https://www.vim.org/
-[16]:https://www.gnu.org/s/emacs/
-[17]:https://www.nano-editor.org/
-[18]:https://aria2.github.io/
-[19]:http://ugetdm.com/
-[20]:http://xdman.sourceforge.net/
-[21]:https://www.thunderbird.net/
-[22]:https://github.com/GNOME/geary
-[23]:https://github.com/GNOME/evolution
-[24]:https://www.gnucash.org/
-[25]:https://kmymoney.org/
-[26]:https://www.eclipse.org/ide/
-[27]:https://www.tecmint.com/install-eclipse-oxygen-ide-in-ubuntu-debian/
-[28]:https://netbeans.org/
-[29]:https://www.tecmint.com/install-netbeans-ide-in-ubuntu-debian-linux-mint/
-[30]:http://brackets.io/
-[31]:https://ide.atom.io/
-[32]:http://lighttable.com/
-[33]:https://code.visualstudio.com/
-[34]:https://code.visualstudio.com/download
-[35]:https://www.pidgin.im/
-[36]:https://www.skype.com/
-[37]:https://wiki.gnome.org/Apps/Empathy
-[38]:https://www.clamav.net/
-[39]:https://dave-theunsub.github.io/clamtk/
-[40]:https://github.com/linuxmint/cinnamon-desktop
-[41]:https://mate-desktop.org/
-[42]:https://www.gnome.org/
-[43]:https://www.kde.org/plasma-desktop
-[44]:https://github.com/nzjrs/gnome-tweak-tool
-[45]:https://github.com/oguzhaninan/Stacer
-[46]:https://www.bleachbit.org/
-[47]:https://www.bleachbit.org/download
-[48]:https://github.com/GNOME/gnome-terminal
-[49]:https://konsole.kde.org/
-[50]:https://gnometerminator.blogspot.com/p/introduction.html
-[51]:http://guake-project.org/
-[52]:https://ardour.org/
-[53]:https://www.audacityteam.org/
-[54]:https://www.gimp.org/
-[55]:https://krita.org/en/
-[56]:https://www.lwks.com/
-[57]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206
-[58]:https://www.openshot.org/
-[59]:http://www.pitivi.org/
-[60]:https://wiki.gnome.org/Apps/Rhythmbox
-[61]:https://gnumdk.github.io/lollypop-web/
-[62]:https://amarok.kde.org/en
-[63]:https://www.clementine-player.org/
-[64]:https://cmus.github.io/
-[65]:https://www.calligra.org/tour/calligra-suite/
-[66]:https://www.libreoffice.org/
-[67]:https://www.wps.com/
-[68]:http://wps-community.org/downloads
-[69]:http://shutter-project.org/
-[70]:https://launchpad.net/kazam
-[71]:https://gitlab.gnome.org/GNOME/gnome-screenshot
-[72]:http://www.maartenbaert.be/simplescreenrecorder/
-[73]:http://recordmydesktop.sourceforge.net/about.php
-[74]:https://atom.io/
-[75]:https://www.sublimetext.com/
-[76]:https://www.geany.org/
-[77]:https://wiki.gnome.org/Apps/Gedit
-[78]:https://everdo.net/
-[79]:https://www.fossmint.com/evernote-alternatives-for-linux/
-[80]:https://everdo.net/linux/
-[81]:https://taskwarrior.org/
-[82]:http://banshee.fm/
-[83]:https://www.videolan.org/
-[84]:https://kodi.tv/
-[85]:https://www.smplayer.info/
-[86]:https://www.virtualbox.org/wiki/VirtualBox
-[87]:https://www.vmware.com/
-[88]:https://www.tecmint.com/install-vmware-workstation-in-linux/
-[89]:https://www.google.com/chrome/
-[90]:https://www.mozilla.org/en-US/firefox/
-[91]:https://vivaldi.com/
diff --git a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md b/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md
deleted file mode 100644
index 3144efd4ee..0000000000
--- a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md
+++ /dev/null
@@ -1,284 +0,0 @@
-Building a network attached storage device with a Raspberry Pi
-======
-
-
-
-In this three-part series, I'll explain how to set up a simple, useful NAS (network attached storage) system. I use this kind of setup to store my files on a central system, creating incremental backups automatically every night. To mount the disk on devices that are located in the same network, NFS is installed. To access files offline and share them with friends, I use [Nextcloud][1].
-
-This article will cover the basic setup of software and hardware to mount the data disk on a remote device. In the second article, I will discuss a backup strategy and set up a cron job to create daily backups. In the third and last article, we will install Nextcloud, a tool for easy file access to devices synced offline as well as online using a web interface. It supports multiple users and public file-sharing so you can share pictures with friends, for example, by sending a password-protected link.
-
-The target architecture of our system looks like this:
-
-
-### Hardware
-
-Let's get started with the hardware you need. You might come up with a different shopping list, so consider this one an example.
-
-The computing power is delivered by a [Raspberry Pi 3][2], which comes with a quad-core CPU, a gigabyte of RAM, and (somewhat) fast ethernet. Data will be stored on two USB hard drives (I use 1-TB disks); one is used for the everyday traffic, the other is used to store backups. Be sure to use either active USB hard drives or a USB hub with an additional power supply, as the Raspberry Pi will not be able to power two USB drives.
-
-### Software
-
-The operating system with the highest visibility in the community is [Raspbian][3] , which is excellent for custom projects. There are plenty of [guides][4] that explain how to install Raspbian on a Raspberry Pi, so I won't go into details here. The latest official supported version at the time of this writing is [Raspbian Stretch][5] , which worked fine for me.
-
-At this point, I will assume you have configured your basic Raspbian and are able to connect to the Raspberry Pi by `ssh`.
-
-### Prepare the USB drives
-
-To achieve good performance reading from and writing to the USB hard drives, I recommend formatting them with ext4. To do so, you must first find out which disks are attached to the Raspberry Pi. You can find the disk devices in `/dev/sd/`. Using the command `fdisk -l`, you can find out which two USB drives you just attached. Please note that all data on the USB drives will be lost as soon as you follow these steps.
-```
-pi@raspberrypi:~ $ sudo fdisk -l
-
-
-
-<...>
-
-
-
-Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
-
-Units: sectors of 1 * 512 = 512 bytes
-
-Sector size (logical/physical): 512 bytes / 512 bytes
-
-I/O size (minimum/optimal): 512 bytes / 512 bytes
-
-Disklabel type: dos
-
-Disk identifier: 0xe8900690
-
-
-
-Device Boot Start End Sectors Size Id Type
-
-/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux
-
-
-
-
-
-Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
-
-Units: sectors of 1 * 512 = 512 bytes
-
-Sector size (logical/physical): 512 bytes / 512 bytes
-
-I/O size (minimum/optimal): 512 bytes / 512 bytes
-
-Disklabel type: dos
-
-Disk identifier: 0x6aa4f598
-
-
-
-Device Boot Start End Sectors Size Id Type
-
-/dev/sdb1 * 2048 1953521663 1953519616 931.5G 83 Linux
-
-```
-
-As those devices are the only 1TB disks attached to the Raspberry Pi, we can easily see that `/dev/sda` and `/dev/sdb` are the two USB drives. The partition table at the end of each disk shows how it should look after the following steps, which create the partition table and format the disks. To do this, repeat the following steps for each of the two devices by replacing `sda` with `sdb` the second time (assuming your devices are also listed as `/dev/sda` and `/dev/sdb` in `fdisk`).
-
-First, delete the partition table of the disk and create a new one containing only one partition. In `fdisk`, you can use interactive one-letter commands to tell the program what to do. Simply insert them after the prompt `Command (m for help):` as follows (you can also use the `m` command anytime to get more information):
-```
-pi@raspberrypi:~ $ sudo fdisk /dev/sda
-
-
-
-Welcome to fdisk (util-linux 2.29.2).
-
-Changes will remain in memory only, until you decide to write them.
-
-Be careful before using the write command.
-
-
-
-
-
-Command (m for help): o
-
-Created a new DOS disklabel with disk identifier 0x9c310964.
-
-
-
-Command (m for help): n
-
-Partition type
-
- p primary (0 primary, 0 extended, 4 free)
-
- e extended (container for logical partitions)
-
-Select (default p): p
-
-Partition number (1-4, default 1):
-
-First sector (2048-1953525167, default 2048):
-
-Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167):
-
-
-
-Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
-
-
-
-Command (m for help): p
-
-
-
-Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
-
-Units: sectors of 1 * 512 = 512 bytes
-
-Sector size (logical/physical): 512 bytes / 512 bytes
-
-I/O size (minimum/optimal): 512 bytes / 512 bytes
-
-Disklabel type: dos
-
-Disk identifier: 0x9c310964
-
-
-
-Device Boot Start End Sectors Size Id Type
-
-/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux
-
-
-
-Command (m for help): w
-
-The partition table has been altered.
-
-Syncing disks.
-
-```
-
-Now we will format the newly created partition `/dev/sda1` using the ext4 filesystem:
-```
-pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1
-
-mke2fs 1.43.4 (31-Jan-2017)
-
-Discarding device blocks: done
-
-
-
-<...>
-
-
-
-Allocating group tables: done
-
-Writing inode tables: done
-
-Creating journal (1024 blocks): done
-
-Writing superblocks and filesystem accounting information: done
-
-```
-
-After repeating the above steps, let's label the new partitions according to their usage in your system:
-```
-pi@raspberrypi:~ $ sudo e2label /dev/sda1 data
-
-pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup
-
-```
-
-Now let's get those disks mounted to store some data. My experience, based on running this setup for over a year now, is that USB drives are not always available to get mounted when the Raspberry Pi boots up (for example, after a power outage), so I recommend using autofs to mount them when needed.
-
-First install autofs and create the mount point for the storage:
-```
-pi@raspberrypi:~ $ sudo apt install autofs
-
-pi@raspberrypi:~ $ sudo mkdir /nas
-
-```
-
-Then mount the devices by adding the following line to `/etc/auto.master`:
-```
-/nas /etc/auto.usb
-
-```
-
-Create the file `/etc/auto.usb` if not existing with the following content, and restart the autofs service:
-```
-data -fstype=ext4,rw :/dev/disk/by-label/data
-
-backup -fstype=ext4,rw :/dev/disk/by-label/backup
-
-pi@raspberrypi3:~ $ sudo service autofs restart
-
-```
-
-Now you should be able to access the disks at `/nas/data` and `/nas/backup`, respectively. Clearly, the content will not be too thrilling, as you just erased all the data from the disks. Nevertheless, you should be able to verify the devices are mounted by executing the following commands:
-```
-pi@raspberrypi3:~ $ cd /nas/data
-
-pi@raspberrypi3:/nas/data $ cd /nas/backup
-
-pi@raspberrypi3:/nas/backup $ mount
-
-<...>
-
-/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect)
-
-<...>
-
-/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered)
-
-/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered)
-
-```
-
-First move into the directories to make sure autofs mounts the devices. Autofs tracks access to the filesystems and mounts the needed devices on the go. Then the `mount` command shows that the two devices are actually mounted where we wanted them.
-
-Setting up autofs is a bit fault-prone, so do not get frustrated if mounting doesn't work on the first try. Give it another chance, search for more detailed resources (there is plenty of documentation online), or leave a comment.
-
-### Mount network storage
-
-Now that you have set up the basic network storage, we want it to be mounted on a remote Linux machine. We will use the network file system (NFS) for this. First, install the NFS server on the Raspberry Pi:
-```
-pi@raspberrypi:~ $ sudo apt install nfs-kernel-server
-
-```
-
-Next we need to tell the NFS server to expose the `/nas/data` directory, which will be the only device accessible from outside the Raspberry Pi (the other one will be used for backups only). To export the directory, edit the file `/etc/exports` and add the following line to allow all devices with access to the NAS to mount your storage:
-```
-/nas/data *(rw,sync,no_subtree_check)
-
-```
-
-For more information about restricting the mount to single devices and so on, refer to `man exports`. In the configuration above, anyone will be able to mount your data as long as they have access to the ports needed by NFS: `111` and `2049`. I use the configuration above and allow access to my home network only for ports 22 and 443 using the routers firewall. That way, only devices in the home network can reach the NFS server.
-
-To mount the storage on a Linux computer, run the commands:
-```
-you@desktop:~ $ sudo mkdir /nas/data
-
-you@desktop:~ $ sudo mount -t nfs :/nas/data /nas/data
-
-```
-
-Again, I recommend using autofs to mount this network device. For extra help, check out [How to use autofs to mount NFS shares][6].
-
-Now you are able to access files stored on your own RaspberryPi-powered NAS from remote devices using the NFS mount. In the next part of this series, I will cover how to automatically back up your data to the second hard drive using `rsync`. To save space on the device while still doing daily backups, you will learn how to create incremental backups with `rsync`.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
-
-作者:[Manuel Dewald][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/ntlx
-[1]:https://nextcloud.com/
-[2]:https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
-[3]:https://www.raspbian.org/
-[4]:https://www.raspberrypi.org/documentation/installation/installing-images/
-[5]:https://www.raspberrypi.org/blog/raspbian-stretch/
-[6]:https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
diff --git a/sources/tech/20180727 How to analyze your system with perf and Python.md b/sources/tech/20180727 How to analyze your system with perf and Python.md
index ccc66b04a7..c1be98cc0e 100644
--- a/sources/tech/20180727 How to analyze your system with perf and Python.md
+++ b/sources/tech/20180727 How to analyze your system with perf and Python.md
@@ -1,5 +1,3 @@
-pinewall translating
-
How to analyze your system with perf and Python
======
diff --git a/sources/tech/20180803 5 Essential Tools for Linux Development.md b/sources/tech/20180803 5 Essential Tools for Linux Development.md
deleted file mode 100644
index 006372ca82..0000000000
--- a/sources/tech/20180803 5 Essential Tools for Linux Development.md
+++ /dev/null
@@ -1,148 +0,0 @@
-5 Essential Tools for Linux Development
-======
-
-
-
-Linux has become a mainstay for many sectors of work, play, and personal life. We depend upon it. With Linux, technology is expanding and evolving faster than anyone could have imagined. That means Linux development is also happening at an exponential rate. Because of this, more and more developers will be hopping on board the open source and Linux dev train in the immediate, near, and far-off future. For that, people will need tools. Fortunately, there are a ton of dev tools available for Linux; so many, in fact, that it can be a bit intimidating to figure out precisely what you need (especially if you’re coming from another platform).
-
-To make that easier, I thought I’d help narrow down the selection a bit for you. But instead of saying you should use Tool X and Tool Y, I’m going to narrow it down to five categories and then offer up an example for each. Just remember, for most categories, there are several available options. And, with that said, let’s get started.
-
-### Containers
-
-Let’s face it, in this day and age you need to be working with containers. Not only are they incredibly easy to deploy, they make for great development environments. If you regularly develop for a specific platform, why not do so by creating a container image that includes all of the tools you need to make the process quick and easy. With that image available, you can then develop and roll out numerous instances of whatever software or service you need.
-
-Using containers for development couldn’t be easier than it is with [Docker][1]. The advantages of using containers (and Docker) are:
-
- * Consistent development environment.
-
- * You can trust it will “just work” upon deployment.
-
- * Makes it easy to build across platforms.
-
- * Docker images available for all types of development environments and languages.
-
- * Deploying single containers or container clusters is simple.
-
-
-
-
-Thanks to [Docker Hub][2], you’ll find images for nearly any platform, development environment, server, service… just about anything you need. Using images from Docker Hub means you can skip over the creation of the development environment and go straight to work on developing your app, server, API, or service.
-
-Docker is easily installable of most every Linux platform. For example: To install Docker on Ubuntu, you only have to open a terminal window and issue the command:
-```
-sudo apt-get install docker.io
-
-```
-
-With Docker installed, you’re ready to start pulling down specific images, developing, and deploying (Figure 1).
-
-![Docker images][4]
-
-Figure 1: Docker images ready to deploy.
-
-[Used with permission][5]
-
-### Version control system
-
-If you’re working on a large project or with a team on a project, you’re going to need a version control system. Why? Because you need to keep track of your code, where your code is, and have an easy means of making commits and merging code from others. Without such a tool, your projects would be nearly impossible to manage. For Linux users, you cannot beat the ease of use and widespread deployment of [Git][6] and [GitHub][7]. If you’re new to their worlds, Git is the version control system that you install on your local machine and GitHub is the remote repository you use to upload (and then manage) your projects. Git can be installed on most Linux distributions. For example, on a Debian-based system, the install is as simple as:
-```
-sudo apt-get install git
-
-```
-
-Once installed, you are ready to start your journey with version control (Figure 2).
-
-![Git installed][9]
-
-Figure 2: Git is installed and available for many important tasks.
-
-[Used with permission][5]
-
-Github requires you to create an account. You can use it for free for non-commercial projects, or you can pay for commercial project housing (for more information check out the price matrix [here][10]).
-
-### Text editor
-
-Let’s face it, developing on Linux would be a bit of a challenge without a text editor. Of course what a text editor is varies, depending upon who you ask. One person might say vim, emacs, or nano, whereas another might go full-on GUI with their editor. But since we’re talking development, we need a tool that can meet the needs of the modern day developer. And before I mention a couple of text editors, I will say this: Yes, I know that vim is a serious workhorse for serious developers and, if you know it well vim will meet and exceed all of your needs. However, getting up to speed enough that it won’t be in your way, can be a bit of a hurdle for some developers (especially those new to Linux). Considering my goal is to always help win over new users (and not just preach to an already devout choir), I’m taking the GUI route here.
-
-As far as text editors are concerned, you cannot go wrong with the likes of [Bluefish][11]. Bluefish can be found in most standard repositories and features project support, multi-threaded support for remote files, search and replace, open files recursively, snippets sidebar, integrates with make, lint, weblint, xmllint, unlimited undo/redo, in-line spell checker, auto-recovery, full screen editing, syntax highlighting (Figure 3), support for numerous languages, and much more.
-
-![Bluefish][13]
-
-Figure 3: Bluefish running on Ubuntu Linux 18.04.
-
-[Used with permission][5]
-
-### IDE
-
-Integrated Development Environment (IDE) is a piece of software that includes a comprehensive set of tools that enable a one-stop-shop environment for developing. IDEs not only enable you to code your software, but document and build them as well. There are a number of IDEs for Linux, but one in particular is not only included in the standard repositories it is also very user-friendly and powerful. That tool in question is [Geany][14]. Geany features syntax highlighting, code folding, symbol name auto-completion, construct completion/snippets, auto-closing of XML and HTML tags, call tips, many supported filetypes, symbol lists, code navigation, build system to compile and execute your code, simple project management, and a built-in plugin system.
-
-Geany can be easily installed on your system. For example, on a Debian-based distribution, issue the command:
-```
-sudo apt-get install geany
-
-```
-
-Once installed, you’re ready to start using this very powerful tool that includes a user-friendly interface (Figure 4) that has next to no learning curve.
-
-![Geany][16]
-
-Figure 4: Geany is ready to serve as your IDE.
-
-[Used with permission][5]
-
-### diff tool
-
-There will be times when you have to compare two files to find where they differ. This could be two different copies of what was the same file (only one compiles and the other doesn’t). When that happens, you don’t want to have to do that manually. Instead, you want to employ the power of tool like [Meld][17]. Meld is a visual diff and merge tool targeted at developers. With Meld you can make short shrift out of discovering the differences between two files. Although you can use a command line diff tool, when efficiency is the name of the game, you can’t beat Meld.
-
-Meld allows you to open a comparison between to files and it will highlight the differences between each. Meld also allows you to merge comparisons either from the right or the left (as the files are opened side by side - Figure 5).
-
-![Comparing two files][19]
-
-Figure 5: Comparing two files with a simple difference.
-
-[Used with permission][5]
-
-Meld can be installed from most standard repositories. On a Debian-based system, the installation command is:
-```
-sudo apt-get install meld
-
-```
-
-### Working with efficiency
-
-These five tools not only enable you to get your work done, they help to make it quite a bit more efficient. Although there are a ton of developer tools available for Linux, you’re going to want to make sure you have one for each of the above categories (maybe even starting with the suggestions I’ve made).
-
-Learn more about Linux through the free ["Introduction to Linux" ][20]course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-development
-
-作者:[Jack Wallen][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/jlwallen
-[1]:https://www.docker.com/
-[2]:https://hub.docker.com/
-[3]:/files/images/5devtools1jpg
-[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_1.jpg?itok=V1Bsbkg9 (Docker images)
-[5]:/licenses/category/used-permission
-[6]:https://git-scm.com/
-[7]:https://github.com/
-[8]:/files/images/5devtools2jpg
-[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_2.jpg?itok=YJjhe4O6 (Git installed)
-[10]:https://github.com/pricing
-[11]:http://bluefish.openoffice.nl/index.html
-[12]:/files/images/5devtools3jpg
-[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_3.jpg?itok=66A7Svme (Bluefish)
-[14]:https://www.geany.org/
-[15]:/files/images/5devtools4jpg
-[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_4.jpg?itok=jRcA-0ue (Geany)
-[17]:http://meldmerge.org/
-[18]:/files/images/5devtools5jpg
-[19]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_5.jpg?itok=eLkfM9oZ (Comparing two files)
-[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md b/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md
deleted file mode 100644
index 3c0b63d63b..0000000000
--- a/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md
+++ /dev/null
@@ -1,84 +0,0 @@
-translating by lujun9972
-How to Create M3U Playlists in Linux [Quick Tip]
-======
-**Brief: A quick tip on how to create M3U playlists in Linux terminal from unordered files to play them in a sequence.**
-
-![Create M3U playlists in Linux Terminal][1]
-
-I am a fan of foreign tv series and it’s not always easy to get them on DVD or on streaming services like [Netflix][2]. Thankfully, you can find some of them on YouTube and [download them from YouTube][3].
-
-Now there comes a problem. Your files might not be sorted in a particular order. In GNU/Linux files are not naturally sort ordered by number sequencing so I had to make a .m3u playlist so [MPV video player][4] would play the videos in sequence and not out of sequence.
-
-Also sometimes the numbers are in the middle or the end like ‘My Web Series S01E01.mkv’ as an example. The episode information here is in the middle of the filename, the ‘S01E01’ which tells us, humans, which is the first episode and which needs to come in next.
-
-So what I did was to generate an m3u playlist in the video directory and tell MPV to play the .m3u playlist and it would take care of playing them in the sequence.
-
-### What is an M3U file?
-
-[M3U][5] is basically a text file that contains filenames in a specific order. When a player like MPV or VLC opens an M3U file, it tries to play the specified files in the given sequence.
-
-### Creating M3U to play audio/video files in a sequence
-
-In my case, I used the following command:
-```
-$/home/shirish/Videos/web-series-video/$ ls -1v |grep .mkv > /tmp/1.m3u && mv /tmp/1.m3u .
-
-```
-
-Let’s break it down a bit and see each bit as to what it means –
-
-**ls -1v** = This is using the plain ls or listing entries in the directory. The -1 means list one file per line. while -v natural sort of (version) numbers within text
-
-**| grep .mkv** = It’s basically telling `ls` to look for files which are ending in .mkv . It could be .mp4 or any other media file format that you want.
-
-It’s usually a good idea to do a dry run by running the command on the console:
-```
-ls -1v |grep .mkv
-My Web Series S01E01 [Episode 1 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E02 [Episode 2 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E03 [Episode 3 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E04 [Episode 4 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E05 [Episode 5 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E06 [Episode 6 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E07 [Episode 7 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E08 [Episode 8 Name] Multi 480p WEBRip x264 - xRG.mkv
-
-```
-
-This tells me that what I’m trying to do is correct. Now just have to make that the output is in the form of a .m3u playlist which is the next part.
-```
-ls -1v |grep .mkv > /tmp/web_playlist.m3u && mv /tmp/web_playlist.m3u .
-
-```
-
-This makes the .m3u generate in the current directory. The .m3u playlist is nothing but a .txt file with the same contents as above with the .m3u extension. You can edit it manually as well and add the exact filenames in an order you desire.
-
-After that you just have to do something like this:
-```
-mpv web_playlist.m3u
-
-```
-
-The great thing about MPV and the playlists, in general, is that you don’t have to binge-watch. You can see however much you want to do in one sitting and see the rest in the next session or the session after that.
-
-I hope to do articles featuring MPV as well as how to make mkv files embedding subtitles in a media file but that’s in the future.
-
-Note: It’s FOSS doesn’t encourage piracy.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/create-m3u-playlist-linux/
-
-作者:[Shirsh][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://itsfoss.com/author/shirish/
-[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/Create-M3U-Playlists.jpeg
-[2]:https://itsfoss.com/netflix-open-source-ai/
-[3]:https://itsfoss.com/download-youtube-linux/
-[4]:https://itsfoss.com/mpv-video-player/
-[5]:https://en.wikipedia.org/wiki/M3U
diff --git a/sources/tech/20180816 An introduction to the Django Python web app framework.md b/sources/tech/20180816 An introduction to the Django Python web app framework.md
deleted file mode 100644
index ab7dba9526..0000000000
--- a/sources/tech/20180816 An introduction to the Django Python web app framework.md
+++ /dev/null
@@ -1,1250 +0,0 @@
-Translating by MjSeven
-
-
-An introduction to the Django Python web app framework
-======
-
-
-
-In the first three articles of this four-part series comparing different Python web frameworks, we covered the [Pyramid][1], [Flask][2], and [Tornado][3] web frameworks. We've built the same app three times and have finally made our way to [Django][4]. Django is, by and large, the major web framework for Python developers these days and it's not too hard to see why. It excels in hiding a lot of the configuration logic and letting you focus on being able to build big, quickly.
-
-That said, when it comes to small projects, like our To-Do List app, Django can be a bit like bringing a firehose to a water gun fight. Let's see how it all comes together.
-
-### About Django
-
-Django styles itself as "a high-level Python web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of web development, so you can focus on writing your app without needing to reinvent the wheel." And they really mean it! This massive web framework comes with so many batteries included that oftentimes during development it can be a mystery as to how everything manages to work together.
-
-In addition to the framework itself being large, the Django community is absolutely massive. In fact, it's so big and active that there's [a whole website][5] devoted to the third-party packages people have designed to plug into Django to do a whole host of things. This includes everything from authentication and authorization, to full-on Django-powered content management systems, to e-commerce add-ons, to integrations with Stripe. Talk about not re-inventing the wheel; chances are if you want something done with Django, someone has already done it and you can just pull it into your project.
-
-For this purpose, we want to build a REST API with Django, so we'll leverage the always popular [Django REST framework][6]. Its job is to turn the Django framework, which was made to serve fully rendered HTML pages built with Django's own templating engine, into a system specifically geared toward effectively handling REST interactions. Let's get going with that.
-
-### Django startup and configuration
-```
-$ mkdir django_todo
-
-$ cd django_todo
-
-$ pipenv install --python 3.6
-
-$ pipenv shell
-
-(django-someHash) $ pipenv install django djangorestframework
-
-```
-
-For reference, we're working with `django-2.0.7` and `djangorestframework-3.8.2`.
-
-Unlike Flask, Tornado, and Pyramid, we don't need to write our own `setup.py` file. We're not making an installable Python distribution. As with many things, Django takes care of that for us in its own Django way. We'll still need a `requirements.txt` file to track all our necessary installs for deployment elsewhere. However, as far as targeting modules within our Django project goes, Django will let us list the subdirectories we want access to, then allow us to import from those directories as if they're installed packages.
-
-First, we have to create a Django project.
-
-When we installed Django, we also installed the command-line script `django-admin`. Its job is to manage all the various Django-related commands that help put our project together and maintain it as we continue to develop. Instead of having us build up the entire Django ecosystem from scratch, the `django-admin` will allow us to get started with all the absolutely necessary files (and more) we need for a standard Django project.
-
-The syntax for invoking `django-admin`'s start-project command is `django-admin startproject `. We want the files to exist in our current working directory, so:
-```
-(django-someHash) $ django-admin startproject django_todo .
-
-```
-
-Typing `ls` will show one new file and one new directory.
-```
-(django-someHash) $ ls
-
-manage.py django_todo
-
-```
-
-`manage.py` is a command-line-executable Python file that ends up just being a wrapper around `django-admin`. As such, its job is the same: to help us manage our project. Hence the name `manage.py`.
-
-The directory it created, the `django_todo` inside of `django_todo`, represents the configuration root for our project. Let's dig into that now.
-
-### Configuring Django
-
-By calling the `django_todo` directory the "configuration root," we mean this directory holds the files necessary for generally configuring our Django project. Pretty much everything outside this directory will be focused solely on the "business logic" associated with the project's models, views, routes, etc. All points that connect the project together will lead here.
-
-Calling `ls` within `django_todo` reveals four files:
-```
-(django-someHash) $ cd django_todo
-
-(django-someHash) $ ls
-
-__init__.py settings.py urls.py wsgi.py
-
-```
-
- * `__init__.py` is empty, solely existing to turn this directory into an importable Python package.
- * `settings.py` is where most configuration items will be set, like whether the project's in DEBUG mode, what databases are in use, where Django should look for files, etc. It is the "main configuration" part of the configuration root, and we'll dig into that momentarily.
- * `urls.py` is, as the name implies, where the URLs are set. While we don't have to explicitly write every URL for the project in this file, we **do** need to make this file aware of any other places where URLs have been declared. If this file doesn't point to other URLs, those URLs don't exist. **Period.**
- * `wsgi.py` is for serving the application in production. Just like how Pyramid, Tornado, and Flask exposed some "app" object that was the configured application to be served, Django must also expose one. That's done here. It can then be served with something like [Gunicorn][7], [Waitress][8], or [uWSGI][9].
-
-
-
-#### Setting the settings
-
-Taking a look inside `settings.py` will reveal its considerable size—and these are just the defaults! This doesn't even include hooks for the database, static files, media files, any cloud integration, or any of the other dozens of ways that a Django project can be configured. Let's see, top to bottom, what we've been given:
-
- * `BASE_DIR` sets the absolute path to the base directory, or the directory where `manage.py` is located. This is useful for locating files.
- * `SECRET_KEY` is a key used for cryptographic signing within the Django project. In practice, it's used for things like sessions, cookies, CSRF protection, and auth tokens. As soon as possible, preferably before the first commit, the value for `SECRET_KEY` should be changed and moved into an environment variable.
- * `DEBUG` tells Django whether to run the project in development mode or production mode. This is an extremely critical distinction.
- * In development mode, when an error pops up, Django will show the full stack trace that led to the error, as well as all the settings and configurations involved in running the project. This can be a massive security issue if `DEBUG` was set to `True` in a production environment.
- * In production, Django shows a plain error page when things go wrong. No information is given beyond an error code.
- * A simple way to safeguard our project is to set `DEBUG` to an environment variable, like `bool(os.environ.get('DEBUG', ''))`.
- * `ALLOWED_HOSTS` is the literal list of hostnames from which the application is being served. In development this can be empty, but in production our Django project will not run if the host that serves the project is not among the list of ALLOWED_HOSTS. Another thing for the box of environment variables.
- * `INSTALLED_APPS` is the list of Django "apps" (think of them as subdirectories; more on this later) that our Django project has access to. We're given a few by default to provide…
- * The built-in Django administrative website
- * Django's built-in authentication system
- * Django's one-size-fits-all manager for data models
- * Session management
- * Cookie and session-based messaging
- * Usage of static files inherent to the site, like `css` files, `js` files, any images that are a part of our site's design, etc.
- * `MIDDLEWARE` is as it sounds: the middleware that helps our Django project run. Much of it is for handling various types of security, although we can add others as we need them.
- * `ROOT_URLCONF` sets the import path of our base-level URL configuration file. That `urls.py` that we saw before? By default, Django points to that file to gather all our URLs. If we want Django to look elsewhere, we'll set the import path to that location here.
- * `TEMPLATES` is the list of template engines that Django would use for our site's frontend if we were relying on Django to build our HTML. Since we're not, it's irrelevant.
- * `WSGI_APPLICATION` sets the import path of our WSGI application—the thing that gets served when in production. By default, it points to an `application` object in `wsgi.py`. This rarely, if ever, needs to be modified.
- * `DATABASES` sets which databases our Django project will access. The `default` database must be set. We can set others by name, as long as we provide the `HOST`, `USER`, `PASSWORD`, `PORT`, database `NAME`, and appropriate `ENGINE`. As one might imagine, these are all sensitive pieces of information, so it's best to hide them away in environment variables. [Check the Django docs][10] for more details.
- * Note: If instead of providing individual pieces of a database's location, you'd rather provide the full database URL, check out [dj_database_url][11].
- * `AUTH_PASSWORD_VALIDATORS` is effectively a list of functions that run to check input passwords. We get a few by default, but if we had other, more complex validation needs—more than merely checking if the password matches a user's attribute, if it exceeds the minimum length, if it's one of the 1,000 most common passwords, or if the password is entirely numeric—we could list them here.
- * `LANGUAGE_CODE` will set the language for the site. By default it's US English, but we could switch it up to be other languages.
- * `TIME_ZONE` is the time zone for any autogenerated timestamps in our Django project. I cannot stress enough how important it is that we stick to UTC and perform any time zone-specific processing elsewhere instead of trying to reconfigure this setting. As [this article][12] states, UTC is the common denominator among all time zones because there are no offsets to worry about. If offsets are that important, we could calculate them as needed with an appropriate offset from UTC.
- * `USE_I18N` will let Django use its own translation services to translate strings for the front end. I18N = internationalization (18 characters between "i" and "n")
- * `USE_L10N` (L10N = localization [10 characters between "l" and "n"]) will use the common local formatting of data if set to `True`. A great example is dates: in the US it's MM-DD-YYYY. In Europe, dates tend to be written DD-MM-YYYY
- * `STATIC_URL` is part of a larger body of settings for serving static files. We'll be building a REST API, so we won't need to worry about static files. In general, this sets the root path after the domain name for every static file. So, if we had a logo image to serve, it'd be `http:////logo.gif`
-
-
-
-These settings are pretty much ready to go by default. One thing we'll have to change is the `DATABASES` setting. First, we create the database that we'll be using with:
-```
-(django-someHash) $ createdb django_todo
-
-```
-
-We want to use a PostgreSQL database like we did with Flask, Pyramid, and Tornado. That means we'll have to change the `DATABASES` setting to allow our server to access a PostgreSQL database. First: the engine. By default, the database engine is `django.db.backends.sqlite3`. We'll be changing that to `django.db.backends.postgresql`.
-
-For more information about Django's available engines, [check the docs][13]. Note that while it is technically possible to incorporate a NoSQL solution into a Django project, out of the box, Django is strongly biased toward SQL solutions.
-
-Next, we have to specify the key-value pairs for the different parts of the connection parameters.
-
- * `NAME` is the name of the database we just created.
- * `USER` is an individual's Postgres database username
- * `PASSWORD` is the password needed to access the database
- * `HOST` is the host for the database. `localhost` or `127.0.0.1` will work, as we're developing locally.
- * `PORT` is whatever PORT we have open for Postgres; it's typically `5432`.
-
-
-
-`settings.py` expects us to provide string values for each of these keys. However, this is highly sensitive information. That's not going to work for any responsible developer. There are several ways to address this problem, but we'll just set up environment variables.
-```
-DATABASES = {
-
- 'default': {
-
- 'ENGINE': 'django.db.backends.postgresql',
-
- 'NAME': os.environ.get('DB_NAME', ''),
-
- 'USER': os.environ.get('DB_USER', ''),
-
- 'PASSWORD': os.environ.get('DB_PASS', ''),
-
- 'HOST': os.environ.get('DB_HOST', ''),
-
- 'PORT': os.environ.get('DB_PORT', ''),
-
- }
-
-}
-
-```
-
-Before going forward, make sure to set the environment variables or Django will not work. Also, we need to install `psycopg2` into this environment so we can talk to our database.
-
-### Django routes and views
-
-Let's make something function inside this project. We'll be using Django REST Framework to construct our REST API, so we have to make sure we can use it by adding `rest_framework` to the end of `INSTALLED_APPS` in `settings.py`.
-```
-INSTALLED_APPS = [
-
- 'django.contrib.admin',
-
- 'django.contrib.auth',
-
- 'django.contrib.contenttypes',
-
- 'django.contrib.sessions',
-
- 'django.contrib.messages',
-
- 'django.contrib.staticfiles',
-
- 'rest_framework'
-
-]
-
-```
-
-While Django REST Framework doesn't exclusively require class-based views (like Tornado) to handle incoming requests, it is the preferred method for writing views. Let's define one.
-
-Let's create a file called `views.py` in `django_todo`. Within `views.py`, we'll create our "Hello, world!" view.
-```
-# in django_todo/views.py
-
-from rest_framework.response import JsonResponse
-
-from rest_framework.views import APIView
-
-
-
-class HelloWorld(APIView):
-
- def get(self, request, format=None):
-
- """Print 'Hello, world!' as the response body."""
-
- return JsonResponse("Hello, world!")
-
-```
-
-Every Django REST Framework class-based view inherits either directly or indirectly from `APIView`. `APIView` handles a ton of stuff, but for our purposes it does these specific things:
-
- * Sets up the methods needed to direct traffic based on the HTTP method (e.g. GET, POST, PUT, DELETE)
- * Populates the `request` object with all the data and attributes we'll need for parsing and processing any incoming request
- * Takes the `Response` or `JsonResponse` that every dispatch method (i.e., methods named `get`, `post`, `put`, `delete`) returns and constructs a properly formatted HTTP response.
-
-
-
-Yay, we have a view! On its own it does nothing. We need to connect it to a route.
-
-If we hop into `django_todo/urls.py`, we reach our default URL configuration file. As mentioned earlier: If a route in our Django project is not included here, it doesn't exist.
-
-We add desired URLs by adding them to the given `urlpatterns` list. By default, we get a whole set of URLs for Django's built-in site administration backend. We'll delete that completely.
-
-We also get some very helpful doc strings that tell us exactly how to add routes to our Django project. We'll need to provide a call to `path()` with three parameters:
-
- * The desired route, as a string (without the leading slash)
- * The view function (only ever a function!) that will handle that route
- * The name of the route in our Django project
-
-
-
-Let's import our `HelloWorld` view and attach it to the home route `"/"`. We can also remove the path to the `admin` from `urlpatterns`, as we won't be using it.
-```
-# django_todo/urls.py, after the big doc string
-
-from django.urls import path
-
-from django_todo.views import HelloWorld
-
-
-
-urlpatterns = [
-
- path('', HelloWorld.as_view(), name="hello"),
-
-]
-
-```
-
-Well, this is different. The route we specified is just a blank string. Why does that work? Django assumes that every path we declare begins with a leading slash. We're just specifying routes to resources after the initial domain name. If a route isn't going to a specific resource and is instead just the home page, the route is just `""`, or effectively "no resource."
-
-The `HelloWorld` view is imported from that `views.py` file we just created. In order to do this import, we need to update `settings.py` to include `django_todo` in the list of `INSTALLED_APPS`. Yeah, it's a bit weird. Here's one way to think about it.
-
-`INSTALLED_APPS` refers to the list of directories or packages that Django sees as importable. It's Django's way of treating individual components of a project like installed packages without going through a `setup.py`. We want the `django_todo` directory to be treated like an importable package, so we include that directory in `INSTALLED_APPS`. Now, any module within that directory is also importable. So we get our view.
-
-The `path` function will ONLY take a view function as that second argument, not just a class-based view on its own. Luckily, all valid Django class-based views include this `.as_view()` method. Its job is to roll up all the goodness of the class-based view into a view function and return that view function. So, we never have to worry about making that translation. Instead, we only have to think about the business logic, letting Django and Django REST Framework handle the rest.
-
-Let's crack this open in the browser!
-
-Django comes packaged with its own local development server, accessible through `manage.py`. Let's navigate to the directory containing `manage.py` and type:
-```
-(django-someHash) $ ./manage.py runserver
-
-Performing system checks...
-
-
-
-System check identified no issues (0 silenced).
-
-August 01, 2018 - 16:47:24
-
-Django version 2.0.7, using settings 'django_todo.settings'
-
-Starting development server at http://127.0.0.1:8000/
-
-Quit the server with CONTROL-C.
-
-```
-
-When `runserver` is executed, Django does a check to make sure the project is (more or less) wired together correctly. It's not fool-proof, but it does catch some glaring issues. It also notifies us if our database is out of sync with our code. Undoubtedly ours is because we haven't committed any of our application's stuff to our database, but that's fine for now. Let's visit `http://127.0.0.1:8000` to see the output of the `HelloWorld` view.
-
-Huh. That's not the plaintext data we saw in Pyramid, Flask, and Tornado. When Django REST Framework is used, the HTTP response (when viewed in the browser) is this sort of rendered HTML, showing our actual JSON response in red.
-
-But don't fret! If we do a quick `curl` looking at `http://127.0.0.1:8000` in the command line, we don't get any of that fancy HTML. Just the content.
-```
-# Note: try this in a different terminal window, outside of the virtual environment above
-
-$ curl http://127.0.0.1:8000
-
-"Hello, world!"
-
-```
-
-Bueno!
-
-Django REST Framework wants us to have a human-friendly interface when using the browser. This makes sense; if JSON is viewed in the browser, it's typically because a human wants to check that it looks right or get a sense of what the JSON response will look like as they design some consumer of an API. It's a lot like what you'd get from a service like [Postman][14].
-
-Either way, we know our view is working! Woo! Let's recap what we've done:
-
- 1. Started the project with `django-admin startproject `
- 2. Updated the `django_todo/settings.py` to use environment variables for `DEBUG`, `SECRET_KEY`, and values in the `DATABASES` dict
- 3. Installed `Django REST Framework` and added it to the list of `INSTALLED_APPS`
- 4. Created `django_todo/views.py` to include our first view class to say Hello to the World
- 5. Updated `django_todo/urls.py` with a path to our new home route
- 6. Updated `INSTALLED_APPS` in `django_todo/settings.py` to include the `django_todo` package
-
-
-
-### Creating models
-
-Let's create our data models now.
-
-A Django project's entire infrastructure is built around data models. It's written so each data model can have its own little universe with its own views, its own set of URLs that concern its resources, and even its own tests (if we are so inclined).
-
-If we wanted to build a simple Django project, we could circumvent this by just writing our own `models.py` file in the `django_todo` directory and importing it into our views. However, we're trying to write a Django project the "right" way, so we should divide up our models as best we can into their own little packages The Django Way™.
-
-The Django Way involves creating what are called Django "apps." Django "apps" aren't separate applications per se; they don't have their own settings and whatnot (although they can). They can, however, have just about everything else one might think of being in a standalone application:
-
- * Set of self-contained URLs
- * Set of self-contained HTML templates (if we want to serve HTML)
- * One or more data models
- * Set of self-contained views
- * Set of self-contained tests
-
-
-
-They are made to be independent so they can be easily shared like standalone applications. In fact, Django REST Framework is an example of a Django app. It comes packaged with its own views and HTML templates for serving up our JSON. We just leverage that Django app to turn our project into a full-on RESTful API with less hassle.
-
-To create the Django app for our To-Do List items, we'll want to use the `startapp` command with `manage.py`.
-```
-(django-someHash) $ ./manage.py startapp todo
-
-```
-
-The `startapp` command will succeed silently. We can check that it did what it should've done by using `ls`.
-```
-(django-someHash) $ ls
-
-Pipfile Pipfile.lock django_todo manage.py todo
-
-```
-
-Look at that: We've got a brand new `todo` directory. Let's look inside!
-```
-(django-someHash) $ ls todo
-
-__init__.py admin.py apps.py migrations models.py tests.py views.py
-
-```
-
-Here are the files that `manage.py startapp` created:
-
- * `__init__.py` is empty; it exists so this directory can be seen as a valid import path for models, views, etc.
- * `admin.py` is not quite empty; it's used for formatting this app's models in the Django admin, which we're not getting into in this article.
- * `apps.py` … not much work to do here either; it helps with formatting models for the Django admin.
- * `migrations` is a directory that'll contain snapshots of our data models; it's used for updating our database. This is one of the few frameworks that comes with database management built-in, and part of that is allowing us to update our database instead of having to tear it down and rebuild it to change the schema.
- * `models.py` is where the data models live.
- * `tests.py` is where tests would go—if we wrote any.
- * `views.py` is for the views we write that pertain to the models in this app. They don't have to be written here. We could, for example, write all our views in `django_todo/views.py`. It's here, however, so it's easier to separate our concerns. This becomes far more relevant with sprawling applications that cover many conceptual spaces.
-
-
-
-What hasn't been created for us is a `urls.py` file for this app. We can make that ourselves.
-```
-(django-someHash) $ touch todo/urls.py
-
-```
-
-Before moving forward we should do ourselves a favor and add this new Django app to our list of `INSTALLED_APPS` in `django_todo/settings.py`.
-```
-# in settings.py
-
-INSTALLED_APPS = [
-
- 'django.contrib.admin',
-
- 'django.contrib.auth',
-
- 'django.contrib.contenttypes',
-
- 'django.contrib.sessions',
-
- 'django.contrib.messages',
-
- 'django.contrib.staticfiles',
-
- 'rest_framework',
-
- 'django_todo',
-
- 'todo' # <--- the line was added
-
-]
-
-```
-
-Inspecting `todo/models.py` shows that `manage.py` already wrote a bit of code for us to get started. Diverging from how models were created in the Flask, Tornado, and Pyramid implementations, Django doesn't leverage a third party to manage database sessions or the construction of its object instances. It's all rolled into Django's `django.db.models` submodule.
-
-The way a model is built, however, is more or less the same. To create a model in Django, we'll need to build a `class` that inherits from `models.Model`. All the fields that will apply to instances of that model should appear as class attributes. Instead of importing columns and field types from SQLAlchemy like we have in the past, all of our fields will come directly from `django.db.models`.
-```
-# todo/models.py
-
-from django.db import models
-
-
-
-class Task(models.Model):
-
- """Tasks for the To Do list."""
-
- name = models.CharField(max_length=256)
-
- note = models.TextField(blank=True, null=True)
-
- creation_date = models.DateTimeField(auto_now_add=True)
-
- due_date = models.DateTimeField(blank=True, null=True)
-
- completed = models.BooleanField(default=False)
-
-```
-
-While there are some definite differences between what Django needs and what SQLAlchemy-based systems need, the overall contents and structure are more or less the same. Let's point out the differences.
-
-We no longer need to declare a separate field for an auto-incremented ID number for our object instances. Django builds one for us unless we specify a different field as the primary key.
-
-Instead of instantiating `Column` objects that are passed datatype objects, we just directly reference the datatypes as the columns themselves.
-
-The `Unicode` field became either `models.CharField` or `models.TextField`. `CharField` is for small text fields of a specific maximum length, whereas `TextField` is for any amount of text.
-
-The `TextField` should be able to be blank, and we specify this in TWO ways. `blank=True` says that when an instance of this model is constructed, and the data attached to this field is being validated, it's OK for that data to be empty. This is different from `null=True`, which says when the table for this model class is constructed, the column corresponding to `note` will allow for blank or `NULL` entries. So, to sum that all up, `blank=True` controls how data gets added to model instances while `null=True` controls how the database table holding that data is constructed in the first place.
-
-The `DateTime` field grew some muscle and became able to do some work for us instead of us having to modify the `__init__` method for the class. For the `creation_date` field, we specify `auto_now_add=True`. What this means in a practical sense is that when a new model instance is created Django will automatically record the date and time of now as that field's value. That's handy!
-
-When neither `auto_now_add` nor its close cousin `auto_now` are set to `True`, `DateTimeField` will expect data like any other field. It'll need to be fed with a proper `datetime` object to be valid. The `due_date` column has `blank` and `null` both set to `True` so that an item on the To-Do List can just be an item to be done at some point in the future, with no defined date or time.
-
-`BooleanField` just ends up being a field that can take one of two values: `True` or `False`. Here, the default value is set to be `False`.
-
-#### Managing the database
-
-As mentioned earlier, Django has its own way of doing database management. Instead of having to write… really any code at all regarding our database, we leverage the `manage.py` script that Django provided on construction. It'll manage not just the construction of the tables for our database, but also any updates we wish to make to those tables without necessarily having to blow the whole thing away!
-
-Because we've constructed a new model, we need to make our database aware of it. First, we need to put into code the schema that corresponds to this model. The `makemigrations` command of `manage.py` will take a snapshot of the model class we built and all its fields. It'll take that information and package it into a Python script that'll live in this particular Django app's `migrations` directory. There will never be a reason to run this migration script directly. It'll exist solely so that Django can use it as a basis to update our database table or to inherit information when we update our model class.
-```
-(django-someHash) $ ./manage.py makemigrations
-
-Migrations for 'todo':
-
- todo/migrations/0001_initial.py
-
- - Create model Task
-
-```
-
-This will look at every app listed in `INSTALLED_APPS` and check for models that exist in those apps. It'll then check the corresponding `migrations` directory for migration files and compare them to the models in each of those `INSTALLED_APPS` apps. If a model has been upgraded beyond what the latest migration says should exist, a new migration file will be created that inherits from the most recent one. It'll be automatically named and also be given a message that says what changed since the last migration.
-
-If it's been a while since you last worked on your Django project and can't remember if your models were in sync with your migrations, you have no need to fear. `makemigrations` is an idempotent operation; your `migrations` directory will have only one copy of the current model configuration whether you run `makemigrations` once or 20 times. Even better than that, when we run `./manage.py runserver`, Django will detect that our models are out of sync with our migrations, and it'll just flat out tell us in colored text so we can make the appropriate choice.
-
-This next point is something that trips everybody up at least once: Creating a migration file does not immediately affect our database. When we ran `makemigrations`, we prepared our Django project to define how a given table should be created and end up looking. It's still on us to apply those changes to our database. That's what the `migrate` command is for.
-```
-(django-someHash) $ ./manage.py migrate
-
-Operations to perform:
-
- Apply all migrations: admin, auth, contenttypes, sessions, todo
-
-Running migrations:
-
- Applying contenttypes.0001_initial... OK
-
- Applying auth.0001_initial... OK
-
- Applying admin.0001_initial... OK
-
- Applying admin.0002_logentry_remove_auto_add... OK
-
- Applying contenttypes.0002_remove_content_type_name... OK
-
- Applying auth.0002_alter_permission_name_max_length... OK
-
- Applying auth.0003_alter_user_email_max_length... OK
-
- Applying auth.0004_alter_user_username_opts... OK
-
- Applying auth.0005_alter_user_last_login_null... OK
-
- Applying auth.0006_require_contenttypes_0002... OK
-
- Applying auth.0007_alter_validators_add_error_messages... OK
-
- Applying auth.0008_alter_user_username_max_length... OK
-
- Applying auth.0009_alter_user_last_name_max_length... OK
-
- Applying sessions.0001_initial... OK
-
- Applying todo.0001_initial... OK
-
-```
-
-When we apply our migrations, Django first checks to see if the other `INSTALLED_APPS` have migrations to be applied. It checks them in roughly the order they're listed. We want our app to be listed last, because we want to make sure that, in case our model depends on any of Django's built-in models, the database updates we make don't suffer from dependency problems.
-
-We have another model to build: the User model. However, the game has changed a bit since we're using Django. So many applications require some sort of User model that Django's `django.contrib.auth` package built its own for us to use. If it weren't for the authentication token we require for our users, we could just move on and use it instead of reinventing the wheel.
-
-However, we need that token. There are a couple of ways we can handle this.
-
- * Inherit from Django's `User` object, making our own object that extends it by adding a `token` field
- * Create a new object that exists in a one-to-one relationship with Django's `User` object, whose only purpose is to hold a token
-
-
-
-I'm in the habit of building object relationships, so let's go with the second option. Let's call it an `Owner` as it basically has a similar connotation as a `User`, which is what we want.
-
-Out of sheer laziness, we could just include this new `Owner` object in `todo/models.py`, but let's refrain from that. `Owner` doesn't explicitly have to do with the creation or maintenance of items on the task list. Conceptually, the `Owner` is simply the owner of the task. There may even come a time where we want to expand this `Owner` to include other data that has absolutely nothing to do with tasks.
-
-Just to be safe, let's make an `owner` app whose job is to house and handle this `Owner` object.
-```
-(django-someHash) $ ./manage.py startapp owner
-
-```
-
-Don't forget to add it to the list of `INSTALLED_APPS` in `settings.py`.
-```
-INSTALLED_APPS = [
-
- 'django.contrib.admin',
-
- 'django.contrib.auth',
-
- 'django.contrib.contenttypes',
-
- 'django.contrib.sessions',
-
- 'django.contrib.messages',
-
- 'django.contrib.staticfiles',
-
- 'rest_framework',
-
- 'django_todo',
-
- 'todo',
-
- 'owner'
-
-]
-
-```
-
-If we look at the root of our Django project, we now have two Django apps:
-```
-(django-someHash) $ ls
-
-Pipfile Pipfile.lock django_todo manage.py owner todo
-
-```
-
-In `owner/models.py`, let's build this `Owner` model. As mentioned earlier, it'll have a one-to-one relationship with Django's built-in `User` object. We can enforce this relationship with Django's `models.OneToOneField`
-```
-# owner/models.py
-
-from django.db import models
-
-from django.contrib.auth.models import User
-
-import secrets
-
-
-
-class Owner(models.Model):
-
- """The object that owns tasks."""
-
- user = models.OneToOneField(User, on_delete=models.CASCADE)
-
- token = models.CharField(max_length=256)
-
-
-
- def __init__(self, *args, **kwargs):
-
- """On construction, set token."""
-
- self.token = secrets.token_urlsafe(64)
-
- super().__init__(*args, **kwargs)
-
-```
-
-This says the `Owner` object is linked to the `User` object, with one `owner` instance per `user` instance. `on_delete=models.CASCADE` dictates that if the corresponding `User` gets deleted, the `Owner` instance it's linked to will also get deleted. Let's run `makemigrations` and `migrate` to bake this new model into our database.
-```
-(django-someHash) $ ./manage.py makemigrations
-
-Migrations for 'owner':
-
- owner/migrations/0001_initial.py
-
- - Create model Owner
-
-(django-someHash) $ ./manage.py migrate
-
-Operations to perform:
-
- Apply all migrations: admin, auth, contenttypes, owner, sessions, todo
-
-Running migrations:
-
- Applying owner.0001_initial... OK
-
-```
-
-Now our `Owner` needs to own some `Task` objects. It'll be very similar to the `OneToOneField` seen above, except that we'll stick a `ForeignKey` field on the `Task` object pointing to an `Owner`.
-```
-# todo/models.py
-
-from django.db import models
-
-from owner.models import Owner
-
-
-
-class Task(models.Model):
-
- """Tasks for the To Do list."""
-
- name = models.CharField(max_length=256)
-
- note = models.TextField(blank=True, null=True)
-
- creation_date = models.DateTimeField(auto_now_add=True)
-
- due_date = models.DateTimeField(blank=True, null=True)
-
- completed = models.BooleanField(default=False)
-
- owner = models.ForeignKey(Owner, on_delete=models.CASCADE)
-
-```
-
-Every To-Do List task has exactly one owner who can own multiple tasks. When that owner is deleted, any task they own goes with them.
-
-Let's now run `makemigrations` to take a new snapshot of our data model setup, then `migrate` to apply those changes to our database.
-```
-(django-someHash) django $ ./manage.py makemigrations
-
-You are trying to add a non-nullable field 'owner' to task without a default; we can't do that (the database needs something to populate existing rows).
-
-Please select a fix:
-
- 1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
-
- 2) Quit, and let me add a default in models.py
-
-```
-
-Oh no! We have a problem! What happened? Well, when we created the `Owner` object and added it as a `ForeignKey` to `Task`, we basically required that every `Task` requires an `Owner`. However, the first migration we made for the `Task` object didn't include that requirement. So, even though there's no data in our database's table, Django is doing a pre-check on our migrations to make sure they're compatible and this new migration we're proposing is not.
-
-There are a few ways to deal with this sort of problem:
-
- 1. Blow away the current migration and build a new one that includes the current model configuration
- 2. Add a default value to the `owner` field on the `Task` object
- 3. Allow tasks to have `NULL` values for the `owner` field.
-
-
-
-Option 2 wouldn't make much sense here; we'd be proposing that any `Task` that was created would, by default, be linked to some default owner despite none necessarily existing.
-
-Option 1 would require us to destroy and rebuild our migrations. We should leave those alone.
-
-Let's go with option 3. In this circumstance, it won't be the end of the world if we allow the `Task` table to have null values for the owners; any tasks created from this point forward will necessarily have an owner. If you're in a situation where that isn't an acceptable schema for your database table, blow away your migrations, drop the table, and rebuild the migrations.
-```
-# todo/models.py
-
-from django.db import models
-
-from owner.models import Owner
-
-
-
-class Task(models.Model):
-
- """Tasks for the To Do list."""
-
- name = models.CharField(max_length=256)
-
- note = models.TextField(blank=True, null=True)
-
- creation_date = models.DateTimeField(auto_now_add=True)
-
- due_date = models.DateTimeField(blank=True, null=True)
-
- completed = models.BooleanField(default=False)
-
- owner = models.ForeignKey(Owner, on_delete=models.CASCADE, null=True)
-
-(django-someHash) $ ./manage.py makemigrations
-
-Migrations for 'todo':
-
- todo/migrations/0002_task_owner.py
-
- - Add field owner to task
-
-(django-someHash) $ ./manage.py migrate
-
-Operations to perform:
-
- Apply all migrations: admin, auth, contenttypes, owner, sessions, todo
-
-Running migrations:
-
- Applying todo.0002_task_owner... OK
-
-```
-
-Woo! We have our models! Welcome to the Django way of declaring objects.
-
-For good measure, let's ensure that whenever a `User` is made, it's automatically linked with a new `Owner` object. We can do this using Django's `signals` system. Basically, we say exactly what we intend: "When we get the signal that a new `User` has been constructed, construct a new `Owner` and set that new `User` as that `Owner`'s `user` field." In practice that looks like:
-```
-# owner/models.py
-
-from django.contrib.auth.models import User
-
-from django.db import models
-
-from django.db.models.signals import post_save
-
-from django.dispatch import receiver
-
-
-
-import secrets
-
-
-
-
-
-class Owner(models.Model):
-
- """The object that owns tasks."""
-
- user = models.OneToOneField(User, on_delete=models.CASCADE)
-
- token = models.CharField(max_length=256)
-
-
-
- def __init__(self, *args, **kwargs):
-
- """On construction, set token."""
-
- self.token = secrets.token_urlsafe(64)
-
- super().__init__(*args, **kwargs)
-
-
-
-
-
-@receiver(post_save, sender=User)
-
-def link_user_to_owner(sender, **kwargs):
-
- """If a new User is saved, create a corresponding Owner."""
-
- if kwargs['created']:
-
- owner = Owner(user=kwargs['instance'])
-
- owner.save()
-
-```
-
-We set up a function that listens for signals to be sent from the `User` object built into Django. It's waiting for just after a `User` object has been saved. This can come from either a new `User` or an update to an existing `User`; we discern between the two scenarios within the listening function.
-
-If the thing sending the signal was a newly created instance, `kwargs['created']` will have the value of `True`. We only want to do something if this is `True`. If it's a new instance, we create a new `Owner`, setting its `user` field to be the new `User` instance that was created. After that, we `save()` the new `Owner`. This will commit our change to the database if all is well. It'll fail if the data doesn't validate against the fields we declared.
-
-Now let's talk about how we're going to access the data.
-
-### Accessing model data
-
-In the Flask, Pyramid, and Tornado frameworks, we accessed model data by running queries against some database session. Maybe it was attached to a `request` object, maybe it was a standalone `session` object. Regardless, we had to establish a live connection to the database and query on that connection.
-
-This isn't the way Django works. Django, by default, doesn't leverage any third-party object-relational mapping (ORM) to converse with the database. Instead, Django allows the model classes to maintain their own conversations with the database.
-
-Every model class that inherits from `django.db.models.Model` will have attached to it an `objects` object. This will take the place of the `session` or `dbsession` we've become so familiar with. Let's open the special shell that Django gives us and investigate how this `objects` object works.
-```
-(django-someHash) $ ./manage.py shell
-
-Python 3.7.0 (default, Jun 29 2018, 20:13:13)
-
-[Clang 9.1.0 (clang-902.0.39.2)] on darwin
-
-Type "help", "copyright", "credits" or "license" for more information.
-
-(InteractiveConsole)
-
->>>
-
-```
-
-The Django shell is different from a normal Python shell in that it's aware of the Django project we've been building and can do easy imports of our models, views, settings, etc. without having to worry about installing a package. We can access our models with a simple `import`.
-```
->>> from owner.models import Owner
-
->>> Owner
-
-
-
-```
-
-Currently, we have no `Owner` instances. We can tell by querying for them with `Owner.objects.all()`.
-```
->>> Owner.objects.all()
-
-
-
-```
-
-Anytime we run a query method on the `.objects` object, we'll get a `QuerySet` back. For our purposes, it's effectively a `list`, and this `list` is showing us that it's empty. Let's make an `Owner` by making a `User`.
-```
->>> from django.contrib.auth.models import User
-
->>> new_user = User(username='kenyattamurphy', email='kenyatta.murphy@gmail.com')
-
->>> new_user.set_password('wakandaforever')
-
->>> new_user.save()
-
-```
-
-If we query for all of our `Owner`s now, we should find Kenyatta.
-```
->>> Owner.objects.all()
-
-]>
-
-```
-
-Yay! We've got data!
-
-### Serializing models
-
-We'll be passing data back and forth beyond just "Hello World." As such, we'll want to see some sort of JSON-ified output that represents that data well. Taking that object's data and transforming it into a JSON object for submission across HTTP is a version of data serialization. In serializing data, we're taking the data we currently have and reformatting it to fit some standard, more-easily-digestible form.
-
-If I were doing this with Flask, Pyramid, and Tornado, I'd create a new method on each model to give the user direct access to call `to_json()`. The only job of `to_json()` would be to return a JSON-serializable (i.e. numbers, strings, lists, dicts) dictionary with whatever fields I want to be displayed for the object in question.
-
-It'd probably look something like this for the `Task` object:
-```
-class Task(Base):
-
- ...all the fields...
-
-
-
- def to_json(self):
-
- """Convert task attributes to a JSON-serializable dict."""
-
- return {
-
- 'id': self.id,
-
- 'name': self.name,
-
- 'note': self.note,
-
- 'creation_date': self.creation_date.strftime('%m/%d/%Y %H:%M:%S'),
-
- 'due_date': self.due_date.strftime('%m/%d/%Y %H:%M:%S'),
-
- 'completed': self.completed,
-
- 'user': self.user_id
-
- }
-
-```
-
-It's not fancy, but it does the job.
-
-Django REST Framework, however, provides us with an object that'll not only do that for us but also validate inputs when we want to create new object instances or update existing ones. It's called the [ModelSerializer][15].
-
-Django REST Framework's `ModelSerializer` is effectively documentation for our models. They don't have lives of their own if there are no models attached (for that there's the [Serializer][16] class). Their main job is to accurately represent our model and make the conversion to JSON thoughtless when our model's data needs to be serialized and sent over a wire.
-
-Django REST Framework's `ModelSerializer` works best for simple objects. As an example, imagine that we didn't have that `ForeignKey` on the `Task` object. We could create a serializer for our `Task` that would convert its field values to JSON as necessary with the following declaration:
-```
-# todo/serializers.py
-
-from rest_framework import serializers
-
-from todo.models import Task
-
-
-
-class TaskSerializer(serializers.ModelSerializer):
-
- """Serializer for the Task model."""
-
-
-
- class Meta:
-
- model = Task
-
- fields = ('id', 'name', 'note', 'creation_date', 'due_date', 'completed')
-
-```
-
-Inside our new `TaskSerializer`, we create a `Meta` class. `Meta`'s job here is just to hold information (or metadata) about the thing we're attempting to serialize. Then, we note the specific fields that we want to show. If we wanted to show all the fields, we could just shortcut the process and use `'__all__'`. We could, alternatively, use the `exclude` keyword instead of `fields` to tell Django REST Framework that we want every field except for a select few. We can have as many serializers as we like, so maybe we want one for a small subset of fields and one for all the fields? Go wild here.
-
-In our case, there is a relation between each `Task` and its owner `Owner` that must be reflected here. As such, we need to borrow the `serializers.PrimaryKeyRelatedField` object to specify that each `Task` will have an `Owner` and that relationship is one-to-one. Its owner will be found from the set of all owners that exists. We get that set by doing a query for those owners and returning the results we want to be associated with this serializer: `Owner.objects.all()`. We also need to include `owner` in the list of fields, as we always need an `Owner` associated with a `Task`
-```
-# todo/serializers.py
-
-from rest_framework import serializers
-
-from todo.models import Task
-
-from owner.models import Owner
-
-
-
-class TaskSerializer(serializers.ModelSerializer):
-
- """Serializer for the Task model."""
-
- owner = serializers.PrimaryKeyRelatedField(queryset=Owner.objects.all())
-
-
-
- class Meta:
-
- model = Task
-
- fields = ('id', 'name', 'note', 'creation_date', 'due_date', 'completed', 'owner')
-
-```
-
-Now that this serializer is built, we can use it for all the CRUD operations we'd like to do for our objects:
-
- * If we want to `GET` a JSONified version of a specific `Task`, we can do `TaskSerializer(some_task).data`
- * If we want to accept a `POST` with the appropriate data to create a new `Task`, we can use `TaskSerializer(data=new_data).save()`
- * If we want to update some existing data with a `PUT`, we can say `TaskSerializer(existing_task, data=data).save()`
-
-
-
-We're not including `delete` because we don't really need to do anything with information for a `delete` operation. If you have access to an object you want to delete, just say `object_instance.delete()`.
-
-Here is an example of what some serialized data might look like:
-```
->>> from todo.models import Task
-
->>> from todo.serializers import TaskSerializer
-
->>> from owner.models import Owner
-
->>> from django.contrib.auth.models import User
-
->>> new_user = User(username='kenyatta', email='kenyatta@gmail.com')
-
->>> new_user.save_password('wakandaforever')
-
->>> new_user.save() # creating the User that builds the Owner
-
->>> kenyatta = Owner.objects.first() # grabbing the Owner that is kenyatta
-
->>> new_task = Task(name="Buy roast beef for the Sunday potluck", owner=kenyatta)
-
->>> new_task.save()
-
->>> TaskSerializer(new_task).data
-
-{'id': 1, 'name': 'Go to the supermarket', 'note': None, 'creation_date': '2018-07-31T06:00:25.165013Z', 'due_date': None, 'completed': False, 'owner': 1}
-
-```
-
-There's a lot more you can do with the `ModelSerializer` objects, and I suggest checking [the docs][17] for those greater capabilities. Otherwise, this is as much as we need. It's time to dig into some views.
-
-### Views for reals
-
-We've built the models and the serializers, and now we need to set up the views and URLs for our application. After all, we can't do anything with an application that has no views. We've already seen an example with the `HelloWorld` view above. However, that's always a contrived, proof-of-concept example and doesn't really show what can be done with Django REST Framework's views. Let's clear out the `HelloWorld` view and URL so we can start fresh with our views.
-
-The first view we'll build is the `InfoView`. As in the previous frameworks, we just want to package and send out a dictionary of our proposed routes. The view itself can live in `django_todo.views` since it doesn't pertain to a specific model (and thus doesn't conceptually belong in a specific app).
-```
-# django_todo/views.py
-
-from rest_framework.response import JsonResponse
-
-from rest_framework.views import APIView
-
-
-
-class InfoView(APIView):
-
- """List of routes for this API."""
-
- def get(self, request):
-
- output = {
-
- 'info': 'GET /api/v1',
-
- 'register': 'POST /api/v1/accounts',
-
- 'single profile detail': 'GET /api/v1/accounts/',
-
- 'edit profile': 'PUT /api/v1/accounts/',
-
- 'delete profile': 'DELETE /api/v1/accounts/',
-
- 'login': 'POST /api/v1/accounts/login',
-
- 'logout': 'GET /api/v1/accounts/logout',
-
- "user's tasks": 'GET /api/v1/accounts//tasks',
-
- "create task": 'POST /api/v1/accounts//tasks',
-
- "task detail": 'GET /api/v1/accounts//tasks/',
-
- "task update": 'PUT /api/v1/accounts//tasks/',
-
- "delete task": 'DELETE /api/v1/accounts//tasks/'
-
- }
-
- return JsonResponse(output)
-
-```
-
-This is pretty much identical to what we had in Tornado. Let's hook it up to an appropriate route and be on our way. For good measure, we'll also remove the `admin/` route, as we won't be using the Django administrative backend here.
-```
-# in django_todo/urls.py
-
-from django_todo.views import InfoView
-
-from django.urls import path
-
-
-
-urlpatterns = [
-
- path('api/v1', InfoView.as_view(), name="info"),
-
-]
-
-```
-
-#### Connecting models to views
-
-Let's figure out the next URL, which will be the endpoint for either creating a new `Task` or listing a user's existing tasks. This should exist in a `urls.py` in the `todo` app since this has to deal specifically with `Task` objects instead of being a part of the whole project.
-```
-# in todo/urls.py
-
-from django.urls import path
-
-from todo.views import TaskListView
-
-
-
-urlpatterns = [
-
- path('', TaskListView.as_view(), name="list_tasks")
-
-]
-
-```
-
-What's the deal with this route? We didn't specify a particular user or much of a path at all. Since there would be a couple of routes requiring the base path `/api/v1/accounts//tasks`, why write it again and again when we can just write it once?
-
-Django allows us to take a whole suite of URLs and import them into the base `django_todo/urls.py` file. We can then give every one of those imported URLs the same base path, only worrying about the variable parts when, you know, they vary.
-```
-# in django_todo/urls.py
-
-from django.urls import include, path
-
-from django_todo.views import InfoView
-
-
-
-urlpatterns = [
-
- path('api/v1', InfoView.as_view(), name="info"),
-
- path('api/v1/accounts//tasks', include('todo.urls'))
-
-]
-
-```
-
-And now every URL coming from `todo/urls.py` will be prefixed with the path `api/v1/accounts//tasks`.
-
-Let's build out the view in `todo/views.py`
-```
-# todo/views.py
-
-from django.shortcuts import get_object_or_404
-
-from rest_framework.response import JsonResponse
-
-from rest_framework.views import APIView
-
-
-
-from owner.models import Owner
-
-from todo.models import Task
-
-from todo.serializers import TaskSerializer
-
-
-
-
-
-class TaskListView(APIView):
-
- def get(self, request, username, format=None):
-
- """Get all of the tasks for a given user."""
-
- owner = get_object_or_404(Owner, user__username=username)
-
- tasks = Task.objects.filter(owner=owner).all()
-
- serialized = TaskSerializer(tasks, many=True)
-
- return JsonResponse({
-
- 'username': username,
-
- 'tasks': serialized.data
-
- })
-
-```
-
-There's a lot going on here in a little bit of code, so let's walk through it.
-
-We start out with the same inheritance of the `APIView` that we've been using, laying the groundwork for what will be our view. We override the same `get` method we've overridden before, adding a parameter that allows our view to receive the `username` from the incoming request.
-
-Our `get` method will then use that `username` to grab the `Owner` associated with that user. This `get_object_or_404` function allows us to do just that, with a little something special added for ease of use.
-
-It would make sense that there's no point in looking for tasks if the specified user can't be found. In fact, we'd want to return a 404 error. `get_object_or_404` gets a single object based on whatever criteria we pass in and either returns that object or raises an [Http404 exception][18]. We can set that criteria based on attributes of the object. The `Owner` objects are all attached to a `User` through their `user` attribute. We don't have a `User` object to search with, though. We only have a `username`. So, we say to `get_object_or_404` "when you look for an `Owner`, check to see that the `User` attached to it has the `username` that I want" by specifying `user__username`. That's TWO underscores. When filtering through a QuerySet, the two underscores mean "attribute of this nested object." Those attributes can be as deeply nested as needed.
-
-We now have the `Owner` corresponding to the given username. We use that `Owner` to filter through all the tasks, only retrieving the ones it owns with `Task.objects.filter`. We could've used the same nested-attribute pattern that we did with `get_object_or_404` to drill into the `User` connected to the `Owner` connected to the `Tasks` (`tasks = Task.objects.filter(owner__user__username=username).all()`) but there's no need to get that wild with it.
-
-`Task.objects.filter(owner=owner).all()` will provide us with a `QuerySet` of all the `Task` objects that match our query. Great. The `TaskSerializer` will then take that `QuerySet` and all its data, along with the flag of `many=True` to notify it as being a collection of items instead of just one item, and return a serialized set of results. Effectively a list of dictionaries. Finally, we provide the outgoing response with the JSON-serialized data and the username used for the query.
-
-#### Handling the POST request
-
-The `post` method will look somewhat different from what we've seen before.
-```
-# still in todo/views.py
-
-# ...other imports...
-
-from rest_framework.parsers import JSONParser
-
-from datetime import datetime
-
-
-
-class TaskListView(APIView):
-
- def get(self, request, username, format=None):
-
- ...
-
-
-
- def post(self, request, username, format=None):
-
- """Create a new Task."""
-
- owner = get_object_or_404(Owner, user__username=username)
-
- data = JSONParser().parse(request)
-
- data['owner'] = owner.id
-
- if data['due_date']:
-
- data['due_date'] = datetime.strptime(data['due_date'], '%d/%m/%Y %H:%M:%S')
-
-
-
- new_task = TaskSerializer(data=data)
-
- if new_task.is_valid():
-
- new_task.save()
-
- return JsonResponse({'msg': 'posted'}, status=201)
-
-
-
- return JsonResponse(new_task.errors, status=400)
-
-```
-
-When we receive data from the client, we parse it into a dictionary using `JSONParser().parse(request)`. We add the owner to the data and format the `due_date` for the task if one exists.
-
-Our `TaskSerializer` does the heavy lifting. It first takes in the incoming data and translates it into the fields we specified on the model. It then validates that data to make sure it fits the specified fields. If the data being attached to the new `Task` is valid, it constructs a new `Task` object with that data and commits it to the database. We then send back an appropriate "Yay! We made a new thing!" response. If not, we collect the errors that `TaskSerializer` generated and send those back to the client with a `400 Bad Request` status code.
-
-If we were to build out the `put` view for updating a `Task`, it would look very similar to this. The main difference would be that when we instantiate the `TaskSerializer`, instead of just passing in the new data, we'd pass in the old object and the new data for that object like `TaskSerializer(existing_task, data=data)`. We'd still do the validity check and send back the responses we want to send back.
-
-### Wrapping up
-
-Django as a framework is highly customizable, and everyone has their own way of stitching together a Django project. The way I've written it out here isn't necessarily the exact way that a Django project needs to be set up; it's just a) what I'm familiar with, and b) what leverages Django's management system. Django projects grow in complexity as you separate concepts into their own little silos. You do that so it's easier for multiple people to contribute to the overall project without stepping on each other's toes.
-
-The vast map of files that is a Django project, however, doesn't make it more performant or naturally predisposed to a microservice architecture. On the contrary, it can very easily become a confusing monolith. That may still be useful for your project. It may also make it harder for your project to be manageable, especially as it grows.
-
-Consider your options carefully and use the right tool for the right job. For a simple project like this, Django likely isn't the right tool.
-
-Django is meant to handle multiple sets of models that cover a variety of different project areas that may share some common ground. This project is a small, two-model project with a handful of routes. If we were to build this out more, we'd only have seven routes and still the same two models. It's hardly enough to justify a full Django project.
-
-It would be a great option if we expected this project to expand. This is not one of those projects. This is choosing a flamethrower to light a candle. It's absolute overkill.
-
-Still, a web framework is a web framework, regardless of which one you use for your project. It can take in requests and respond as well as any other, so you do as you wish. Just be aware of what overhead comes with your choice of framework.
-
-That's it! We've reached the end of this series! I hope it has been an enlightening adventure and will help you make more than just the most-familiar choice when you're thinking about how to build out your next project. Make sure to read the documentation for each framework to expand on anything covered in this series (as it's not even the least bit comprehensive). There's a wide world of stuff to get into for each. Happy coding!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/8/django-framework
-
-作者:[Nicholas Hunt-Walker][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/nhuntwalker
-[1]:https://opensource.com/article/18/5/pyramid-framework
-[2]:https://opensource.com/article/18/4/flask
-[3]:https://opensource.com/article/18/6/tornado-framework
-[4]:https://www.djangoproject.com
-[5]:https://djangopackages.org/
-[6]:http://www.django-rest-framework.org/
-[7]:http://gunicorn.org/
-[8]:https://docs.pylonsproject.org/projects/waitress/en/latest/
-[9]:https://uwsgi-docs.readthedocs.io/en/latest/
-[10]:https://docs.djangoproject.com/en/2.0/ref/settings/#databases
-[11]:https://pypi.org/project/dj-database-url/
-[12]:http://yellerapp.com/posts/2015-01-12-the-worst-server-setup-you-can-make.html
-[13]:https://docs.djangoproject.com/en/2.0/ref/settings/#std:setting-DATABASE-ENGINE
-[14]:https://www.getpostman.com/
-[15]:http://www.django-rest-framework.org/api-guide/serializers/#modelserializer
-[16]:http://www.django-rest-framework.org/api-guide/serializers/
-[17]:http://www.django-rest-framework.org/api-guide/serializers/#serializers
-[18]:https://docs.djangoproject.com/en/2.0/topics/http/views/#the-http404-exception
diff --git a/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md b/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md
deleted file mode 100644
index 1fc4677491..0000000000
--- a/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md
+++ /dev/null
@@ -1,170 +0,0 @@
-A checklist for submitting your first Linux kernel patch
-======
-
-
-
-One of the biggest—and the fastest moving—open source projects, the Linux kernel, is composed of about 53,600 files and nearly 20-million lines of code. With more than 15,600 programmers contributing to the project worldwide, the Linux kernel follows a maintainer model for collaboration.
-
-
-
-In this article, I'll provide a quick checklist of steps involved with making your first kernel contribution, and look at what you should know before submitting a patch. For a more in-depth look at the submission process for contributing your first patch, read the [KernelNewbies First Kernel Patch tutorial][1].
-
-### Contributing to the kernel
-
-#### Step 1: Prepare your system.
-
-Steps in this article assume you have the following tools on your system:
-
-+ Text editor
-+ Email client
-+ Version control system (e.g., git)
-
-#### Step 2: Download the Linux kernel code repository`:`
-```
-git clone -b staging-testing
-
-git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
-
-```
-
-### Copy your current config: ````
-```
-cp /boot/config-`uname -r`* .config
-
-```
-
-### Step 3: Build/install your kernel.
-```
-make -jX
-
-sudo make modules_install install
-
-```
-
-### Step 4: Make a branch and switch to it.
-```
-git checkout -b first-patch
-
-```
-
-### Step 5: Update your kernel to point to the latest code base.
-```
-git fetch origin
-
-git rebase origin/staging-testing
-
-```
-
-### Step 6: Make a change to the code base.
-
-Recompile using `make` command to ensure that your change does not produce errors.
-
-### Step 7: Commit your changes and create a patch.
-```
-git add
-
-git commit -s -v
-
-git format-patch -o /tmp/ HEAD^
-
-```
-
-
-
-The subject consists of the path to the file name separated by colons, followed by what the patch does in the imperative tense. After a blank line comes the description of the patch and the mandatory signed off tag and, lastly, a diff of your patch.
-
-Here is another example of a simple patch:
-
-
-
-Next, send the patch [using email from the command line][2] (in this case, Mutt): ``
-```
-mutt -H /tmp/0001-
-
-```
-
-To know the list of maintainers to whom to send the patch, use the [get_maintainer.pl script][11].
-
-
-### What to know before submitting your first patch
-
- * [Greg Kroah-Hartman][3]'s [staging tree][4] is a good place to submit your [first patch][1] as he accepts easy patches from new contributors. When you get familiar with the patch-sending process, you could send subsystem-specific patches with increased complexity.
-
- * You also could start with correcting coding style issues in the code. To learn more, read the [Linux kernel coding style documentation][5].
-
- * The script [checkpatch.pl][6] detects coding style errors for you. For example, run:
- ```
- perl scripts/checkpatch.pl -f drivers/staging/android/* | less
-
- ```
-
- * You could complete TODOs left incomplete by developers:
- ```
- find drivers/staging -name TODO
- ```
-
- * [Coccinelle][7] is a helpful tool for pattern matching.
-
- * Read the [kernel mailing archives][8].
-
- * Go through the [linux.git log][9] to see commits by previous authors for inspiration.
-
- * Note: Do not top-post to communicate with the reviewer of your patch! Here's an example:
-
-**Wrong way:**
-
-Chris,
-_Yes let’s schedule the meeting tomorrow, on the second floor._
-> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
-> Hey John, I had some questions:
-> 1\. Do you want to schedule the meeting tomorrow?
-> 2\. On which floor in the office?
-> 3\. What time is suitable to you?
-
-(Notice that the last question was unintentionally left unanswered in the reply.)
-
-**Correct way:**
-
-Chris,
-See my answers below...
-> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
-> Hey John, I had some questions:
-> 1\. Do you want to schedule the meeting tomorrow?
-_Yes tomorrow is fine._
-> 2\. On which floor in the office?
-_Let's keep it on the second floor._
-> 3\. What time is suitable to you?
-_09:00 am would be alright._
-
-(All questions were answered, and this way saves reading time.)
-
- * The [Eudyptula challenge][10] is a great way to learn kernel basics.
-
-
-To learn more, read the [KernelNewbies First Kernel Patch tutorial][1]. After that, if you still have any questions, ask on the [kernelnewbies mailing list][12] or in the [#kernelnewbies IRC channel][13].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/8/first-linux-kernel-patch
-
-作者:[Sayli Karnik][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/sayli
-[1]:https://kernelnewbies.org/FirstKernelPatch
-[2]:https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients
-[3]:https://twitter.com/gregkh
-[4]:https://www.kernel.org/doc/html/v4.15/process/2.Process.html
-[5]:https://www.kernel.org/doc/html/v4.10/process/coding-style.html
-[6]:https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl
-[7]:http://coccinelle.lip6.fr/
-[8]:linux-kernel@vger.kernel.org
-[9]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/
-[10]:http://eudyptula-challenge.org/
-[11]:https://github.com/torvalds/linux/blob/master/scripts/get_maintainer.pl
-[12]:https://kernelnewbies.org/MailingList
-[13]:https://kernelnewbies.org/IRC
diff --git a/sources/tech/20180823 CLI- improved.md b/sources/tech/20180823 CLI- improved.md
index d06bb1b2aa..52edaa28c8 100644
--- a/sources/tech/20180823 CLI- improved.md
+++ b/sources/tech/20180823 CLI- improved.md
@@ -1,3 +1,5 @@
+Translating by DavidChenLiang
+
CLI: improved
======
I'm not sure many web developers can get away without visiting the command line. As for me, I've been using the command line since 1997, first at university when I felt both super cool l33t-hacker and simultaneously utterly out of my depth.
diff --git a/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md b/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md
deleted file mode 100644
index aa4ec0a655..0000000000
--- a/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md
+++ /dev/null
@@ -1,131 +0,0 @@
-How To Easily And Safely Manage Cron Jobs In Linux
-======
-
-
-
-When it comes to schedule tasks in Linux, which utility comes to your mind first? Yeah, you guessed it right. **Cron!** The cron utility helps you to schedule commands/tasks at specific time in Unix-like operating systems. We already published a [**beginners guides to Cron jobs**][1]. I have a few years experience in Linux, so setting up cron jobs is no big deal for me. But, it is not piece of cake for newbies. The noobs may unknowingly do small mistakes while editing plain text crontab and bring down all cron jobs. Just in case, if you think you might mess up with your cron jobs, there is a good alternative way. Say hello to **Crontab UI** , a web-based tool to easily and safely manage cron jobs in Unix-like operating systems.
-
-You don’t need to manually edit the crontab file to create, delete and manage cron jobs. Everything can be done via a web browser with a couple mouse clicks. Crontab UI allows you to easily create, edit, pause, delete, backup cron jobs, and even import, export and deploy jobs on other machines without much hassle. Error log, mailing and hooks support also possible. It is free, open source and written using NodeJS.
-
-### Installing Crontab UI
-
-Installing Crontab UI is just a one-liner command. Make sure you have installed NPM. If you haven’t install npm yet, refer the following link.
-
-Next, run the following command to install Crontab UI.
-```
-$ npm install -g crontab-ui
-
-```
-
-It’s that simple. Let us go ahead and see how to manage cron jobs using Crontab UI.
-
-### Easily And Safely Manage Cron Jobs In Linux
-
-To launch Crontab UI, simply run:
-```
-$ crontab-ui
-
-```
-
-You will see the following output:
-```
-Node version: 10.8.0
-Crontab UI is running at http://127.0.0.1:8000
-
-```
-
-Now, open your web browser and navigate to ****. Make sure the port no 8000 is allowed in your firewall/router.
-
-Please note that you can only access Crontab UI web dashboard within the local system itself.
-
-If you want to run Crontab UI with your system’s IP and custom port (so you can access it from any remote system in the network), use the following command instead:
-```
-$ HOST=0.0.0.0 PORT=9000 crontab-ui
-Node version: 10.8.0
-Crontab UI is running at http://0.0.0.0:9000
-
-```
-
-Now, Crontab UI can be accessed from the any system in the nework using URL – **http:// :9000**.
-
-This is how Crontab UI dashboard looks like.
-
-
-
-As you can see in the above screenshot, Crontab UI dashbaord is very simply. All options are self-explanatory.
-
-To exit Crontab UI, press **CTRL+C**.
-
-**Create, edit, run, stop, delete a cron job**
-
-To create a new cron job, click on “New” button. Enter your cron job details and click Save.
-
- 1. Name the cron job. It is optional.
- 2. The full command you want to run.
- 3. Choose schedule time. You can either choose the quick schedule time, (such as Startup, Hourly, Daily, Weekly, Monthly, Yearly) or set the exact time to run the command. After you choosing the schedule time, the syntax of the cron job will be shown in **Jobs** field.
- 4. Choose whether you want to enable error logging for the particular job.
-
-
-
-Here is my sample cron job.
-
-
-
-As you can see, I have setup a cron job to clear pacman cache at every month.
-
-Similarly, you can create any number of jobs as you want. You will see all cron jobs in the dashboard.
-
-
-
-If you wanted to change any parameter in a cron job, just click on the **Edit** button below the job and modify the parameters as you wish. To run a job immediately, click on the button that says **Run**. To stop a job, click **Stop** button. You can view the log details of any job by clicking on the **Log** button. If the job is no longer required, simply press **Delete** button.
-
-**Backup cron jobs**
-
-To backup all cron jobs, press the Backup from main dashboard and choose OK to confirm the backup.
-
-
-
-You can use this backup in case you messed with the contents of the crontab file.
-
-**Import/Export cron jobs to other systems**
-
-Another notable feature of Crontab UI is you can import, export and deploy cron jobs to other systems. If you have multiple systems on your network that requires the same cron jobs, just press **Export** button and choose the location to save the file. All contents of crontab file will be saved in a file named **crontab.db**.
-
-Here is the contents of the crontab.db file.
-```
-$ cat Downloads/crontab.db
-{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"}
-
-```
-
-Then you can transfer the entire crontab.db file to some other system and import its to the new system. You don’t need to manually create cron jobs in all systems. Just create them in one system and export and import all of them to every system on the network.
-
-**Get the contents from or save to existing crontab file**
-
-There are chances that you might have already created some cron jobs using **crontab** command. If so, you can retrieve contents of the existing crontab file by click on the **“Get from crontab”** button in main dashboard.
-
-
-
-Similarly, you can save the newly created jobs using Crontab UI utility to existing crontab file in your system. To do so, just click **Save to crontab** option in the dashboard.
-
-See? Managing cron jobs is not that complicated. Any newbie user can easily maintain any number of jobs without much hassle using Crontab UI. Give it a try and let us know what do you think about this tool. I am all ears!
-
-And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
diff --git a/sources/tech/20180824 What Stable Kernel Should I Use.md b/sources/tech/20180824 What Stable Kernel Should I Use.md
deleted file mode 100644
index bfd64a2ec2..0000000000
--- a/sources/tech/20180824 What Stable Kernel Should I Use.md
+++ /dev/null
@@ -1,139 +0,0 @@
-What Stable Kernel Should I Use?
-======
-I get a lot of questions about people asking me about what stable kernel should they be using for their product/device/laptop/server/etc. all the time. Especially given the now-extended length of time that some kernels are being supported by me and others, this isn’t always a very obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use what ever kernel version you want, but here’s what I recommend.
-
-As always, the opinions written here are my own, I speak for no one but myself.
-
-### What kernel to pick
-
-Here’s the my short list of what kernel you should use, raked from best to worst options. I’ll go into the details of all of these below, but if you just want the summary of all of this, here it is:
-
-Hierarchy of what kernel to use, from best solution to worst:
-
- * Supported kernel from your favorite Linux distribution
- * Latest stable release
- * Latest LTS release
- * Older LTS release that is still being maintained
-
-
-
-What kernel to never use:
-
- * Unmaintained kernel release
-
-
-
-To give numbers to the above, today, as of August 24, 2018, the front page of kernel.org looks like this:
-
-![][1]
-
-So, based on the above list that would mean that:
-
- * 4.18.5 is the latest stable release
- * 4.14.67 is the latest LTS release
- * 4.9.124, 4.4.152, and 3.16.57 are the older LTS releases that are still being maintained
- * 4.17.19 and 3.18.119 are “End of Life” kernels that have had a release in the past 60 days, and as such stick around on the kernel.org site for those who still might want to use them.
-
-
-
-Quite easy, right?
-
-Ok, now for some justification for all of this:
-
-### Distribution kernels
-
-The best solution for almost all Linux users is to just use the kernel from your favorite Linux distribution. Personally, I prefer the community based Linux distributions that constantly roll along with the latest updated kernel and it is supported by that developer community. Distributions in this category are Fedora, openSUSE, Arch, Gentoo, CoreOS, and others.
-
-All of these distributions use the latest stable upstream kernel release and make sure that any needed bugfixes are applied on a regular basis. That is the one of the most solid and best kernel that you can use when it comes to having the latest fixes ([remember all fixes are security fixes][2]) in it.
-
-There are some community distributions that take a bit longer to move to a new kernel release, but eventually get there and support the kernel they currently have quite well. Those are also great to use, and examples of these are Debian and Ubuntu.
-
-Just because I did not list your favorite distro here does not mean its kernel is not good. Look on the web site for the distro and make sure that the kernel package is constantly updated with the latest security patches, and all should be well.
-
-Lots of people seem to like the old, “traditional” model of a distribution and use RHEL, SLES, CentOS or the “LTS” Ubuntu release. Those distros pick a specific kernel version and then camp out on it for years, if not decades. They do loads of work backporting the latest bugfixes and sometimes new features to these kernels, all in a Quixote quest to keep the version number from never being changed, despite having many thousands of changes on top of that older kernel version. This work is a truly thankless job, and the developers assigned to these tasks do some wonderful work in order to achieve these goals. If you like never seeing your kernel version number change, then use these distributions. They usually cost some money to use, but the support you get from these companies is worth it when something goes wrong.
-
-So again, the best kernel you can use is one that someone else supports, and you can turn to for help. Use that support, usually you are already paying for it (for the enterprise distributions), and those companies know what they are doing.
-
-But, if you do not want to trust someone else to manage your kernel for you, or you have hardware that a distribution does not support, then you want to run the Latest stable release:
-
-### Latest stable release
-
-This kernel is the latest one from the Linux kernel developer community that they declare as “stable”. About every three months, the community releases a new stable kernel that contains all of the newest hardware support, the latest performance improvements, as well as the latest bugfixes for all parts of the kernel. Over the next 3 months, bugfixes that go into the next kernel release to be made are backported into this stable release, so that any users of this kernel are sure to get them as soon as possible.
-
-This is usually the kernel that most community distributions use as well, so you can be sure it is tested and has a large audience of users. Also, the kernel community (all 4000+ developers) are willing to help support users of this release, as it is the latest one that they made.
-
-After 3 months, a new kernel is released and you should move to it to ensure that you stay up to date, as support for this kernel is usually dropped a few weeks after the newer release happens.
-
-If you have new hardware that is purchased after the last LTS release came out, you almost are guaranteed to have to run this kernel in order to have it supported. So for desktops or new servers, this is usually the recommended kernel to be running.
-
-### Latest LTS release
-
-If your hardware relies on a vendors out-of-tree patch in order to make it work properly (like almost all embedded devices these days), then the next best kernel to be using is the latest LTS release. That release gets all of the latest kernel fixes that goes into the stable releases where applicable, and lots of users test and use it.
-
-Note, no new features and almost no new hardware support is ever added to these kernels, so if you need to use a new device, it is better to use the latest stable release, not this release.
-
-Also this release is common for users that do not like to worry about “major” upgrades happening on them every 3 months. So they stick to this release and upgrade every year instead, which is a fine practice to follow.
-
-The downsides of using this release is that you do not get the performance improvements that happen in newer kernels, except when you update to the next LTS kernel, potentially a year in the future. That could be significant for some workloads, so be very aware of this.
-
-Also, if you have problems with this kernel release, the first thing that any developer whom you report the issue to is going to ask you to do is, “does the latest stable release have this problem?” So you will need to be aware that support might not be as easy to get as with the latest stable releases.
-
-Now if you are stuck with a large patchset and can not update to a new LTS kernel once a year, perhaps you want the older LTS releases:
-
-### Older LTS release
-
-These releases have traditionally been supported by the community for 2 years, sometimes longer for when a major distribution relies on this (like Debian or SLES). However in the past year, thanks to a lot of suport and investment in testing and infrastructure from Google, Linaro, Linaro member companies, [kernelci.org][3], and others, these kernels are starting to be supported for much longer.
-
-Here’s the latest LTS releases and how long they will be supported for, as shown at [kernel.org/category/releases.html][4] on August 24, 2018:
-
-![][5]
-
-The reason that Google and other companies want to have these kernels live longer is due to the crazy (some will say broken) development model of almost all SoC chips these days. Those devices start their development lifecycle a few years before the chip is released, however that code is never merged upstream, resulting in a brand new chip being released based on a 2 year old kernel. These SoC trees usually have over 2 million lines added to them, making them something that I have started calling “Linux-like” kernels.
-
-If the LTS releases stop happening after 2 years, then support from the community instantly stops, and no one ends up doing bugfixes for them. This results in millions of very insecure devices floating around in the world, not something that is good for any ecosystem.
-
-Because of this dependency, these companies now require new devices to constantly update to the latest LTS releases as they happen for their specific release version (i.e. every 4.9.y release that happens). An example of this is the Android kernel requirements for new devices shipping for the “O” and now “P” releases specified the minimum kernel version allowed, and Android security releases might start to require those “.y” releases to happen more frequently on devices.
-
-I will note that some manufacturers are already doing this today. Sony is one great example of this, updating to the latest 4.4.y release on many of their new phones for their quarterly security release. Another good example is the small company Essential which has been tracking the 4.4.y releases faster than anyone that I know of.
-
-There is one huge caveat when using a kernel like this. The number of security fixes that get backported are not as great as with the latest LTS release, because the traditional model of the devices that use these older LTS kernels is a much more reduced user model. These kernels are not to be used in any type of “general computing” model where you have untrusted users or virtual machines, as the ability to do some of the recent Spectre-type fixes for older releases is greatly reduced, if present at all in some branches.
-
-So again, only use older LTS releases in a device that you fully control, or lock down with a very strong security model (like Android enforces using SELinux and application isolation). Never use these releases on a server with untrusted users, programs, or virtual machines.
-
-Also, support from the community for these older LTS releases is greatly reduced even from the normal LTS releases, if available at all. If you use these kernels, you really are on your own, and need to be able to support the kernel yourself, or rely on you SoC vendor to provide that support for you (note that almost none of them do provide that support, so beware…)
-
-### Unmaintained kernel release
-
-Surprisingly, many companies do just grab a random kernel release, slap it into their product and proceed to ship it in hundreds of thousands of units without a second thought. One crazy example of this would be the Lego Mindstorm systems that shipped a random -rc release of a kernel in their device for some unknown reason. A -rc release is a development release that not even the Linux kernel developers feel is ready for everyone to use just yet, let alone millions of users.
-
-You are of course free to do this if you want, but note that you really are on your own here. The community can not support you as no one is watching all kernel versions for specific issues, so you will have to rely on in-house support for everything that could go wrong. Which for some companies and systems, could be just fine, but be aware of the “hidden” cost this might cause if you do not plan for this up front.
-
-### Summary
-
-So, here’s a short list of different types of devices, and what I would recommend for their kernels:
-
- * Laptop / Desktop: Latest stable release
- * Server: Latest stable release or latest LTS release
- * Embedded device: Latest LTS release or older LTS release if the security model used is very strong and tight.
-
-
-
-And as for me, what do I run on my machines? My laptops run the latest development kernel (i.e. Linus’s development tree) plus whatever kernel changes I am currently working on and my servers run the latest stable release. So despite being in charge of the LTS releases, I don’t run them myself, except in testing systems. I rely on the development and latest stable releases to ensure that my machines are running the fastest and most secure releases that we know how to create at this point in time.
-
---------------------------------------------------------------------------------
-
-via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/
-
-作者:[Greg Kroah-Hartman][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://kroah.com
-[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png
-[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/
-[3]:https://kernelci.org/
-[4]:https://www.kernel.org/category/releases.html
-[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png
diff --git a/sources/tech/20180827 4 tips for better tmux sessions.md b/sources/tech/20180827 4 tips for better tmux sessions.md
deleted file mode 100644
index b6d6a3e4fe..0000000000
--- a/sources/tech/20180827 4 tips for better tmux sessions.md
+++ /dev/null
@@ -1,89 +0,0 @@
-translating by lujun9972
-4 tips for better tmux sessions
-======
-
-
-
-The tmux utility, a terminal multiplexer, lets you treat your terminal as a multi-paned window into your system. You can arrange the configuration, run different processes in each, and generally make better use of your screen. We introduced some readers to this powerful tool [in this earlier article][1]. Here are some tips that will help you get more out of tmux if you’re getting started.
-
-This article assumes your current prefix key is Ctrl+b. If you’ve remapped that prefix, simply substitute your prefix in its place.
-
-### Set your terminal to automatically use tmux
-
-One of the biggest benefits of tmux is being able to disconnect and reconnect to sesions at wilI. This makes remote login sessions more powerful. Have you ever lost a connection and wished you could get back the work you were doing on the remote system? With tmux this problem is solved.
-
-However, you may sometimes find yourself doing work on a remote system, and realize you didn’t start a session. One way to avoid this is to have tmux start or attach every time you login to a system with in interactive shell.
-
-Add this to your remote system’s ~/.bash_profile file:
-
-```
-if [ -z "$TMUX" ]; then
- tmux attach -t default || tmux new -s default
-fi
-```
-
-Then logout of the remote system, and log back in with SSH. You’ll find you’re in a tmux session named default. This session will be regenerated at next login if you exit it. But more importantly, if you detach from it as normal, your work is waiting for you next time you login — especially useful if your connection is interrupted.
-
-Of course you can add this to your local system as well. Note that terminals inside most GUIs won’t use the default session automatically, because they aren’t login shells. While you can change that behavior, it may result in nesting that makes the session less usable, so proceed with caution.
-
-### Use zoom to focus on a single process
-
-While the point of tmux is to offer multiple windows, panes, and processes in a single session, sometimes you need to focus. If you’re in a process and need more space, or to focus on a single task, the zoom command works well. It expands the current pane to take up the entire current window space.
-
-Zoom can be useful in other situations too. For instance, imagine you’re using a terminal window in a graphical desktop. Panes can make it harder to copy and paste multiple lines from inside your tmux session. If you zoom the pane, you can do a clean copy/paste of multiple lines of data with ease.
-
-To zoom into the current pane, hit Ctrl+b, z. When you’re finished with the zoom function, hit the same key combo to unzoom the pane.
-
-### Bind some useful commands
-
-By default tmux has numerous commands available. But it’s helpful to have some of the more common operations bound to keys you can easily remember. Here are some examples you can add to your ~/.tmux.conf file to make sessions more enjoyable:
-
-```
-bind r source-file ~/.tmux.conf \; display "Reloaded config"
-```
-
-This command rereads the commands and bindings in your config file. Once you add this binding, exit any tmux sessions and then restart one. Now after you make any other future changes, simply run Ctrl+b, r and the changes will be part of your existing session.
-
-```
-bind V split-window -h
-bind H split-window
-```
-
-These commands make it easier to split the current window across a vertical axis (note that’s Shift+V) or across a horizontal axis (Shift+H).
-
-If you want to see how all keys are bound, use Ctrl+B, ? to see a list. You may see keys bound in copy-mode first, for when you’re working with copy and paste inside tmux. The prefix mode bindings are where you’ll see ones you’ve added above. Feel free to experiment with your own!
-
-### Use powerline for great justice
-
-[As reported in a previous Fedora Magazine article][2], the powerline utility is a fantastic addition to your shell. But it also has capabilities when used with tmux. Because tmux takes over the entire terminal space, the powerline window can provide more than just a better shell prompt.
-
- [][3]
-
-If you haven’t already, follow the instructions in the [Magazine’s powerline article][4] to install that utility. Then, install the addon [using sudo][5]:
-
-```
-sudo dnf install tmux-powerline
-```
-
-Now restart your session, and you’ll see a spiffy new status line at the bottom. Depending on the terminal width, the default status line now shows your current session ID, open windows, system information, date and time, and hostname. If you change directory into a git-controlled project, you’ll see the branch and color-coded status as well.
-
-Of course, this status bar is highly configurable as well. Enjoy your new supercharged tmux session, and have fun experimenting with it.
-
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/4-tips-better-tmux-sessions/
-
-作者:[Paul W. Frields][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://fedoramagazine.org/author/pfrields/
-[1]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/
-[2]:https://fedoramagazine.org/add-power-terminal-powerline/
-[3]:https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53.png
-[4]:https://fedoramagazine.org/add-power-terminal-powerline/
-[5]:https://fedoramagazine.org/howto-use-sudo/
diff --git a/sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md b/sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
deleted file mode 100644
index bb0479e7fe..0000000000
--- a/sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
+++ /dev/null
@@ -1,50 +0,0 @@
-translating by lujun9972
-Solve "error: failed to commit transaction (conflicting files)" In Arch Linux
-======
-
-
-
-It’s been a month since I upgraded my Arch Linux desktop. Today, I tried to update my Arch Linux system, and ran into an error that said **“error: failed to commit transaction (conflicting files) stfl: /usr/lib/libstfl.so.0 exists in filesystem”**. It looks like one library (/usr/lib/libstfl.so.0) that exists on my filesystem and pacman can’t upgrade it. If you’re encountered with the same error, here is a quick fix to resolve it.
-
-### Solve “error: failed to commit transaction (conflicting files)” In Arch Linux
-
-You have three options.
-
-1. Simply ignore the problematic **stfl** library from being upgraded and try to update the system again. Refer this guide to know [**how to ignore package from being upgraded**][1].
-
-2. Overwrite the package using command:
-```
-$ sudo pacman -Syu --overwrite /usr/lib/libstfl.so.0
-```
-
-3. Remove stfl library file manually and try to upgrade the system again. Please make sure the intended package is not a dependency to any important package. and check the archlinux.org whether are mentions of this conflict.
-```
-$ sudo rm /usr/lib/libstfl.so.0
-```
-
-Now, try to update the system:
-```
-$ sudo pacman -Syu
-```
-
-I chose the third option and just deleted the file and upgraded my Arch Linux system. It works now!
-
-Hope this helps. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-conflicting-files-in-arch-linux/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/safely-ignore-package-upgraded-arch-linux/
diff --git a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md
index c25239b7ba..769f9ba420 100644
--- a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md
+++ b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md
@@ -1,3 +1,4 @@
+Translating by z52527
Publishing Markdown to HTML with MDwiki
======
diff --git a/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md b/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md
deleted file mode 100644
index 11d266e163..0000000000
--- a/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md
+++ /dev/null
@@ -1,196 +0,0 @@
-How To Limit Network Bandwidth In Linux Using Wondershaper
-======
-
-
-
-This tutorial will help you to easily limit network bandwidth and shape your network traffic in Unix-like operating systems. By limiting the network bandwidth usage, you can save unnecessary bandwidth consumption’s by applications, such as package managers (pacman, yum, apt), web browsers, torrent clients, download managers etc., and prevent the bandwidth abuse by a single or multiple users in the network. For the purpose of this tutorial, we will be using a command line utility named **Wondershaper**. Trust me, it is not that hard as you may think. It is one of the easiest and quickest way ever I have come across to limit the Internet or local network bandwidth usage in your own Linux system. Read on.
-
-Please be mindful that the aforementioned utility can only limit the incoming and outgoing traffic of your local network interfaces, not the interfaces of your router or modem. In other words, Wondershaper will only limit the network bandwidth in your local system itself, not any other systems in the network. These utility is mainly designed for limiting the bandwidth of one or more network adapters in your local system. Hope you got my point.
-
-Let us see how to use Wondershaper to shape the network traffic.
-
-### Limit Network Bandwidth In Linux Using Wondershaper
-
-**Wondershaper** is simple script used to limit the bandwidth of your system’s network adapter(s). It limits the bandwidth iproute’s tc command, but greatly simplifies its operation.
-
-**Installing Wondershaper**
-
-To install the latest version, git clone wondershaoer repository:
-
-```
-$ git clone https://github.com/magnific0/wondershaper.git
-
-```
-
-Go to the wondershaper directory and install it as show below
-
-```
-$ cd wondershaper
-
-$ sudo make install
-
-```
-
-And, run the following command to start wondershaper service automatically on every reboot.
-
-```
-$ sudo systemctl enable wondershaper.service
-
-$ sudo systemctl start wondershaper.service
-
-```
-
-You can also install using your distribution’s package manager (official or non-official) if you don’t mind the latest version.
-
-Wondershaper is available in [**AUR**][1], so you can install it in Arch-based systems using AUR helper programs such as [**Yay**][2].
-
-```
-$ yay -S wondershaper-git
-
-```
-
-On Debian, Ubuntu, Linux Mint:
-
-```
-$ sudo apt-get install wondershaper
-
-```
-
-On Fedora:
-
-```
-$ sudo dnf install wondershaper
-
-```
-
-On RHEL, CentOS, enable EPEL repository and install wondershaper as shown below.
-
-```
-$ sudo yum install epel-release
-
-$ sudo yum install wondershaper
-
-```
-
-Finally, start wondershaper service automatically on every reboot.
-
-```
-$ sudo systemctl enable wondershaper.service
-
-$ sudo systemctl start wondershaper.service
-
-```
-
-**Usage**
-
-First, find the name of your network interface. Here are some common ways to find the details of a network card.
-
-```
-$ ip addr
-
-$ route
-
-$ ifconfig
-
-```
-
-Once you find the network card name, you can limit the bandwidth rate as shown below.
-
-```
-$ sudo wondershaper -a -d -u
-
-```
-
-For instance, if your network card name is **enp0s8** and you wanted to limit the bandwidth to **1024 Kbps** for **downloads** and **512 kbps** for **uploads** , the command would be:
-
-```
-$ sudo wondershaper -a enp0s8 -d 1024 -u 512
-
-```
-
-Where,
-
- * **-a** : network card name
- * **-d** : download rate
- * **-u** : upload rate
-
-
-
-To clear the limits from a network adapter, simply run:
-
-```
-$ sudo wondershaper -c -a enp0s8
-
-```
-
-Or
-
-```
-$ sudo wondershaper -c enp0s8
-
-```
-
-Just in case, there are more than one network card available in your system, you need to manually set the download/upload rates for each network interface card as described above.
-
-If you have installed Wondershaper by cloning its GitHub repository, there is a configuration named **wondershaper.conf** exists in **/etc/conf.d/** location. Make sure you have set the download or upload rates by modifying the appropriate values(network card name, download/upload rate) in this file.
-
-```
-$ sudo nano /etc/conf.d/wondershaper.conf
-
-[wondershaper]
-# Adapter
-#
-IFACE="eth0"
-
-# Download rate in Kbps
-#
-DSPEED="2048"
-
-# Upload rate in Kbps
-#
-USPEED="512"
-
-```
-
-Here is the sample before Wondershaper:
-
-After enabling Wondershaper:
-
-As you can see, the download rate has been tremendously reduced after limiting the bandwidth using WOndershaper in my Ubuntu 18.o4 LTS server.
-
-For more details, view the help section by running the following command:
-
-```
-$ wondershaper -h
-
-```
-
-Or, refer man pages.
-
-```
-$ man wondershaper
-
-```
-
-As far as tested, Wondershaper worked just fine as described above. Give it a try and let us know what do you think about this utility.
-
-And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned.
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://aur.archlinux.org/packages/wondershaper-git/
-[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
diff --git a/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md
deleted file mode 100644
index a9d3eb0895..0000000000
--- a/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md
+++ /dev/null
@@ -1,230 +0,0 @@
-LuuMing translating
-How to Use the Netplan Network Configuration Tool on Linux
-======
-
-
-
-For years Linux admins and users have configured their network interfaces in the same way. For instance, if you’re a Ubuntu user, you could either configure the network connection via the desktop GUI or from within the /etc/network/interfaces file. The configuration was incredibly easy and never failed to work. The configuration within that file looked something like this:
-
-```
-auto enp10s0
-
-iface enp10s0 inet static
-
-address 192.168.1.162
-
-netmask 255.255.255.0
-
-gateway 192.168.1.100
-
-dns-nameservers 1.0.0.1,1.1.1.1
-
-```
-
-Save and close that file. Restart networking with the command:
-
-```
-sudo systemctl restart networking
-
-```
-
-Or, if you’re not using a non-systemd distribution, you could restart networking the old fashioned way like so:
-
-```
-sudo /etc/init.d/networking restart
-
-```
-
-Your network will restart and the newly configured interface is good to go.
-
-That’s how it’s been done for years. Until now. With certain distributions (such as Ubuntu Linux 18.04), the configuration and control of networking has changed considerably. Instead of that interfaces file and using the /etc/init.d/networking script, we now turn to [Netplan][1]. Netplan is a command line utility for the configuration of networking on certain Linux distributions. Netplan uses YAML description files to configure network interfaces and, from those descriptions, will generate the necessary configuration options for any given renderer tool.
-
-I want to show you how to use Netplan on Linux, to configure a static IP address and a DHCP address. I’ll be demonstrating on Ubuntu Server 18.04. I will give you one word of warning, the .yaml files you create for Netplan must be consistent in spacing, otherwise they’ll fail to work. You don’t have to use a specific spacing for each line, it just has to remain consistent.
-
-### The new configuration files
-
-Open a terminal window (or log into your Ubuntu Server via SSH). You will find the new configuration files for Netplan in the /etc/netplan directory. Change into that directory with the command cd /etc/netplan. Once in that directory, you will probably only see a single file:
-
-```
-01-netcfg.yaml
-
-```
-
-You can create a new file or edit the default. If you opt to edit the default, I suggest making a copy with the command:
-
-```
-sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak
-
-```
-
-With your backup in place, you’re ready to configure.
-
-### Network Device Name
-
-Before you configure your static IP address, you’ll need to know the name of device to be configured. To do that, you can issue the command ip a and find out which device is to be used (Figure 1).
-
-![netplan][3]
-
-Figure 1: Finding our device name with the ip a command.
-
-[Used with permission][4]
-
-I’ll be configuring ens5 for a static IP address.
-
-### Configuring a Static IP Address
-
-Open the original .yaml file for editing with the command:
-
-```
-sudo nano /etc/netplan/01-netcfg.yaml
-
-```
-
-The layout of the file looks like this:
-
-network:
-
-Version: 2
-
-Renderer: networkd
-
-ethernets:
-
-DEVICE_NAME:
-
-Dhcp4: yes/no
-
-Addresses: [IP/NETMASK]
-
-Gateway: GATEWAY
-
-Nameservers:
-
-Addresses: [NAMESERVER, NAMESERVER]
-
-Where:
-
- * DEVICE_NAME is the actual device name to be configured.
-
- * yes/no is an option to enable or disable dhcp4.
-
- * IP is the IP address for the device.
-
- * NETMASK is the netmask for the IP address.
-
- * GATEWAY is the address for your gateway.
-
- * NAMESERVER is the comma-separated list of DNS nameservers.
-
-
-
-
-Here’s a sample .yaml file:
-
-```
-network:
-
- version: 2
-
- renderer: networkd
-
- ethernets:
-
- ens5:
-
- dhcp4: no
-
- addresses: [192.168.1.230/24]
-
- gateway4: 192.168.1.254
-
- nameservers:
-
- addresses: [8.8.4.4,8.8.8.8]
-
-```
-
-Edit the above to fit your networking needs. Save and close that file.
-
-Notice the netmask is no longer configured in the form 255.255.255.0. Instead, the netmask is added to the IP address.
-
-### Testing the Configuration
-
-Before we apply the change, let’s test the configuration. To do that, issue the command:
-
-```
-sudo netplan try
-
-```
-
-The above command will validate the configuration before applying it. If it succeeds, you will see Configuration accepted. In other words, Netplan will attempt to apply the new settings to a running system. Should the new configuration file fail, Netplan will automatically revert to the previous working configuration. Should the new configuration work, it will be applied.
-
-### Applying the New Configuration
-
-If you are certain of your configuration file, you can skip the try option and go directly to applying the new options. The command for this is:
-
-```
-sudo netplan apply
-
-```
-
-At this point, you can issue the command ip a to see that your new address configurations are in place.
-
-### Configuring DHCP
-
-Although you probably won’t be configuring your server for DHCP, it’s always good to know how to do this. For example, you might not know what static IP addresses are currently available on your network. You could configure the device for DHCP, get an IP address, and then reconfigure that address as static.
-
-To use DHCP with Netplan, the configuration file would look something like this:
-
-```
-network:
-
- version: 2
-
- renderer: networkd
-
- ethernets:
-
- ens5:
-
- Addresses: []
-
- dhcp4: true
-
- optional: true
-
-```
-
-Save and close that file. Test the file with:
-
-```
-sudo netplan try
-
-```
-
-Netplan should succeed and apply the DHCP configuration. You could then issue the ip a command, get the dynamically assigned address, and then reconfigure a static address. Or, you could leave it set to use DHCP (but seeing as how this is a server, you probably won’t want to do that).
-
-Should you have more than one interface, you could name the second .yaml configuration file 02-netcfg.yaml. Netplan will apply the configuration files in numerical order, so 01 will be applied before 02. Create as many configuration files as needed for your server.
-
-### That’s All There Is
-
-Believe it or not, that’s all there is to using Netplan. Although it is a significant change to how we’re accustomed to configuring network addresses, it’s not all that hard to get used to. But this style of configuration is here to stay… so you will need to get used to it.
-
-Learn more about Linux through the free ["Introduction to Linux" ][5]course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-configuration-tool-linux
-
-作者:[Jack Wallen][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/users/jlwallen
-[1]: https://netplan.io/
-[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan_1.jpg?itok=XuIsXWbV (netplan)
-[4]: /licenses/category/used-permission
-[5]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md
deleted file mode 100644
index b7082ea141..0000000000
--- a/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md
+++ /dev/null
@@ -1,138 +0,0 @@
-translating----geekpi
-
-Clinews – Read News And Latest Headlines From Commandline
-======
-
-
-
-A while ago, we have written about a CLI news client named [**InstantNews**][1] that helps you to read news and latest headlines from commandline instantly. Today, I stumbled upon a similar utility named **Clinews** which serves the same purpose – reading news and latest headlines from popular websites, blogs from Terminal. You don’t need to install GUI applications or mobile apps. You can read what’s happening in the world right from your Terminal. It is free, open source utility written using **NodeJS**.
-
-### Installing Clinews
-
-Since Clinews is written using NodeJS, you can install it using NPM package manager. If you haven’t install NodeJS, install it as described in the following link.
-
-Once node installed, run the following command to install Clinews:
-
-```
-$ npm i -g clinews
-```
-
-You can also install Clinews using **Yarn** :
-
-```
-$ yarn global add clinews
-```
-
-Yarn itself can installed using npm
-
-```
-$ npm -i yarn
-```
-
-### Configure News API
-
-Clinews retrieves all news headlines from [**News API**][2]. News API is a simple and easy-to-use API that returns JSON metadata for the headlines currently published on a range of news sources and blogs. It currently provides live headlines from 70 popular sources, including Ars Technica, BBC, Blooberg, CNN, Daily Mail, Engadget, ESPN, Financial Times, Google News, hacker News, IGN, Mashable, National Geographic, Reddit r/all, Reuters, Speigel Online, Techcrunch, The Guardian, The Hindu, The Huffington Post, The Newyork Times, The Next Web, The Wall street Journal, USA today and [**more**][3].
-
-First, you need an API key from News API. Go to [**https://newsapi.org/register**][4] URL and register a free account to get the API key.
-
-Once you got the API key from News API site, edit your **.bashrc** file:
-
-```
-$ vi ~/.bashrc
-
-```
-
-Add newsapi API key at the end like below:
-
-```
-export IN_API_KEY="Paste-API-key-here"
-
-```
-
-Please note that you need to paste the key inside the double quotes. Save and close the file.
-
-Run the following command to update the changes.
-
-```
-$ source ~/.bashrc
-
-```
-
-Done. Now let us go ahead and fetch the latest headlines from new sources.
-
-### Read News And Latest Headlines From Commandline
-
-To read news and latest headlines from specific new source, for example **The Hindu** , run:
-
-```
-$ news fetch the-hindu
-
-```
-
-Here, **“the-hindu”** is the new source id (fetch id).
-
-The above command will fetch latest 10 headlines from The Hindu news portel and display them in the Terminal. Also, it displays a brief description of the news, the published date and time, and the actual link to the source.
-
-**Sample output:**
-
-
-
-To read a news in your browser, hold Ctrl key and click on the URL. It will open in your default web browser.
-
-To view all the sources you can get news from, run:
-
-```
-$ news sources
-
-```
-
-**Sample output:**
-
-
-
-As you see in the above screenshot, Clinews lists all news sources including the name of the news source, fetch id, description of the site, website URL and the country where it is located. As of writing this guide, Clinews currently supports 70+ news sources.
-
-Clinews can also able to search for news stories across all sources matching search criteria/term. Say for example, to list all news stories with titles containing the words **“Tamilnadu”** , use the following command:
-
-```
-$ news search "Tamilnadu"
-```
-
-This command will scrap all news sources for stories that match term **Tamilnadu**.
-
-Clinews has some extra flags that helps you to
-
- * limit the amount of news stories you want to see,
- * sort news stories (top, latest, popular),
- * display news stories category wise (E.g. business, entertainment, gaming, general, music, politics, science-and-nature, sport, technology)
-
-
-
-For more details, see the help section:
-
-```
-$ clinews -h
-```
-
-And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-commandline/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://www.ostechnix.com/get-news-instantly-commandline-linux/
-[2]: https://newsapi.org/
-[3]: https://newsapi.org/sources
-[4]: https://newsapi.org/register
diff --git a/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md b/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md
deleted file mode 100644
index 628a805144..0000000000
--- a/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md
+++ /dev/null
@@ -1,114 +0,0 @@
-translating by Flowsnow
-
-A Simple, Beautiful And Cross-platform Podcast App
-======
-
-
-
-Podcasts have become very popular in the last few years. Podcasts are what’s called “infotainment”, they are generally light-hearted, but they generally give you valuable information. Podcasts have blown up in the last few years, and if you like something, chances are there is a podcast about it. There are a lot of podcast players out there for the Linux desktop, but if you want something that is visually beautiful, has slick animations, and works on every platform, there aren’t a lot of alternatives to **CPod**. CPod (formerly known as **Cumulonimbus** ) is an open source and slickest podcast app that works on Linux, MacOS and Windows.
-
-CPod runs on something called **Electron** – a tool that allows developers to build cross-platform (E.g Windows, MacOs and Linux) desktop GUI applications. In this brief guide, we will be discussing – how to install and use CPod podcast app in Linux.
-
-### Installing CPod
-
-Go to the [**releases page**][1] of CPod. Download and Install the binary for your platform of choice. If you use Ubuntu/Debian, you can just download and install the .deb file from the releases page as shown below.
-
-```
-$ wget https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb
-
-$ sudo apt update
-
-$ sudo apt install gdebi
-
-$ sudo gdebi CPod_1.25.7_amd64.deb
-```
-
-If you use any other distribution, you probably should use the **AppImage** in the releases page.
-
-Download the AppImage file from the releases page.
-
-Open your terminal, and go to the directory where the AppImage file has been stored. Change the permissions to allow execution:
-
-```
-$ chmod +x CPod-1.25.7-x86_64.AppImage
-```
-
-Execute the AppImage File:
-
-```
-$ ./CPod-1.25.7-x86_64.AppImage
-```
-
-You’ll be presented a dialog asking whether to integrate the app with the system. Click **Yes** if you want to do so.
-
-### Features
-
-**Explore Tab**
-
-
-
-CPod uses the Apple iTunes database to find podcasts. This is good, because the iTunes database is the biggest one out there. If there is a podcast out there, chances are it’s on iTunes. To find podcasts, just use the top search bar in the Explore section. The Explore Section also shows a few popular podcasts.
-
-**Home Tab**
-
-
-
-The Home Tab is the tab that opens by default when you open the app. The Home Tab shows a chronological list of all the episodes of all the podcasts that you have subscribed to.
-
-From the home tab, you can:
-
- 1. Mark episodes read.
- 2. Download them for offline playing
- 3. Add them to the queue.
-
-
-
-**Subscriptions Tab**
-
-
-
-You can of course, subscribe to podcasts that you like. A few other things you can do in the Subscriptions Tab is:
-
- 1. Refresh Podcast Artwork
- 2. Export and Import Subscriptions to/from an .OPML file.
-
-
-
-**The Player**
-
-
-
-The player is perhaps the most beautiful part of CPod. The app changes the overall look and feel according to the banner of the podcast. There’s a sound visualiser at the bottom. To the right, you can see and search for other episodes of this podcast.
-
-**Cons/Missing Features**
-
-While I love this app, there are a few features and disadvantages that CPod does have:
-
- 1. Poor MPRIS Integration – You can play/pause the podcast from the media player dialog of your desktop environment, but not much more. The name of the podcast is not shown, and you can go to the next/previous episode.
- 2. No support for chapters.
- 3. No auto-downloading – you have to manually download episodes.
- 4. CPU usage during use is pretty high (even for an Electron app).
-
-
-
-### Verdict
-
-While it does have its cons, CPod is clearly the most aesthetically pleasing podcast player app out there, and it has most basic features down. If you love using visually beautiful apps, and don’t need the advanced features, this is the perfect app for you. I know for a fact that I’m going to use it.
-
-Do you like CPod? Please put your opinions on the comments below!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/
-
-作者:[EDITOR][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/editor/
-[1]: https://github.com/z-------------/CPod/releases
diff --git a/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md
deleted file mode 100644
index a75c1f3e9a..0000000000
--- a/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md
+++ /dev/null
@@ -1,80 +0,0 @@
-translating---geekpi
-
-Hegemon – A Modular System Monitor Application Written In Rust
-======
-
-
-
-When it comes to monitor running processes in Unix-like systems, the most commonly used applications are **top** and **htop** , which is an enhanced version of top. My personal favorite is htop. However, the developers are releasing few alternatives to these applications every now and then. One such alternative to top and htop utilities is **Hegemon**. It is a modular system monitor application written using **Rust** programming language.
-
-Concerning about the features of Hegemon, we can list the following:
-
- * Hegemon will monitor the usage of CPU, memory and Swap.
- * It monitors the system’s temperature and fan speed.
- * The update interval time can be adjustable. The default value is 3 seconds.
- * We can reveal more detailed graph and additional information by expanding the data streams.
- * Unit tests
- * Clean interface
- * Free and open source.
-
-
-
-### Installing Hegemon
-
-Make sure you have installed **Rust 1.26** or later version. To install Rust in your Linux distribution, refer the following guide:
-
-[Install Rust Programming Language In Linux][2]
-
-Also, install [libsensors][1] library. It is available in the default repositories of most Linux distributions. For example, you can install it in RPM based systems such as Fedora using the following command:
-
-```
-$ sudo dnf install lm_sensors-devel
-```
-
-On Debian-based systems like Ubuntu, Linux Mint, it can be installed using command:
-
-```
-$ sudo apt-get install libsensors4-dev
-```
-
-Once you installed Rust and libsensors, install Hegemon using command:
-
-```
-$ cargo install hegemon
-```
-
-Once hegemon installed, start monitoring the running processes in your Linux system using command:
-
-```
-$ hegemon
-```
-
-Here is the sample output from my Arch Linux desktop.
-
-
-
-To exit, press **Q**.
-
-
-Please be mindful that hegemon is still in its early development stage and it is not complete replacement for **top** command. There might be bugs and missing features. If you came across any bugs, report them in the project’s github page. The developer is planning to bring more features in the upcoming versions. So, keep an eye on this project.
-
-And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://github.com/lm-sensors/lm-sensors
-[2]: https://www.ostechnix.com/install-rust-programming-language-in-linux/
diff --git a/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md b/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
index ff33e7c175..7a3702a124 100644
--- a/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
+++ b/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
How to Boot Ubuntu 18.04 / Debian 9 Server in Rescue (Single User mode) / Emergency Mode
======
Booting a Linux Server into a single user mode or **rescue mode** is one of the important troubleshooting that a Linux admin usually follow while recovering the server from critical conditions. In Ubuntu 18.04 and Debian 9, single user mode is known as a rescue mode.
diff --git a/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md b/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md
index ab9fa8acc3..0e473dbc59 100644
--- a/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md
+++ b/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md
@@ -1,3 +1,5 @@
+HankChow translating
+
How to Replace one Linux Distro With Another in Dual Boot [Guide]
======
**If you have a Linux distribution installed, you can replace it with another distribution in the dual boot. You can also keep your personal documents while switching the distribution.**
diff --git a/sources/tech/20180926 3 open source distributed tracing tools.md b/sources/tech/20180926 3 open source distributed tracing tools.md
deleted file mode 100644
index 9879302d38..0000000000
--- a/sources/tech/20180926 3 open source distributed tracing tools.md
+++ /dev/null
@@ -1,90 +0,0 @@
-translating by belitex
-
-3 open source distributed tracing tools
-======
-
-Find performance issues quickly with these tools, which provide a graphical view of what's happening across complex software systems.
-
-
-
-Distributed tracing systems enable users to track a request through a software system that is distributed across multiple applications, services, and databases as well as intermediaries like proxies. This allows for a deeper understanding of what is happening within the software system. These systems produce graphical representations that show how much time the request took on each step and list each known step.
-
-A user reviewing this content can determine where the system is experiencing latencies or blockages. Instead of testing the system like a binary search tree when requests start failing, operators and developers can see exactly where the issues begin. This can also reveal where performance changes might be occurring from deployment to deployment. It’s always better to catch regressions automatically by alerting to the anomalous behavior than to have your customers tell you.
-
-How does this tracing thing work? Well, each request gets a special ID that’s usually injected into the headers. This ID uniquely identifies that transaction. This transaction is normally called a trace. The trace is the overall abstract idea of the entire transaction. Each trace is made up of spans. These spans are the actual work being performed, like a service call or a database request. Each span also has a unique ID. Spans can create subsequent spans called child spans, and child spans can have multiple parents.
-
-Once a transaction (or trace) has run its course, it can be searched in a presentation layer. There are several tools in this space that we’ll discuss later, but the picture below shows [Jaeger][1] from my [Istio walkthrough][2]. It shows multiple spans of a single trace. The power of this is immediately clear as you can better understand the transaction’s story at a glance.
-
-
-
-This demo uses Istio’s built-in OpenTracing implementation, so I can get tracing without even modifying my application. It also uses Jaeger, which is OpenTracing-compatible.
-
-So what is OpenTracing? Let’s find out.
-
-### OpenTracing API
-
-[OpenTracing][3] is a spec that grew out of [Zipkin][4] to provide cross-platform compatibility. It offers a vendor-neutral API for adding tracing to applications and delivering that data into distributed tracing systems. A library written for the OpenTracing spec can be used with any system that is OpenTracing-compliant. Zipkin, Jaeger, and Appdash are examples of open source tools that have adopted the open standard, but even proprietary tools like [Datadog][5] and [Instana][6] are adopting it. This is expected to continue as OpenTracing reaches ubiquitous status.
-
-### OpenCensus
-
-Okay, we have OpenTracing, but what is this [OpenCensus][7] thing that keeps popping up in my searches? Is it a competing standard, something completely different, or something complementary?
-
-The answer depends on who you ask. I will do my best to explain the difference (as I understand it): OpenCensus takes a more holistic or all-inclusive approach. OpenTracing is focused on establishing an open API and spec and not on open implementations for each language and tracing system. OpenCensus provides not only the specification but also the language implementations and wire protocol. It also goes beyond tracing by including additional metrics that are normally outside the scope of distributed tracing systems.
-
-OpenCensus allows viewing data on the host where the application is running, but it also has a pluggable exporter system for exporting data to central aggregators. The current exporters produced by the OpenCensus team include Zipkin, Prometheus, Jaeger, Stackdriver, Datadog, and SignalFx, but anyone can create an exporter.
-
-From my perspective, there’s a lot of overlap. One isn’t necessarily better than the other, but it’s important to know what each does and doesn’t do. OpenTracing is primarily a spec, with others doing the implementation and opinionation. OpenCensus provides a holistic approach for the local component with more opinionation but still requires other systems for remote aggregation.
-
-### Tool options
-
-#### Zipkin
-
-Zipkin was one of the first systems of this kind. It was developed by Twitter based on the [Google Dapper paper][8] about the internal system Google uses. Zipkin was written using Java, and it can use Cassandra or ElasticSearch as a scalable backend. Most companies should be satisfied with one of those options. The lowest supported Java version is Java 6. It also uses the [Thrift][9] binary communication protocol, which is popular in the Twitter stack and is hosted as an Apache project.
-
-The system consists of reporters (clients), collectors, a query service, and a web UI. Zipkin is meant to be safe in production by transmitting only a trace ID within the context of a transaction to inform receivers that a trace is in process. The data collected in each reporter is then transmitted asynchronously to the collectors. The collectors store these spans in the database, and the web UI presents this data to the end user in a consumable format. The delivery of data to the collectors can occur in three different methods: HTTP, Kafka, and Scribe.
-
-The [Zipkin community][10] has also created [Brave][11], a Java client implementation compatible with Zipkin. It has no dependencies, so it won’t drag your projects down or clutter them with libraries that are incompatible with your corporate standards. There are many other implementations, and Zipkin is compatible with the OpenTracing standard, so these implementations should also work with other distributed tracing systems. The popular Spring framework has a component called [Spring Cloud Sleuth][12] that is compatible with Zipkin.
-
-#### Jaeger
-
-[Jaeger][1] is a newer project from Uber Technologies that the [CNCF][13] has since adopted as an Incubating project. It is written in Golang, so you don’t have to worry about having dependencies installed on the host or any overhead of interpreters or language virtual machines. Similar to Zipkin, Jaeger also supports Cassandra and ElasticSearch as scalable storage backends. Jaeger is also fully compatible with the OpenTracing standard.
-
-Jaeger’s architecture is similar to Zipkin, with clients (reporters), collectors, a query service, and a web UI, but it also has an agent on each host that locally aggregates the data. The agent receives data over a UDP connection, which it batches and sends to a collector. The collector receives that data in the form of the [Thrift][14] protocol and stores that data in Cassandra or ElasticSearch. The query service can access the data store directly and provide that information to the web UI.
-
-By default, a user won’t get all the traces from the Jaeger clients. The system samples 0.1% (1 in 1,000) of traces that pass through each client. Keeping and transmitting all traces would be a bit overwhelming to most systems. However, this can be increased or decreased by configuring the agents, which the client consults with for its configuration. This sampling isn’t completely random, though, and it’s getting better. Jaeger uses probabilistic sampling, which tries to make an educated guess at whether a new trace should be sampled or not. [Adaptive sampling is on its roadmap][15], which will improve the sampling algorithm by adding additional context for making decisions.
-
-#### Appdash
-
-[Appdash][16] is a distributed tracing system written in Golang, like Jaeger. It was created by [Sourcegraph][17] based on Google’s Dapper and Twitter’s Zipkin. Similar to Jaeger and Zipkin, Appdash supports the OpenTracing standard; this was a later addition and requires a component that is different from the default component. This adds risk and complexity.
-
-At a high level, Appdash’s architecture consists mostly of three components: a client, a local collector, and a remote collector. There’s not a lot of documentation, so this description comes from testing the system and reviewing the code. The client in Appdash gets added to your code. Appdash provides Python, Golang, and Ruby implementations, but OpenTracing libraries can be used with Appdash’s OpenTracing implementation. The client collects the spans and sends them to the local collector. The local collector then sends the data to a centralized Appdash server running its own local collector, which is the remote collector for all other nodes in the system.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/9/distributed-tracing-tools
-
-作者:[Dan Barker][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/barkerd427
-[1]: https://www.jaegertracing.io/
-[2]: https://www.youtube.com/watch?v=T8BbeqZ0Rls
-[3]: http://opentracing.io/
-[4]: https://zipkin.io/
-[5]: https://www.datadoghq.com/
-[6]: https://www.instana.com/
-[7]: https://opencensus.io/
-[8]: https://research.google.com/archive/papers/dapper-2010-1.pdf
-[9]: https://thrift.apache.org/
-[10]: https://zipkin.io/pages/community.html
-[11]: https://github.com/openzipkin/brave
-[12]: https://cloud.spring.io/spring-cloud-sleuth/
-[13]: https://www.cncf.io/
-[14]: https://en.wikipedia.org/wiki/Apache_Thrift
-[15]: https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling
-[16]: https://github.com/sourcegraph/appdash
-[17]: https://about.sourcegraph.com/
diff --git a/sources/tech/20180926 An introduction to swap space on Linux systems.md b/sources/tech/20180926 An introduction to swap space on Linux systems.md
deleted file mode 100644
index da50208533..0000000000
--- a/sources/tech/20180926 An introduction to swap space on Linux systems.md
+++ /dev/null
@@ -1,302 +0,0 @@
-heguangzhi Translating
-
-An introduction to swap space on Linux systems
-======
-
-
-
-Swap space is a common aspect of computing today, regardless of operating system. Linux uses swap space to increase the amount of virtual memory available to a host. It can use one or more dedicated swap partitions or a swap file on a regular filesystem or logical volume.
-
-There are two basic types of memory in a typical computer. The first type, random access memory (RAM), is used to store data and programs while they are being actively used by the computer. Programs and data cannot be used by the computer unless they are stored in RAM. RAM is volatile memory; that is, the data stored in RAM is lost if the computer is turned off.
-
-Hard drives are magnetic media used for long-term storage of data and programs. Magnetic media is nonvolatile; the data stored on a disk remains even when power is removed from the computer. The CPU (central processing unit) cannot directly access the programs and data on the hard drive; it must be copied into RAM first, and that is where the CPU can access its programming instructions and the data to be operated on by those instructions. During the boot process, a computer copies specific operating system programs, such as the kernel and init or systemd, and data from the hard drive into RAM, where it is accessed directly by the computer’s processor, the CPU.
-
-### Swap space
-
-Swap space is the second type of memory in modern Linux systems. The primary function of swap space is to substitute disk space for RAM memory when real RAM fills up and more space is needed.
-
-For example, assume you have a computer system with 8GB of RAM. If you start up programs that don’t fill that RAM, everything is fine and no swapping is required. But suppose the spreadsheet you are working on grows when you add more rows, and that, plus everything else that's running, now fills all of RAM. Without swap space available, you would have to stop working on the spreadsheet until you could free up some of your limited RAM by closing down some other programs.
-
-The kernel uses a memory management program that detects blocks, aka pages, of memory in which the contents have not been used recently. The memory management program swaps enough of these relatively infrequently used pages of memory out to a special partition on the hard drive specifically designated for “paging,” or swapping. This frees up RAM and makes room for more data to be entered into your spreadsheet. Those pages of memory swapped out to the hard drive are tracked by the kernel’s memory management code and can be paged back into RAM if they are needed.
-
-The total amount of memory in a Linux computer is the RAM plus swap space and is referred to as virtual memory.
-
-### Types of Linux swap
-
-Linux provides for two types of swap space. By default, most Linux installations create a swap partition, but it is also possible to use a specially configured file as a swap file. A swap partition is just what its name implies—a standard disk partition that is designated as swap space by the `mkswap` command.
-
-A swap file can be used if there is no free disk space in which to create a new swap partition or space in a volume group where a logical volume can be created for swap space. This is just a regular file that is created and preallocated to a specified size. Then the `mkswap` command is run to configure it as swap space. I don’t recommend using a file for swap space unless absolutely necessary.
-
-### Thrashing
-
-Thrashing can occur when total virtual memory, both RAM and swap space, become nearly full. The system spends so much time paging blocks of memory between swap space and RAM and back that little time is left for real work. The typical symptoms of this are obvious: The system becomes slow or completely unresponsive, and the hard drive activity light is on almost constantly.
-
-If you can manage to issue a command like `free` that shows CPU load and memory usage, you will see that the CPU load is very high, perhaps as much as 30 to 40 times the number of CPU cores in the system. Another symptom is that both RAM and swap space are almost completely allocated.
-
-After the fact, looking at SAR (system activity report) data can also show these symptoms. I install SAR on every system I work on and use it for post-repair forensic analysis.
-
-### What is the right amount of swap space?
-
-Many years ago, the rule of thumb for the amount of swap space that should be allocated on the hard drive was 2X the amount of RAM installed in the computer (of course, that was when most computers' RAM was measured in KB or MB). So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. This rule took into account the facts that RAM sizes were typically quite small at that time and that allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than actually performing useful work.
-
-RAM has become an inexpensive commodity and most computers these days have amounts of RAM that extend into tens of gigabytes. Most of my newer computers have at least 8GB of RAM, one has 32GB, and my main workstation has 64GB. My older computers have from 4 to 8 GB of RAM.
-
-When dealing with computers having huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. The Fedora 28 online Installation Guide, which can be found online at [Fedora Installation Guide][1], defines current thinking about swap space allocation. I have included below some discussion and the table of recommendations from that document.
-
-The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage.
-
-_Table 1: Recommended system swap space in Fedora 28 documentation_
-
-| **Amount of system RAM** | **Recommended swap space** | **Recommended swap with hibernation** |
-|--------------------------|-----------------------------|---------------------------------------|
-| less than 2 GB | 2 times the amount of RAM | 3 times the amount of RAM |
-| 2 GB - 8 GB | Equal to the amount of RAM | 2 times the amount of RAM |
-| 8 GB - 64 GB | 0.5 times the amount of RAM | 1.5 times the amount of RAM |
-| more than 64 GB | workload dependent | hibernation not recommended |
-
-At the border between each range listed above (for example, a system with 2 GB, 8 GB, or 64 GB of system RAM), use discretion with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance.
-
-Of course, most Linux administrators have their own ideas about the appropriate amount of swap space—as well as pretty much everything else. Table 2, below, contains my recommendations based on my personal experiences in multiple environments. These may not work for you, but as with Table 1, they may help you get started.
-
-_Table 2: Recommended system swap space per the author_
-
-| Amount of RAM | Recommended swap space |
-|---------------|------------------------|
-| ≤ 2GB | 2X RAM |
-| 2GB – 8GB | = RAM |
-| >8GB | 8GB |
-
-One consideration in both tables is that as the amount of RAM increases, beyond a certain point adding more swap space simply leads to thrashing well before the swap space even comes close to being filled. If you have too little virtual memory while following these recommendations, you should add more RAM, if possible, rather than more swap space. As with all recommendations that affect system performance, use what works best for your specific environment. This will take time and effort to experiment and make changes based on the conditions in your Linux environment.
-
-#### Adding more swap space to a non-LVM disk environment
-
-Due to changing requirements for swap space on hosts with Linux already installed, it may become necessary to modify the amount of swap space defined for the system. This procedure can be used for any general case where the amount of swap space needs to be increased. It assumes sufficient available disk space is available. This procedure also assumes that the disks are partitioned in “raw” EXT4 and swap partitions and do not use logical volume management (LVM).
-
-The basic steps to take are simple:
-
- 1. Turn off the existing swap space.
-
- 2. Create a new swap partition of the desired size.
-
- 3. Reread the partition table.
-
- 4. Configure the partition as swap space.
-
- 5. Add the new partition/etc/fstab.
-
- 6. Turn on swap.
-
-
-
-
-A reboot should not be necessary.
-
-For safety's sake, before turning off swap, at the very least you should ensure that no applications are running and that no swap space is in use. The `free` or `top` commands can tell you whether swap space is in use. To be even safer, you could revert to run level 1 or single-user mode.
-
-Turn off the swap partition with the command which turns off all swap space:
-
-```
-swapoff -a
-
-```
-
-Now display the existing partitions on the hard drive.
-
-```
-fdisk -l
-
-```
-
-This displays the current partition tables on each drive. Identify the current swap partition by number.
-
-Start `fdisk` in interactive mode with the command:
-
-```
-fdisk /dev/
-
-```
-
-For example:
-
-```
-fdisk /dev/sda
-
-```
-
-At this point, `fdisk` is now interactive and will operate only on the specified disk drive.
-
-Use the fdisk `p` sub-command to verify that there is enough free space on the disk to create the new swap partition. The space on the hard drive is shown in terms of 512-byte blocks and starting and ending cylinder numbers, so you may have to do some math to determine the available space between and at the end of allocated partitions.
-
-Use the `n` sub-command to create a new swap partition. fdisk will ask you the starting cylinder. By default, it chooses the lowest-numbered available cylinder. If you wish to change that, type in the number of the starting cylinder.
-
-The `fdisk` command now allows you to enter the size of the partitions in a number of formats, including the last cylinder number or the size in bytes, KB or MB. Type in 4000M, which will give about 4GB of space on the new partition (for example), and press Enter.
-
-Use the `p` sub-command to verify that the partition was created as you specified it. Note that the partition will probably not be exactly what you specified unless you used the ending cylinder number. The `fdisk` command can only allocate disk space in increments on whole cylinders, so your partition may be a little smaller or larger than you specified. If the partition is not what you want, you can delete it and create it again.
-
-Now it is necessary to specify that the new partition is to be a swap partition. The sub-command `t` allows you to specify the type of partition. So enter `t`, specify the partition number, and when it asks for the hex code partition type, type 82, which is the Linux swap partition type, and press Enter.
-
-When you are satisfied with the partition you have created, use the `w` sub-command to write the new partition table to the disk. The `fdisk` program will exit and return you to the command prompt after it completes writing the revised partition table. You will probably receive the following message as `fdisk` completes writing the new partition table:
-
-```
-The partition table has been altered!
-Calling ioctl() to re-read partition table.
-WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
-The kernel still uses the old table.
-The new table will be used at the next reboot.
-Syncing disks.
-```
-
-At this point, you use the `partprobe` command to force the kernel to re-read the partition table so that it is not necessary to perform a reboot.
-
-```
-partprobe
-```
-
-Now use the command `fdisk -l` to list the partitions and the new swap partition should be among those listed. Be sure that the new partition type is “Linux swap”.
-
-It will be necessary to modify the /etc/fstab file to point to the new swap partition. The existing line may look like this:
-
-```
-LABEL=SWAP-sdaX swap swap defaults 0 0
-
-```
-
-where `X` is the partition number. Add a new line that looks similar this, depending upon the location of your new swap partition:
-
-```
-/dev/sdaY swap swap defaults 0 0
-
-```
-
-Be sure to use the correct partition number. Now you can perform the final step in creating the swap partition. Use the `mkswap` command to define the partition as a swap partition.
-
-```
-mkswap /dev/sdaY
-
-```
-
-The final step is to turn swap on using the command:
-
-```
-swapon -a
-
-```
-
-Your new swap partition is now online along with the previously existing swap partition. You can use the `free` or `top` commands to verify this.
-
-#### Adding swap to an LVM disk environment
-
-If your disk setup uses LVM, changing swap space will be fairly easy. Again, this assumes that space is available in the volume group in which the current swap volume is located. By default, the installation procedures for Fedora Linux in an LVM environment create the swap partition as a logical volume. This makes it easy because you can simply increase the size of the swap volume.
-
-Here are the steps required to increase the amount of swap space in an LVM environment:
-
- 1. Turn off all swap.
-
- 2. Increase the size of the logical volume designated for swap.
-
- 3. Configure the resized volume as swap space.
-
- 4. Turn on swap.
-
-
-
-
-First, let’s verify that swap exists and is a logical volume using the `lvs` command (list logical volume).
-
-```
-[root@studentvm1 ~]# lvs
- LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
- home fedora_studentvm1 -wi-ao---- 2.00g
- pool00 fedora_studentvm1 twi-aotz-- 2.00g 8.17 2.93
- root fedora_studentvm1 Vwi-aotz-- 2.00g pool00 8.17
- swap fedora_studentvm1 -wi-ao---- 8.00g
- tmp fedora_studentvm1 -wi-ao---- 5.00g
- usr fedora_studentvm1 -wi-ao---- 15.00g
- var fedora_studentvm1 -wi-ao---- 10.00g
-[root@studentvm1 ~]#
-```
-
-You can see that the current swap size is 8GB. In this case, we want to add 2GB to this swap volume. First, stop existing swap. You may have to terminate running programs if swap space is in use.
-
-```
-swapoff -a
-
-```
-
-Now increase the size of the logical volume.
-
-```
-[root@studentvm1 ~]# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap
- Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents).
- Logical volume fedora_studentvm1/swap successfully resized.
-[root@studentvm1 ~]#
-```
-
-Run the `mkswap` command to make this entire 10GB partition into swap space.
-
-```
-[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap
-mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature.
-Setting up swapspace version 1, size = 10 GiB (10737414144 bytes)
-no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a
-[root@studentvm1 ~]#
-```
-
-Turn swap back on.
-
-```
-[root@studentvm1 ~]# swapon -a
-[root@studentvm1 ~]#
-```
-
-Now verify the new swap space is present with the list block devices command. Again, a reboot is not required.
-
-```
-[root@studentvm1 ~]# lsblk
-NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
-sda 8:0 0 60G 0 disk
-|-sda1 8:1 0 1G 0 part /boot
-`-sda2 8:2 0 59G 0 part
- |-fedora_studentvm1-pool00_tmeta 253:0 0 4M 0 lvm
- | `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
- | |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
- | `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
- |-fedora_studentvm1-pool00_tdata 253:1 0 2G 0 lvm
- | `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
- | |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
- | `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
- |-fedora_studentvm1-swap 253:4 0 10G 0 lvm [SWAP]
- |-fedora_studentvm1-usr 253:5 0 15G 0 lvm /usr
- |-fedora_studentvm1-home 253:7 0 2G 0 lvm /home
- |-fedora_studentvm1-var 253:8 0 10G 0 lvm /var
- `-fedora_studentvm1-tmp 253:9 0 5G 0 lvm /tmp
-sr0 11:0 1 1024M 0 rom
-[root@studentvm1 ~]#
-```
-
-You can also use the `swapon -s` command, or `top`, `free`, or any of several other commands to verify this.
-
-```
-[root@studentvm1 ~]# free
- total used free shared buff/cache available
-Mem: 4038808 382404 2754072 4152 902332 3404184
-Swap: 10485756 0 10485756
-[root@studentvm1 ~]#
-```
-
-Note that the different commands display or require as input the device special file in different forms. There are a number of ways in which specific devices are accessed in the /dev directory. My article, [Managing Devices in Linux][2], includes more information about the /dev directory and its contents.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/9/swap-space-linux-systems
-
-作者:[David Both][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/dboth
-[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/
-[2]: https://opensource.com/article/16/11/managing-devices-linux
diff --git a/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md b/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md
deleted file mode 100644
index e8b108720e..0000000000
--- a/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md
+++ /dev/null
@@ -1,260 +0,0 @@
-translating by Flowsnow
-
-How to use the Scikit-learn Python library for data science projects
-======
-
-
-
-The Scikit-learn Python library, initially released in 2007, is commonly used in solving machine learning and data science problems—from the beginning to the end. The versatile library offers an uncluttered, consistent, and efficient API and thorough online documentation.
-
-### What is Scikit-learn?
-
-[Scikit-learn][1] is an open source Python library that has powerful tools for data analysis and data mining. It's available under the BSD license and is built on the following machine learning libraries:
-
- * **NumPy** , a library for manipulating multi-dimensional arrays and matrices. It also has an extensive compilation of mathematical functions for performing various calculations.
- * **SciPy** , an ecosystem consisting of various libraries for completing technical computing tasks.
- * **Matplotlib** , a library for plotting various charts and graphs.
-
-
-
-Scikit-learn offers an extensive range of built-in algorithms that make the most of data science projects.
-
-Here are the main ways the Scikit-learn library is used.
-
-#### 1. Classification
-
-The [classification][2] tools identify the category associated with provided data. For example, they can be used to categorize email messages as either spam or not.
-
- * Support vector machines (SVMs)
- * Nearest neighbors
- * Random forest
-
-
-
-#### 2. Regression
-
-Classification algorithms in Scikit-learn include:
-
-Regression involves creating a model that tries to comprehend the relationship between input and output data. For example, regression tools can be used to understand the behavior of stock prices.
-
-Regression algorithms include:
-
- * SVMs
- * Ridge regression
- * Lasso
-
-
-
-#### 3. Clustering
-
-The Scikit-learn clustering tools are used to automatically group data with the same characteristics into sets. For example, customer data can be segmented based on their localities.
-
-Clustering algorithms include:
-
- * K-means
- * Spectral clustering
- * Mean-shift
-
-
-
-#### 4. Dimensionality reduction
-
-Dimensionality reduction lowers the number of random variables for analysis. For example, to increase the efficiency of visualizations, outlying data may not be considered.
-
-Dimensionality reduction algorithms include:
-
- * Principal component analysis (PCA)
- * Feature selection
- * Non-negative matrix factorization
-
-
-
-#### 5. Model selection
-
-Model selection algorithms offer tools to compare, validate, and select the best parameters and models to use in your data science projects.
-
-Model selection modules that can deliver enhanced accuracy through parameter tuning include:
-
- * Grid search
- * Cross-validation
- * Metrics
-
-
-
-#### 6. Preprocessing
-
-The Scikit-learn preprocessing tools are important in feature extraction and normalization during data analysis. For example, you can use these tools to transform input data—such as text—and apply their features in your analysis.
-
-Preprocessing modules include:
-
- * Preprocessing
- * Feature extraction
-
-
-
-### A Scikit-learn library example
-
-Let's use a simple example to illustrate how you can use the Scikit-learn library in your data science projects.
-
-We'll use the [Iris flower dataset][3], which is incorporated in the Scikit-learn library. The Iris flower dataset contains 150 details about three flower species:
-
- * Setosa—labeled 0
- * Versicolor—labeled 1
- * Virginica—labeled 2
-
-
-
-The dataset includes the following characteristics of each flower species (in centimeters):
-
- * Sepal length
- * Sepal width
- * Petal length
- * Petal width
-
-
-
-#### Step 1: Importing the library
-
-Since the Iris dataset is included in the Scikit-learn data science library, we can load it into our workspace as follows:
-
-```
-from sklearn import datasets
-iris = datasets.load_iris()
-```
-
-These commands import the **datasets** module from **sklearn** , then use the **load_digits()** method from **datasets** to include the data in the workspace.
-
-#### Step 2: Getting dataset characteristics
-
-The **datasets** module contains several methods that make it easier to get acquainted with handling data.
-
-In Scikit-learn, a dataset refers to a dictionary-like object that has all the details about the data. The data is stored using the **.data** key, which is an array list.
-
-For instance, we can utilize **iris.data** to output information about the Iris flower dataset.
-
-```
-print(iris.data)
-```
-
-Here is the output (the results have been truncated):
-
-```
-[[5.1 3.5 1.4 0.2]
- [4.9 3. 1.4 0.2]
- [4.7 3.2 1.3 0.2]
- [4.6 3.1 1.5 0.2]
- [5. 3.6 1.4 0.2]
- [5.4 3.9 1.7 0.4]
- [4.6 3.4 1.4 0.3]
- [5. 3.4 1.5 0.2]
- [4.4 2.9 1.4 0.2]
- [4.9 3.1 1.5 0.1]
- [5.4 3.7 1.5 0.2]
- [4.8 3.4 1.6 0.2]
- [4.8 3. 1.4 0.1]
- [4.3 3. 1.1 0.1]
- [5.8 4. 1.2 0.2]
- [5.7 4.4 1.5 0.4]
- [5.4 3.9 1.3 0.4]
- [5.1 3.5 1.4 0.3]
-```
-
-Let's also use **iris.target** to give us information about the different labels of the flowers.
-
-```
-print(iris.target)
-```
-
-Here is the output:
-
-```
-[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
- 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
- 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
- 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
- 2 2]
-
-```
-
-If we use **iris.target_names** , we'll output an array of the names of the labels found in the dataset.
-
-```
-print(iris.target_names)
-```
-
-Here is the result after running the Python code:
-
-```
-['setosa' 'versicolor' 'virginica']
-```
-
-#### Step 3: Visualizing the dataset
-
-We can use the [box plot][4] to produce a visual depiction of the Iris flower dataset. The box plot illustrates how the data is distributed over the plane through their quartiles.
-
-Here's how to achieve this:
-
-```
-import seaborn as sns
-box_data = iris.data #variable representing the data array
-box_target = iris.target #variable representing the labels array
-sns.boxplot(data = box_data,width=0.5,fliersize=5)
-sns.set(rc={'figure.figsize':(2,15)})
-```
-
-Let's see the result:
-
-
-
-On the horizontal axis:
-
- * 0 is sepal length
- * 1 is sepal width
- * 2 is petal length
- * 3 is petal width
-
-
-
-The vertical axis is dimensions in centimeters.
-
-### Wrapping up
-
-Here is the entire code for this simple Scikit-learn data science tutorial.
-
-```
-from sklearn import datasets
-iris = datasets.load_iris()
-print(iris.data)
-print(iris.target)
-print(iris.target_names)
-import seaborn as sns
-box_data = iris.data #variable representing the data array
-box_target = iris.target #variable representing the labels array
-sns.boxplot(data = box_data,width=0.5,fliersize=5)
-sns.set(rc={'figure.figsize':(2,15)})
-```
-
-Scikit-learn is a versatile Python library you can use to efficiently complete data science projects.
-
-If you want to learn more, check out the tutorials on [LiveEdu][5], such as Andrey Bulezyuk's video on using the Scikit-learn library to create a [machine learning application][6].
-
-Do you have any questions or comments? Feel free to share them below.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects
-
-作者:[Dr.Michael J.Garbade][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/drmjg
-[1]: http://scikit-learn.org/stable/index.html
-[2]: https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/
-[3]: https://en.wikipedia.org/wiki/Iris_flower_data_set
-[4]: https://en.wikipedia.org/wiki/Box_plot
-[5]: https://www.liveedu.tv/guides/data-science/
-[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/
diff --git a/sources/tech/20180927 5 cool tiling window managers.md b/sources/tech/20180927 5 cool tiling window managers.md
new file mode 100644
index 0000000000..f687918c65
--- /dev/null
+++ b/sources/tech/20180927 5 cool tiling window managers.md
@@ -0,0 +1,87 @@
+5 cool tiling window managers
+======
+
+
+The Linux desktop ecosystem offers multiple window managers (WMs). Some are developed as part of a desktop environment. Others are meant to be used as standalone application. This is the case of tiling WMs, which offer a more lightweight, customized environment. This article presents five such tiling WMs for you to try out.
+
+### i3
+
+[i3][1] is one of the most popular tiling window managers. Like most other such WMs, i3 focuses on low resource consumption and customizability by the user.
+
+You can refer to [this previous article in the Magazine][2] to get started with i3 installation details and how to configure it.
+
+### sway
+
+[sway][3] is a tiling Wayland compositor. It has the advantage of compatibility with an existing i3 configuration, so you can use it to replace i3 and use Wayland as the display protocol.
+
+You can use dnf to install sway from Fedora repository:
+
+```
+$ sudo dnf install sway
+```
+
+If you want to migrate from i3 to sway, there’s a small [migration guide][4] available.
+
+### Qtile
+
+[Qtile][5] is another tiling manager that also happens to be written in Python. By default, you configure Qtile in a Python script located under ~/.config/qtile/config.py. When this script is not available, Qtile uses a default [configuration][6].
+
+One of the benefits of Qtile being in Python is you can write scripts to control the WM. For example, the following script prints the screen details:
+
+```
+> from libqtile.command import Client
+> c = Client()
+> print(c.screen.info)
+{'index': 0, 'width': 1920, 'height': 1006, 'x': 0, 'y': 0}
+```
+
+To install Qlite on Fedora, use the following command:
+
+```
+$ sudo dnf install qtile
+```
+
+### dwm
+
+The [dwm][7] window manager focuses more on being lightweight. One goal of the project is to keep dwm minimal and small. For example, the entire code base never exceeded 2000 lines of code. On the other hand, dwm isn’t as easy to customize and configure. Indeed, the only way to change dwm default configuration is to [edit the source code and recompile the application][8].
+
+If you want to try the default configuration, you can install dwm in Fedora using dnf:
+
+```
+$ sudo dnf install dwm
+```
+
+For those who wand to change their dwm configuration, the dwm-user package is available in Fedora. This package automatically recompiles dwm using the configuration stored in the user home directory at ~/.dwm/config.h.
+
+### awesome
+
+[awesome][9] originally started as a fork of dwm, to provide configuration of the WM using an external configuration file. The configuration is done via Lua scripts, which allow you to write scripts to automate tasks or create widgets.
+
+You can check out awesome on Fedora by installing it like this:
+
+```
+$ sudo dnf install awesome
+```
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/5-cool-tiling-window-managers/
+
+作者:[Clément Verna][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org
+[1]: https://i3wm.org/
+[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
+[3]: https://swaywm.org/
+[4]: https://github.com/swaywm/sway/wiki/i3-Migration-Guide
+[5]: http://www.qtile.org/
+[6]: https://github.com/qtile/qtile/blob/develop/libqtile/resources/default_config.py
+[7]: https://dwm.suckless.org/
+[8]: https://dwm.suckless.org/customisation/
+[9]: https://awesomewm.org/
diff --git a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md b/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md
deleted file mode 100644
index e3a0a9d561..0000000000
--- a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md
+++ /dev/null
@@ -1,441 +0,0 @@
-How To Find And Delete Duplicate Files In Linux
-======
-
-
-
-I always backup the configuration files or any old files to somewhere in my hard disk before edit or modify them, so I can restore them from the backup if I accidentally did something wrong. But the problem is I forgot to clean up those files and my hard disk is filled with a lot of duplicate files after a certain period of time. I feel either too lazy to clean the old files or afraid that I may delete an important files. If you’re anything like me and overwhelming with multiple copies of same files in different backup directories, you can find and delete duplicate files using the tools given below in Unix-like operating systems.
-
-**A word of caution:**
-
-Please be careful while deleting duplicate files. If you’re not careful, it will lead you to [**accidental data loss**][1]. I advice you to pay extra attention while using these tools.
-
-### Find And Delete Duplicate Files In Linux
-
-For the purpose of this guide, I am going to discuss about three utilities namely,
-
- 1. Rdfind,
- 2. Fdupes,
- 3. FSlint.
-
-
-
-These three utilities are free, open source and works on most Unix-like operating systems.
-
-##### 1. Rdfind
-
-**Rdfind** , stands for **r** edundant **d** ata **find** , is a free and open source utility to find duplicate files across and/or within directories and sub-directories. It compares files based on their content, not on their file names. Rdfind uses **ranking** algorithm to classify original and duplicate files. If you have two or more equal files, Rdfind is smart enough to find which is original file, and consider the rest of the files as duplicates. Once it found the duplicates, it will report them to you. You can decide to either delete them or replace them with [**hard links** or **symbolic (soft) links**][2].
-
-**Installing Rdfind**
-
-Rdfind is available in [**AUR**][3]. So, you can install it in Arch-based systems using any AUR helper program like [**Yay**][4] as shown below.
-
-```
-$ yay -S rdfind
-
-```
-
-On Debian, Ubuntu, Linux Mint:
-
-```
-$ sudo apt-get install rdfind
-
-```
-
-On Fedora:
-
-```
-$ sudo dnf install rdfind
-
-```
-
-On RHEL, CentOS:
-
-```
-$ sudo yum install epel-release
-
-$ sudo yum install rdfind
-
-```
-
-**Usage**
-
-Once installed, simply run Rdfind command along with the directory path to scan for the duplicate files.
-
-```
-$ rdfind ~/Downloads
-
-```
-
-
-
-As you see in the above screenshot, Rdfind command will scan ~/Downloads directory and save the results in a file named **results.txt** in the current working directory. You can view the name of the possible duplicate files in results.txt file.
-
-```
-$ cat results.txt
-# Automatically generated
-# duptype id depth size device inode priority name
-DUPTYPE_FIRST_OCCURRENCE 1469 8 9 2050 15864884 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test5.regex
-DUPTYPE_WITHIN_SAME_TREE -1469 8 9 2050 15864886 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test6.regex
-[...]
-DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf
-DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf
-# end of file
-
-```
-
-By reviewing the results.txt file, you can easily find the duplicates. You can remove the duplicates manually if you want to.
-
-Also, you can **-dryrun** option to find all duplicates in a given directory without changing anything and output the summary in your Terminal:
-
-```
-$ rdfind -dryrun true ~/Downloads
-
-```
-
-Once you found the duplicates, you can replace them with either hardlinks or symlinks.
-
-To replace all duplicates with hardlinks, run:
-
-```
-$ rdfind -makehardlinks true ~/Downloads
-
-```
-
-To replace all duplicates with symlinks/soft links, run:
-
-```
-$ rdfind -makesymlinks true ~/Downloads
-
-```
-
-You may have some empty files in a directory and want to ignore them. If so, use **-ignoreempty** option like below.
-
-```
-$ rdfind -ignoreempty true ~/Downloads
-
-```
-
-If you don’t want the old files anymore, just delete duplicate files instead of replacing them with hard or soft links.
-
-To delete all duplicates, simply run:
-
-```
-$ rdfind -deleteduplicates true ~/Downloads
-
-```
-
-If you do not want to ignore empty files and delete them along with all duplicates, run:
-
-```
-$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads
-
-```
-
-For more details, refer the help section:
-
-```
-$ rdfind --help
-
-```
-
-And, the manual pages:
-
-```
-$ man rdfind
-
-```
-
-##### 2. Fdupes
-
-**Fdupes** is yet another command line utility to identify and remove the duplicate files within specified directories and the sub-directories. It is free, open source utility written in **C** programming language. Fdupes identifies the duplicates by comparing file sizes, partial MD5 signatures, full MD5 signatures, and finally performing a byte-by-byte comparison for verification.
-
-Similar to Rdfind utility, Fdupes comes with quite handful of options to perform operations, such as:
-
- * Recursively search duplicate files in directories and sub-directories
- * Exclude empty files and hidden files from consideration
- * Show the size of the duplicates
- * Delete duplicates immediately as they encountered
- * Exclude files with different owner/group or permission bits as duplicates
- * And a lot more.
-
-
-
-**Installing Fdupes**
-
-Fdupes is available in the default repositories of most Linux distributions.
-
-On Arch Linux and its variants like Antergos, Manjaro Linux, install it using Pacman like below.
-
-```
-$ sudo pacman -S fdupes
-
-```
-
-On Debian, Ubuntu, Linux Mint:
-
-```
-$ sudo apt-get install fdupes
-
-```
-
-On Fedora:
-
-```
-$ sudo dnf install fdupes
-
-```
-
-On RHEL, CentOS:
-
-```
-$ sudo yum install epel-release
-
-$ sudo yum install fdupes
-
-```
-
-**Usage**
-
-Fdupes usage is pretty simple. Just run the following command to find out the duplicate files in a directory, for example **~/Downloads**.
-
-```
-$ fdupes ~/Downloads
-
-```
-
-Sample output from my system:
-
-```
-/home/sk/Downloads/Hyperledger.pdf
-/home/sk/Downloads/Hyperledger(1).pdf
-
-```
-
-As you can see, I have a duplicate file in **/home/sk/Downloads/** directory. It shows the duplicates from the parent directory only. How to view the duplicates from sub-directories? Just use **-r** option like below.
-
-```
-$ fdupes -r ~/Downloads
-
-```
-
-Now you will see the duplicates from **/home/sk/Downloads/** directory and its sub-directories as well.
-
-Fdupes can also be able to find duplicates from multiple directories at once.
-
-```
-$ fdupes ~/Downloads ~/Documents/ostechnix
-
-```
-
-You can even search multiple directories, one recursively like below:
-
-```
-$ fdupes ~/Downloads -r ~/Documents/ostechnix
-
-```
-
-The above commands searches for duplicates in “~/Downloads” directory and “~/Documents/ostechnix” directory and its sub-directories.
-
-Sometimes, you might want to know the size of the duplicates in a directory. If so, use **-S** option like below.
-
-```
-$ fdupes -S ~/Downloads
-403635 bytes each:
-/home/sk/Downloads/Hyperledger.pdf
-/home/sk/Downloads/Hyperledger(1).pdf
-
-```
-
-Similarly, to view the size of the duplicates in parent and child directories, use **-Sr** option.
-
-We can exclude empty and hidden files from consideration using **-n** and **-A** respectively.
-
-```
-$ fdupes -n ~/Downloads
-
-$ fdupes -A ~/Downloads
-
-```
-
-The first command will exclude zero-length files from consideration and the latter will exclude hidden files from consideration while searching for duplicates in the specified directory.
-
-To summarize duplicate files information, use **-m** option.
-
-```
-$ fdupes -m ~/Downloads
-1 duplicate files (in 1 sets), occupying 403.6 kilobytes
-
-```
-
-To delete all duplicates, use **-d** option.
-
-```
-$ fdupes -d ~/Downloads
-
-```
-
-Sample output:
-
-```
-[1] /home/sk/Downloads/Hyperledger Fabric Installation.pdf
-[2] /home/sk/Downloads/Hyperledger Fabric Installation(1).pdf
-
-Set 1 of 1, preserve files [1 - 2, all]:
-
-```
-
-This command will prompt you for files to preserve and delete all other duplicates. Just enter any number to preserve the corresponding file and delete the remaining files. Pay more attention while using this option. You might delete original files if you’re not be careful.
-
-If you want to preserve the first file in each set of duplicates and delete the others without prompting each time, use **-dN** option (not recommended).
-
-```
-$ fdupes -dN ~/Downloads
-
-```
-
-To delete duplicates as they are encountered, use **-I** flag.
-
-```
-$ fdupes -I ~/Downloads
-
-```
-
-For more details about Fdupes, view the help section and man pages.
-
-```
-$ fdupes --help
-
-$ man fdupes
-
-```
-
-##### 3. FSlint
-
-**FSlint** is yet another duplicate file finder utility that I use from time to time to get rid of the unnecessary duplicate files and free up the disk space in my Linux system. Unlike the other two utilities, FSlint has both GUI and CLI modes. So, it is more user-friendly tool for newbies. FSlint not just finds the duplicates, but also bad symlinks, bad names, temp files, bad IDS, empty directories, and non stripped binaries etc.
-
-**Installing FSlint**
-
-FSlint is available in [**AUR**][5], so you can install it using any AUR helpers.
-
-```
-$ yay -S fslint
-
-```
-
-On Debian, Ubuntu, Linux Mint:
-
-```
-$ sudo apt-get install fslint
-
-```
-
-On Fedora:
-
-```
-$ sudo dnf install fslint
-
-```
-
-On RHEL, CentOS:
-
-```
-$ sudo yum install epel-release
-
-```
-
-$ sudo yum install fslint
-
-Once it is installed, launch it from menu or application launcher.
-
-This is how FSlint GUI looks like.
-
-
-
-As you can see, the interface of FSlint is user-friendly and self-explanatory. In the **Search path** tab, add the path of the directory you want to scan and click **Find** button on the lower left corner to find the duplicates. Check the recurse option to recursively search for duplicates in directories and sub-directories. The FSlint will quickly scan the given directory and list out them.
-
-
-
-From the list, choose the duplicates you want to clean and select any one of them given actions like Save, Delete, Merge and Symlink.
-
-In the **Advanced search parameters** tab, you can specify the paths to exclude while searching for duplicates.
-
-
-
-**FSlint command line options**
-
-FSlint provides a collection of the following CLI utilities to find duplicates in your filesystem:
-
- * **findup** — find DUPlicate files
- * **findnl** — find Name Lint (problems with filenames)
- * **findu8** — find filenames with invalid utf8 encoding
- * **findbl** — find Bad Links (various problems with symlinks)
- * **findsn** — find Same Name (problems with clashing names)
- * **finded** — find Empty Directories
- * **findid** — find files with dead user IDs
- * **findns** — find Non Stripped executables
- * **findrs** — find Redundant Whitespace in files
- * **findtf** — find Temporary Files
- * **findul** — find possibly Unused Libraries
- * **zipdir** — Reclaim wasted space in ext2 directory entries
-
-
-
-All of these utilities are available under **/usr/share/fslint/fslint/fslint** location.
-
-For example, to find duplicates in a given directory, do:
-
-```
-$ /usr/share/fslint/fslint/findup ~/Downloads/
-
-```
-
-Similarly, to find empty directories, the command would be:
-
-```
-$ /usr/share/fslint/fslint/finded ~/Downloads/
-
-```
-
-To get more details on each utility, for example **findup** , run:
-
-```
-$ /usr/share/fslint/fslint/findup --help
-
-```
-
-For more details about FSlint, refer the help section and man pages.
-
-```
-$ /usr/share/fslint/fslint/fslint --help
-
-$ man fslint
-
-```
-
-##### Conclusion
-
-You know now about three tools to find and delete unwanted duplicate files in Linux. Among these three tools, I often use Rdfind. It doesn’t mean that the other two utilities are not efficient, but I am just happy with Rdfind so far. Well, it’s your turn. Which is your favorite tool and why? Let us know them in the comment section below.
-
-And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/
-[2]: https://www.ostechnix.com/explaining-soft-link-and-hard-link-in-linux-with-examples/
-[3]: https://aur.archlinux.org/packages/rdfind/
-[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
-[5]: https://aur.archlinux.org/packages/fslint/
diff --git a/sources/tech/20180928 10 handy Bash aliases for Linux.md b/sources/tech/20180928 10 handy Bash aliases for Linux.md
deleted file mode 100644
index 7ae1070997..0000000000
--- a/sources/tech/20180928 10 handy Bash aliases for Linux.md
+++ /dev/null
@@ -1,118 +0,0 @@
-translating---geekpi
-
-10 handy Bash aliases for Linux
-======
-Get more efficient by using condensed versions of long Bash commands.
-
-
-
-How many times have you repeatedly typed out a long command on the command line and wished there was a way to save it for later? This is where Bash aliases come in handy. They allow you to condense long, cryptic commands down to something easy to remember and use. Need some examples to get you started? No problem!
-
-To use a Bash alias you've created, you need to add it to your .bash_profile file, which is located in your home folder. Note that this file is hidden and accessible only from the command line. The easiest way to work with this file is to use something like Vi or Nano.
-
-### 10 handy Bash aliases
-
- 1. How many times have you needed to unpack a .tar file and couldn't remember the exact arguments needed? Aliases to the rescue! Just add the following to your .bash_profile file and then use **untar FileName** to unpack any .tar file.
-
-
-
-```
-alias untar='tar -zxvf '
-
-```
-
- 2. Want to download something but be able to resume if something goes wrong?
-
-
-
-```
-alias wget='wget -c '
-
-```
-
- 3. Need to generate a random, 20-character password for a new online account? No problem.
-
-
-
-```
-alias getpass="openssl rand -base64 20"
-
-```
-
- 4. Downloaded a file and need to test the checksum? We've got that covered too.
-
-
-
-```
-alias sha='shasum -a 256 '
-
-```
-
- 5. A normal ping will go on forever. We don't want that. Instead, let's limit that to just five pings.
-
-
-
-```
-alias ping='ping -c 5'
-
-```
-
- 6. Start a web server in any folder you'd like.
-
-
-
-```
-alias www='python -m SimpleHTTPServer 8000'
-
-```
-
- 7. Want to know how fast your network is? Just download Speedtest-cli and use this alias. You can choose a server closer to your location by using the **speedtest-cli --list** command.
-
-
-
-```
-alias speed='speedtest-cli --server 2406 --simple'
-
-```
-
- 8. How many times have you needed to know your external IP address and had no idea how to get that info? Yeah, me too.
-
-
-
-```
-alias ipe='curl ipinfo.io/ip'
-
-```
-
- 9. Need to know your local IP address?
-
-
-
-```
-alias ipi='ipconfig getifaddr en0'
-
-```
-
- 10. Finally, let's clear the screen.
-
-
-
-```
-alias c='clear'
-
-```
-
-As you can see, Bash aliases are a super-easy way to simplify your life on the command line. Want more info? I recommend a quick Google search for "Bash aliases" or a trip to GitHub.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/9/handy-bash-aliases
-
-作者:[Patrick H.Mullins][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/pmullins
diff --git a/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md b/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md
deleted file mode 100644
index afb66e43ee..0000000000
--- a/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md
+++ /dev/null
@@ -1,111 +0,0 @@
-A Free And Secure Online PDF Conversion Suite
-======
-
-
-
-We are always in search for a better and more efficient solution that can make our lives more convenient. That is why when you are working with PDF documents you need a fast and reliable tool that you can use in every situation. Therefore, we wanted to introduce you to **EasyPDF** Online PDF Suite for every occasion. The promise behind this tool is that it can make your PDF management easier and we tested it to check that claim.
-
-But first, here are the most important things you need to know about EasyPDF:
-
- * EasyPDF is free and anonymous online PDF Conversion Suite.
- * Convert PDF to Word, Excel, PowerPoint, AutoCAD, JPG, GIF and Text.
- * Create PDF from Word, PowerPoint, JPG, Excel files and many other formats.
- * Manipulate PDFs with PDF Merge, Split and Compress.
- * OCR conversion of scanned PDFs and images.
- * Upload files from your device or the Cloud (Google Drive and DropBox).
- * Available on Windows, Linux, Mac, and smartphones via any browser.
- * Multiple languages supported.
-
-
-
-### EasyPDF User Interface
-
-
-
-One of the first things that catches your eye is the sleek user interface which gives the tool clean and functional environment in where you can work comfortably. The whole experience is even better because there are no ads on a website at all.
-
-All different types of conversions have their dedicated menu with a simple box to add files, so you don’t have to wonder about what you need to do.
-
-Most websites aren’t optimized to work well and run smoothly on mobile phones, but EasyPDF is an exception from that rule. It opens almost instantly on smartphone and is easy to navigate. You can also add it as the shortcut on your home screen from the **three dots menu** on the Chrome app.
-
-
-
-### Functionality
-
-Apart from looking nice, EasyPDF is pretty straightforward to use. You **don’t need to register** or leave an **email** to use the tool. It is completely anonymous. Additionally, it doesn’t put any limitations to the number or size of files for conversion. No installation required either! Cool, yeah?
-
-You choose a desired conversion format, for example, PDF to Word. Select the PDF file you want to convert. You can upload a file from the device by either drag & drop or selecting the file from the folder. There is also an option to upload a document from [**Google Drive**][1] or [**Dropbox**][2].
-
-After you choose the file, press the Convert button to start the conversion process. You won’t wait for a long time to get your file because conversion will finish in a minute. If you have some more files to convert, remember to download the file before you proceed further. If you don’t download the document first, you will lose it.
-
-
-
-For a different type of conversion, return to the homepage.
-
-The currently available types of conversions are:
-
- * **PDF to Word** – Convert PDF documents to Word documents
-
- * **PDF to PowerPoint** – Convert PDF documents to PowerPoint Presentations
-
- * **PDF to Excel** – Convert PDF documents to Excel documents
-
- * **PDF Creation** – Create PDF documents from any type of file (E.g text, doc, odt)
-
- * **Word to PDF** – Convert Word documents to PDF documents
-
- * **JPG to PDF** – Convert JPG images to PDF documents
-
- * **PDF to AutoCAD** – Convert PDF documents to .dwg format (DWG is native format for CAD packages)
-
- * **PDF to Text** – Convert PDF documents to Text documents
-
- * **PDF Split** – Split PDF files into multiple parts
-
- * **PDF Merge** – Merge multiple PDF files into one
-
- * **PDF Compress** – Compress PDF documents
-
- * **PDF to JPG** – Convert PDF documents to JPG images
-
- * **PDF to PNG** – Convert PDF documents to PNG images
-
- * **PDF to GIF** – Convert PDF documents to GIF files
-
- * **OCR Online** –
-
-Convert scanned paper documents
-
-to editable files (E.g Word, Excel, Text)
-
-
-
-
-Want to give it a try? Great! Click the following link and start converting!
-
-[][https://easypdf.com/]
-
-### Conclusion
-
-EasyPDF lives up to its name and enables easier PDF management. As far as I tested EasyPDF service, It offers out of the box conversion feature completely **FREE!** It is fast, secure and reliable. You will find the quality of services most satisfying without having to pay anything or leaving your personal data like email address. Give it a try and who knows maybe you will find your new favorite PDF tool.
-
-And, that’s all for now. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/
-[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/
diff --git a/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md b/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md
deleted file mode 100644
index 578624aba4..0000000000
--- a/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md
+++ /dev/null
@@ -1,233 +0,0 @@
-Translating by dianbanjiu How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions
-======
-**Brief: This tutorial shows you how to install Popcorn Time on Ubuntu and other Linux distributions. Some handy Popcorn Time tips have also been discussed.**
-
-[Popcorn Time][1] is an open source [Netflix][2] inspired [torrent][3] streaming application for Linux, Mac and Windows.
-
-With the regular torrents, you have to wait for the download to finish before you could watch the videos.
-
-[Popcorn Time][4] is different. It uses torrent underneath but allows you to start watching the videos (almost) immediately. It’s like you are watching videos on streaming websites like YouTube or Netflix. You don’t have to wait for the download to finish here.
-
-![Popcorn Time in Ubuntu Linux][5]
-Popcorn Time
-
-If you want to watch movies online without those creepy ads, Popcorn Time is a good alternative. Keep in mind that the streaming quality depends on the number of available seeds.
-
-Popcorn Time also provides a nice user interface where you can browse through available movies, tv-series and other contents. If you ever used [Netflix on Linux][6], you will find it’s somewhat a similar experience.
-
-Using torrent to download movies is illegal in several countries where there are strict laws against piracy. In countries like the USA, UK and West European you may even get legal notices. That said, it’s up to you to decide if you want to use it or not. You have been warned.
-(If you still want to take the risk and use Popcorn Time, you should use a VPN service like [Ivacy][7] that has been specifically designed for using Torrents and protecting your identity. Even then it’s not always easy to avoid the snooping authorities.)
-
-Some of the main features of Popcorn Time are:
-
- * Watch movies and TV Series online using Torrent
- * A sleek user interface lets you browse the available movies and TV series
- * Change streaming quality
- * Bookmark content for watching later
- * Download content for offline viewing
- * Ability to enable subtitles by default, change the subtitles size etc
- * Keyboard shortcuts to navigate through Popcorn Time
-
-
-
-### How to install Popcorn Time on Ubuntu and other Linux Distributions
-
-I am using Ubuntu 18.04 in this tutorial but you can use the same instructions for other Linux distributions such as Linux Mint, Debian, Manjaro, Deepin etc.
-
-Let’s see how to install Popcorn time on Linux. It’s really easy actually. Simply follow the instructions and copy paste the commands I have mentioned.
-
-#### Step 1: Download Popcorn Time
-
-You can download Popcorn Time from its official website. The download link is present on the homepage itself.
-
-[Get Popcorn Time](https://popcorntime.sh/)
-
-#### Step 2: Install Popcorn Time
-
-Once you have downloaded Popcorn Time, it’s time to use it. The downloaded file is a tar file that consists of an executable among other files. While you can extract this tar file anywhere, the [Linux convention is to install additional software in][8] /[opt directory.][8]
-
-Create a new directory in /opt:
-
-```
-sudo mkdir /opt/popcorntime
-```
-
-Now go to the Downloads directory.
-
-```
-cd ~/Downloads
-```
-
-Extract the downloaded Popcorn Time files into the newly created /opt/popcorntime directory.
-
-```
-sudo tar Jxf Popcorn-Time-* -C /opt/popcorntime
-```
-
-#### Step 3: Make Popcorn Time accessible for everyone
-
-You would want every user on your system to be able to run Popcorn Time without sudo access, right? To do that, you need to create a [symbolic link][9] to the executable in /usr/bin directory.
-
-```
-ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time
-```
-
-#### Step 4: Create desktop launcher for Popcorn Time
-
-So far so good. But you would also like to see Popcorn Time in the application menu, add it to your favorite application list etc.
-
-For that, you need to create a desktop entry.
-
-Open a terminal and create a new file named popcorntime.desktop in /usr/share/applications.
-
-You can use any [command line based text editor][10]. Ubuntu has [Nano][11] installed by default so you can use that.
-
-```
-sudo nano /usr/share/applications/popcorntime.desktop
-```
-
-Insert the following lines here:
-
-```
-[Desktop Entry]
-Version = 1.0
-Type = Application
-Terminal = false
-Name = Popcorn Time
-Exec = /usr/bin/Popcorn-Time
-Icon = /opt/popcorntime/popcorn.png
-Categories = Application;
-```
-
-If you used Nano editor, save it using shortcut Ctrl+X. When asked for saving, enter Y and then press enter again to save and exit.
-
-We are almost there. One last thing to do here is to have the correct icon for Popcorn Time. For that, you can download a Popcorn Time icon and save it as popcorn.png in /opt/popcorntime directory.
-
-You can do that using the command below:
-
-```
-sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia/commons/d/df/Pctlogo.png
-
-```
-
-That’s it. Now you can search for Popcorn Time and click on it to launch it.
-
-![Popcorn Time installed on Ubuntu][12]
-Search for Popcorn Time in Menu
-
-On the first launch, you’ll have to accept the terms and conditions.
-
-![Popcorn Time in Ubuntu Linux][13]
-Accept the Terms of Service
-
-Once you do that, you can enjoy the movies and TV shows.
-
-![Watch movies on Popcorn Time][14]
-
-Well, that’s all you needed to install Popcorn Time on Ubuntu or any other Linux distribution. You can start watching your favorite movies straightaway.
-
-However, if you are interested, I would suggest reading these Popcorn Time tips to get more out of it.
-
-[![][15]][16]
-![][17]
-
-### 7 Tips for using Popcorn Time effectively
-
-Now that you have installed Popcorn Time, I am going to tell you some nifty Popcorn Time tricks. I assure you that it will enhance your experience with Popcorn Time multiple folds.
-
-#### 1\. Use advanced settings
-
-Always have the advanced settings enabled. It gives you more options to tweak Popcorn Time. Go to the top right corner and click on the gear symbol. Click on it and check advanced settings on the next screen.
-
-
-
-#### 2\. Watch the movies in VLC or other players
-
-Did you know that you can choose to watch a file in your preferred media player instead of the default Popcorn Time player? Of course, that media player should have been installed in the system.
-
-Now you may ask why would one want to use another player. And my answer is because other players like VLC has hidden features which you might not find in the Popcorn Time player.
-
-For example, if a file has very low volume, you can use VLC to enhance the audio by 400 percent. You can also [synchronize incoherent subtitles with VLC][18]. You can switch between media players before you start to play a file:
-
-
-
-#### 3\. Bookmark movies and watch it later
-
-Just browsing through movies and TV series but don’t have time or mood to watch those? No issues. You can add the movies to the bookmark and can access these bookmarked videos from the Favorites tab. This enables you to create a list of movies you would watch later.
-
-
-
-#### 4\. Check torrent health and seed information
-
-As I had mentioned earlier, your viewing experience in Popcorn Times depends on torrent speed. Good thing is that Popcorn time shows the health of the torrent file so that you can be aware of the streaming speed.
-
-You will see a green/yellow/red dot on the file. Green means there are plenty of seeds and the file will stream easily. Yellow means a medium number of seeds, streaming should be okay. Red means there are very few seeds available and the streaming will be poor or won’t work at all.
-
-
-
-#### 5\. Add custom subtitles
-
-If you need subtitles and it is not available in your preferred language, you can add custom subtitles downloaded from external websites. Get the .srt files and use it inside Popcorn Time:
-
-
-
-This is where VLC comes handy as you can [download subtitles automatically with VLC][19].
-
-
-#### 6\. Save the files for offline viewing
-
-When Popcorn Times stream a content, it downloads it and store temporarily. When you close the app, it’s cleaned out. You can change this behavior so that the downloaded file remains there for your future use.
-
-In the advanced settings, scroll down a bit. Look for Cache directory. You can change this to some other directory like Downloads. This way, even if you close Popcorn Time, the file will be available for viewing.
-
-
-
-#### 7\. Drag and drop external torrent files to play immediately
-
-I bet you did not know about this one. If you don’t find a certain movie on Popcorn Time, download the torrent file from your favorite torrent website. Open Popcorn Time and just drag and drop the torrent file in Popcorn Time. It will start playing the file, depending upon seeds. This way, you don’t need to download the entire file before watching it.
-
-When you drag and drop the torrent file in Popcorn Time, it will give you the option to choose which video file should it play. If there are subtitles in it, it will play automatically or else, you can add external subtitles.
-
-
-
-There are plenty of other features in Popcorn Time. But I’ll stop with my list here and let you explore Popcorn Time on Ubuntu Linux. I hope you find these Popcorn Time tips and tricks useful.
-
-I am repeating again. Using Torrents is illegal in many countries. If you do that, take precaution and use a VPN service. If you are looking for my recommendation, you can go for [Swiss-based privacy company ProtonVPN][20] (of [ProtonMail][21] fame). Singapore based [Ivacy][7] is another good option. If you think these are expensive, you can look for [cheap VPN deals on It’s FOSS Shop][22].
-
-Note: This article contains affiliate links. Please read our [affiliate policy][23].
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/popcorn-time-ubuntu-linux/
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[1]: https://popcorntime.sh/
-[2]: https://netflix.com/
-[3]: https://en.wikipedia.org/wiki/Torrent_file
-[4]: https://en.wikipedia.org/wiki/Popcorn_Time
-[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-linux.jpeg
-[6]: https://itsfoss.com/netflix-firefox-linux/
-[7]: https://billing.ivacy.com/page/23628
-[8]: http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html
-[9]: https://en.wikipedia.org/wiki/Symbolic_link
-[10]: https://itsfoss.com/command-line-text-editors-linux/
-[11]: https://itsfoss.com/nano-3-release/
-[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-menu.jpg
-[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-license.jpeg
-[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-watch-movies.jpeg
-[15]: https://ivacy.postaffiliatepro.com/accounts/default1/vdegzkxbw/7f82d531.png
-[16]: https://billing.ivacy.com/page/23628/7f82d531
-[17]: http://ivacy.postaffiliatepro.com/scripts/vdegzkxiw?aff=23628&a_bid=7f82d531
-[18]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/
-[19]: https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/
-[20]: https://protonvpn.net/?aid=chmod777
-[21]: https://itsfoss.com/protonmail/
-[22]: https://shop.itsfoss.com/search?utf8=%E2%9C%93&query=vpn
-[23]: https://itsfoss.com/affiliate-policy/
diff --git a/sources/tech/20180928 Quiet log noise with Python and machine learning.md b/sources/tech/20180928 Quiet log noise with Python and machine learning.md
new file mode 100644
index 0000000000..f1fe2f1b7f
--- /dev/null
+++ b/sources/tech/20180928 Quiet log noise with Python and machine learning.md
@@ -0,0 +1,110 @@
+Quiet log noise with Python and machine learning
+======
+
+Logreduce saves debugging time by picking out anomalies from mountains of log data.
+
+
+
+Continuous integration (CI) jobs can generate massive volumes of data. When a job fails, figuring out what went wrong can be a tedious process that involves investigating logs to discover the root cause—which is often found in a fraction of the total job output. To make it easier to separate the most relevant data from the rest, the [Logreduce][1] machine learning model is trained using previous successful job runs to extract anomalies from failed runs' logs.
+
+This principle can also be applied to other use cases, for example, extracting anomalies from [Journald][2] or other systemwide regular log files.
+
+### Using machine learning to reduce noise
+
+A typical log file contains many nominal events ("baselines") along with a few exceptions that are relevant to the developer. Baselines may contain random elements such as timestamps or unique identifiers that are difficult to detect and remove. To remove the baseline events, we can use a [k-nearest neighbors pattern recognition algorithm][3] (k-NN).
+
+
+
+Log events must be converted to numeric values for k-NN regression. Using the generic feature extraction tool [HashingVectorizer][4] enables the process to be applied to any type of log. It hashes each word and encodes each event in a sparse matrix. To further reduce the search space, tokenization removes known random words, such as dates or IP addresses.
+
+
+
+Once the model is trained, the k-NN search tells us the distance of each new event from the baseline.
+
+
+
+This [Jupyter notebook][5] demonstrates the process and graphs the sparse matrix vectors.
+
+
+
+### Introducing Logreduce
+
+The Logreduce Python software transparently implements this process. Logreduce's initial goal was to assist with [Zuul CI][6] job failure analyses using the build database, and it is now integrated into the [Software Factory][7] development forge's job logs process.
+
+At its simplest, Logreduce compares files or directories and removes lines that are similar. Logreduce builds a model for each source file and outputs any of the target's lines whose distances are above a defined threshold by using the following syntax: **distance | filename:line-number: line-content**.
+
+```
+$ logreduce diff /var/log/audit/audit.log.1 /var/log/audit/audit.log
+INFO logreduce.Classifier - Training took 21.982s at 0.364MB/s (1.314kl/s) (8.000 MB - 28.884 kilo-lines)
+0.244 | audit.log:19963: type=USER_AUTH acct="root" exe="/usr/bin/su" hostname=managesf.sftests.com
+INFO logreduce.Classifier - Testing took 18.297s at 0.306MB/s (1.094kl/s) (5.607 MB - 20.015 kilo-lines)
+99.99% reduction (from 20015 lines to 1
+
+```
+
+A more advanced Logreduce use can train a model offline to be reused. Many variants of the baselines can be used to fit the k-NN search tree.
+
+```
+$ logreduce dir-train audit.clf /var/log/audit/audit.log.*
+INFO logreduce.Classifier - Training took 80.883s at 0.396MB/s (1.397kl/s) (32.001 MB - 112.977 kilo-lines)
+DEBUG logreduce.Classifier - audit.clf: written
+$ logreduce dir-run audit.clf /var/log/audit/audit.log
+```
+
+Logreduce also implements interfaces to discover baselines for Journald time ranges (days/weeks/months) and Zuul CI job build histories. It can also generate HTML reports that group anomalies found in multiple files in a simple interface.
+
+
+
+### Managing baselines
+
+The key to using k-NN regression for anomaly detection is to have a database of known good baselines, which the model uses to detect lines that deviate too far. This method relies on the baselines containing all nominal events, as anything that isn't found in the baseline will be reported as anomalous.
+
+CI jobs are great targets for k-NN regression because the job outputs are often deterministic and previous runs can be automatically used as baselines. Logreduce features Zuul job roles that can be used as part of a failed job post task in order to issue a concise report (instead of the full job's logs). This principle can be applied to other cases, as long as baselines can be constructed in advance. For example, a nominal system's [SoS report][8] can be used to find issues in a defective deployment.
+
+
+
+### Anomaly classification service
+
+The next version of Logreduce introduces a server mode to offload log processing to an external service where reports can be further analyzed. It also supports importing existing reports and requests to analyze a Zuul build. The services run analyses asynchronously and feature a web interface to adjust scores and remove false positives.
+
+
+
+Reviewed reports can be archived as a standalone dataset with the target log files and the scores for anomalous lines recorded in a flat JSON file.
+
+### Project roadmap
+
+Logreduce is already being used effectively, but there are many opportunities for improving the tool. Plans for the future include:
+
+ * Curating many annotated anomalies found in log files and producing a public domain dataset to enable further research. Anomaly detection in log files is a challenging topic, and having a common dataset to test new models would help identify new solutions.
+ * Reusing the annotated anomalies with the model to refine the distances reported. For example, when users mark lines as false positives by setting their distance to zero, the model could reduce the score of those lines in future reports.
+ * Fingerprinting archived anomalies to detect when a new report contains an already known anomaly. Thus, instead of reporting the anomaly's content, the service could notify the user that the job hit a known issue. When the issue is fixed, the service could automatically restart the job.
+ * Supporting more baseline discovery interfaces for targets such as SOS reports, Jenkins builds, Travis CI, and more.
+
+
+
+If you are interested in getting involved in this project, please contact us on the **#log-classify** Freenode IRC channel. Feedback is always appreciated!
+
+Tristan Cacqueray will present [Reduce your log noise using machine learning][9] at the [OpenStack Summit][10], November 13-15 in Berlin.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/9/quiet-log-noise-python-and-machine-learning
+
+作者:[Tristan de Cacqueray][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/tristanc
+[1]: https://pypi.org/project/logreduce/
+[2]: http://man7.org/linux/man-pages/man8/systemd-journald.service.8.html
+[3]: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
+[4]: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html
+[5]: https://github.com/TristanCacqueray/anomaly-detection-workshop-opendev/blob/master/datasets/notebook/anomaly-detection-with-scikit-learn.ipynb
+[6]: https://zuul-ci.org
+[7]: https://www.softwarefactory-project.io
+[8]: https://sos.readthedocs.io/en/latest/
+[9]: https://www.openstack.org/summit/berlin-2018/summit-schedule/speakers/4307
+[10]: https://www.openstack.org/summit/berlin-2018/
diff --git a/sources/tech/20180929 Use Cozy to Play Audiobooks in Linux.md b/sources/tech/20180929 Use Cozy to Play Audiobooks in Linux.md
new file mode 100644
index 0000000000..8e6583f046
--- /dev/null
+++ b/sources/tech/20180929 Use Cozy to Play Audiobooks in Linux.md
@@ -0,0 +1,138 @@
+Use Cozy to Play Audiobooks in Linux
+======
+**We review Cozy, an audiobook player for Linux. Read to find out if it’s worth to install Cozy on your Linux system or not.**
+
+![Audiobook player for Linux][1]
+
+Audiobooks are a great way to consume literature. Many people who don’t have time to read, choose to listen. Most people, myself included, just use a regular media player like VLC or [MPV][2] for listening to audiobooks on Linux.
+
+Today, we will look at a Linux application built solely for listening to audiobooks.
+
+![][3]Cozy Audiobook Player
+
+### Cozy Audiobook Player for Linux
+
+The [Cozy Audiobook Player][4] is created by [Julian Geywitz][5] from Germany. It is built using both Python and GTK+ 3. According to the site, Julian wrote Cozy on Fedora and optimized it for [elementary OS][6].
+
+The player borrows its layout from iTunes. The player controls are placed along the top of the application The library takes up the rest. You can sort all of your audiobooks based on the title, author and reader, and search very quickly.
+
+![][7]Initial setup
+
+When you first launch [Cozy][8], you are given the option to choose where you will store your audiobook files. Cozy will keep an eye on that folder and update your library as you add new audiobooks. You can also set it up to use an external or network drive.
+
+#### Features of Cozy
+
+Here is a full list of the features that [Cozy][9] has to offer.
+
+ * Import all your audiobooks into Cozy to browse them comfortably
+ * Sort your audiobooks by author, reader & title
+ * Remembers your playback position
+ * Sleep timer
+ * Playback speed control
+ * Search your audiobook library
+ * Add multiple storage locations
+ * Drag & Drop to import new audio books
+ * Support for DRM free mp3, m4a (aac, ALAC, …), flac, ogg, wav files
+ * Mpris integration (Media keys & playback info for the desktop environment)
+ * Developed on Fedora and tested under elementaryOS
+
+
+
+#### Experiencing Cozy
+
+![][10]Audiobook library
+
+At first, I was excited to try our Cozy because I like to listen to audiobooks. However, I ran into a couple of issues. There is no way to edit the information of an audiobook. For example, I downloaded a couple audiobooks from [LibriVox][11] to test it. All three audiobooks were listed under “Unknown” for the reader. There was nothing to edit or change the audiobook info. I guess you could edit all of the files, but that would take quite a bit of time.
+
+When I listen to an audiobook, I like to know what track is currently playing. Cozy only has a single progress bar for the whole audiobook. I know that Cozy is designed to remember where you left off in an audiobook, but if I was going to continue to listen to the audiobook on my phone, I would like to know what track I am on.
+
+![][12]Settings
+
+There was also an option in the setting menu to turn on a dark theme. As you can see in the screenshots, the application has a black theme, to begin with. I turned the option on, but nothing happened. There isn’t even an option to add a theme or change any of the colors. Overall, the application had a feeling of not being finished.
+
+#### Installing Cozy on Linux
+
+If you would like to install Cozy, you have several options for different distros.
+
+##### Ubuntu, Debian, openSUSE, Fedora
+
+Julian used the [openSUSE Build Service][13] to create custom repos for Ubuntu, Debian, openSUSE and Fedora. Each one only takes a couple terminal commands to install.
+
+##### Install Cozy using Flatpak on any Linux distribution (including Ubuntu)
+
+If your [distro supports Flatpak][14], you can install Cozy using the following commands:
+
+```
+flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+flatpak install --user flathub com.github.geigi.cozy
+```
+
+##### Install Cozy on elementary OS
+
+If you have elementary OS installed, you can install Cozy from the [built-in App Store][15].
+
+##### Install Cozy on Arch Linux
+
+Cozy is available in the [Arch User Repository][16]. All you have to do is search for `cozy-audiobooks`.
+
+### Where to find free Audiobooks?
+
+In order to try out this application, you will need to find some audiobooks to listen to. My favorite site for audiobooks is [LibriVox][11]. Since [LibriVox][17] depends on volunteers to record audiobooks, the quality can vary. However, there are a number of very talented readers.
+
+Here is a list of free audiobook sources:
+
++ [Open Culture][20]
++ [Project Gutenberg][21]
++ [Digitalbook.io][22]
++ [FreeClassicAudioBooks.com][23]
++ [MindWebs][24]
++ [Scribl][25]
+
+
+### Final Thoughts on Cozy
+
+For now, I think I’ll stick with my preferred audiobook software (VLC) for now. Cozy just doesn’t add anything. I won’t call it a [must-have application for Linux][18] just yet. There is no compelling reason for me to switch. Maybe, I’ll revisit it again in the future, maybe when it hits 1.0.
+
+Take Cozy for a spin. You might come to a different conclusion.
+
+Have you ever used Cozy? If not, what is your favorite audiobook player? What is your favorite source for free audiobooks? Let us know in the comments below.
+
+If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][19].
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/cozy-audiobook-player/
+
+作者:[John Paul][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/audiobook-player-linux.png
+[2]: https://itsfoss.com/mpv-video-player/
+[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/cozy3.jpg
+[4]: https://cozy.geigi.de/
+[5]: https://github.com/geigi
+[6]: https://elementary.io/
+[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/cozy1.jpg
+[8]: https://github.com/geigi/cozy
+[9]: https://www.patreon.com/geigi
+[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/cozy2.jpg
+[11]: https://librivox.org/
+[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/cozy4.jpg
+[13]: https://software.opensuse.org//download.html?project=home%3Ageigi&package=com.github.geigi.cozy
+[14]: https://itsfoss.com/flatpak-guide/
+[15]: https://elementary.io/store/
+[16]: https://aur.archlinux.org/
+[17]: https://archive.org/details/librivoxaudio
+[18]: https://itsfoss.com/essential-linux-applications/
+[19]: http://reddit.com/r/linuxusersgroup
+[20]: http://www.openculture.com/freeaudiobooks
+[21]: http://www.gutenberg.org/browse/categories/1
+[22]: https://www.digitalbook.io/
+[23]: http://freeclassicaudiobooks.com/
+[24]: https://archive.org/details/MindWebs_201410
+[25]: https://scribl.com/
diff --git a/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md
new file mode 100644
index 0000000000..9e07971c81
--- /dev/null
+++ b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md
@@ -0,0 +1,261 @@
+16 iptables tips and tricks for sysadmins
+======
+Iptables provides powerful capabilities to control traffic coming in and out of your system.
+
+
+
+Modern Linux kernels come with a packet-filtering framework named [Netfilter][1]. Netfilter enables you to allow, drop, and modify traffic coming in and going out of a system. The **iptables** userspace command-line tool builds upon this functionality to provide a powerful firewall, which you can configure by adding rules to form a firewall policy. [iptables][2] can be very daunting with its rich set of capabilities and baroque command syntax. Let's explore some of them and develop a set of iptables tips and tricks for many situations a system administrator might encounter.
+
+### Avoid locking yourself out
+
+Scenario: You are going to make changes to the iptables policy rules on your company's primary server. You want to avoid locking yourself—and potentially everybody else—out. (This costs time and money and causes your phone to ring off the wall.)
+
+#### Tip #1: Take a backup of your iptables configuration before you start working on it.
+
+Back up your configuration with the command:
+
+```
+/sbin/iptables-save > /root/iptables-works
+
+```
+#### Tip #2: Even better, include a timestamp in the filename.
+
+Add the timestamp with the command:
+
+```
+/sbin/iptables-save > /root/iptables-works-`date +%F`
+
+```
+
+You get a file with a name like:
+
+```
+/root/iptables-works-2018-09-11
+
+```
+
+If you do something that prevents your system from working, you can quickly restore it:
+
+```
+/sbin/iptables-restore < /root/iptables-works-2018-09-11
+
+```
+
+#### Tip #3: Every time you create a backup copy of the iptables policy, create a link to the file with 'latest' in the name.
+
+```
+ln –s /root/iptables-works-`date +%F` /root/iptables-works-latest
+
+```
+
+#### Tip #4: Put specific rules at the top of the policy and generic rules at the bottom.
+
+Avoid generic rules like this at the top of the policy rules:
+
+```
+iptables -A INPUT -p tcp --dport 22 -j DROP
+
+```
+
+The more criteria you specify in the rule, the less chance you will have of locking yourself out. Instead of the very generic rule above, use something like this:
+
+```
+iptables -A INPUT -p tcp --dport 22 –s 10.0.0.0/8 –d 192.168.100.101 -j DROP
+
+```
+
+This rule appends ( **-A** ) to the **INPUT** chain a rule that will **DROP** any packets originating from the CIDR block **10.0.0.0/8** on TCP ( **-p tcp** ) port 22 ( **\--dport 22** ) destined for IP address 192.168.100.101 ( **-d 192.168.100.101** ).
+
+There are plenty of ways you can be more specific. For example, using **-i eth0** will limit the processing to a single NIC in your server. This way, the filtering actions will not apply the rule to **eth1**.
+
+#### Tip #5: Whitelist your IP address at the top of your policy rules.
+
+This is a very effective method of not locking yourself out. Everybody else, not so much.
+
+```
+iptables -I INPUT -s -j ACCEPT
+
+```
+
+You need to put this as the first rule for it to work properly. Remember, **-I** inserts it as the first rule; **-A** appends it to the end of the list.
+
+#### Tip #6: Know and understand all the rules in your current policy.
+
+Not making a mistake in the first place is half the battle. If you understand the inner workings behind your iptables policy, it will make your life easier. Draw a flowchart if you must. Also remember: What the policy does and what it is supposed to do can be two different things.
+
+### Set up a workstation firewall policy
+
+Scenario: You want to set up a workstation with a restrictive firewall policy.
+
+#### Tip #1: Set the default policy as DROP.
+
+```
+# Set a default policy of DROP
+*filter
+:INPUT DROP [0:0]
+:FORWARD DROP [0:0]
+:OUTPUT DROP [0:0]
+```
+
+#### Tip #2: Allow users the minimum amount of services needed to get their work done.
+
+The iptables rules need to allow the workstation to get an IP address, netmask, and other important information via DHCP ( **-p udp --dport 67:68 --sport 67:68** ). For remote management, the rules need to allow inbound SSH ( **\--dport 22** ), outbound mail ( **\--dport 25** ), DNS ( **\--dport 53** ), outbound ping ( **-p icmp** ), Network Time Protocol ( **\--dport 123 --sport 123** ), and outbound HTTP ( **\--dport 80** ) and HTTPS ( **\--dport 443** ).
+
+```
+# Set a default policy of DROP
+*filter
+:INPUT DROP [0:0]
+:FORWARD DROP [0:0]
+:OUTPUT DROP [0:0]
+
+# Accept any related or established connections
+-I INPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
+-I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
+
+# Allow all traffic on the loopback interface
+-A INPUT -i lo -j ACCEPT
+-A OUTPUT -o lo -j ACCEPT
+
+# Allow outbound DHCP request
+-A OUTPUT –o eth0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT
+
+# Allow inbound SSH
+-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
+
+# Allow outbound email
+-A OUTPUT -i eth0 -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT
+
+# Outbound DNS lookups
+-A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT
+
+# Outbound PING requests
+-A OUTPUT –o eth0 -p icmp -j ACCEPT
+
+# Outbound Network Time Protocol (NTP) requests
+-A OUTPUT –o eth0 -p udp --dport 123 --sport 123 -j ACCEPT
+
+# Outbound HTTP
+-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
+-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
+
+COMMIT
+```
+
+### Restrict an IP address range
+
+Scenario: The CEO of your company thinks the employees are spending too much time on Facebook and not getting any work done. The CEO tells the CIO to do something about the employees wasting time on Facebook. The CIO tells the CISO to do something about employees wasting time on Facebook. Eventually, you are told the employees are wasting too much time on Facebook, and you have to do something about it. You decide to block all access to Facebook. First, find out Facebook's IP address by using the **host** and **whois** commands.
+
+```
+host -t a www.facebook.com
+www.facebook.com is an alias for star.c10r.facebook.com.
+star.c10r.facebook.com has address 31.13.65.17
+whois 31.13.65.17 | grep inetnum
+inetnum: 31.13.64.0 - 31.13.127.255
+```
+
+Then convert that range to CIDR notation by using the [CIDR to IPv4 Conversion][3] page. You get **31.13.64.0/18**. To prevent outgoing access to [www.facebook.com][4], enter:
+
+```
+iptables -A OUTPUT -p tcp -i eth0 –o eth1 –d 31.13.64.0/18 -j DROP
+```
+
+### Regulate by time
+
+Scenario: The backlash from the company's employees over denying access to Facebook access causes the CEO to relent a little (that and his administrative assistant's reminding him that she keeps HIS Facebook page up-to-date). The CEO decides to allow access to Facebook.com only at lunchtime (12PM to 1PM). Assuming the default policy is DROP, use iptables' time features to open up access.
+
+```
+iptables –A OUTPUT -p tcp -m multiport --dport http,https -i eth0 -o eth1 -m time --timestart 12:00 --timestart 12:00 –timestop 13:00 –d
+31.13.64.0/18 -j ACCEPT
+```
+
+This command sets the policy to allow ( **-j ACCEPT** ) http and https ( **-m multiport --dport http,https** ) between noon ( **\--timestart 12:00** ) and 13PM ( **\--timestop 13:00** ) to Facebook.com ( **–d[31.13.64.0/18][5]** ).
+
+### Regulate by time—Take 2
+
+Scenario: During planned downtime for system maintenance, you need to deny all TCP and UDP traffic between the hours of 2AM and 3AM so maintenance tasks won't be disrupted by incoming traffic. This will take two iptables rules:
+
+```
+iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 -j DROP
+iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 -j DROP
+```
+
+With these rules, TCP and UDP traffic ( **-p tcp and -p udp** ) are denied ( **-j DROP** ) between the hours of 2AM ( **\--timestart 02:00** ) and 3AM ( **\--timestop 03:00** ) on input ( **-A INPUT** ).
+
+### Limit connections with iptables
+
+Scenario: Your internet-connected web servers are under attack by bad actors from around the world attempting to DoS (Denial of Service) them. To mitigate these attacks, you restrict the number of connections a single IP address can have to your web server:
+
+```
+iptables –A INPUT –p tcp –syn -m multiport -–dport http,https –m connlimit -–connlimit-above 20 –j REJECT -–reject-with-tcp-reset
+```
+
+Let's look at what this rule does. If a host makes more than 20 ( **-–connlimit-above 20** ) new connections ( **–p tcp –syn** ) in a minute to the web servers ( **-–dport http,https** ), reject the new connection ( **–j REJECT** ) and tell the connecting host you are rejecting the connection ( **-–reject-with-tcp-reset** ).
+
+### Monitor iptables rules
+
+Scenario: Since iptables operates on a "first match wins" basis as packets traverse the rules in a chain, frequently matched rules should be near the top of the policy and less frequently matched rules should be near the bottom. How do you know which rules are traversed the most or the least so they can be ordered nearer the top or the bottom?
+
+#### Tip #1: See how many times each rule has been hit.
+
+Use this command:
+
+```
+iptables -L -v -n –line-numbers
+```
+
+The command will list all the rules in the chain ( **-L** ). Since no chain was specified, all the chains will be listed with verbose output ( **-v** ) showing packet and byte counters in numeric format ( **-n** ) with line numbers at the beginning of each rule corresponding to that rule's position in the chain.
+
+Using the packet and bytes counts, you can order the most frequently traversed rules to the top and the least frequently traversed rules towards the bottom.
+
+#### Tip #2: Remove unnecessary rules.
+
+Which rules aren't getting any matches at all? These would be good candidates for removal from the policy. You can find that out with this command:
+
+```
+iptables -nvL | grep -v "0 0"
+```
+
+Note: that's not a tab between the zeros; there are five spaces between the zeros.
+
+#### Tip #3: Monitor what's going on.
+
+You would like to monitor what's going on with iptables in real time, like with **top**. Use this command to monitor the activity of iptables activity dynamically and show only the rules that are actively being traversed:
+
+```
+watch --interval=5 'iptables -nvL | grep -v "0 0"'
+```
+
+**watch** runs **'iptables -nvL | grep -v "0 0"'** every five seconds and displays the first screen of its output. This allows you to watch the packet and byte counts change over time.
+
+### Report on iptables
+
+Scenario: Your manager thinks this iptables firewall stuff is just great, but a daily activity report would be even better. Sometimes it's more important to write a report than to do the work.
+
+Use the packet filter/firewall/IDS log analyzer [FWLogwatch][6] to create reports based on the iptables firewall logs. FWLogwatch supports many log formats and offers many analysis options. It generates daily and monthly summaries of the log files, allowing the security administrator to free up substantial time, maintain better control over network security, and reduce unnoticed attacks.
+
+Here is sample output from FWLogwatch:
+
+
+
+### More than just ACCEPT and DROP
+
+We've covered many facets of iptables, all the way from making sure you don't lock yourself out when working with iptables to monitoring iptables to visualizing the activity of an iptables firewall. These will get you started down the path to realizing even more iptables tips and tricks.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/iptables-tips-and-tricks
+
+作者:[Gary Smith][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/greptile
+[1]: https://en.wikipedia.org/wiki/Netfilter
+[2]: https://en.wikipedia.org/wiki/Iptables
+[3]: http://www.ipaddressguide.com/cidr
+[4]: http://www.facebook.com
+[5]: http://31.13.64.0/18
+[6]: http://fwlogwatch.inside-security.de/
diff --git a/sources/tech/20181001 How to Install Pip on Ubuntu.md b/sources/tech/20181001 How to Install Pip on Ubuntu.md
new file mode 100644
index 0000000000..8751dc50f9
--- /dev/null
+++ b/sources/tech/20181001 How to Install Pip on Ubuntu.md
@@ -0,0 +1,179 @@
+How to Install Pip on Ubuntu
+======
+**Pip is a command line tool that allows you to install software packages written in Python. Learn how to install Pip on Ubuntu and how to use it for installing Python applications.**
+
+There are numerous ways to [install software on Ubuntu][1]. You can install applications from the software center, from downloaded DEB files, from PPA, from [Snap packages][2], [using Flatpak][3], using [AppImage][4] and even from the good old source code.
+
+There is one more way to install packages in [Ubuntu][5]. It’s called Pip and you can use it to install Python-based applications.
+
+### What is Pip
+
+[Pip][6] stands for “Pip Installs Packages”. [Pip][7] is a command line based package management system. It is used to install and manage software written in [Python language][8].
+
+You can use Pip to install packages listed in the Python Package Index ([PyPI][9]).
+
+As a software developer, you can use pip to install various Python module and packages for your own Python projects.
+
+As an end user, you may need pip in order to install some applications that are developed using Python and can be installed easily using pip. One such example is [Stress Terminal][10] application that you can easily install with pip.
+
+Let’s see how you can install pip on Ubuntu and other Ubuntu-based distributions.
+
+### How to install Pip on Ubuntu
+
+![Install pip on Ubuntu Linux][11]
+
+Pip is not installed on Ubuntu by default. You’ll have to install it. Installing pip on Ubuntu is really easy. I’ll show it to you in a moment.
+
+Ubuntu 18.04 has both Python 2 and Python 3 installed by default. And hence, you should install pip for both Python versions.
+
+Pip, by default, refers to the Python 2. Pip in Python 3 is referred by pip3.
+
+Note: I am using Ubuntu 18.04 in this tutorial. But the instructions here should be valid for other versions like Ubuntu 16.04, 18.10 etc. You may also use the same commands on other Linux distributions based on Ubuntu such as Linux Mint, Linux Lite, Xubuntu, Kubuntu etc.
+
+#### Install pip for Python 2
+
+First, make sure that you have Python 2 installed. On Ubuntu, use the command below to verify.
+
+```
+python2 --version
+
+```
+
+If there is no error and a valid output that shows the Python version, you have Python 2 installed. So now you can install pip for Python 2 using this command:
+
+```
+sudo apt install python-pip
+
+```
+
+It will install pip and a number of other dependencies with it. Once installed, verify that you have pip installed correctly.
+
+```
+pip --version
+
+```
+
+It should show you a version number, something like this:
+
+```
+pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)
+
+```
+
+This mans that you have successfully installed pip on Ubuntu.
+
+#### Install pip for Python 3
+
+You have to make sure that Python 3 is installed on Ubuntu. To check that, use this command:
+
+```
+python3 --version
+
+```
+
+If it shows you a number like Python 3.6.6, Python 3 is installed on your Linux system.
+
+Now, you can install pip3 using the command below:
+
+```
+sudo apt install python3-pip
+
+```
+
+You should verify that pip3 has been installed correctly using this command:
+
+```
+pip3 --version
+
+```
+
+It should show you a number like this:
+
+```
+pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
+
+```
+
+It means that pip3 is successfully installed on your system.
+
+### How to use Pip command
+
+Now that you have installed pip, let’s quickly see some of the basic pip commands. These commands will help you use pip commands for searching, installing and removing Python packages.
+
+To search packages from the Python Package Index, you can use the following pip command:
+
+```
+pip search
+
+```
+
+For example, if you search or stress, it will show all the packages that have the string ‘stress’ in its name or description.
+
+```
+pip search stress
+stress (1.0.0) - A trivial utility for consuming system resources.
+s-tui (0.8.2) - Stress Terminal UI stress test and monitoring tool
+stressypy (0.0.12) - A simple program for calling stress and/or stress-ng from python
+fuzzing (0.3.2) - Tools for stress testing applications.
+stressant (0.4.1) - Simple stress-test tool
+stressberry (0.1.7) - Stress tests for the Raspberry Pi
+mobbage (0.2) - A HTTP stress test and benchmark tool
+stresser (0.2.1) - A large-scale stress testing framework.
+cyanide (1.3.0) - Celery stress testing and integration test support.
+pysle (1.5.7) - An interface to ISLEX, a pronunciation dictionary with stress markings.
+ggf (0.3.2) - global geometric factors and corresponding stresses of the optical stretcher
+pathod (0.17) - A pathological HTTP/S daemon for testing and stressing clients.
+MatPy (1.0) - A toolbox for intelligent material design, and automatic yield stress determination
+netblow (0.1.2) - Vendor agnostic network testing framework to stress network failures
+russtress (0.1.3) - Package that helps you to put lexical stress in russian text
+switchy (0.1.0a1) - A fast FreeSWITCH control library purpose-built on traffic theory and stress testing.
+nx4_selenium_test (0.1) - Provides a Python class and apps which monitor and/or stress-test the NoMachine NX4 web interface
+physical_dualism (1.0.0) - Python library that approximates the natural frequency from stress via physical dualism, and vice versa.
+fsm_effective_stress (1.0.0) - Python library that uses the rheological-dynamical analogy (RDA) to compute damage and effective buckling stress in prismatic shell structures.
+processpathway (0.3.11) - A nifty little toolkit to create stress-free, frustrationless image processing pathways from your webcam for computer vision experiments. Or observing your cat.
+
+```
+
+If you want to install an application using pip, you can use it in the following manner:
+
+```
+pip install
+
+```
+
+Pip doesn’t support tab completion so the package name should be exact. It will download all the necessary files and installed that package.
+
+If you want to remove a Python package installed via pip, you can use the remove option in pip.
+
+```
+pip uninstall
+
+```
+
+You can use pip3 instead of pip in the above commands.
+
+I hope this quick tip helped you to install pip on Ubuntu. If you have any questions or suggestions, please let me know in the comment section below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-pip-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[1]: https://itsfoss.com/how-to-add-remove-programs-in-ubuntu/
+[2]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/
+[3]: https://itsfoss.com/flatpak-guide/
+[4]: https://itsfoss.com/use-appimage-linux/
+[5]: https://www.ubuntu.com/
+[6]: https://en.wikipedia.org/wiki/Pip_(package_manager)
+[7]: https://pypi.org/project/pip/
+[8]: https://www.python.org/
+[9]: https://pypi.org/
+[10]: https://itsfoss.com/stress-terminal-ui/
+[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/install-pip-ubuntu.png
diff --git a/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md b/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md
new file mode 100644
index 0000000000..bd79cb3c04
--- /dev/null
+++ b/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md
@@ -0,0 +1,263 @@
+Turn your book into a website and an ePub using Pandoc
+======
+Write once, publish twice using Markdown and Pandoc.
+
+
+
+Pandoc is a command-line tool for converting files from one markup language to another. In my [introduction to Pandoc][1], I explained how to convert text written in Markdown into a website, a slideshow, and a PDF.
+
+In this follow-up article, I'll dive deeper into [Pandoc][2], showing how to produce a website and an ePub book from the same Markdown source file. I'll use my upcoming e-book, [GRASP Principles for the Object-Oriented Mind][3], which I created using this process, as an example.
+
+First I will explain the file structure used for the book, then how to use Pandoc to generate a website and deploy it in GitHub. Finally, I demonstrate how to generate its companion ePub book.
+
+You can find the code in my [Programming Fight Club][4] GitHub repository.
+
+### Setting up the writing structure
+
+I do all of my writing in Markdown syntax. You can also use HTML, but the more HTML you introduce the highest risk that problems arise when Pandoc converts Markdown to an ePub document. My books follow the one-chapter-per-file pattern. Declare chapters using the Markdown heading H1 ( **#** ). You can put more than one chapter in each file, but putting them in separate files makes it easier to find content and do updates later.
+
+The meta-information follows a similar pattern: each output format has its own meta-information file. Meta-information files define information about your documents, such as text to add to your HTML or the license of your ePub. I store all of my Markdown documents in a folder named parts (this is important for the Makefile that generates the website and ePub). As an example, let's take the table of contents, the preface, and the about chapters (divided into the files toc.md, preface.md, and about.md) and, for clarity, we will leave out the remaining chapters.
+
+My about file might begin like:
+
+```
+# About this book {-}
+
+## Who should read this book {-}
+
+Before creating a complex software system one needs to create a solid foundation.
+General Responsibility Assignment Software Principles (GRASP) are guidelines to assign
+responsibilities to software classes in object-oriented programming.
+```
+
+Once the chapters are finished, the next step is to add meta-information to setup the format for the website and the ePub.
+
+### Generating the website
+
+#### Create the HTML meta-information file
+
+The meta-information file (web-metadata.yaml) for my website is a simple YAML file that contains information about the author, title, rights, content for the **< head>** tag, and content for the beginning and end of the HTML file.
+
+I recommend (at minimum) including the following fields in the web-metadata.yaml file:
+
+```
+---
+title: GRASP principles for the Object-oriented mind
+author: Kiko Fernandez-Reyes
+rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
+header-includes:
+- |
+ \```{=html}
+
+
+ \```
+include-before:
+- |
+ \```{=html}
+
+ \```
+---
+```
+
+Some variables to note:
+
+ * The **header-includes** variable contains HTML that will be embedded inside the **< head>** tag.
+ * The line after calling a variable must be **\- |**. The next line must begin with triple backquotes that are aligned with the **|** or Pandoc will reject it. **{=html}** tells Pandoc that this is raw text and should not be processed as Markdown. (For this to work, you need to check that the **raw_attribute** extension in Pandoc is enabled. To check, type **pandoc --list-extensions | grep raw** and make sure the returned list contains an item named **+raw_html** ; the plus sign indicates it is enabled.)
+ * The variable **include-before** adds some HTML at the beginning of your website, and I ask readers to consider spreading the word or buying me a coffee.
+ * The **include-after** variable appends raw HTML at the end of the website and shows my book's license.
+
+
+
+These are only some of the fields available; take a look at the template variables in HTML (my article [introduction to Pandoc][1] covered this for LaTeX but the process is the same for HTML) to learn about others.
+
+#### Split the website into chapters
+
+The website can be generated as a whole, resulting in a long page with all the content, or split into chapters, which I think is easier to read. I'll explain how to divide the website into chapters so the reader doesn't get intimidated by a long website.
+
+To make the website easy to deploy on GitHub Pages, we need to create a root folder called docs (which is the root folder that GitHub Pages uses by default to render a website). Then we need to create folders for each chapter under docs, place the HTML chapters in their own folders, and the file content in a file named index.html.
+
+For example, the about.md file is converted to a file named index.html that is placed in a folder named about (about/index.html). This way, when users type **http:// /about/**, the index.html file from the folder about will be displayed in their browser.
+
+The following Makefile does all of this:
+
+```
+# Your book files
+DEPENDENCIES= toc preface about
+
+# Placement of your HTML files
+DOCS=docs
+
+all: web
+
+web: setup $(DEPENDENCIES)
+ @cp $(DOCS)/toc/index.html $(DOCS)
+
+
+# Creation and copy of stylesheet and images into
+# the assets folder. This is important to deploy the
+# website to Github Pages.
+setup:
+ @mkdir -p $(DOCS)
+ @cp -r assets $(DOCS)
+
+
+# Creation of folder and index.html file on a
+# per-chapter basis
+
+$(DEPENDENCIES):
+ @mkdir -p $(DOCS)/$@
+ @pandoc -s --toc web-metadata.yaml parts/$@.md \
+ -c /assets/pandoc.css -o $(DOCS)/$@/index.html
+
+clean:
+ @rm -rf $(DOCS)
+
+.PHONY: all clean web setup
+```
+
+The option **-c /assets/pandoc.css** declares which CSS stylesheet to use; it will be fetched from **/assets/pandoc.css**. In other words, inside the **< head>** HTML tag, Pandoc adds the following line:
+
+```
+
+```
+
+To generate the website, type:
+
+```
+make
+```
+
+The root folder should contain now the following structure and files:
+
+```
+.---parts
+| |--- toc.md
+| |--- preface.md
+| |--- about.md
+|
+|---docs
+ |--- assets/
+ |--- index.html
+ |--- toc
+ | |--- index.html
+ |
+ |--- preface
+ | |--- index.html
+ |
+ |--- about
+ |--- index.html
+
+```
+
+#### Deploy the website
+
+To deploy the website on GitHub, follow these steps:
+
+ 1. Create a new repository
+ 2. Push your content to the repository
+ 3. Go to the GitHub Pages section in the repository's Settings and select the option for GitHub to use the content from the Master branch
+
+
+
+You can get more details on the [GitHub Pages][5] site.
+
+Check out [my book's website][6], generated using this process, to see the result.
+
+### Generating the ePub book
+
+#### Create the ePub meta-information file
+
+The ePub meta-information file, epub-meta.yaml, is similar to the HTML meta-information file. The main difference is that ePub offers other template variables, such as **publisher** and **cover-image**. Your ePub book's stylesheet will probably differ from your website's; mine uses one named epub.css.
+
+```
+---
+title: 'GRASP principles for the Object-oriented Mind'
+publisher: 'Programming Language Fight Club'
+author: Kiko Fernandez-Reyes
+rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
+cover-image: assets/cover.png
+stylesheet: assets/epub.css
+...
+```
+
+Add the following content to the previous Makefile:
+
+```
+epub:
+ @pandoc -s --toc epub-meta.yaml \
+ $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub
+```
+
+The command for the ePub target takes all the dependencies from the HTML version (your chapter names), appends to them the Markdown extension, and prepends them with the path to the folder chapters' so Pandoc knows how to process them. For example, if **$(DEPENDENCIES)** was only **preface about** , then the Makefile would call:
+
+```
+@pandoc -s --toc epub-meta.yaml \
+parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub
+```
+
+Pandoc would take these two chapters, combine them, generate an ePub, and place the book under the Assets folder.
+
+Here's an [example][7] of an ePub created using this process.
+
+### Summarizing the process
+
+The process to create a website and an ePub from a Markdown file isn't difficult, but there are a lot of details. The following outline may make it easier for you to follow.
+
+ * HTML book:
+ * Write chapters in Markdown
+ * Add metadata
+ * Create a Makefile to glue pieces together
+ * Set up GitHub Pages
+ * Deploy
+ * ePub book:
+ * Reuse chapters from previous work
+ * Add new metadata file
+ * Create a Makefile to glue pieces together
+ * Set up GitHub Pages
+ * Deploy
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc
+
+作者:[Kiko Fernandez-Reyes][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/kikofernandez
+[1]: https://opensource.com/article/18/9/intro-pandoc
+[2]: https://pandoc.org/
+[3]: https://www.programmingfightclub.com/
+[4]: https://github.com/kikofernandez/programmingfightclub
+[5]: https://pages.github.com/
+[6]: https://www.programmingfightclub.com/grasp-principles/
+[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub
diff --git a/sources/tech/20181002 4 open source invoicing tools for small businesses.md b/sources/tech/20181002 4 open source invoicing tools for small businesses.md
new file mode 100644
index 0000000000..29589a6ad1
--- /dev/null
+++ b/sources/tech/20181002 4 open source invoicing tools for small businesses.md
@@ -0,0 +1,76 @@
+4 open source invoicing tools for small businesses
+======
+Manage your billing and get paid with easy-to-use, web-based invoicing software.
+
+
+
+No matter what your reasons for starting a small business, the key to keeping that business going is getting paid. Getting paid usually means sending a client an invoice.
+
+It's easy enough to whip up an invoice using LibreOffice Writer or LibreOffice Calc, but sometimes you need a bit more. A more professional look. A way of keeping track of your invoices. Reminders about when to follow up on the invoices you've sent.
+
+There's a wide range of commercial and closed-source invoicing tools out there. But the offerings on the open source side of the fence are just as good, and maybe even more flexible, than their closed source counterparts.
+
+Let's take a look at four web-based open source invoicing tools that are great choices for freelancers and small businesses on a tight budget. I reviewed two of them in 2014, in an [earlier version][1] of this article. These four picks are easy to use and you can use them on just about any device.
+
+### Invoice Ninja
+
+I've never been a fan of the term ninja. Despite that, I like [Invoice Ninja][2]. A lot. It melds a simple interface with a set of features that let you create, manage, and send invoices to clients and customers.
+
+You can easily configure multiple clients, track payments and outstanding invoices, generate quotes, and email invoices. What sets Invoice Ninja apart from its competitors is its [integration with][3] over 40 online popular payment gateways, including PayPal, Stripe, WePay, and Apple Pay.
+
+[Download][4] a version that you can install on your own server or get an account with the [hosted version][5] of Invoice Ninja. There's a free version and a paid tier that will set you back US$ 8 a month.
+
+### InvoicePlane
+
+Once upon a time, there was a nifty open source invoicing tool called FusionInvoice. One day, its creators took the latest version of the code proprietary. That didn't end happily, as FusionInvoice's doors were shut for good in 2018. But that wasn't the end of the application. An old version of the code stayed open source and morphed into [InvoicePlane][6], which packs all of FusionInvoice's goodness.
+
+Creating an invoice takes just a couple of clicks. You can make them as minimal or detailed as you need. When you're ready, you can email your invoices or output them as PDFs. You can also create recurring invoices for clients or customers you regularly bill.
+
+InvoicePlane does more than generate and track invoices. You can also create quotes for jobs or goods, track products you sell, view and enter payments, and run reports on your invoices.
+
+[Grab the code][7] and install it on your web server. Or, if you're not quite ready to do that, [take the demo][8] for a spin.
+
+### OpenSourceBilling
+
+Described by its developer as "beautifully simple billing software," [OpenSourceBilling][9] lives up to the description. It has one of the cleanest interfaces I've seen, which makes configuring and using the tool a breeze.
+
+OpenSourceBilling stands out because of its dashboard, which tracks your current and past invoices, as well as any outstanding amounts. Your information is broken up into graphs and tables, which makes it easy to follow.
+
+You do much of the configuration on the invoice itself. You can add items, tax rates, clients, and even payment terms with a click and a few keystrokes. OpenSourceBilling saves that information across all of your invoices, both new and old.
+
+As with some of the other tools we've looked at, OpenSourceBilling has a [demo][10] you can try.
+
+### BambooInvoice
+
+When I was a full-time freelance writer and consultant, I used [BambooInvoice][11] to bill my clients. When its original developer stopped working on the software, I was a bit disappointed. But BambooInvoice is back, and it's as good as ever.
+
+What attracted me to BambooInvoice is its simplicity. It does one thing and does it well. You can create and edit invoices, and BambooInvoice keeps track of them by client and by the invoice numbers you assign to them. It also lets you know which invoices are open or overdue. You can email the invoices from within the application or generate PDFs. You can also run reports to keep tabs on your income.
+
+To [install][12] and use BambooInvoice, you'll need a web server running PHP 5 or newer as well as a MySQL database. Chances are you already have access to one, so you're good to go.
+
+Do you have a favorite open source invoicing tool? Feel free to share it by leaving a comment.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/open-source-invoicing-tools
+
+作者:[Scott Nesbitt][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/scottnesbitt
+[1]: https://opensource.com/business/14/9/4-open-source-invoice-tools
+[2]: https://www.invoiceninja.org/
+[3]: https://www.invoiceninja.com/integrations/
+[4]: https://github.com/invoiceninja/invoiceninja
+[5]: https://www.invoiceninja.com/invoicing-pricing-plans/
+[6]: https://invoiceplane.com/
+[7]: https://wiki.invoiceplane.com/en/1.5/getting-started/installation
+[8]: https://demo.invoiceplane.com/
+[9]: http://www.opensourcebilling.org/
+[10]: http://demo.opensourcebilling.org/
+[11]: https://www.bambooinvoice.net/
+[12]: https://sourceforge.net/projects/bambooinvoice/
diff --git a/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md b/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md
new file mode 100644
index 0000000000..a58aa55ffd
--- /dev/null
+++ b/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md
@@ -0,0 +1,78 @@
+translating by singledo
+
+How to use the SSH and SFTP protocols on your home network
+======
+
+Use the SSH and SFTP protocols to access other devices, efficiently and securely transfer files, and more.
+
+
+
+Years ago, I decided to set up an extra computer (I always have extra computers) so that I could access it from work to transfer files I might need. To do this, the basic first step is to have your ISP assign a fixed IP address.
+
+The not-so-basic but much more important next step is to set up your accessible system safely. In this particular case, I was planning to access it only from work, so I could restrict access to that IP address. Even so, you want to use all possible security features. What is amazing—and scary—is that as soon as you set this up, people from all over the world will immediately attempt to access your system. You can discover this by checking the logs. I presume there are bots constantly searching for open doors wherever they can find them.
+
+Not long after I set up my computer, I decided my access was more a toy than a need, so I turned it off and gave myself one less thing to worry about. Nonetheless, there is another use for SSH and SFTP inside your home network, and it is more or less already set up for you.
+
+One requirement, of course, is that the other computer in your home must be turned on, although it doesn’t matter whether someone is logged on or not. You also need to know its IP address. There are two ways to find this out. One is to get access to the router, which you can do through a browser. Typically, its address is something like **192.168.1.254**. With some searching, it should be easy enough to find out what is currently on and hooked up to the system by eth0 or WiFi. What can be challenging is recognizing the computer you’re interested in.
+
+I find it easier to go to the computer in question, bring up a shell, and type:
+
+```
+ifconfig
+
+```
+
+This spits out a lot of information, but the bit you want is right after `inet` and might look something like **192.168.1.234**. After you find that, go back to the client computer you want to access this host, and on the command line, type:
+
+```
+ssh gregp@192.168.1.234
+
+```
+
+For this to work, **gregp** must be a valid user on that system. You will then be asked for his password, and if you enter it correctly, you will be connected to that other computer in a shell environment. I confess that I don’t use SSH in this way very often. I have used it at times so I can run `dnf` to upgrade some other computer than the one I’m sitting at. Usually, I use SFTP:
+
+```
+sftp gregp@192.168.1.234
+
+```
+
+because I have a greater need for an easy method of transferring files from one computer to another. It’s certainly more convenient and less time-consuming than using a USB stick or an external drive.
+
+`get`, to receive files from the host; and `put`, to send files to the host. I usually migrate to the directory on my client where I either want to save files I will get from the host or send to the host before I connect. When you connect, you will be in the top-level directory—in this example, **home/gregp**. Once connected, you can then use `cd` just as you would in your client, except now you’re changing your working directory on the host. You may need to use `ls` to make sure you know where you are.
+
+Once you’re connected, the two basic commands for SFTP are, to receive files from the host; and, to send files to the host. I usually migrate to the directory on my client where I either want to save files I will get from the host or send to the host before I connect. When you connect, you will be in the top-level directory—in this example,. Once connected, you can then usejust as you would in your client, except now you’re changing your working directory on the host. You may need to useto make sure you know where you are.
+
+If you need to change the working directory on your client, use the command `lcd` (as in **local change directory** ). Similarly, use `lls` to show the working directory contents on your client system.
+
+What if the host doesn’t have a directory with the name you would like? Use `mkdir` to make a new directory on it. Or you might copy a whole directory of files to the host with this:
+
+```
+put -r ThisDir/
+
+```
+
+which creates the directory and then copies all of its files and subdirectories to the host. These transfers are extremely fast, as fast as your hardware allows, and have none of the bottlenecks you might encounter on the internet. To see a list of commands you can use in an SFTP session, check:
+
+```
+man sftp
+
+```
+
+I have also been able to put SFTP to use on a Windows VM on my computer, yet another advantage of setting up a VM rather than a dual-boot system. This lets me move files to or from the Linux part of the system. So far I have only done this using a client in Windows.
+
+You can also use SSH and SFTP to access any devices connected to your router by wire or WiFi. For a while, I used an app called [SSHDroid][1], which runs SSH in a passive mode. In other words, you use your computer to access the Android device that is the host. Recently I found another app, [Admin Hands][2], where the tablet or phone is the client and can be used for either SSH or SFTP operations. This app is great for backing up or sharing photos from your phone.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/ssh-sftp-home-network
+
+作者:[Geg Pittman][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/greg-p
+[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid
+[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US
diff --git a/sources/tech/20181003 Introducing Swift on Fedora.md b/sources/tech/20181003 Introducing Swift on Fedora.md
new file mode 100644
index 0000000000..186117cd7c
--- /dev/null
+++ b/sources/tech/20181003 Introducing Swift on Fedora.md
@@ -0,0 +1,72 @@
+translating---geekpi
+
+Introducing Swift on Fedora
+======
+
+
+
+Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. It aims to be the best language for a variety of programming projects, ranging from systems programming to desktop applications and scaling up to cloud services. Read more about it and how to try it out in Fedora.
+
+### Safe, Fast, Expressive
+
+Like many modern programming languages, Swift was designed to be safer than C-based languages. For example, variables are always initialized before they can be used. Arrays and integers are checked for overflow. Memory is automatically managed.
+
+Swift puts intent right in the syntax. To declare a variable, use the var keyword. To declare a constant, use let.
+
+Swift also guarantees that objects can never be nil; in fact, trying to use an object known to be nil will cause a compile-time error. When using a nil value is appropriate, it supports a mechanism called **optionals**. An optional may contain nil, but is safely unwrapped using the **?** operator.
+
+Some additional features include:
+
+ * Closures unified with function pointers
+ * Tuples and multiple return values
+ * Generics
+ * Fast and concise iteration over a range or collection
+ * Structs that support methods, extensions, and protocols
+ * Functional programming patterns, e.g., map and filter
+ * Powerful error handling built-in
+ * Advanced control flow with do, guard, defer, and repeat keywords
+
+
+
+### Try Swift out
+
+Swift is available in Fedora 28 under then package name **swift-lang**. Once installed, run swift and the REPL console starts up.
+
+```
+$ swift
+Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance.
+ 1> let greeting="Hello world!"
+greeting: String = "Hello world!"
+ 2> print(greeting)
+Hello world!
+ 3> greeting = "Hello universe!"
+error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant
+greeting = "Hello universe!"
+~~~~~~~~ ^
+
+
+ 3>
+
+```
+
+Swift has a growing community, and in particular, a [work group][1] dedicated to making it an efficient and effective server-side programming language. Be sure to visit [its home page][2] for more ways to get involved.
+
+Photo by [Uillian Vargas][3] on [Unsplash][4].
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/introducing-swift-fedora/
+
+作者:[Link Dupont][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/linkdupont/
+[1]: https://swift.org/server/
+[2]: http://swift.org
+[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md b/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md
new file mode 100644
index 0000000000..e45d96470f
--- /dev/null
+++ b/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md
@@ -0,0 +1,128 @@
+Oomox – Customize And Create Your Own GTK2, GTK3 Themes
+======
+
+
+
+Theming and Visual customization is one of the main advantages of Linux. Since all the code is open, you can change how your Linux system looks and behaves to a greater degree than you ever could with Windows/Mac OS. GTK theming is perhaps the most popular way in which people customize their Linux desktops. The GTK toolkit is used by a wide variety of desktop environments like Gnome, Cinnamon, Unity, XFCE, and budgie. This means that a single theme made for GTK can be applied to any of these Desktop Environments with little changes.
+
+There are a lot of very high quality popular GTK themes out there, such as **Arc** , **Numix** , and **Adapta**. But if you want to customize these themes and create your own visual design, you can use **Oomox**.
+
+The Oomox is a graphical app for customizing and creating your own GTK theme complete with your own color, icon and terminal style. It comes with several presets, which you can apply on a Numix, Arc, or Materia style theme to create your own GTK theme.
+
+### Installing Oomox
+
+On Arch Linux and its variants:
+
+Oomox is available on [**AUR**][1], so you can install it using any AUR helper programs like [**Yay**][2].
+
+```
+$ yay -S oomox
+
+```
+
+On Debian/Ubuntu/Linux Mint, download `oomox.deb`package from [**here**][3] and install it as shown below. As of writing this guide, the latest version was **oomox_1.7.0.5.deb**.
+
+```
+$ sudo dpkg -i oomox_1.7.0.5.deb
+$ sudo apt install -f
+
+```
+
+On Fedora, Oomox is available in third-party **COPR** repository.
+
+```
+$ sudo dnf copr enable tcg/themes
+$ sudo dnf install oomox
+
+```
+
+Oomox is also available as a [**Flatpak app**][4]. Make sure you have installed Flatpak as described in [**this guide**][5]. And then, install and run Oomox using the following commands:
+
+```
+$ flatpak install flathub com.github.themix_project.Oomox
+
+$ flatpak run com.github.themix_project.Oomox
+
+```
+
+For other Linux distributions, go to the Oomox project page (Link is given at the end of this guide) on Github and compile and install it manually from source.
+
+### Customize And Create Your Own GTK2, GTK3 Themes
+
+**Theme Customization**
+
+
+
+You can change the colour of practically every UI element, like:
+
+ 1. Headers
+ 2. Buttons
+ 3. Buttons inside Headers
+ 4. Menus
+ 5. Selected Text
+
+
+
+To the left, there are a number of presets, like the Cars theme, modern themes like Materia, and Numix, and retro themes. Then, at the top of the main window, there’s an option called **Theme Style** , that lets you set the overall visual style of the theme. You can choose from between Numix, Arc, and Materia.
+
+With certain styles like Numix, you can even change things like the Header Gradient, Outline Width and Panel Opacity. You can also add a Dark Mode for your theme that will be automatically created from the default theme.
+
+
+
+**Iconset Customization**
+
+You can customize the iconset that will be used for the theme icons. There are 2 options – Gnome Colors and Archdroid. You an change the base, and stroke colours of the iconset.
+
+**Terminal Customization**
+
+You can also customize the terminal colours. The app has several presets for this, but you can customize the exact colour code for each colour value like red, green,black, and so on. You can also auto swap the foreground and background colours.
+
+**Spotify Theme**
+
+A unique feature this app has is that you can theme the spotify app to your liking. You can change the foreground, background, and accent color of the spotify app to match the overall GTK theme.
+
+Then, just press the **Apply Spotify Theme** button, and you’ll get this window:
+
+
+
+Just hit apply, and you’re done.
+
+**Exporting your Theme**
+
+Once you’re done customizing the theme to your liking, you can rename it by clicking the rename button at the top left:
+
+
+
+And then, just hit **Export Theme** to export the theme to your system.
+
+
+
+You can also just export just the Iconset or the terminal theme.
+
+After this, you can open any Visual Customization app for your Desktop Environment, like Tweaks for Gnome based DEs, or the **XFCE Appearance Settings** , and select your exported GTK and Shell theme.
+
+### Verdict
+
+If you are a Linux theme junkie, and you know exactly how each button, how each header in your system should look like, Oomox is worth a look. For extreme customizers, it lets you change virtually everything about how your system looks. For people who just want to tweak an existing theme a little bit, it has many, many presets so you can get what you want without a lot of effort.
+
+Have you tried it? What are your thoughts on Oomox? Put them in the comments below!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/oomox-customize-and-create-your-own-gtk2-gtk3-themes/
+
+作者:[EDITOR][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/editor/
+[1]: https://aur.archlinux.org/packages/oomox/
+[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[3]: https://github.com/themix-project/oomox/releases
+[4]: https://flathub.org/apps/details/com.github.themix_project.Oomox
+[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
diff --git a/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md
new file mode 100644
index 0000000000..fda48f1622
--- /dev/null
+++ b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md
@@ -0,0 +1,75 @@
+translating---geekpi
+
+Tips for listing files with ls at the Linux command line
+======
+Learn some of the Linux 'ls' command's most useful variations.
+
+
+One of the first commands I learned in Linux was `ls`. Knowing what’s in a directory where a file on your system resides is important. Being able to see and modify not just some but all of the files is also important.
+
+My first LInux cheat sheet was the [One Page Linux Manual][1] , which was released in1999 and became my go-to reference. I taped it over my desk and referred to it often as I began to explore Linux. Listing files with `ls -l` is introduced on the first page, at the bottom of the first column.
+
+Later, I would learn other iterations of this most basic command. Through the `ls` command, I began to learn about the complexity of the Linux file permissions and what was mine and what required root or sudo permission to change. I became very comfortable on the command line over time, and while I still use `ls -l` to find files in the directory, I frequently use `ls -al` so I can see hidden files that might need to be changed, like configuration files.
+
+According to an article by Eric Fischer about the `ls` command in the [Linux Documentation Project][2], the command's roots go back to the `listf` command on MIT’s Compatible Time Sharing System in 1961. When CTSS was replaced by [Multics][3], the command became `list`, with switches like `list -all`. According to [Wikipedia][4], `ls` appeared in the original version of AT&T Unix. The `ls` command we use today on Linux systems comes from the [GNU Core Utilities][5].
+
+Most of the time, I use only a couple of iterations of the command. Looking inside a directory with `ls` or `ls -al` is how I generally use the command, but there are many other options that you should be familiar with.
+
+`$ ls -l` provides a simple list of the directory:
+
+
+
+Using the man pages of my Fedora 28 system, I find that there are many other options to `ls`, all of which provide interesting and useful information about the Linux file system. By entering `man ls` at the command prompt, we can begin to explore some of the other options:
+
+
+
+To sort the directory by file sizes, use `ls -lS`:
+
+
+
+To list the contents in reverse order, use `ls -lr`:
+
+
+
+To list contents by columns, use `ls -c`:
+
+
+
+`ls -al` provides a list of all the files in the same directory:
+
+
+
+Here are some additional options that I find useful and interesting:
+
+ * List only the .txt files in the directory: `ls *.txt`
+ * List by file size: `ls -s`
+ * Sort by time and date: `ls -d`
+ * Sort by extension: `ls -X`
+ * Sort by file size: `ls -S`
+ * Long format with file size: `ls -ls`
+ * List only the .txt files in a directory: `ls *.txt`
+
+
+
+To generate a directory list in the specified format and send it to a file for later viewing, enter `ls -al > mydirectorylist`. Finally, one of the more exotic commands I found is `ls -R`, which provides a recursive list of all the directories on your computer and their contents.
+
+For a complete list of the all the iterations of the `ls` command, refer to the [GNU Core Utilities][6].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/ls-command
+
+作者:[Don Watkins][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[1]: http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf
+[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html
+[3]: https://en.wikipedia.org/wiki/Multics
+[4]: https://en.wikipedia.org/wiki/Ls
+[5]: http://www.gnu.org/s/coreutils/
+[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation
diff --git a/sources/tech/20181004 Archiving web sites.md b/sources/tech/20181004 Archiving web sites.md
new file mode 100644
index 0000000000..558c057913
--- /dev/null
+++ b/sources/tech/20181004 Archiving web sites.md
@@ -0,0 +1,119 @@
+Archiving web sites
+======
+
+I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web.
+
+### Converting simple sites
+
+The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, spurious upgrade, or unpatched vulnerability might lose data. In my previous life as web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with "move fast and break things" attitude of web development. Working with the [Drupal][2] content-management system (CMS) was particularly challenging in that regard as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites but also for third-party sites that are outside of your control and you might want to safeguard.
+
+For simple or static sites, the venerable [Wget][3] program works well. The incantation to mirror a full web site, however, is byzantine:
+
+```
+ $ nice wget --mirror --execute robots=off --no-verbose --convert-links \
+ --backup-converted --page-requisites --adjust-extension \
+ --base=./ --directory-prefix=./ --span-hosts \
+ --domains=www.example.com,example.com http://www.example.com/
+
+```
+
+The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores [`robots.txt`][] rules, as is now [common practice for archivists][4], and hammer the website as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site.
+
+The above command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site.
+
+That is, when things go well. Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a `--reject-regex` option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well.
+
+### JavaScript doom
+
+Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. In theory, web sites should be using [progressive enhancement][5] to have content and functionality available without JavaScript but those directives are rarely followed, as anyone using plugins like [NoScript][6] or [uMatrix][7] will confirm.
+
+Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper ([pamplemousse.ca][8]), I found that WordPress adds query strings (e.g. `?ver=1.12.4`) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right `Content-Type` header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic websites.
+
+As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl and indeed professional archivists use just such an approach.
+
+### Creating and displaying WARC files
+
+At the [Internet Archive][9], Brewster Kahle and Mike Burner designed the [ARC][10] (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") [specification][11] that was released as an ISO standard in 2009 and revised in 2017. The standardization effort was led by the [International Internet Preservation Consortium][12] (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based [Heritrix crawler][13].
+
+A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the `--warc` parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is [pywb][14], a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. The following set of commands will render a WARC file on `http://localhost:8080/`:
+
+```
+ $ pip install pywb
+ $ wb-manager init example
+ $ wb-manager add example crawl.warc.gz
+ $ wayback
+
+```
+
+This tool was, incidentally, built by the folks behind the [Webrecorder][15] service, which can use a web browser to save dynamic page contents.
+
+Unfortunately, pywb has trouble loading WARC files generated by Wget because it [followed][16] an [inconsistency in the 1.0 specification][17], which was [fixed in the 1.1 specification][18]. Until Wget or pywb fix those problems, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called [crawl][19]. Here is how it is invoked:
+
+```
+ $ crawl https://example.com/
+
+```
+
+(It does say "very simple" in the README.) The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the `-exclude-related` flag is used), but does not recurse out of the domain. By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the `-c` flag. But, best of all, the resulting WARC files load perfectly in pywb.
+
+### Future work and alternatives
+
+There are plenty more [resources][20] for using WARC files. In particular, there's a Wget drop-in replacement called [Wpull][21] that is specifically designed for archiving web sites. It has experimental support for [PhantomJS][22] and [youtube-dl][23] integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called [ArchiveBot][24], which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at [ArchiveTeam][25] in its struggle to "save the history before it's lost forever". It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. For example, [snscrape][26] will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is [crocoite][27], which uses the Chrome browser in headless mode to archive JavaScript-heavy sites.
+
+This article would also not be complete without a nod to the [HTTrack][28] project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line.
+
+In the same vein, during my research I found a full rewrite of Wget called [Wget2][29] that has support for multi-threaded operation, which might make it faster than its predecessor. It is [missing some features][30] from Wget, however, most notably reject patterns, WARC output, and FTP support but adds RSS, DNS caching, and improved TLS support.
+
+Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in [Wallabag][31], a self-hosted "read it later" service designed as a free-software alternative to [Pocket][32] (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually [unreadable][33] and Wallabag sometimes [fails to parse the article][34]. Instead, other tools like [bookmark-archiver][35] or [reminiscence][36] save a screenshot of the page along with full HTML but, unfortunately, no WARC file that would allow an even more faithful replay.
+
+The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. For those who do not want to go through that trouble, the Internet Archive seems to be here to stay and Archive Team is obviously [working on a backup of the Internet Archive itself][37].
+
+--------------------------------------------------------------------------------
+
+via: https://anarc.at/blog/2018-10-04-archiving-web-sites/
+
+作者:[Anarcat][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://anarc.at
+[1]: https://anarc.at/blog
+[2]: https://drupal.org
+[3]: https://www.gnu.org/software/wget/
+[4]: https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/
+[5]: https://en.wikipedia.org/wiki/Progressive_enhancement
+[6]: https://noscript.net/
+[7]: https://github.com/gorhill/uMatrix
+[8]: https://pamplemousse.ca/
+[9]: https://archive.org
+[10]: http://www.archive.org/web/researcher/ArcFileFormat.php
+[11]: https://iipc.github.io/warc-specifications/
+[12]: https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium
+[13]: https://github.com/internetarchive/heritrix3/wiki
+[14]: https://github.com/webrecorder/pywb
+[15]: https://webrecorder.io/
+[16]: https://github.com/webrecorder/pywb/issues/294
+[17]: https://github.com/iipc/warc-specifications/issues/23
+[18]: https://github.com/iipc/warc-specifications/pull/24
+[19]: https://git.autistici.org/ale/crawl/
+[20]: https://archiveteam.org/index.php?title=The_WARC_Ecosystem
+[21]: https://github.com/chfoo/wpull
+[22]: http://phantomjs.org/
+[23]: http://rg3.github.io/youtube-dl/
+[24]: https://www.archiveteam.org/index.php?title=ArchiveBot
+[25]: https://archiveteam.org/
+[26]: https://github.com/JustAnotherArchivist/snscrape
+[27]: https://github.com/PromyLOPh/crocoite
+[28]: http://www.httrack.com/
+[29]: https://gitlab.com/gnuwget/wget2
+[30]: https://gitlab.com/gnuwget/wget2/wikis/home
+[31]: https://wallabag.org/
+[32]: https://getpocket.com/
+[33]: https://github.com/wallabag/wallabag/issues/2825
+[34]: https://github.com/wallabag/wallabag/issues/2914
+[35]: https://pirate.github.io/bookmark-archiver/
+[36]: https://github.com/kanishka-linux/reminiscence
+[37]: http://iabak.archiveteam.org
diff --git a/sources/tech/20181004 Functional programming in Python- Immutable data structures.md b/sources/tech/20181004 Functional programming in Python- Immutable data structures.md
new file mode 100644
index 0000000000..e6050d52f9
--- /dev/null
+++ b/sources/tech/20181004 Functional programming in Python- Immutable data structures.md
@@ -0,0 +1,191 @@
+Translating by Ryze-Borgia
+Functional programming in Python: Immutable data structures
+======
+Immutability can help us better understand our code. Here's how to achieve it without sacrificing performance.
+
+
+
+In this two-part series, I will discuss how to import ideas from the functional programming methodology into Python in order to have the best of both worlds.
+
+This first post will explore how immutable data structures can help. The second part will explore higher-level functional programming concepts in Python using the **toolz** library.
+
+Why functional programming? Because mutation is hard to reason about. If you are already convinced that mutation is problematic, great. If you're not convinced, you will be by the end of this post.
+
+Let's begin by considering squares and rectangles. If we think in terms of interfaces, neglecting implementation details, are squares a subtype of rectangles?
+
+The definition of a subtype rests on the [Liskov substitution principle][1]. In order to be a subtype, it must be able to do everything the supertype does.
+
+How would we define an interface for a rectangle?
+
+```
+from zope.interface import Interface
+
+class IRectangle(Interface):
+ def get_length(self):
+ """Squares can do that"""
+ def get_width(self):
+ """Squares can do that"""
+ def set_dimensions(self, length, width):
+ """Uh oh"""
+```
+
+If this is the definition, then squares cannot be a subtype of rectangles; they cannot respond to a `set_dimensions` method if the length and width are different.
+
+A different approach is to choose to make rectangles immutable.
+
+```
+class IRectangle(Interface):
+ def get_length(self):
+ """Squares can do that"""
+ def get_width(self):
+ """Squares can do that"""
+ def with_dimensions(self, length, width):
+ """Returns a new rectangle"""
+```
+
+Now, a square can be a rectangle. It can return a new rectangle (which would not usually be a square) when `with_dimensions` is called, but it would not stop being a square.
+
+This might seem like an academic problem—until we consider that squares and rectangles are, in a sense, a container for their sides. After we understand this example, the more realistic case this comes into play with is more traditional containers. For example, consider random-access arrays.
+
+We have `ISquare` and `IRectangle`, and `ISquare` is a subtype of `IRectangle`.
+
+We want to put rectangles in a random-access array:
+
+```
+class IArrayOfRectangles(Interface):
+ def get_element(self, i):
+ """Returns Rectangle"""
+ def set_element(self, i, rectangle):
+ """'rectangle' can be any IRectangle"""
+```
+
+We want to put squares in a random-access array too:
+
+```
+class IArrayOfSquare(Interface):
+ def get_element(self, i):
+ """Returns Square"""
+ def set_element(self, i, square):
+ """'square' can be any ISquare"""
+```
+
+Even though `ISquare` is a subtype of `IRectangle`, no array can implement both `IArrayOfSquare` and `IArrayOfRectangle`.
+
+Why not? Assume `bucket` implements both.
+
+```
+>>> rectangle = make_rectangle(3, 4)
+>>> bucket.set_element(0, rectangle) # This is allowed by IArrayOfRectangle
+>>> thing = bucket.get_element(0) # That has to be a square by IArrayOfSquare
+>>> assert thing.height == thing.width
+Traceback (most recent call last):
+ File "", line 1, in
+AssertionError
+```
+
+Being unable to implement both means that neither is a subtype of the other, even though `ISquare` is a subtype of `IRectangle`. The problem is the `set_element` method: If we had a read-only array, `IArrayOfSquare` would be a subtype of `IArrayOfRectangle`.
+
+Mutability, in both the mutable `IRectangle` interface and the mutable `IArrayOf*` interfaces, has made thinking about types and subtypes much more difficult—and giving up on the ability to mutate meant that the intuitive relationships we expected to have between the types actually hold.
+
+Mutation can also have non-local effects. This happens when a shared object between two places is mutated by one. The classic example is one thread mutating a shared object with another thread, but even in a single-threaded program, sharing between places that are far apart is easy. Consider that in Python, most objects are reachable from many places: as a module global, or in a stack trace, or as a class attribute.
+
+If we cannot constrain the sharing, we might think about constraining the mutability.
+
+Here is an immutable rectangle, taking advantage of the [attrs][2] library:
+
+```
+@attr.s(frozen=True)
+class Rectange(object):
+ length = attr.ib()
+ width = attr.ib()
+ @classmethod
+ def with_dimensions(cls, length, width):
+ return cls(length, width)
+```
+
+Here is a square:
+
+```
+@attr.s(frozen=True)
+class Square(object):
+ side = attr.ib()
+ @classmethod
+ def with_dimensions(cls, length, width):
+ return Rectangle(length, width)
+```
+
+Using the `frozen` argument, we can easily have `attrs`-created classes be immutable. All the hard work of writing `__setitem__` correctly has been done by others and is completely invisible to us.
+
+It is still easy to modify objects; it's just nigh impossible to mutate them.
+
+```
+too_long = Rectangle(100, 4)
+reasonable = attr.evolve(too_long, length=10)
+```
+
+The [Pyrsistent][3] package allows us to have immutable containers.
+
+```
+# Vector of integers
+a = pyrsistent.v(1, 2, 3)
+# Not a vector of integers
+b = a.set(1, "hello")
+```
+
+While `b` is not a vector of integers, nothing will ever stop `a` from being one.
+
+What if `a` was a million elements long? Is `b` going to copy 999,999 of them? Pyrsistent comes with "big O" performance guarantees: All operations take `O(log n)` time. It also comes with an optional C extension to improve performance beyond the big O.
+
+For modifying nested objects, it comes with a concept of "transformers:"
+
+```
+blog = pyrsistent.m(
+ title="My blog",
+ links=pyrsistent.v("github", "twitter"),
+ posts=pyrsistent.v(
+ pyrsistent.m(title="no updates",
+ content="I'm busy"),
+ pyrsistent.m(title="still no updates",
+ content="still busy")))
+new_blog = blog.transform(["posts", 1, "content"],
+ "pretty busy")
+```
+
+`new_blog` will now be the immutable equivalent of
+
+```
+{'links': ['github', 'twitter'],
+ 'posts': [{'content': "I'm busy",
+ 'title': 'no updates'},
+ {'content': 'pretty busy',
+ 'title': 'still no updates'}],
+ 'title': 'My blog'}
+```
+
+But `blog` is still the same. This means anyone who had a reference to the old object has not been affected: The transformation had only local effects.
+
+This is useful when sharing is rampant. For example, consider default arguments:
+
+```
+def silly_sum(a, b, extra=v(1, 2)):
+ extra = extra.extend([a, b])
+ return sum(extra)
+```
+
+In this post, we have learned why immutability can be useful for thinking about our code, and how to achieve it without an extravagant performance price. Next time, we will learn how immutable objects allow us to use powerful programming constructs.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures
+
+作者:[Moshe Zadka][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[1]: https://en.wikipedia.org/wiki/Liskov_substitution_principle
+[2]: https://www.attrs.org/en/stable/
+[3]: https://pyrsistent.readthedocs.io/en/latest/
diff --git a/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md
new file mode 100644
index 0000000000..6418db9444
--- /dev/null
+++ b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md
@@ -0,0 +1,181 @@
+PyTorch 1.0 Preview Release: Facebook’s newest Open Source AI
+======
+Facebook already uses its own Open Source AI, PyTorch quite extensively in its own artificial intelligence projects. Recently, they have gone a league ahead by releasing a pre-release preview version 1.0.
+
+For those who are not familiar, [PyTorch][1] is a Python-based library for Scientific Computing.
+
+PyTorch harnesses the [superior computational power of Graphical Processing Units (GPUs)][2] for carrying out complex [Tensor][3] computations and implementing [deep neural networks][4]. So, it is used widely across the world by numerous researchers and developers.
+
+This new ready-to-use [Preview Release][5] was announced at the [PyTorch Developer Conference][6] at [The Midway][7], San Francisco, CA on Tuesday, October 2, 2018.
+
+### Highlights of PyTorch 1.0 Release Candidate
+
+![PyTorhc is Python based open source AI framework from Facebook][8]
+
+Some of the main new features in the release candidate are:
+
+#### 1\. JIT
+
+JIT is a set of compiler tools to bring research close to production. It includes a Python-based language called Torch Script and also ways to make existing code compatible with itself.
+
+#### 2\. New torch.distributed library: “C10D”
+
+“C10D” enables asynchronous operation on different backends with performance improvements on slower networks and more.
+
+#### 3\. C++ frontend (experimental)
+
+Though it has been specifically mentioned as an unstable API (expected in a pre-release), this is a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend to enable research in high performance, low latency and C++ applications installed directly on hardware.
+
+To know more, you can take a look at the complete [update notes][9] on GitHub.
+
+The first stable version PyTorch 1.0 will be released in summer.
+
+### Installing PyTorch on Linux
+
+To install PyTorch v1.0rc0, the developers recommend using [conda][10] while there also other ways to do that as shown on their [local installation page][11] where they have documented everything necessary in detail.
+
+#### Prerequisites
+
+ * Linux
+ * Pip
+ * Python
+ * [CUDA][12] (For Nvidia GPU owners)
+
+
+
+As we recently showed you [how to install and use Pip][13], let’s get to know how we can install PyTorch with it.
+
+Note that PyTorch has GPU and CPU-only variants. You should install the one that suits your hardware.
+
+#### Installing old and stable version of PyTorch
+
+If you want the stable release (version 0.4) for your GPU, use:
+
+```
+pip install torch torchvision
+
+```
+
+Use these two commands in succession for a CPU-only stable release:
+
+```
+pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl
+pip install torchvision
+
+```
+
+#### Installing PyTorch 1.0 Release Candidate
+
+You install PyTorch 1.0 RC GPU version with this command:
+
+```
+pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html
+
+```
+
+If you do not have a GPU and would prefer a CPU-only version, use:
+
+```
+pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
+
+```
+
+#### Verifying your PyTorch installation
+
+Startup the python console on a terminal with the following simple command:
+
+```
+python
+
+```
+
+Now enter the following sample code line by line to verify your installation:
+
+```
+from __future__ import print_function
+import torch
+x = torch.rand(5, 3)
+print(x)
+
+```
+
+You should get an output like:
+
+```
+tensor([[0.3380, 0.3845, 0.3217],
+ [0.8337, 0.9050, 0.2650],
+ [0.2979, 0.7141, 0.9069],
+ [0.1449, 0.1132, 0.1375],
+ [0.4675, 0.3947, 0.1426]])
+
+```
+
+To check whether you can use PyTorch’s GPU capabilities, use the following sample code:
+
+```
+import torch
+torch.cuda.is_available()
+
+```
+
+The resulting output should be:
+
+```
+True
+
+```
+
+Support for AMD GPUs for PyTorch is still under development, so complete test coverage is not yet provided as reported [here][14], suggesting this [resource][15] in case you have an AMD GPU.
+
+Lets now look into some research projects that extensively use PyTorch:
+
+### Ongoing Research Projects based on PyTorch
+
+ * [Detectron][16]: Facebook AI Research’s software system to intelligently detect and classify objects. It is based on Caffe2. Earlier this year, Caffe2 and PyTorch [joined forces][17] to create a Research + Production enabled PyTorch 1.0 we talk about.
+ * [Unsupervised Sentiment Discovery][18]: Such methods are extensively used with social media algorithms.
+ * [vid2vid][19]: Photorealistic video-to-video translation
+ * [DeepRecommender][20] (We covered how such systems work on our past [Netflix AI article][21])
+
+
+
+Nvidia, leading GPU manufacturer covered more on this with their own [update][22] on this recent development where you can also read about ongoing collaborative research endeavours.
+
+### How should we react to such PyTorch capabilities?
+
+To think Facebook applies such amazingly innovative projects and more in its social media algorithms, should we appreciate all this or get alarmed? This is almost [Skynet][23]! This newly improved production-ready pre-release of PyTorch will certainly push things further ahead! Feel free to share your thoughts with us in the comments below!
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/pytorch-open-source-ai-framework/
+
+作者:[Avimanyu Bandyopadhyay][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/avimanyu/
+[1]: https://pytorch.org/
+[2]: https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units
+[3]: https://en.wikipedia.org/wiki/Tensor
+[4]: https://www.techopedia.com/definition/32902/deep-neural-network
+[5]: https://code.fb.com/ai-research/facebook-accelerates-ai-development-with-new-partners-and-production-capabilities-for-pytorch-1-0
+[6]: https://pytorch.fbreg.com/
+[7]: https://www.themidwaysf.com/
+[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/pytorch.jpeg
+[9]: https://github.com/pytorch/pytorch/releases/tag/v1.0rc0
+[10]: https://conda.io/
+[11]: https://pytorch.org/get-started/locally/
+[12]: https://www.pugetsystems.com/labs/hpc/How-to-install-CUDA-9-2-on-Ubuntu-18-04-1184/
+[13]: https://itsfoss.com/install-pip-ubuntu/
+[14]: https://github.com/pytorch/pytorch/issues/10657#issuecomment-415067478
+[15]: https://rocm.github.io/install.html#installing-from-amd-rocm-repositories
+[16]: https://github.com/facebookresearch/Detectron
+[17]: https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html
+[18]: https://github.com/NVIDIA/sentiment-discovery
+[19]: https://github.com/NVIDIA/vid2vid
+[20]: https://github.com/NVIDIA/DeepRecommender/
+[21]: https://itsfoss.com/netflix-open-source-ai/
+[22]: https://news.developer.nvidia.com/pytorch-1-0-accelerated-on-nvidia-gpus/
+[23]: https://en.wikipedia.org/wiki/Skynet_(Terminator)
diff --git a/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md b/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md
new file mode 100644
index 0000000000..691600a4cc
--- /dev/null
+++ b/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md
@@ -0,0 +1,133 @@
+Dbxfs – Mount Dropbox Folder Locally As Virtual File System In Linux
+======
+
+
+
+A while ago, we summarized all the possible ways to **[mount Google drive locally][1]** as a virtual file system and access the files stored in the google drive from your Linux operating system. Today, we are going to learn to mount Dropbox folder in your local file system using **dbxfs** utility. The dbxfs is used to mount your Dropbox folder locally as a virtual filesystem in Unix-like operating systems. While it is easy to [**install Dropbox client**][2] in Linux, this approach slightly differs from the official method. It is a command line dropbox client and requires no disk space for access. The dbxfs application is free, open source and written for Python 3.5+.
+
+### Installing dbxfs
+
+The dbxfs officially supports Linux and Mac OS. However, it should work on any POSIX system that provides a **FUSE-compatible library** or has the ability to mount **SMB** shares. Since it is written for Python 3.5, it can installed using **pip3** package manager. Refer the following guide if you haven’t installed PIP yet.
+
+And, install FUSE library as well.
+
+On Debian-based systems, run the following command to install FUSE:
+
+```
+$ sudo apt install libfuse2
+
+```
+
+On Fedora:
+
+```
+$ sudo dnf install fuse
+
+```
+
+Once you installed all required dependencies, run the following command to install dbxfs utility:
+
+```
+$ pip3 install dbxfs
+
+```
+
+### Mount Dropbox folder locally
+
+Create a mount point to mount your dropbox folder in your local file system.
+
+```
+$ mkdir ~/mydropbox
+
+```
+
+Then, mount the dropbox folder locally using dbxfs utility as shown below:
+
+```
+$ dbxfs ~/mydropbox
+
+```
+
+You will be asked to generate an access token:
+
+
+
+To generate an access token, just navigate to the URL given in the above output from your web browser and click **Allow** to authenticate Dropbox access. You need to log in to your dropbox account to complete authorization process.
+
+A new authorization code will be generated in the next screen. Copy the code and head back to your Terminal and paste it into cli-dbxfs prompt to finish the process.
+
+You will be then asked to save the credentials for future access. Type **Y** or **N** whether you want to save or decline. And then, you need to enter a passphrase twice for the new access token.
+
+Finally, click **Y** to accept **“/home/username/mydropbox”** as the default mount point. If you want to set different path, type **N** and enter the location of your choice.
+
+[![Generate access token 2][3]][4]
+
+All done! From now on, you can see your Dropbox folder is locally mounted in your filesystem.
+
+
+
+### Change Access Token Storage Path
+
+By default, the dbxfs application will store your Dropbox access token in the system keyring or an encrypted file. However, you might want to store it in a **gpg** encrypted file or something else. If so, get an access token by creating a personal app on the [Dropbox developers app console][5].
+
+
+
+Once the app is created, click **Generate** button in the next button. This access token can be used to access your Dropbox account via the API. Don’t share your access token with anyone.
+
+
+
+Once you created an access token, encrypt it using any encryption tools of your choice, such as [**Cryptomater**][6], [**Cryptkeeper**][7], [**CryptGo**][8], [**Cryptr**][9], [**Tomb**][10], [**Toplip**][11] and [**GnuPG**][12] etc., and store it in your preferred location.
+
+Next edit the dbxfs configuration file and add the following line in it:
+
+```
+"access_token_command": ["gpg", "--decrypt", "/path/to/access/token/file.gpg"]
+
+```
+
+You can find the dbxfs configuration file by running the following command:
+
+```
+$ dbxfs --print-default-config-file
+
+```
+
+For more details, refer dbxfs help section:
+
+```
+$ dbxfs -h
+
+```
+
+As you can see, mounting Dropfox folder locally in your file system using Dbxfs utility is no big deal. As far tested, dbxfs just works fine as expected. Give it a try if you’re interested to see how it works and let us know about your experience in the comment section below.
+
+And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/dbxfs-mount-dropbox-folder-locally-as-virtual-file-system-in-linux/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/
+[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/
+[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-2.png
+[5]: https://dropbox.com/developers/apps
+[6]: https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/
+[7]: https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/
+[8]: https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/
+[9]: https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/
+[10]: https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/
+[11]: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/
+[12]: https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/
diff --git a/sources/tech/20181005 How to use Kolibri to access educational material offline.md b/sources/tech/20181005 How to use Kolibri to access educational material offline.md
new file mode 100644
index 0000000000..f856a497cd
--- /dev/null
+++ b/sources/tech/20181005 How to use Kolibri to access educational material offline.md
@@ -0,0 +1,107 @@
+How to use Kolibri to access educational material offline
+======
+Kolibri makes digital educational materials available to students without internet access.
+
+
+
+While the internet has thoroughly transformed the availability of educational content for much of the world, many people still live in places where online access is poor or even nonexistent. [Kolibri][1] is a great solution for these communities. It's an app that creates an offline server to deliver high-quality educational resources to learners. You can set up Kolibri on a wide range of [hardware][2], including low-cost Windows, MacOS, and Linux (including Raspberry Pi) computers. A version for Android tablets is in the works.
+
+Because it's open source, free to use, works without broadband access (after initial setup), and includes a wide range of educational content, it gives students in rural schools, refugee camps, orphanages, informal schools, prisons, and other places without reliable internet service access to many of the same resources used by students all over the world.
+
+In addition to being simple to install, it's easy to customize Kolibri for various educational missions and needs, including literacy building, general reference materials, and life skills training. Kolibri includes content from sources including [OpenStax,][3] [CK-12][4], [Khan Academy][5], and [EngageNY][6]; once these packages are "seeded" by connecting the Kolibri serving device to a robust internet connection, they are immediately available for offline access on client devices through a compatible browser.
+
+### Installation and setup
+
+I installed Kolibri on an Intel i3-based laptop running Fedora 28. I chose the **pip install** method, which is very easy. Here's how to do it.
+
+Open a terminal and enter:
+
+```
+$ sudo pip install kolibri
+
+```
+
+Start Kolibri by entering **$** **kolibri** **start** in the terminal.
+
+Find your Kolibri installation's URL in the terminal.
+
+
+
+Open your browser and point it to that URL, being sure to append port **8080**.
+
+Select the default language—options include English, Spanish, French, Arabic, Portuguese, Hindi, Farsi, Burmese, and Bengali. (I chose English.)
+
+Name your facility, i.e., your classroom, library, or home. (I named mine Test.)
+
+
+
+Tell Kolibri what type of facility you're setting up—self-managed, admin-managed, or informal. (I chose self-managed.)
+
+
+
+Create an admin account.
+
+
+
+### Add content
+
+You can add Kolibri-curated content channels while you are connected to broadband service. Explore and add content from the menu at the top-left of the browser.
+
+
+
+Choose Device and Import.
+
+
+
+Selecting English as the default language provides access to 29 content channels including Touchable Earth, Global Digital Library, Khan Academy, OpenStax, CK-12, EngageNY, Blockly games, and more.
+
+Select a channel you're interested in. You have the option to download the entire channel (which might take a long time) or to select the specific content you want to download.
+
+
+
+To access your content, return to the top-left menu and select Learn.
+
+
+
+### Add users
+
+User accounts can be set up as learners, coaches, or admins. Users can access the Kolibri server from most web browsers on any Linux, MacOS, Windows, Android, or iOS device on the same network, even if the network isn't connected to the internet. Admins can set up classes on the device, assign coaches and learners to classes, and see every user's interaction and how much time they spend with the content.
+
+If your Kolibri server is set up as self-managed, users can create their own accounts by entering the Kolibri URL in their browser and following the prompts. For information on setting up users on an admin-managed server, check out Kolibri's [documentation][7].
+
+
+
+After logging in, the user can access content right away to begin learning.
+
+### Learn more
+
+Kolibri is a very powerful learning resource, especially for people who don't have a robust connection to the internet. Its [documentation][8] is very complete, and a [demo][9] site maintained by the project allows you to try it out.
+
+Kolibri is open source under the [MIT License][10]. The project, which is managed by the nonprofit organization Learning Equality, is looking for developers—if you would like to get involved, be sure to check out them on [GitHub][11]. To learn more, follow Learning Equality and Kolibri on its [blog][12], [Twitter][13], and [Facebook][14] pages.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/getting-started-kolibri
+
+作者:[Don Watkins][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[1]: https://learningequality.org/kolibri/
+[2]: https://drive.google.com/file/d/0B9ZzDms8cSNgVWRKdUlPc2lkTkk/view
+[3]: https://openstax.org/
+[4]: https://www.ck12.org/
+[5]: https://www.khanacademy.org/
+[6]: https://www.engageny.org/
+[7]: https://kolibri.readthedocs.io/en/latest/manage.html#create-a-new-user-account
+[8]: https://learningequality.org/documentation/
+[9]: http://kolibridemo.learningequality.org/learn/#/topics
+[10]: https://github.com/learningequality/kolibri/blob/develop/LICENSE
+[11]: https://github.com/learningequality/
+[12]: https://blog.learningequality.org/
+[13]: https://twitter.com/LearnEQ/
+[14]: https://www.facebook.com/learningequality
diff --git a/sources/tech/20181005 Open Source Logging Tools for Linux.md b/sources/tech/20181005 Open Source Logging Tools for Linux.md
new file mode 100644
index 0000000000..723488008a
--- /dev/null
+++ b/sources/tech/20181005 Open Source Logging Tools for Linux.md
@@ -0,0 +1,188 @@
+Open Source Logging Tools for Linux
+======
+
+
+
+If you’re a Linux systems administrator, one of the first tools you will turn to for troubleshooting are log files. These files hold crucial information that can go a long way to help you solve problems affecting your desktops and servers. For many sysadmins (especially those of an old-school sort), nothing beats the command line for checking log files. But for those who’d rather have a more efficient (and possibly modern) approach to troubleshooting, there are plenty of options.
+
+In this article, I’ll highlight a few such tools available for the Linux platform. I won’t be getting into logging tools that might be specific to a certain service (such as Kubernetes or Apache), and instead will focus on tools that work to mine the depths of all that magical information written into /var/log.
+
+Speaking of which…
+
+### What is /var/log?
+
+If you’re new to Linux, you might not know what the /var/log directory contains. However, the name is very telling. Within this directory is housed all of the log files from the system and any major service (such as Apache, MySQL, MariaDB, etc.) installed on the operating system. Open a terminal window and issue the command cd /var/log. Follow that with the command ls and you’ll see all of the various systems that have log files you can view (Figure 1).
+
+![/var/log/][2]
+
+Figure 1: Our ls command reveals the logs available in /var/log/.
+
+[Used with permission][3]
+
+Say, for instance, you want to view the syslog log file. Issue the command less syslog and you can scroll through all of the gory details of that particular log. But what if the standard terminal isn’t for you? What options do you have? Plenty. Let’s take a look at few such options.
+
+### Logs
+
+If you use the GNOME desktop (or other, as Logs can be installed on more than just GNOME), you have at your fingertips a log viewer that mainly just adds the slightest bit of GUI goodness over the log files to create something as simple as it is effective. Once installed (from the standard repositories), open Logs from the desktop menu, and you’ll be treated to an interface (Figure 2) that allows you to select from various types of logs (Important, All, System, Security, and Hardware), as well as select a boot period (from the top center drop-down), and even search through all of the available logs.
+
+![Logs tool][5]
+
+Figure 2: The GNOME Logs tool is one of the easiest GUI log viewers you’ll find for Linux.
+
+[Used with permission][3]
+
+Logs is a great tool, especially if you’re not looking for too many bells and whistles getting in the way of you viewing crucial log entries, so you can troubleshoot your systems.
+
+### KSystemLog
+
+KSystemLog is to KDE what Logs is to GNOME, but with a few more features to add into the mix. Although both make it incredibly simple to view your system log files, only KSystemLog includes colorized log lines, tabbed viewing, copy log lines to the desktop clipboard, built-in capability for sending log messages directly to the system, read detailed information for each log line, and more. KSystemLog views all the same logs found in GNOME Logs, only with a different layout.
+
+From the main window (Figure 3), you can view any of the different log (from System Log, Authentication Log, X.org Log, Journald Log), search the logs, filter by Date, Host, Process, Message, and select log priorities.
+
+![KSystemLog][7]
+
+Figure 3: The KSystemLog main window.
+
+[Used with permission][3]
+
+If you click on the Window menu, you can open a new tab, where you can select a different log/filter combination to view. From that same menu, you can even duplicate the current tab. If you want to manually add a log to a file, do the following:
+
+ 1. Open KSystemLog.
+
+ 2. Click File > Add Log Entry.
+
+ 3. Create your log entry (Figure 4).
+
+ 4. Click OK
+
+
+![log entry][9]
+
+Figure 4: Creating a manual log entry with KSystemLog.
+
+[Used with permission][3]
+
+KSystemLog makes viewing logs in KDE an incredibly easy task.
+
+### Logwatch
+
+Logwatch isn’t a fancy GUI tool. Instead, logwatch allows you to set up a logging system that will email you important alerts. You can have those alerts emailed via an SMTP server or you can simply view them on the local machine. Logwatch can be found in the standard repositories for almost every distribution, so installation can be done with a single command, like so:
+
+```
+sudo apt-get install logwatch
+```
+
+Or:
+
+```
+sudo dnf install logwatch
+```
+
+During the installation, you will be required to select the delivery method for alerts (Figure 5). If you opt to go the local mail delivery only, you’ll need to install the mailutils app (so you can view mail locally, via the mail command).
+
+![ Logwatch][11]
+
+Figure 5: Configuring Logwatch alert sending method.
+
+[Used with permission][3]
+
+All Logwatch configurations are handled in a single file. To edit that file, issue the command sudo nano /usr/share/logwatch/default.conf/logwatch.conf. You’ll want to edit the MailTo = option. If you’re viewing this locally, set that to the Linux username you want the logs sent to (such as MailTo = jack). If you are sending these logs to an external email address, you’ll also need to change the MailFrom = option to a legitimate email address. From within that same configuration file, you can also set the detail level and the range of logs to send. Save and close that file.
+Once configured, you can send your first mail with a command like:
+
+```
+logwatch --detail Med --mailto ADDRESS --service all --range today
+Where ADDRESS is either the local user or an email address.
+
+```
+
+For more information on using Logwatch, issue the command man logwatch. Read through the manual page to see the different options that can be used with the tool.
+
+### Rsyslog
+
+Rsyslog is a convenient way to send remote client logs to a centralized server. Say you have one Linux server you want to use to collect the logs from other Linux servers in your data center. With Rsyslog, this is easily done. Rsyslog has to be installed on all clients and the centralized server (by issuing a command like sudo apt-get install rsyslog). Once installed, create the /etc/rsyslog.d/server.conf file on the centralized server, with the contents:
+
+```
+# Provide UDP syslog reception
+$ModLoad imudp
+$UDPServerRun 514
+
+# Provide TCP syslog reception
+$ModLoad imtcp
+$InputTCPServerRun 514
+
+# Use custom filenaming scheme
+$template FILENAME,"/var/log/remote/%HOSTNAME%.log"
+*.* ?FILENAME
+
+$PreserveFQDN on
+
+```
+
+Save and close that file. Now, on every client machine, create the file /etc/rsyslog.d/client.conf with the contents:
+
+```
+$PreserveFQDN on
+$ActionQueueType LinkedList
+$ActionQueueFileName srvrfwd
+$ActionResumeRetryCount -1
+$ActionQueueSaveOnShutdown on
+*.* @@SERVER_IP:514
+
+```
+
+Where SERVER_IP is the IP address of your centralized server. Save and close that file. Restart rsyslog on all machines with the command:
+
+```
+sudo systemctl restart rsyslog
+
+```
+
+You can now view the centralized log files with the command (run on the centralized server):
+
+```
+tail -f /var/log/remote/*.log
+
+```
+
+The tail command allows you to view those files as they are written to, in real time. You should see log entries appear that include the client hostname (Figure 6).
+
+![Rsyslog][13]
+
+Figure 6: Rsyslog showing entries for a connected client.
+
+[Used with permission][3]
+
+Rsyslog is a great tool for creating a single point of entry for viewing the logs of all of your Linux servers.
+
+### More where that came from
+
+This article only scratched the surface of the logging tools to be found on the Linux platform. And each of the above tools is capable of more than what is outlined here. However, this overview should give you a place to start your long day's journey into the Linux log file.
+
+Learn more about Linux through the free ["Introduction to Linux" ][14]course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2018/10/open-source-logging-tools-linux
+
+作者:[JACK WALLEN][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[1]: /files/images/logs1jpg
+[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_1.jpg?itok=8yO2q1rW (/var/log/)
+[3]: /licenses/category/used-permission
+[4]: /files/images/logs2jpg
+[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_2.jpg?itok=kF6V46ZB (Logs tool)
+[6]: /files/images/logs3jpg
+[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_3.jpg?itok=PhrIzI1N (KSystemLog)
+[8]: /files/images/logs4jpg
+[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_4.jpg?itok=OxsGJ-TJ (log entry)
+[10]: /files/images/logs5jpg
+[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_5.jpg?itok=GeAR551e (Logwatch)
+[12]: /files/images/logs6jpg
+[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_6.jpg?itok=ira8UZOr (Rsyslog)
+[14]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md
new file mode 100644
index 0000000000..26d1941cc1
--- /dev/null
+++ b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md
@@ -0,0 +1,171 @@
+Terminalizer – A Tool To Record Your Terminal And Generate Animated Gif Images
+======
+This is know topic for most of us and i don’t want to give you the detailed information about this flow. Also, we had written many article under this topics.
+
+Script command is the one of the standard command to record Linux terminal sessions. Today we are going to discuss about same kind of tool called Terminalizer.
+
+This tool will help us to record the users terminal activity, also will help us to identify other useful information from the output.
+
+### What Is Terminalizer
+
+Terminalizer allow users to record their terminal activity and allow them to generate animated gif images. It’s highly customizable CLI tool that user can share a link for an online player, web player for a recording file.
+
+**Suggested Read :**
+**(#)** [Script – A Simple Command To Record Your Terminal Session Activity][1]
+**(#)** [Automatically Record/Capture All Users Terminal Sessions Activity In Linux][2]
+**(#)** [Teleconsole – A Tool To Share Your Terminal Session Instantly To Anyone In Seconds][3]
+**(#)** [tmate – Instantly Share Your Terminal Session To Anyone In Seconds][4]
+**(#)** [Peek – Create a Animated GIF Recorder in Linux][5]
+**(#)** [Kgif – A Simple Shell Script to Create a Gif File from Active Window][6]
+**(#)** [Gifine – Quickly Create An Animated GIF Video In Ubuntu/Debian][7]
+
+There is no distribution official package to install this utility and we can easily install it by using Node.js.
+
+### How To Install Noje.js in Linux
+
+Node.js can be installed in multiple ways. Here, we are going to teach you the standard method.
+
+For Ubuntu/LinuxMint use [APT-GET Command][8] or [APT Command][9] to install Node.js
+
+```
+$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
+$ sudo apt-get install -y nodejs
+
+```
+
+For Debian use [APT-GET Command][8] or [APT Command][9] to install Node.js
+
+```
+# curl -sL https://deb.nodesource.com/setup_8.x | bash -
+# apt-get install -y nodejs
+
+```
+
+For **`RHEL/CentOS`** , use [YUM Command][10] to install tmux.
+
+```
+$ sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
+$ sudo yum install epel-release
+$ sudo yum -y install nodejs
+
+```
+
+For **`Fedora`** , use [DNF Command][11] to install tmux.
+
+```
+$ sudo dnf install nodejs
+
+```
+
+For **`Arch Linux`** , use [Pacman Command][12] to install tmux.
+
+```
+$ sudo pacman -S nodejs npm
+
+```
+
+For **`openSUSE`** , use [Zypper Command][13] to install tmux.
+
+```
+$ sudo zypper in nodejs6
+
+```
+
+### How to Install Terminalizer
+
+As you have already installed prerequisite package called Node.js, now it’s time to install Terminalizer on your system. Simple run the below npm command to install Terminalizer.
+
+```
+$ sudo npm install -g terminalizer
+
+```
+
+### How to Use Terminalizer
+
+To record your session activity using Terminalizer, just run the following Terminalizer command. Once you started the recording then play around it and finally hit `CTRL+D` to exit and save the recording.
+
+```
+# terminalizer record 2g-session
+
+defaultConfigPath
+The recording session is started
+Press CTRL+D to exit and save the recording
+
+```
+
+This will save your recording session as a YAML file, in this case my filename would be 2g-session-activity.yml.
+![][15]
+
+Just type few commands to verify this and finally hit `CTRL+D` to exit the current capture. When you hit `CTRL+D` on the terminal and you will be getting the below output.
+
+```
+# logout
+Successfully Recorded
+The recording data is saved into the file:
+/home/daygeek/2g-session.yml
+You can edit the file and even change the configurations.
+
+```
+
+![][16]
+
+### How to Play the Recorded File
+
+Use the below command format to paly your recorded YAML file. Make sure, you have to input your recorded file instead of us.
+
+```
+# terminalizer play 2g-session
+
+```
+
+Render a recording file as an animated gif image.
+
+```
+# terminalizer render 2g-session
+
+```
+
+`Note:` Below two commands are not implemented yet in the current version and will be available in the next version.
+
+If you would like to share your recording to others then upload a recording file and get a link for an online player and share it.
+
+```
+terminalizer share 2g-session
+
+```
+
+Generate a web player for a recording file
+
+```
+# terminalizer generate 2g-session
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/terminalizer-a-tool-to-record-your-terminal-and-generate-animated-gif-images/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/prakash/
+[1]: https://www.2daygeek.com/script-command-record-save-your-terminal-session-activity-linux/
+[2]: https://www.2daygeek.com/automatically-record-all-users-terminal-sessions-activity-linux-script-command/
+[3]: https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/
+[4]: https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/
+[5]: https://www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/
+[6]: https://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/
+[7]: https://www.2daygeek.com/gifine-create-animated-gif-vedio-recorder-linux-mint-debian-ubuntu/
+[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[13]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[14]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[15]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-record-2g-session-1.gif
+[16]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-play-2g-session.gif
diff --git a/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md
new file mode 100644
index 0000000000..a9b20ac54d
--- /dev/null
+++ b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md
@@ -0,0 +1,110 @@
+KeeWeb – An Open Source, Cross Platform Password Manager
+======
+
+
+
+If you’ve been using the internet for any amount of time, chances are, you have a lot of accounts on a lot of websites. All of those accounts must have passwords, and you have to remember all those passwords. Either that, or write them down somewhere. Writing down passwords on paper may not be secure, and remembering them won’t be practically possible if you have more than a few passwords. This is why Password Managers have exploded in popularity in the last few years. A password Manager is like a central repository where you store all your passwords for all your accounts, and you lock it with a master password. With this approach, the only thing you need to remember is the Master password.
+
+**KeePass** is one such open source password manager. KeePass has an official client, but it’s pretty barebones. But there are a lot of other apps, both for your computer and for your phone, that are compatible with the KeePass file format for storing encrypted passwords. One such app is **KeeWeb**.
+
+KeeWeb is an open source, cross platform password manager with features like cloud sync, keyboard shortcuts and plugin support. KeeWeb uses Electron, which means it runs on Windows, Linux, and Mac OS.
+
+### Using KeeWeb Password Manager
+
+When it comes to using KeeWeb, you actually have 2 options. You can either use KeeWeb webapp without having to install it on your system and use it on the fly or simply install KeeWeb client in your local system.
+
+**Using the KeeWeb webapp**
+
+If you don’t want to bother installing a desktop app, you can just go to [**https://app.keeweb.info/**][1] and use it as a password manager.
+
+
+
+It has all the features of the desktop app. Obviously, this requires you to be online when using the app.
+
+**Installing KeeWeb on your Desktop**
+
+If you like the comfort and offline availability of using a desktop app, you can also install it on your desktop.
+
+If you use Ubuntu/Debian, you can just go to [**releases pages**][2] and download KeeWeb latest **.deb** file, which you can install via this command:
+
+```
+$ sudo dpkg -i KeeWeb-1.6.3.linux.x64.deb
+
+```
+
+If you’re on Arch, it is available in the [**AUR**][3], so you can install using any helper programs like [**Yay**][4]:
+
+```
+$ yay -S keeweb
+
+```
+
+Once installed, launch it from Menu or application launcher. This is how KeeWeb default interface looks like:
+
+
+
+### General Layout
+
+KeeWeb basically shows a list of all your passwords, along with all your tags to the left. Clicking on a tag will filter the list to only passwords of that tag. To the right, all the fields for the selected account are shown. You can set username, password, website, or just add a custom note. You can even create your own fields and mark them as secure fields, which is great when storing things like credit card information. You can copy passwords by just clicking on them. KeeWeb also shows the date when an account was created and modified. Deleted passwords are kept in the trash, where they can be restored or permanently deleted.
+
+
+
+### KeeWeb Features
+
+**Cloud Sync**
+
+One of the main features of KeeWeb is the support for a wide variety of remote locations and cloud services.
+Other than loading local files, you can open files from:
+
+ 1. WebDAV Servers
+ 2. Google Drive
+ 3. Dropbox
+ 4. OneDrive
+
+
+
+This means that if you use multiple computers, you can synchronize the password files between them, so you don’t have to worry about not having all the passwords available on all devices.
+
+**Password Generator**
+
+
+
+Along with encrypting your passwords, it’s also important to create new, strong passwords for every single account. This means that if one of your account gets hacked, the attacker won’t be able to get in to your other accounts using the same password.
+
+To achieve this, KeeWeb has a built-in password generator, that lets you generate a custom password of a specific length, including specific type of characters.
+
+**Plugins**
+
+
+
+You can extend KeeWeb functionality with plugins. Some of these plugins are translations for other languages, while others add new functionality, like checking **** for exposed passwords.
+
+**Local Backups**
+
+
+
+Regardless of where your password file is stored, you should probably keep local backups of the file on your computer. Luckily, KeeWeb has this feature built-in. You can backup to a specific path, and set it to backup periodically, or just whenever the file is changed.
+
+
+### Verdict
+
+I have actually been using KeeWeb for several years now. It completely changed the way I store my passwords. The cloud sync is basically the feature that makes it a done deal for me. I don’t have to worry about keeping multiple unsynchronized files on multiple devices. If you want a great looking password manager that has cloud sync, KeeWeb is something you should look at.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/
+
+作者:[EDITOR][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/editor/
+[1]: https://app.keeweb.info/
+[2]: https://github.com/keeweb/keeweb/releases/latest
+[3]: https://aur.archlinux.org/packages/keeweb/
+[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
diff --git a/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md
new file mode 100644
index 0000000000..22b4cc8558
--- /dev/null
+++ b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md
@@ -0,0 +1,103 @@
+Play Windows games on Fedora with Steam Play and Proton
+======
+
+
+
+Some weeks ago, Steam [announced][1] a new addition to Steam Play with Linux support for Windows games using Proton, a fork from WINE. This capability is still in beta, and not all games work. Here are some more details about Steam and Proton.
+
+According to the Steam website, there are new features in the beta release:
+
+ * Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
+ * DirectX 11 and 12 implementations are now based on Vulkan, which improves game compatibility and reduces performance impact.
+ * Fullscreen support has been improved. Fullscreen games seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop.
+ * Improved game controller support. Games automatically recognize all controllers supported by Steam. Expect more out-of-the-box controller compatibility than even the original version of the game.
+ * Performance for multi-threaded games has been greatly improved compared to vanilla WINE.
+
+
+
+### Installation
+
+If you’re interested in trying Steam with Proton out, just follow these easy steps. (Note that you can ignore the first steps to enable the Steam Beta if you have the [latest updated version of Steam installed][2]. In that case you no longer need Steam Beta to use Proton.)
+
+Open up Steam and log in to your account. This example screenshot shows support for only 22 games before enabling Proton.
+
+![][3]
+
+Now click on Steam option on top of the client. This displays a drop down menu. Then select Settings.
+
+![][4]
+
+Now the settings window pops up. Select the Account option and next to Beta participation, click on change.
+
+![][5]
+
+Now change None to Steam Beta Update.
+
+![][6]
+
+Click on OK and a prompt asks you to restart.
+
+![][7]
+
+Let Steam download the update. This can take a while depending on your internet speed and computer resources.
+
+![][8]
+
+After restarting, go back to the Settings window. This time you’ll see a new option. Make sure the check boxes for Enable Steam Play for supported titles, Enable Steam Play for all titles and Use this tool instead of game-specific selections from Steam are enabled. The compatibility tool should be Proton.
+
+![][9]
+
+The Steam client asks you to restart. Do so, and once you log back into your Steam account, your game library for Linux should be extended.
+
+![][10]
+
+### Installing a Windows game using Steam Play
+
+Now that you have Proton enabled, install a game. Select the title you want and you’ll find the process is similar to installing a normal game on Steam, as shown in these screenshots.
+
+![][11]
+
+![][12]
+
+![][13]
+
+![][14]
+
+After the game is done downloading and installing, you can play it.
+
+![][15]
+
+![][16]
+
+Some games may be affected by the beta nature of Proton. The game in this example, Chantelise, had no audio and a low frame rate. Keep in mind this capability is still in beta and Fedora is not responsible for results. If you’d like to read further, the community has created a [Google doc][17] with a list of games that have been tested.
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/play-windows-games-steam-play-proton/
+
+作者:[Francisco J. Vergara Torres][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/patxi/
+[1]: https://steamcommunity.com/games/221410/announcements/detail/1696055855739350561
+[2]: https://fedoramagazine.org/third-party-repositories-fedora/
+[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-300x197.png
+[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-300x169.png
+[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-300x196.png
+[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4-300x272.png
+[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6-300x237.png
+[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7-300x126.png
+[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10-300x237.png
+[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-300x196.png
+[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-300x196.png
+[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-300x195.png
+[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-300x196.png
+[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-300x195.png
+[15]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-300x169.png
+[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-300x169.png
+[17]: https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/edit#gid=1003113831
diff --git a/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md b/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md
new file mode 100644
index 0000000000..493a906b3f
--- /dev/null
+++ b/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md
@@ -0,0 +1,101 @@
+Python at the pump: A script for filling your gas tank
+======
+Here's how I used Python to discover a strategy for cost-effective fill-ups.
+
+
+
+I recently began driving a car that had traditionally used premium gas (93 octane). According to the maker, though, it requires only 91 octane. The thing is, in the US, you can buy only 87, 89, or 93 octane. Where I live, gas prices jump 30 cents per gallon jump from one grade to the next, so premium costs 60 cents more than regular. So why not try to save some money?
+
+It’s easy enough to wait until the gas gauge shows that the tank is half full and then fill it with 89 octane, and there you have 91 octane. But it gets tricky to know what to do next—half a tank of 91 octane plus half a tank of 93 ends up being 92, and where do you go from there? You can make continuing calculations, but they get increasingly messy. This is where Python came into the picture.
+
+I wanted to come up with a simple scheme in which I could fill the tank at some level with 93 octane, then at the same or some other level with 89 octane, with the primary goal to never get below 91 octane with the final mixture. What I needed to do was create some recurring calculation that uses the previous octane value for the preceding fill-up. I suppose there would be some polynomial equation that would solve this, but in Python, this sounds like a loop.
+
+```
+#!/usr/bin/env python
+# octane.py
+
+o = 93.0
+newgas = 93.0 # this represents the octane of the last fillup
+i = 1
+while i < 21: # 20 iterations (trips to the pump)
+ if newgas == 89.0: # if the last fillup was with 89 octane
+ # switch to 93
+ newgas = 93.0
+ o = newgas/2 + o/2 # fill when gauge is 1/2 full
+ else: # if it wasn't 89 octane, switch to that
+ newgas = 89.0
+ o = newgas/2 + o/2 # fill when gauge says 1/2 full
+ print str(i) + ': '+ str(o)
+ i += 1
+```
+
+As you can see, I am initializing the variable o (the current octane mixture in the tank) and the variable newgas (what I last filled the tank with) at the same value of 93. The loop then will repeat 20 times, for 20 fill-ups, switching from 89 octane and 93 octane for every other trip to the station.
+
+```
+1: 91.0
+2: 92.0
+3: 90.5
+4: 91.75
+5: 90.375
+6: 91.6875
+7: 90.34375
+8: 91.671875
+9: 90.3359375
+10: 91.66796875
+11: 90.333984375
+12: 91.6669921875
+13: 90.3334960938
+14: 91.6667480469
+15: 90.3333740234
+16: 91.6666870117
+17: 90.3333435059
+18: 91.6666717529
+19: 90.3333358765
+20: 91.6666679382
+```
+
+This shows is that I probably need only 10 or 15 loops to see stabilization. It also shows that soon enough, I undershoot my 91 octane target. It’s also interesting to see this stabilization of the alternating mixture values, and it turns out this happens with any scheme where you choose the same amounts each time. In fact, it is true even if the amount of the fill-up is different for 89 and 93 octane.
+
+So at this point, I began playing with fractions, reasoning that I would probably need a bigger 93 octane fill-up than the 89 fill-up. I also didn’t want to make frequent trips to the gas station. What I ended up with (which seemed pretty good to me) was to wait until the tank was about 7⁄12 full and fill it with 89 octane, then wait until it was ¼ full and fill it with 93 octane.
+
+Here is what the changes in the loop look like:
+
+```
+ if newgas == 89.0:
+
+ newgas = 93.0
+ o = 3*newgas/4 + o/4
+ else:
+ newgas = 89.0
+ o = 5*newgas/12 + 7*o/12
+```
+
+Here are the numbers, starting with the tenth fill-up:
+
+```
+10: 92.5122272978
+11: 91.0487992571
+12: 92.5121998143
+13: 91.048783225
+14: 92.5121958062
+15: 91.048780887
+```
+
+As you can see, this keeps the final octane very slightly above 91 all the time. Of course, my gas gauge isn’t marked in twelfths, but 7⁄12 is slightly less than 5⁄8, and I can handle that.
+
+An alternative simple solution might have been run the tank to empty and fill with 93 octane, then next time only half-fill it for 89—and perhaps this will be my default plan. Personally, I’m not a fan of running the tank all the way down since this isn’t always convenient. On the other hand, it could easily work on a long trip. And sometimes I buy gas because of a sudden drop in prices. So in the end, this scheme is one of a series of options that I can consider.
+
+The most important thing for Python users: Don’t code while driving!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/python-gas-pump
+
+作者:[Greg Pittman][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/greg-p
diff --git a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md
new file mode 100644
index 0000000000..27616a9f6e
--- /dev/null
+++ b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md
@@ -0,0 +1,128 @@
+Taking notes with Laverna, a web-based information organizer
+======
+
+
+
+I don’t know anyone who doesn’t take notes. Most of the people I know use an online note-taking application like Evernote, Simplenote, or Google Keep.
+
+All of those are good tools, but they’re proprietary. And you have to wonder about the privacy of your information—especially in light of [Evernote’s great privacy flip-flop of 2016][1]. If you want more control over your notes and your data, you need to turn to an open source tool—preferably one that you can host yourself.
+
+And there are a number of good [open source alternatives to Evernote][2]. One of these is Laverna. Let’s take a look at it.
+
+### Getting Laverna
+
+You can [host Laverna yourself][3] or use the [web version][4]
+
+Since I have nowhere to host the application, I’ll focus here on using the web version of Laverna. Aside from the installation and setting up storage (more on that below), I’m told that the experience with a self-hosted version of Laverna is the same.
+
+### Setting up Laverna
+
+To start using Laverna right away, click the **Start using now** button on the front page of [Laverna.cc][5].
+
+On the welcome screen, click **Next**. You’ll be asked to enter an encryption password to secure your notes and get to them when you need to. You’ll also be asked to choose a way to synchronize your notes. I’ll discuss synchronization in a moment, so just enter a password and click **Next**.
+
+
+
+When you log in, you'll see a blank canvas:
+
+
+
+### Storing your notes
+
+Before diving into how to use Laverna, let’s walk through how to store your notes.
+
+Out of the box, Laverna stores your notes in your browser’s cache. The problem with that is when you clear the cache, you lose your notes. You can also store your notes using:
+
+ * Dropbox, a popular and proprietary web-based file syncing and storing service
+ * [remoteStorage][6], which offers a way for web applications to store information in the cloud.
+
+
+
+Using Dropbox is convenient, but it’s proprietary. There are also concerns about [privacy and surveillance][7]. Laverna encrypts your notes before saving them, but not all encryption is foolproof. Even if you don’t have anything illegal or sensitive in your notes, they’re no one’s business but your own.
+
+remoteStorage, on the other hand, is kind of techie to set up. There are few hosted storage services out there. I use [5apps][8].
+
+To change how Laverna stores your notes, click the hamburger menu in the top-left corner. Click **Settings** and then **Sync**.
+
+
+
+Select the service you want to use, then click **Save**. After that, click the left arrow in the top-left corner. You’ll be asked to authorize Laverna with the service you chose.
+
+### Using Laverna
+
+With that out of the way, let’s get down to using Laverna. Create a new note by clicking the **New Note** icon, which opens the note editor:
+
+
+
+Type a title for your note, then start typing the note in the left pane of the editor. The right pane displays a preview of your note:
+
+
+
+You can format your notes using Markdown; add formatting using your keyboard or the toolbar at the top of the window.
+
+You can also embed an image or file from your computer into a note, or link to one on the web. When you embed an image, it’s stored with your note.
+
+When you’re done, click **Save**.
+
+### Organizing your notes
+
+Like some other note-taking tools, Laverna lists the last note that you created or edited at the top. If you have a lot of notes, it can take a bit of work to find the one you're looking for.
+
+To better organize your notes, you can group them into notebooks, where you can quickly filter them based on a topic or a grouping.
+
+When you’re creating or editing a note, you can select a notebook from the **Select notebook** list in the top-left corner of the window. If you don’t have any notebooks, select **Add a new notebook** from the list and type the notebook’s name.
+
+You can also make that notebook a child of another notebook. Let’s say, for example, you maintain three blogs. You can create a notebook called **Blog Post Notes** and name children for each blog.
+
+To filter your notes by notebook, click the hamburger menu, followed by the name of a notebook. Only the notes in the notebook you choose will appear in the list.
+
+
+
+### Using Laverna across devices
+
+I use Laverna on my laptop and on an eight-inch tablet running [LineageOS][9]. Getting the two devices to use the same storage and display the same notes takes a little work.
+
+First, you’ll need to export your settings. Log into wherever you’re using Laverna and click the hamburger menu. Click **Settings** , then **Import & Export**. Under **Settings** , click **Export settings**. Laverna saves a file named laverna-settings.json to your device.
+
+Copy that file to the other device or devices on which you want to use Laverna. You can do that by emailing it to yourself or by syncing the file across devices using an application like [ownCloud][10] or [Nextcloud][11].
+
+On the other device, click **Import** on the splash screen. Otherwise, click the hamburger menu and then **Settings > Import & Export**. Click **Import settings**. Find the JSON file with your settings, click **Open** and then **Save**.
+
+Laverna will ask you to:
+
+ * Log back in using your password.
+ * Register with the storage service you’re using.
+
+
+
+Repeat this process for each device that you want to use. It’s cumbersome, I know. I’ve done it. You should need to do it only once per device, though.
+
+### Final thoughts
+
+Once you set up Laverna, it’s easy to use and has just the right features for what I need to do. I’m hoping that the developers can expand the storage and syncing options to include open source applications like Nextcloud and ownCloud.
+
+While Laverna doesn’t have all the bells and whistles of a note-taking application like Evernote, it does a great job of letting you take and organize your notes. The fact that Laverna is open source and supports Markdown are two additional great reasons to use it.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/taking-notes-laverna
+
+作者:[Scott Nesbitt][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/scottnesbitt
+[1]: https://blog.evernote.com/blog/2016/12/15/evernote-revisits-privacy-policy/
+[2]: https://opensource.com/life/16/8/open-source-alternatives-evernote
+[3]: https://github.com/Laverna/laverna
+[4]: https://laverna.cc/
+[5]: http://laverna.cc/
+[6]: https://remotestorage.io/
+[7]: https://www.zdnet.com/article/dropbox-faces-questions-over-claims-of-improper-data-sharing/
+[8]: https://5apps.com/storage/beta
+[9]: https://lineageos.org/
+[10]: https://owncloud.com/
+[11]: https://nextcloud.com/
diff --git a/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md b/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md
new file mode 100644
index 0000000000..15230ecd0b
--- /dev/null
+++ b/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md
@@ -0,0 +1,328 @@
+6 Commands To Shutdown And Reboot The Linux System From Terminal
+======
+Linux administrator performing many tasks in their routine work. The system Shutdown and Reboot task also included in it.
+
+It’s one of the risky task for them because some times it wont come back due to some reasons and they need to spend more time on it to troubleshoot.
+
+These task can be performed through CLI in Linux. Most of the time Linux administrator prefer to perform these kind of tasks via CLI because they are familiar on this.
+
+There are few commands are available in Linux to perform these tasks and user needs to choose appropriate command to perform the task based on the requirement.
+
+All these commands has their own feature and allow Linux admin to use it.
+
+**Suggested Read :**
+**(#)** [11 Methods To Find System/Server Uptime In Linux][1]
+**(#)** [Tuptime – A Tool To Report The Historical And Statistical Running Time Of Linux System][2]
+
+When the system is initiated for Shutdown or Reboot. It will be notified to all logged-in users and processes. Also, it wont allow any new logins if the time argument is used.
+
+I would suggest you to double check before you perform this action because you need to follow few prerequisites to make sure everything is fine.
+
+Those steps are listed below.
+
+ * Make sure you should have a console access to troubleshoot further in case any issues arise. VMWare access for VMs and IPMI/iLO/iDRAC access for physical servers.
+ * You have to create a ticket as per your company procedure either Incident or Change ticket and get approval
+ * Take the important configuration files backup and move to other servers for safety
+ * Verify the log files (Perform the pre-check)
+ * Communicate about your activity with other dependencies teams like DBA, Application, etc
+ * Ask them to bring down their Database service or Application service and get a confirmation from them.
+ * Validate the same from your end using the appropriate command to double confirm this.
+ * Finally reboot the system
+ * Verify the log files (Perform the post-check), If everything is good then move to next step. If you found something is wrong then troubleshoot accordingly.
+ * If it’s back to up and running, ask the dependencies team to bring up their applications.
+ * Monitor for some time, and communicate back to them saying everything is working fine as expected.
+
+
+
+This task can be performed using following commands.
+
+ * **`shutdown Command:`** shutdown command used to halt, power-off or reboot the machine.
+ * **`halt Command:`** halt command used to halt, power-off or reboot the machine.
+ * **`poweroff Command:`** poweroff command used to halt, power-off or reboot the machine.
+ * **`reboot Command:`** reboot command used to halt, power-off or reboot the machine.
+ * **`init Command:`** init (short for initialization) is the first process started during booting of the computer system.
+ * **`systemctl Command:`** systemd is a system and service manager for Linux operating systems.
+
+
+
+### Method-1: How To Shutdown And Reboot The Linux System Using Shutdown Command
+
+shutdown command used to power-off or reboot a Linux remote machine or local host. It’s offering
+multiple options to perform this task effectively. If the time argument is used, 5 minutes before the system goes down the /run/nologin file is created to ensure that further logins shall not be allowed.
+
+The general syntax is
+
+```
+# shutdown [OPTION] [TIME] [MESSAGE]
+
+```
+
+Run the below command to shutdown a Linux machine immediately. It will kill all the processes immediately and will shutdown the system.
+
+```
+# shutdown -h now
+
+```
+
+ * **`-h:`** Equivalent to –poweroff, unless –halt is specified.
+
+
+
+Alternatively we can use the shutdown command with `halt` option to bring down the machine immediately.
+
+```
+# shutdown --halt now
+or
+# shutdown -H now
+
+```
+
+ * **`-H, --halt:`** Halt the machine.
+
+
+
+Alternatively we can use the shutdown command with `poweroff` option to bring down the machine immediately.
+
+```
+# shutdown --poweroff now
+or
+# shutdown -P now
+
+```
+
+ * **`-P, --poweroff:`** Power-off the machine (the default).
+
+
+
+Run the below command to shutdown a Linux machine immediately. It will kill all the processes immediately and will shutdown the system.
+
+```
+# shutdown -h now
+
+```
+
+ * **`-h:`** Equivalent to –poweroff, unless –halt is specified.
+
+
+
+If you run the below commands without time parameter, it will wait for a minute then execute the given command.
+
+```
+# shutdown -h
+Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel.
+
+[email protected]#
+Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT):
+
+The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
+
+```
+
+All other logged in users can see a broadcast message in their terminal like below.
+
+```
+[[email protected] ~]$
+Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT):
+
+The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
+
+```
+
+for Halt option.
+
+```
+# shutdown -H
+Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel.
+
+[email protected]#
+Broadcast message from [email protected] (Mon 2018-10-08 06:36:53 EDT):
+
+The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT!
+
+```
+
+for Poweroff option.
+
+```
+# shutdown -P
+Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel.
+
+[email protected]#
+Broadcast message from [email protected] (Mon 2018-10-08 06:39:07 EDT):
+
+The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT!
+
+```
+
+This can be cancelled by hitting `shutdown -c` option on your terminal.
+
+```
+# shutdown -c
+
+Broadcast message from [email protected] (Mon 2018-10-08 06:39:09 EDT):
+
+The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT!
+
+```
+
+All other logged in users can see a broadcast message in their terminal like below.
+
+```
+[[email protected] ~]$
+Broadcast message from [email protected] (Mon 2018-10-08 06:41:35 EDT):
+
+The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT!
+
+```
+
+Add a time parameter, if you want to perform shutdown or reboot in `N` seconds. Here you can add broadcast a custom message to logged-in users. In this example, we are rebooting the machine in another 5 minutes.
+
+```
+# shutdown -r +5 "To activate the latest Kernel"
+Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel.
+
+[[email protected] ~]#
+Broadcast message from [email protected] (Mon 2018-10-08 07:08:16 EDT):
+
+To activate the latest Kernel
+The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT!
+
+```
+
+Run the below command to reboot a Linux machine immediately. It will kill all the processes immediately and will reboot the system.
+
+```
+# shutdown -r now
+
+```
+
+ * **`-r, --reboot:`** Reboot the machine.
+
+
+
+### Method-2: How To Shutdown And Reboot The Linux System Using reboot Command
+
+reboot command used to power-off or reboot a Linux remote machine or local host. Reboot command comes with two useful options.
+
+It will perform a graceful shutdown and restart of the machine (This is similar to your restart option which is available in your system menu).
+
+Run “reboot’ command without any option to reboot Linux machine.
+
+```
+# reboot
+
+```
+
+Run the “reboot” command with `-p` option to power-off or shutdown the Linux machine.
+
+```
+# reboot -p
+
+```
+
+ * **`-p, --poweroff:`** Power-off the machine, either halt or poweroff commands is invoked.
+
+
+
+Run the “reboot” command with `-f` option to forcefully reboot the Linux machine (This is similar to pressing the power button on the CPU).
+
+```
+# reboot -f
+
+```
+
+ * **`-f, --force:`** Force immediate halt, power-off, or reboot.
+
+
+
+### Method-3: How To Shutdown And Reboot The Linux System Using init Command
+
+init (short for initialization) is the first process started during booting of the computer system.
+
+It will check the /etc/inittab file to decide the Linux run level. Also, allow users to perform shutdown and reboot the Linux machine. There are seven runlevels exist, from zero to six.
+
+**Suggested Read :**
+**(#)** [How To Check All Running Services In Linux][3]
+
+Run the below init command to shutdown the system .
+
+```
+# init 0
+
+```
+
+ * **`0:`** Halt – to shutdown the system.
+
+
+
+Run the below init command to reboot the system .
+
+```
+# init 6
+
+```
+
+ * **`6:`** Reboot – to reboot the system.
+
+
+
+### Method-4: How To Shutdown The Linux System Using halt Command
+
+halt command used to power-off or shutdown a Linux remote machine or local host.
+halt terminates all processes and shuts down the cpu.
+
+```
+# halt
+
+```
+
+### Method-5: How To Shutdown The Linux System Using poweroff Command
+
+poweroff command used to power-off or shutdown a Linux remote machine or local host. Poweroff is exactly like halt, but it also turns off the unit itself (lights and everything on a PC). It sends an ACPI command to the board, then to the PSU, to cut the power.
+
+```
+# poweroff
+
+```
+
+### Method-6: How To Shutdown And Reboot The Linux System Using systemctl Command
+
+Systemd is a new init system and system manager which was implemented/adapted into all the major Linux distributions over the traditional SysV init systems.
+
+systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for sysvinit system. systemd is the first process get started by kernel and holding PID 1.
+
+**Suggested Read :**
+**(#)** [chkservice – A Tool For Managing Systemd Units From Linux Terminal][4]
+
+It’s a parent process for everything and Fedora 15 is the first distribution which was adapted systemd instead of upstart.
+
+systemctl is command line utility and primary tool to manage the systemd daemons/services such as (start, restart, stop, enable, disable, reload & status).
+
+systemd uses .service files Instead of bash scripts (SysVinit uses). systemd sorts all daemons into their own Linux cgroups and you can see the system hierarchy by exploring /cgroup/systemd file.
+
+```
+# systemctl halt
+# systemctl poweroff
+# systemctl reboot
+# systemctl suspend
+# systemctl hibernate
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/prakash/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
+[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/
+[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
+[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/
diff --git a/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md b/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md
new file mode 100644
index 0000000000..f2c17ff7c2
--- /dev/null
+++ b/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md
@@ -0,0 +1,70 @@
+Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool
+======
+**Mathpix is a nifty little tool that allows you to take screenshots of complex mathematical equations and instantly converts it into LaTeX editable text.**
+
+![Mathpix converts math equations images into LaTeX][1]
+
+[LaTeX editors][2] are excellent when it comes to writing academic and scientific documentation.
+
+There is a steep learning curved involved of course. And this learning curve becomes steeper if you have to write complex mathematical equations.
+
+[Mathpix][3] is a nifty little tool that helps you in this regard.
+
+Suppose you are reading a document that has mathematical equations. If you want to use those equations in your [LaTeX document][4], you need to use your ninja LaTeX skills and plenty of time.
+
+But Mathpix solves this problem for you. With Mathpix, you take the screenshot of the mathematical equations, and it will instantly give you the LaTeX code. You can then use this code in your [favorite LaTeX editor][2].
+
+See Mathpix in action in the video below:
+
+
+
+[Video credit][5]: Reddit User [kaitlinmcunningham][6]
+
+Isn’t it super-cool? I guess the hardest part of writing LaTeX documents are those complicated equations. For lazy bums like me, Mathpix is a godsend.
+
+### Getting Mathpix
+
+Mathpix is available for Linux, macOS, Windows and iOS. There is no Android app for the moment.
+
+Note: Mathpix is a free to use tool but it’s not open source.
+
+On Linux, [Mathpix is available as a Snap package][7]. Which means [if you have Snap support enabled on your Linux distribution][8], you can install Mathpix with this simple command:
+
+```
+sudo snap install mathpix-snipping-tool
+
+```
+
+Using Mathpix is simple. Once installed, open the tool. You’ll find it in the top panel. You can start taking the screenshot with Mathpix using the keyboard shortcut Ctrl+Alt+M.
+
+It will instantly translate the image of equation into a LaTeX code. The code will be copied into clipboard and you can then paste it in a LaTeX editor.
+
+Mathpix’s optical character recognition technology is [being used][9] by a number of companies like [WolframAlpha][10], Microsoft, Google, etc. to improve their tools’ image recognition capability while dealing with math symbols.
+
+Altogether, it’s an awesome tool for students and academics. It’s free to use and I so wish that it was an open source tool. We cannot get everything in life, can we?
+
+Do you use Mathpix or some other similar tool while dealing with mathematical symbols in LaTeX? What do you think of Mathpix? Share your views with us in the comment section.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/mathpix/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/mathpix-converts-equations-into-latex.jpeg
+[2]: https://itsfoss.com/latex-editors-linux/
+[3]: https://mathpix.com/
+[4]: https://www.latex-project.org/
+[5]: https://g.redditmedia.com/b-GL1rQwNezQjGvdlov9U_6vDwb1A7kEwGHYcQ1Ogtg.gif?fm=mp4&mp4-fragmented=false&s=39fd1816b43e2b544986d629f75a7a8e
+[6]: https://www.reddit.com/user/kaitlinmcunningham
+[7]: https://snapcraft.io/mathpix-snipping-tool
+[8]: https://itsfoss.com/install-snap-linux/
+[9]: https://mathpix.com/api.html
+[10]: https://www.wolframalpha.com/
diff --git a/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md
new file mode 100644
index 0000000000..6d78d132e2
--- /dev/null
+++ b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md
@@ -0,0 +1,198 @@
+How To Create And Maintain Your Own Man Pages
+======
+
+
+
+We already have discussed about a few [**good alternatives to Man pages**][1]. Those alternatives are mainly used for learning concise Linux command examples without having to go through the comprehensive man pages. If you’re looking for a quick and dirty way to easily and quickly learn a Linux command, those alternatives are worth trying. Now, you might be thinking – how can I create my own man-like help pages for a Linux command? This is where **“Um”** comes in handy. Um is a command line utility, used to easily create and maintain your own Man pages that contains only what you’ve learned about a command so far.
+
+By creating your own alternative to man pages, you can avoid lots of unnecessary, comprehensive details in a man page and include only what is necessary to keep in mind. If you ever wanted to created your own set of man-like pages, Um will definitely help. In this brief tutorial, we will see how to install “Um” command line utility and how to create our own man pages.
+
+### Installing Um
+
+Um is available for Linux and Mac OS. At present, it can only be installed using **Linuxbrew** package manager in Linux systems. Refer the following link if you haven’t installed Linuxbrew yet.
+
+Once Linuxbrew installed, run the following command to install Um utility.
+
+```
+$ brew install sinclairtarget/wst/um
+
+```
+
+If you will see an output something like below, congratulations! Um has been installed and ready to use.
+
+```
+[...]
+==> Installing sinclairtarget/wst/um
+==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz
+==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0
+-=#=# # #
+==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem
+######################################################################## 100.0%
+==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939
+==> Caveats
+Bash completion has been installed to:
+/home/linuxbrew/.linuxbrew/etc/bash_completion.d
+==> Summary
+🍺 /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds
+==> Caveats
+==> openssl
+A CA file has been bootstrapped using certificates from the SystemRoots
+keychain. To add additional certificates (e.g. the certificates added in
+the System keychain), place .pem files in
+/home/linuxbrew/.linuxbrew/etc/openssl/certs
+
+and run
+/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash
+==> ruby
+Emacs Lisp files have been installed to:
+/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby
+==> um
+Bash completion has been installed to:
+/home/linuxbrew/.linuxbrew/etc/bash_completion.d
+
+```
+
+Before going to use to make your man pages, you need to enable bash completion for Um.
+
+To do so, open your **~/.bash_profile** file:
+
+```
+$ nano ~/.bash_profile
+
+```
+
+And, add the following lines in it:
+
+```
+if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then
+ . $(brew --prefix)/etc/bash_completion.d/um-completion.sh
+fi
+
+```
+
+Save and close the file. Run the following commands to update the changes.
+
+```
+$ source ~/.bash_profile
+
+```
+
+All done. let us go ahead and create our first man page.
+
+### Create And Maintain Your Own Man Pages
+
+Let us say, you want to create your own man page for “dpkg” command. To do so, run:
+
+```
+$ um edit dpkg
+
+```
+
+The above command will open a markdown template in your default editor:
+
+
+
+My default editor is Vi, so the above commands open it in the Vi editor. Now, start adding everything you want to remember about “dpkg” command in this template.
+
+Here is a sample:
+
+
+
+As you see in the above output, I have added Synopsis, description and two options for dpkg command. You can add as many as sections you want in the man pages. Make sure you have given proper and easily-understandable titles for each section. Once done, save and quit the file (If you use Vi editor, Press **ESC** key and type **:wq** ).
+
+Finally, view your newly created man page using command:
+
+```
+$ um dpkg
+
+```
+
+
+
+As you can see, the the dpkg man page looks exactly like the official man pages. If you want to edit and/or add more details in a man page, again run the same command and add the details.
+
+```
+$ um edit dpkg
+
+```
+
+To view the list of newly created man pages using Um, run:
+
+```
+$ um list
+
+```
+
+All man pages will be saved under a directory named**`.um`**in your home directory
+
+Just in case, if you don’t want a particular page, simply delete it as shown below.
+
+```
+$ um rm dpkg
+
+```
+
+To view the help section and all available general options, run:
+
+```
+$ um --help
+usage: um
+ um [ARGS...]
+
+The first form is equivalent to `um read `.
+
+Subcommands:
+ um (l)ist List the available pages for the current topic.
+ um (r)ead Read the given page under the current topic.
+ um (e)dit Create or edit the given page under the current topic.
+ um rm Remove the given page.
+ um (t)opic [topic] Get or set the current topic.
+ um topics List all topics.
+ um (c)onfig [config key] Display configuration environment.
+ um (h)elp [sub-command] Display this help message, or the help message for a sub-command.
+
+```
+
+### Configure Um
+
+To view the current configuration, run:
+
+```
+$ um config
+Options prefixed by '*' are set in /home/sk/.um/umconfig.
+editor = vi
+pager = less
+pages_directory = /home/sk/.um/pages
+default_topic = shell
+pages_ext = .md
+
+```
+
+In this file, you can edit and change the values for **pager** , **editor** , **default_topic** , **pages_directory** , and **pages_ext** options as you wish. Say for example, if you want to save the newly created Um pages in your **[Dropbox][2]** folder, simply change the value of **pages_directory** directive and point it to the Dropbox folder in **~/.um/umconfig** file.
+
+```
+pages_directory = /Users/myusername/Dropbox/um
+
+```
+
+And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/
+[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/
diff --git a/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md b/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md
new file mode 100644
index 0000000000..f933449461
--- /dev/null
+++ b/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md
@@ -0,0 +1,163 @@
+5 alerting and visualization tools for sysadmins
+======
+These open source tools help users understand system behavior and output, and provide alerts for potential problems.
+
+
+
+You probably know (or can guess) what alerting and visualization tools are used for. Why would we discuss them as observability tools, especially since some systems include visualization as a feature?
+
+Observability comes from control theory and describes our ability to understand a system based on its inputs and outputs. This article focuses on the output component of observability.
+
+Alerting and visualization tools analyze the outputs of other systems and provide structured representations of these outputs. Alerts are basically a synthesized understanding of negative system outputs, and visualizations are disambiguated structured representations that facilitate user comprehension.
+
+### Common types of alerts and visualizations
+
+#### Alerts
+
+Let’s first cover what alerts are _not_. Alerts should not be sent if the human responder can’t do anything about the problem. This includes alerts that are sent to multiple individuals with only a few who can respond, or situations where every anomaly in the system triggers an alert. This leads to alert fatigue and receivers ignoring all alerts within a specific medium until the system escalates to a medium that isn’t already saturated.
+
+For example, if an operator receives hundreds of emails a day from the alerting system, that operator will soon ignore all emails from the alerting system. The operator will respond to a real incident only when he or she is experiencing the problem, emailed by a customer, or called by the boss. In this case, alerts have lost their meaning and usefulness.
+
+Alerts are not a constant stream of information or a status update. They are meant to convey a problem from which the system can’t automatically recover, and they are sent only to the individual most likely to be able to recover the system. Everything that falls outside this definition isn’t an alert and will only damage your employees and company culture.
+
+Everyone has a different set of alert types, so I won't discuss things like priority levels (P1-P5) or models that use words like "Informational," "Warning," and "Critical." Instead, I’ll describe the generic categories emergent in complex systems’ incident response.
+
+You might have noticed I mentioned an “Informational” alert type right after I wrote that alerts shouldn’t be informational. Well, not everyone agrees, but I don’t consider something an alert if it isn’t sent to anyone. It is a data point that many systems refer to as an alert. It represents some event that should be known but not responded to. It is generally part of the visualization system of the alerting tool and not an event that triggers actual notifications. Mike Julian covers this and other aspects of alerting in his book [Practical Monitoring][1]. It's a must read for work in this area.
+
+Non-informational alerts consist of types that can be responded to or require action. I group these into two categories: internal outage and external outage. (Most companies have more than two levels for prioritizing their response efforts.) Degraded system performance is considered an outage in this model, as the impact to each user is usually unknown.
+
+Internal outages are a lower priority than external outages, but they still need to be responded to quickly. They often include internal systems that company employees use or components of applications that are visible only to company employees.
+
+External outages consist of any system outage that would immediately impact a customer. These don’t include a system outage that prevents releasing updates to the system. They do include customer-facing application failures, database outages, and networking partitions that hurt availability or consistency if either can impact a user. They also include outages of tools that may not have a direct impact on users, as the application continues to run but this transparent dependency impacts performance. This is common when the system uses some external service or data source that isn’t necessary for full functionality but may cause delays as the application performs retries or handles errors from this external dependency.
+
+### Visualizations
+
+There are many visualization types, and I won’t cover them all here. It’s a fascinating area of research. On the data analytics side of my career, learning and applying that knowledge is a constant challenge. We need to provide simple representations of complex system outputs for the widest dissemination of information. [Google Charts][2] and [Tableau][3] have a wide selection of visualization types. We’ll cover the most common visualizations and some innovative solutions for quickly understanding systems.
+
+#### Line chart
+
+The line chart is probably the most common visualization. It does a pretty good job of producing an understanding of a system over time. A line chart in a metrics system would have a line for each unique metric or some aggregation of metrics. This can get confusing when there are a lot of metrics in the same dashboard (as shown below), but most systems can select specific metrics to view rather than having all of them visible. Also, anomalous behavior is easy to spot if it’s significant enough to escape the noise of normal operations. Below we can see purple, yellow, and light blue lines that might indicate anomalous behavior.
+
+
+
+Another feature of a line chart is that you can often stack them to show relationships. For example, you might want to look at requests on each server individually, but also in aggregate. This allows you to understand the overall system as well as each instance in the same graph.
+
+
+
+#### Heatmaps
+
+Another common visualization is the heatmap. It is useful when looking at histograms. This type of visualization is similar to a bar chart but can show gradients within the bars representing the different percentiles of the overall metric. For example, suppose you’re looking at request latencies and you want to quickly understand the overall trend as well as the distribution of all requests. A heatmap is great for this, and it can use color to disambiguate the quantity of each section with a quick glance.
+
+The heatmap below shows the higher concentration around the centerline of the graph with an easy-to-understand visualization of the distribution vertically for each time bucket. We might want to review a couple of points in time where the distribution gets wide while the others are fairly tight like at 14:00. This distribution might be a negative performance indicator.
+
+
+
+#### Gauges
+
+The last common visualization I’ll cover here is the gauge, which helps users understand a single metric quickly. Gauges can represent a single metric, like your speedometer represents your driving speed or your gas gauge represents the amount of gas in your car. Similar to the gas gauge, most monitoring gauges clearly indicate what is good and what isn’t. Often (as is shown below), good is represented by green, getting worse by orange, and “everything is breaking” by red. The middle row below shows traditional gauges.
+
+
+
+This image shows more than just traditional gauges. The other gauges are single stat representations that are similar to the function of the classic gauge. They all use the same color scheme to quickly indicate system health with just a glance. Arguably, the bottom row is probably the best example of a gauge that allows you to glance at a dashboard and know that everything is healthy (or not). This type of visualization is usually what I put on a top-level dashboard. It offers a full, high-level understanding of system health in seconds.
+
+#### Flame graphs
+
+A less common visualization is the flame graph, introduced by [Netflix’s Brendan Gregg][4] in 2011. It’s not ideal for dashboarding or quickly observing high-level system concerns; it’s normally seen when trying to understand a specific application problem. This visualization focuses on CPU and memory and the associated frames. The X-axis lists the frames alphabetically, and the Y-axis shows stack depth. Each rectangle is a stack frame and includes the function being called. The wider the rectangle, the more it appears in the stack. This method is invaluable when trying to diagnose system performance at the application level and I urge everyone to give it a try.
+
+
+
+### Tool options
+
+There are several commercial options for alerting, but since this is Opensource.com, I’ll cover only systems that are being used at scale by real companies that you can use at no cost. Hopefully, you’ll be able to contribute new and innovative features to make these systems even better.
+
+### Alerting tools
+
+#### Bosun
+
+If you’ve ever done anything with computers and gotten stuck, the help you received was probably thanks to a Stack Exchange system. Stack Exchange runs many different websites around a crowdsourced question-and-answer model. [Stack Overflow][5] is very popular with developers, and [Super User][6] is popular with operations. However, there are now hundreds of sites ranging from parenting to sci-fi and philosophy to bicycles.
+
+Stack Exchange open-sourced its alert management system, [Bosun][7], around the same time Prometheus and its [AlertManager][8] system were released. There were many similarities in the two systems, and that’s a really good thing. Like Prometheus, Bosun is written in Golang. Bosun’s scope is more extensive than Prometheus’ as it can interact with systems beyond metrics aggregation. It can also ingest data from log and event aggregation systems. It supports Graphite, InfluxDB, OpenTSDB, and Elasticsearch.
+
+Bosun’s architecture consists of a single server binary, a backend like OpenTSDB, Redis, and [scollector agents][9]. The scollector agents automatically detect services on a host and report metrics for those processes and other system resources. This data is sent to a metrics backend. The Bosun server binary then queries the backends to determine if any alerts need to be fired. Bosun can also be used by tools like [Grafana][10] to query the underlying backends through one common interface. Redis is used to store state and metadata for Bosun.
+
+A really neat feature of Bosun is that it lets you test your alerts against historical data. This was something I missed in Prometheus several years ago, when I had data for an issue I wanted alerts on but no easy way to test it. To make sure my alerts were working, I had to create and insert dummy data. This system alleviates that very time-consuming process.
+
+Bosun also has the usual features like showing simple graphs and creating alerts. It has a powerful expression language for writing alerting rules. However, it only has email and HTTP notification configurations, which means connecting to Slack and other tools requires a bit more customization ([which its documentation covers][11]). Similar to Prometheus, Bosun can use templates for these notifications, which means they can look as awesome as you want them to. You can use all your HTML and CSS skills to create the baddest email alert anyone has ever seen.
+
+#### Cabot
+
+[Cabot][12] was created by a company called [Arachnys][13]. You may not know who Arachnys is or what it does, but you have probably felt its impact: It built the leading cloud-based solution for fighting financial crimes. That sounds pretty cool, right? At a previous company, I was involved in similar functions around [“know your customer"][14] laws. Most companies would consider it a very bad thing to be linked to a terrorist group, for example, funneling money through their systems. These solutions also help defend against less-atrocious offenders like fraudsters who could also pose a risk to the institution.
+
+So why did Arachnys create Cabot? Well, it is kind of a Christmas present to everyone, as it was a Christmas project built because its developers couldn’t wrap their heads around [Nagios][15]. And really, who can blame them? Cabot was written with Django and Bootstrap, so it should be easy for most to contribute to the project. (Another interesting factoid: The name comes from the creator’s dog.)
+
+The Cabot architecture is similar to Bosun in that it doesn’t collect any data. Instead, it accesses data through the APIs of the tools it is alerting for. Therefore, Cabot uses a pull (rather than a push) model for alerting. It reaches out into each system’s API and retrieves the information it needs to make a decision based on a specific check. Cabot stores the alerting data in a Postgres database and also has a cache using Redis.
+
+Cabot natively supports [Graphite][16], but it also supports [Jenkins][17], which is rare in this area. [Arachnys][13] uses Jenkins like a centralized cron, but I like this idea of treating build failures like outages. Obviously, a build failure isn’t as critical as a production outage, but it could still alert the team and escalate if the failure isn’t resolved. Who actually checks Jenkins every time an email comes in about a build failure? Yeah, me too!
+
+Another interesting feature is that Cabot can integrate with Google Calendar for on-call rotations. Cabot calls this feature Rota, which is a British term for a roster or rotation. This makes a lot of sense, and I wish other systems would take this idea further. Cabot doesn’t support anything more complex than primary and backup personnel, but there is certainly room for additional features. The docs say if you want something more advanced, you should look at a commercial option.
+
+#### StatsAgg
+
+[StatsAgg][18]? How did that make the list? Well, it’s not every day you come across a publishing company that has created an alerting platform. I think that deserves recognition. Of course, [Pearson][19] isn’t just a publishing company anymore; it has several web presences and a joint venture with [O’Reilly Media][20]. However, I still think of it as the company that published my schoolbooks and tests.
+
+StatsAgg isn’t just an alerting platform; it’s also a metrics aggregation platform. And it’s kind of like a proxy for other systems. It supports Graphite, StatsD, InfluxDB, and OpenTSDB as inputs, but it can also forward those metrics to their respective platforms. This is an interesting concept, but potentially risky as loads increase on a central service. However, if the StatsAgg infrastructure is robust enough, it can still produce alerts even when a backend storage platform has an outage.
+
+StatsAgg is written in Java and consists only of the main server and UI, which keeps complexity to a minimum. It can send alerts based on regular expression matching and is focused on alerting by service rather than host or instance. Its goal is to fill a void in the open source observability stack, and I think it does that quite well.
+
+### Visualization tools
+
+#### Grafana
+
+Almost everyone knows about [Grafana][10], and many have used it. I have used it for years whenever I need a simple dashboard. The tool I used before was deprecated, and I was fairly distraught about that until Grafana made it okay. Grafana was gifted to us by Torkel Ödegaard. Like Cabot, Grafana was also created around Christmastime, and released in January 2014. It has come a long way in just a few years. It started life as a Kibana dashboarding system, and Torkel forked it into what became Grafana.
+
+Grafana’s sole focus is presenting monitoring data in a more usable and pleasing way. It can natively gather data from Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. There’s an Enterprise version that uses plugins for more data sources, but there’s no reason those other data source plugins couldn’t be created as open source, as the Grafana plugin ecosystem already offers many other data sources.
+
+What does Grafana do for me? It provides a central location for understanding my system. It is web-based, so anyone can access the information, although it can be restricted using different authentication methods. Grafana can provide knowledge at a glance using many different types of visualizations. However, it has started integrating alerting and other features that aren’t traditionally combined with visualizations.
+
+Now you can set alerts visually. That means you can look at a graph, maybe even one showing where an alert should have triggered due to some degradation of the system, click on the graph where you want the alert to trigger, and then tell Grafana where to send the alert. That’s a pretty powerful addition that won’t necessarily replace an alerting platform, but it can certainly help augment it by providing a different perspective on alerting criteria.
+
+Grafana has also introduced more collaboration features. Users have been able to share dashboards for a long time, meaning you don’t have to create your own dashboard for your [Kubernetes][21] cluster because there are several already available—with some maintained by Kubernetes developers and others by Grafana developers.
+
+The most significant addition around collaboration is annotations. Annotations allow a user to add context to part of a graph. Other users can then use this context to understand the system better. This is an invaluable tool when a team is in the middle of an incident and communication and common understanding are critical. Having all the information right where you’re already looking makes it much more likely that knowledge will be shared across the team quickly. It’s also a nice feature to use during blameless postmortems when the team is trying to understand how the failure occurred and learn more about their system.
+
+#### Vizceral
+
+Netflix created [Vizceral][22] to understand its traffic patterns better when performing a traffic failover. Unlike Grafana, which is a more general tool, Vizceral serves a very specific use case. Netflix no longer uses this tool internally and says it is no longer actively maintained, but it still updates the tool periodically. I highlight it here primarily to point out an interesting visualization mechanism and how it can help solve a problem. It’s worth running it in a demo environment just to better grasp the concepts and witness what’s possible with these systems.
+
+### What to read next
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/alerting-and-visualization-tools-sysadmins
+
+作者:[Dan Barker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/barkerd427
+[b]: https://github.com/lujun9972
+[1]: https://www.practicalmonitoring.com/
+[2]: https://developers.google.com/chart/interactive/docs/gallery
+[3]: https://libguides.libraries.claremont.edu/c.php?g=474417&p=3286401
+[4]: http://www.brendangregg.com/flamegraphs.html
+[5]: https://stackoverflow.com/
+[6]: https://superuser.com/
+[7]: http://bosun.org/
+[8]: https://prometheus.io/docs/alerting/alertmanager/
+[9]: https://bosun.org/scollector/
+[10]: https://grafana.com/
+[11]: https://bosun.org/notifications
+[12]: https://cabotapp.com/
+[13]: https://www.arachnys.com/
+[14]: https://en.wikipedia.org/wiki/Know_your_customer
+[15]: https://www.nagios.org/
+[16]: https://graphiteapp.org/
+[17]: https://jenkins.io/
+[18]: https://github.com/PearsonEducation/StatsAgg
+[19]: https://www.pearson.com/us/
+[20]: https://www.oreilly.com/
+[21]: https://opensource.com/resources/what-is-kubernetes
+[22]: https://github.com/Netflix/vizceral
diff --git a/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md
new file mode 100644
index 0000000000..6998661f23
--- /dev/null
+++ b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md
@@ -0,0 +1,457 @@
+An introduction to using tcpdump at the Linux command line
+======
+
+This flexible, powerful command-line tool helps ease the pain of troubleshooting network issues.
+
+
+
+In my experience as a sysadmin, I have often found network connectivity issues challenging to troubleshoot. For those situations, tcpdump is a great ally.
+
+Tcpdump is a command-line utility that allows you to capture and analyze network traffic going through your system. It is often used to help troubleshoot network issues, and it can also serve as a security tool.
+
+A powerful and versatile tool that includes many options and filters, tcpdump can be used in a variety of cases. Since it's a command-line tool, it is ideal for running on remote servers or devices without a GUI, collecting data that can be analyzed later. It can also be launched in the background or as a scheduled job using tools like cron, as sketched below.
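+
+For instance, here's a minimal sketch of that idea, writing raw packets to a file for later analysis; the interface name, file path, and cron schedule are just placeholders:
+
+```
+# Run a capture in the background; the file can be read back later
+# with `tcpdump -r /tmp/capture.pcap`
+$ sudo tcpdump -i eth0 -w /tmp/capture.pcap &
+
+# Or, from root's crontab, capture 1,000 packets every day at 2 a.m.
+0 2 * * * /usr/sbin/tcpdump -i eth0 -c 1000 -w /tmp/daily.pcap
+```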
+
+In this article, we'll look at some of tcpdump's most common features.
+
+### 1\. Installation on Linux
+
+Tcpdump is included with several Linux distributions, so chances are, you already have it installed. Check if tcpdump is installed on your system with the following command:
+
+```
+$ which tcpdump
+/usr/sbin/tcpdump
+```
+
+If tcpdump is not installed, you can install it with your distribution's package manager. For example, on CentOS or Red Hat Enterprise Linux, install it like this:
+
+```
+$ sudo yum install -y tcpdump
+```
+
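+On a Debian-based distribution such as Ubuntu, the package has the same name, so something like this should work:
+
+```
+$ sudo apt install -y tcpdump
+```
+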
+Tcpdump requires `libpcap`, which is a library for network packet capture. If it's not installed, it will be automatically added as a dependency.
+
+You're ready to start capturing some packets.
+
+### 2\. Capturing packets with tcpdump
+
+To capture packets for troubleshooting or analysis, tcpdump requires elevated permissions, so in the following examples most commands are prefixed with `sudo`.
+
+To begin, use the command `tcpdump -D` to see which interfaces are available for capture:
+
+```
+$ sudo tcpdump -D
+1.eth0
+2.virbr0
+3.eth1
+4.any (Pseudo-device that captures on all interfaces)
+5.lo [Loopback]
+```
+
+In the example above, you can see all the interfaces available on my machine. The special interface `any` allows capturing on all active interfaces.
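+
+To capture on a single interface from that list instead, pass its name to `-i`; for example, using the `eth0` interface shown above:
+
+```
+$ sudo tcpdump -i eth0
+```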
+
+Let's use the `any` interface to start capturing some packets. Capture all packets on all interfaces by running this command:
+
+```
+$ sudo tcpdump -i any
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+09:56:18.293641 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3770820720:3770820916, ack 3503648727, win 309, options [nop,nop,TS val 76577898 ecr 510770929], length 196
+09:56:18.293794 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 391, options [nop,nop,TS val 510771017 ecr 76577898], length 0
+09:56:18.295058 IP rhel75.59883 > gateway.domain: 2486+ PTR? 1.64.168.192.in-addr.arpa. (43)
+09:56:18.310225 IP gateway.domain > rhel75.59883: 2486 NXDomain* 0/1/0 (102)
+09:56:18.312482 IP rhel75.49685 > gateway.domain: 34242+ PTR? 28.64.168.192.in-addr.arpa. (44)
+09:56:18.322425 IP gateway.domain > rhel75.49685: 34242 NXDomain* 0/1/0 (103)
+09:56:18.323164 IP rhel75.56631 > gateway.domain: 29904+ PTR? 1.122.168.192.in-addr.arpa. (44)
+09:56:18.323342 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 196:584, ack 1, win 309, options [nop,nop,TS val 76577928 ecr 510771017], length 388
+09:56:18.323563 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 584, win 411, options [nop,nop,TS val 510771047 ecr 76577928], length 0
+09:56:18.335569 IP gateway.domain > rhel75.56631: 29904 NXDomain* 0/1/0 (103)
+09:56:18.336429 IP rhel75.44007 > gateway.domain: 61677+ PTR? 98.122.168.192.in-addr.arpa. (45)
+09:56:18.336655 IP gateway.domain > rhel75.44007: 61677* 1/0/0 PTR rhel75. (65)
+09:56:18.337177 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 584:1644, ack 1, win 309, options [nop,nop,TS val 76577942 ecr 510771047], length 1060
+
+---- SKIPPING LONG OUTPUT -----
+
+09:56:19.342939 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 1752016, win 1444, options [nop,nop,TS val 510772067 ecr 76578948], length 0
+^C
+9003 packets captured
+9010 packets received by filter
+7 packets dropped by kernel
+$
+```
+
+Tcpdump continues to capture packets until it receives an interrupt signal. You can interrupt capturing by pressing `Ctrl+C`. As you can see in this example, `tcpdump` captured more than 9,000 packets. In this case, since I am connected to this server using `ssh`, tcpdump captured all these packets. To limit the number of packets captured and stop `tcpdump`, use the `-c` option:
+
+```
+$ sudo tcpdump -i any -c 5
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+11:21:30.242740 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3772575680:3772575876, ack 3503651743, win 309, options [nop,nop,TS val 81689848 ecr 515883153], length 196
+11:21:30.242906 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 1443, options [nop,nop,TS val 515883235 ecr 81689848], length 0
+11:21:30.244442 IP rhel75.43634 > gateway.domain: 57680+ PTR? 1.64.168.192.in-addr.arpa. (43)
+11:21:30.244829 IP gateway.domain > rhel75.43634: 57680 NXDomain 0/0/0 (43)
+11:21:30.247048 IP rhel75.33696 > gateway.domain: 37429+ PTR? 28.64.168.192.in-addr.arpa. (44)
+5 packets captured
+12 packets received by filter
+0 packets dropped by kernel
+$
+```
+
+In this case, `tcpdump` stopped capturing automatically after capturing five packets. This is useful in different scenarios, such as when you're troubleshooting connectivity and capturing a few initial packets is enough. This is even more useful when we apply filters to capture specific packets (shown below).
+
+By default, tcpdump resolves IP addresses and ports into names, as shown in the previous example. When troubleshooting network issues, it is often easier to work with the IP addresses and port numbers directly; disable name resolution with the option `-n`, and disable both name and port resolution with `-nn`:
+
+```
+$ sudo tcpdump -i any -c5 -nn
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+23:56:24.292206 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 166198580:166198776, ack 2414541257, win 309, options [nop,nop,TS val 615664 ecr 540031155], length 196
+23:56:24.292357 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 196, win 1377, options [nop,nop,TS val 540031229 ecr 615664], length 0
+23:56:24.292570 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 372
+23:56:24.292655 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 568, win 1400, options [nop,nop,TS val 540031229 ecr 615664], length 0
+23:56:24.292752 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 568:908, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 340
+5 packets captured
+6 packets received by filter
+0 packets dropped by kernel
+```
+
+As shown above, the capture output now displays the IP addresses and port numbers. This also prevents tcpdump from issuing DNS lookups, which helps to lower network traffic while troubleshooting network issues.
+
+Now that you're able to capture network packets, let's explore what this output means.
+
+### 3\. Understanding the output format
+
+Tcpdump is capable of capturing and decoding many different protocols, such as TCP, UDP, ICMP, and many more. While we can't cover all of them here, to help you get started, let's explore the TCP packet. You can find more details about the different protocol formats in tcpdump's [manual pages][1]. A typical TCP packet captured by tcpdump looks like this:
+
+```
+08:41:13.729687 IP 192.168.64.28.22 > 192.168.64.1.41916: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 117964079 ecr 816509256], length 372
+```
+
+The fields may vary depending on the type of packet being sent, but this is the general format.
+
+The first field, `08:41:13.729687`, represents the timestamp of the received packet as per the local clock.
+
+Next, `IP` represents the network layer protocol—in this case, `IPv4`. For `IPv6` packets, the value is `IP6`.
+
+The next field, `192.168.64.28.22`, is the source IP address and port. This is followed by the destination IP address and port, represented by `192.168.64.1.41916`.
+
+After the source and destination, you can find the TCP Flags `Flags [P.]`. Typical values for this field include:
+
+| Value | Flag Type | Description |
+|-------| --------- | ----------------- |
+| S | SYN | Connection Start |
+| F | FIN | Connection Finish |
+| P | PUSH | Data push |
+| R | RST | Connection reset |
+| . | ACK | Acknowledgment |
+
+This field can also be a combination of these values, such as `[S.]` for a `SYN-ACK` packet.
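+
+As an aside, the pcap filter syntax covered later in this article can match these flag bits directly. A small sketch, capturing only connection attempts (SYN set, ACK clear); `tcp-syn` and `tcp-ack` are standard pcap-filter constants:
+
+```
+# Quotes keep the shell from interpreting the brackets and parentheses
+$ sudo tcpdump -i any -nn "tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn"
+```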
+
+Next is the sequence number of the data contained in the packet. For the first packet captured, this is an absolute number. Subsequent packets use a relative number to make it easier to follow. In this example, the sequence is `seq 196:568`, which means this packet contains bytes 196 to 568 of this flow.
+
+This is followed by the Ack Number: `ack 1`. In this case, it is 1 since this is the side sending data. For the side receiving data, this field represents the next expected byte (data) on this flow. For example, the Ack number for the next packet in this flow would be 568.
+
+The next field is the window size `win 309`, which represents the number of bytes available in the receiving buffer, followed by TCP options such as the MSS (Maximum Segment Size) or Window Scale. For details about TCP protocol options, consult [Transmission Control Protocol (TCP) Parameters][2].
+
+Finally, we have the packet length, `length 372`, which represents the length, in bytes, of the payload data. The length is the difference between the last and first bytes in the sequence number: 568 - 196 = 372.
+
+Now let's learn how to filter packets to narrow down results and make it easier to troubleshoot specific issues.
+
+### 4\. Filtering packets
+
+As mentioned above, tcpdump can capture too many packets, some of which are not even related to the issue you're troubleshooting. For example, if you're troubleshooting a connectivity issue with a web server, you're not interested in the SSH traffic, so removing the SSH packets from the output makes it easier to work on the real issue.
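+
+A minimal sketch of that idea, using the `not` operator (which tcpdump supports alongside the `and` and `or` operators shown below) to exclude SSH traffic:
+
+```
+# Capture everything except SSH (port 22) traffic
+$ sudo tcpdump -i any -nn not port 22
+```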
+
+One of tcpdump's most powerful features is its ability to filter the captured packets using a variety of parameters, such as source and destination IP addresses, ports, protocols, etc. Let's look at some of the most common ones.
+
+#### Protocol
+
+To filter packets based on protocol, specify the protocol on the command line. For example, capture only ICMP packets by using this command:
+
+```
+$ sudo tcpdump -i any -c5 icmp
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+```
+
+In a different terminal, try to ping another machine:
+
+```
+$ ping opensource.com
+PING opensource.com (54.204.39.132) 56(84) bytes of data.
+64 bytes from ec2-54-204-39-132.compute-1.amazonaws.com (54.204.39.132): icmp_seq=1 ttl=47 time=39.6 ms
+```
+
+Back in the tcpdump capture, notice that tcpdump captures and displays only the ICMP-related packets. In this case, tcpdump is not displaying name resolution packets that were generated when resolving the name `opensource.com`:
+
+```
+09:34:20.136766 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 1, length 64
+09:34:20.176402 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 1, length 64
+09:34:21.140230 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 2, length 64
+09:34:21.180020 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 2, length 64
+09:34:22.141777 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 3, length 64
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+#### Host
+
+Limit capture to only packets related to a specific host by using the `host` filter:
+
+```
+$ sudo tcpdump -i any -c5 -nn host 54.204.39.132
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+09:54:20.042023 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [S], seq 1375157070, win 29200, options [mss 1460,sackOK,TS val 122350391 ecr 0,nop,wscale 7], length 0
+09:54:20.088127 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [S.], seq 1935542841, ack 1375157071, win 28960, options [mss 1460,sackOK,TS val 522713542 ecr 122350391,nop,wscale 9], length 0
+09:54:20.088204 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122350437 ecr 522713542], length 0
+09:54:20.088734 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122350438 ecr 522713542], length 112: HTTP: GET / HTTP/1.1
+09:54:20.129733 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [.], ack 113, win 57, options [nop,nop,TS val 522713552 ecr 122350438], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+In this example, tcpdump captures and displays only packets to and from host `54.204.39.132`.
+
+#### Port
+
+To filter packets based on the desired service or port, use the `port` filter. For example, capture packets related to a web (HTTP) service by using this command:
+
+```
+$ sudo tcpdump -i any -c5 -nn port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+09:58:28.790548 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [S], seq 1745665159, win 29200, options [mss 1460,sackOK,TS val 122599140 ecr 0,nop,wscale 7], length 0
+09:58:28.834026 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [S.], seq 4063583040, ack 1745665160, win 28960, options [mss 1460,sackOK,TS val 522775728 ecr 122599140,nop,wscale 9], length 0
+09:58:28.834093 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122599183 ecr 522775728], length 0
+09:58:28.834588 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122599184 ecr 522775728], length 112: HTTP: GET / HTTP/1.1
+09:58:28.878445 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [.], ack 113, win 57, options [nop,nop,TS val 522775739 ecr 122599184], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+#### Source IP/hostname
+
+You can also filter packets based on the source or destination IP address or hostname. For example, to capture packets from host `192.168.122.98`:
+
+```
+$ sudo tcpdump -i any -c5 -nn src 192.168.122.98
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:02:15.220824 IP 192.168.122.98.39436 > 192.168.122.1.53: 59332+ A? opensource.com. (32)
+10:02:15.220862 IP 192.168.122.98.39436 > 192.168.122.1.53: 20749+ AAAA? opensource.com. (32)
+10:02:15.364062 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [S], seq 1108640533, win 29200, options [mss 1460,sackOK,TS val 122825713 ecr 0,nop,wscale 7], length 0
+10:02:15.409229 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [.], ack 669337581, win 229, options [nop,nop,TS val 122825758 ecr 522832372], length 0
+10:02:15.409667 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 122825759 ecr 522832372], length 112: HTTP: GET / HTTP/1.1
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+Notice that tcpdump captured packets with source IP address `192.168.122.98` for multiple services, such as name resolution (port 53) and HTTP (port 80). The response packets are not displayed since their source IP is different.
+
+Conversely, you can use the `dst` filter to filter by destination IP/hostname:
+
+```
+$ sudo tcpdump -i any -c5 -nn dst 192.168.122.98
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:05:03.572931 IP 192.168.122.1.53 > 192.168.122.98.47049: 2248 1/0/0 A 54.204.39.132 (48)
+10:05:03.572944 IP 192.168.122.1.53 > 192.168.122.98.47049: 33770 0/0/0 (32)
+10:05:03.621833 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [S.], seq 3474204576, ack 3256851264, win 28960, options [mss 1460,sackOK,TS val 522874425 ecr 122993922,nop,wscale 9], length 0
+10:05:03.667767 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [.], ack 113, win 57, options [nop,nop,TS val 522874436 ecr 122993972], length 0
+10:05:03.672221 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 522874437 ecr 122993972], length 642: HTTP: HTTP/1.1 302 Found
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+#### Complex expressions
+
+You can also combine filters by using the logical operators `and` and `or` to create more complex expressions. For example, to filter packets from source IP address `192.168.122.98` and service HTTP only, use this command:
+
+```
+$ sudo tcpdump -i any -c5 -nn src 192.168.122.98 and port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:08:00.472696 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [S], seq 2712685325, win 29200, options [mss 1460,sackOK,TS val 123170822 ecr 0,nop,wscale 7], length 0
+10:08:00.516118 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 268723504, win 229, options [nop,nop,TS val 123170865 ecr 522918648], length 0
+10:08:00.516583 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 123170866 ecr 522918648], length 112: HTTP: GET / HTTP/1.1
+10:08:00.567044 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 123170916 ecr 522918661], length 0
+10:08:00.788153 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [F.], seq 112, ack 643, win 239, options [nop,nop,TS val 123171137 ecr 522918661], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+You can create more complex expressions by grouping filters with parentheses. In this case, enclose the entire filter expression in quotation marks to prevent the shell from interpreting the parentheses:
+
+```
+$ sudo tcpdump -i any -c5 -nn "port 80 and (src 192.168.122.98 or src 54.204.39.132)"
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:10:37.602214 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [S], seq 871108679, win 29200, options [mss 1460,sackOK,TS val 123327951 ecr 0,nop,wscale 7], length 0
+10:10:37.650651 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [S.], seq 854753193, ack 871108680, win 28960, options [mss 1460,sackOK,TS val 522957932 ecr 123327951,nop,wscale 9], length 0
+10:10:37.650708 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 0
+10:10:37.651097 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 112: HTTP: GET / HTTP/1.1
+10:10:37.692900 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [.], ack 113, win 57, options [nop,nop,TS val 522957942 ecr 123328000], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+In this example, we're filtering packets for HTTP service only (port 80) and source IP addresses `192.168.122.98` or `54.204.39.132`. This is a quick way of examining both sides of the same flow.
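+
+The `not` operator mentioned earlier composes with these as well. For example, a sketch that hides both DNS chatter and the SSH session you're connected over:
+
+```
+$ sudo tcpdump -i any -nn "not port 22 and not port 53"
+```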
+
+### 5\. Checking packet content
+
+In the previous examples, we're checking only the packets' headers for information such as sources, destinations, and ports. Sometimes this is all we need to troubleshoot network connectivity issues. Sometimes, however, we need to inspect the content of the packets to ensure that the message we're sending contains what we need or that we received the expected response. To see the packet content, tcpdump provides two additional flags: `-X` to print the content in both hex and ASCII, and `-A` to print the content in ASCII.
+
+For example, inspect the HTTP content of a web request like this:
+
+```
+$ sudo tcpdump -i any -c10 -nn -A port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+13:02:14.871803 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [S], seq 2546602048, win 29200, options [mss 1460,sackOK,TS val 133625221 ecr 0,nop,wscale 7], length 0
+E..<..@.@.....zb6.'....P...@......r............
+............................
+13:02:14.910734 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [S.], seq 1877348646, ack 2546602049, win 28960, options [mss 1460,sackOK,TS val 525532247 ecr 133625221,nop,wscale 9], length 0
+E..<..@./..a6.'...zb.P..o..&...A..q a..........
+.R.W....... ................
+13:02:14.910832 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 133625260 ecr 525532247], length 0
+E..4..@.@.....zb6.'....P...Ao..'...........
+.....R.W................
+13:02:14.911808 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 133625261 ecr 525532247], length 112: HTTP: GET / HTTP/1.1
+E.....@.@..1..zb6.'....P...Ao..'...........
+.....R.WGET / HTTP/1.1
+User-Agent: Wget/1.14 (linux-gnu)
+Accept: */*
+Host: opensource.com
+Connection: Keep-Alive
+
+................
+13:02:14.951199 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [.], ack 113, win 57, options [nop,nop,TS val 525532257 ecr 133625261], length 0
+E..4.F@./.."6.'...zb.P..o..'.......9.2.....
+.R.a....................
+13:02:14.955030 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 525532258 ecr 133625261], length 642: HTTP: HTTP/1.1 302 Found
+E....G@./...6.'...zb.P..o..'.......9.......
+.R.b....HTTP/1.1 302 Found
+Server: nginx
+Date: Sun, 23 Sep 2018 17:02:14 GMT
+Content-Type: text/html; charset=iso-8859-1
+Content-Length: 207
+X-Content-Type-Options: nosniff
+Location: https://opensource.com/
+Cache-Control: max-age=1209600
+Expires: Sun, 07 Oct 2018 17:02:14 GMT
+X-Request-ID: v-6baa3acc-bf52-11e8-9195-22000ab8cf2d
+X-Varnish: 632951979
+Age: 0
+Via: 1.1 varnish (Varnish/5.2)
+X-Cache: MISS
+Connection: keep-alive
+
+
+
+302 Found
+
+