**, and notice that the paragraph line is indented automatically.
-
-```
-
-
Vim plugins are awesome !
-
-```
-
-Vim Surround has many other options. Give it a try—and consult [GitHub][7] for additional information.
-
-### 4\. Vim Gitgutter
-
-The [Vim Gitgutter][8] plugin is useful for anyone using Git for version control. It shows the output of **Git diff** as symbols in the "gutter"—the sign column where Vim presents additional information, such as line numbers. For example, consider the following as the committed version in Git:
-
-```
- 1 package main
- 2
- 3 import "fmt"
- 4
- 5 func main() {
- 6 x := true
- 7 items := []string{"tv", "pc", "tablet"}
- 8
- 9 if x {
- 10 for _, i := range items {
- 11 fmt.Println(i)
- 12 }
- 13 }
- 14 }
-```
-
-After making some changes, Vim Gitgutter displays the following symbols in the gutter:
-
-```
- 1 package main
- 2
- 3 import "fmt"
- 4
-_ 5 func main() {
- 6 items := []string{"tv", "pc", "tablet"}
- 7
-~ 8 if len(items) > 0 {
- 9 for _, i := range items {
- 10 fmt.Println(i)
-+ 11 fmt.Println("------")
- 12 }
- 13 }
- 14 }
-```
-
-The **_** symbol shows that a line was deleted between lines 5 and 6. The **~** symbol shows that line 8 was modified, and the **+** symbol shows that line 11 was added.
-
-In addition, Vim Gitgutter allows you to navigate between "hunks"—individual changes made in the file—with **[c** and **]c**, or even stage individual hunks for commit by pressing **Leader+hs**.
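-
-For quick reference, here are these default mappings plus two related ones (listed from memory; confirm them with **:help gitgutter**):
-
-```
-]c          jump to the next hunk
-[c          jump to the previous hunk
-<Leader>hs  stage the hunk under the cursor
-<Leader>hu  undo a staged hunk
-<Leader>hp  preview the hunk's changes
-```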
-
-This plugin gives you immediate visual feedback of changes, and it's a great addition to your toolbox if you use Git.
-
-### 5\. Vim Fugitive
-
-[Vim Fugitive][9] is another great plugin for anyone incorporating Git into the Vim workflow. It's a Git wrapper that allows you to execute Git commands directly from Vim and integrates with Vim's interface. This plugin has many features—check its [GitHub][10] page for more information.
-
-Here's a basic Git workflow example using Vim Fugitive. Considering the changes we've made to the Go code block in section 4, you can use **git blame** by typing the command **:Gblame**:
-
-```
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 1 package main
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 2
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 3 import "fmt"
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 4
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│_ 5 func main() {
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 6 items := []string{"tv", "pc", "tablet"}
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 7
-00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│~ 8 if len(items) > 0 {
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 9 for _, i := range items {
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 10 fmt.Println(i)
-00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│+ 11 fmt.Println("------")
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 12 }
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 13 }
-e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 14 }
-```
-
-You can see that lines 8 and 11 have not been committed. Check the repository status by typing **:Gstatus**:
-
-```
- 1 # On branch master
- 2 # Your branch is up to date with 'origin/master'.
- 3 #
- 4 # Changes not staged for commit:
- 5 # (use "git add <file>..." to update what will be committed)
- 6 # (use "git checkout -- <file>..." to discard changes in working directory)
- 7 #
- 8 # modified: vim-5plugins/examples/test1.go
- 9 #
- 10 no changes added to commit (use "git add" and/or "git commit -a")
---------------------------------------------------------------------------------------------------------
- 1 package main
- 2
- 3 import "fmt"
- 4
-_ 5 func main() {
- 6 items := []string{"tv", "pc", "tablet"}
- 7
-~ 8 if len(items) > 0 {
- 9 for _, i := range items {
- 10 fmt.Println(i)
-+ 11 fmt.Println("------")
- 12 }
- 13 }
- 14 }
-```
-
-Vim Fugitive opens a split window with the result of **git status**. You can stage a file for commit by pressing the **-** key on the line with the name of the file. You can reset the status by pressing **-** again. The message updates to reflect the new status:
-
-```
- 1 # On branch master
- 2 # Your branch is up to date with 'origin/master'.
- 3 #
- 4 # Changes to be committed:
- 5 # (use "git reset HEAD <file>..." to unstage)
- 6 #
- 7 # modified: vim-5plugins/examples/test1.go
- 8 #
---------------------------------------------------------------------------------------------------------
- 1 package main
- 2
- 3 import "fmt"
- 4
-_ 5 func main() {
- 6 items := []string{"tv", "pc", "tablet"}
- 7
-~ 8 if len(items) > 0 {
- 9 for _, i := range items {
- 10 fmt.Println(i)
-+ 11 fmt.Println("------")
- 12 }
- 13 }
- 14 }
-```
-
-Now you can use the command **:Gcommit** to commit the changes. Vim Fugitive opens another split that allows you to enter a commit message:
-
-```
- 1 vim-5plugins: Updated test1.go example file
- 2 # Please enter the commit message for your changes. Lines starting
- 3 # with '#' will be ignored, and an empty message aborts the commit.
- 4 #
- 5 # On branch master
- 6 # Your branch is up to date with 'origin/master'.
- 7 #
- 8 # Changes to be committed:
- 9 # modified: vim-5plugins/examples/test1.go
- 10 #
-```
-
-Save the file with **:wq** to complete the commit:
-
-```
-[master c3bf80f] vim-5plugins: Updated test1.go example file
- 1 file changed, 2 insertions(+), 2 deletions(-)
-Press ENTER or type command to continue
-```
-
-You can use **:Gstatus** again to see the result and **:Gpush** to update the remote repository with the new commit.
-
-```
- 1 # On branch master
- 2 # Your branch is ahead of 'origin/master' by 1 commit.
- 3 # (use "git push" to publish your local commits)
- 4 #
- 5 nothing to commit, working tree clean
-```
-
-If you like Vim Fugitive and want to learn more, the GitHub repository has links to screencasts showing additional functionality and workflows. Check it out!
-
-### What's next?
-
-These Vim plugins help developers write code in any programming language. There are two other categories of plugins to help developers: code-completion plugins and syntax-checker plugins. They are usually related to specific programming languages, so I will cover them in a follow-up article.
-
-Do you have another Vim plugin you use when writing code? Please share it in the comments below.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/vim-plugins-developers
-
-作者:[Ricardo Gerardi][a]
-选题:[lujun9972][b]
-译者:[pityonline](https://github.com/pityonline)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/rgerardi
-[b]: https://github.com/lujun9972
-[1]: https://www.vim.org/
-[2]: https://www.vim.org/scripts/script.php?script_id=3599
-[3]: https://github.com/jiangmiao/auto-pairs
-[4]: https://github.com/scrooloose/nerdcommenter
-[5]: http://vim.wikia.com/wiki/Filetype.vim
-[6]: https://www.vim.org/scripts/script.php?script_id=1697
-[7]: https://github.com/tpope/vim-surround
-[8]: https://github.com/airblade/vim-gitgutter
-[9]: https://www.vim.org/scripts/script.php?script_id=2975
-[10]: https://github.com/tpope/vim-fugitive
diff --git a/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md b/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md
deleted file mode 100644
index fbd8b9d120..0000000000
--- a/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md
+++ /dev/null
@@ -1,170 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Top 5 Linux Distributions for Productivity)
-[#]: via: (https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity)
-[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
-
-Top 5 Linux Distributions for Productivity
-======
-
-
-
-I have to confess, this particular topic is a tough one to address. Why? First off, Linux is a productive operating system by design. Thanks to an incredibly reliable and stable platform, getting work done is easy. Second, to gauge effectiveness, you have to consider what type of work you need a productivity boost for. General office work? Development? School? Data mining? Human resources? You see how this question can get somewhat complicated.
-
-That doesn’t mean, however, that some distributions aren’t able to do a better job of configuring and presenting the underlying operating system as an efficient platform for getting work done. Quite the contrary. Some distributions do a much better job of “getting out of the way,” so you don’t find yourself in a work-related hole, having to dig yourself out and catch up before the end of the day. These distributions help strip away the complexity that can be found in Linux, thereby making your workflow painless.
-
-Let’s take a look at the distros I consider to be your best bet for productivity. To help make sense of this, I’ve divided them into categories of productivity. That task itself was challenging, because everyone’s productivity varies. For the purposes of this list, however, I’ll look at:
-
- * General Productivity: For those who just need to work efficiently on multiple tasks.
-
- * Graphic Design: For those that work with the creation and manipulation of graphic images.
-
- * Development: For those who use their Linux desktops for programming.
-
- * Administration: For those who need a distribution to facilitate their system administration tasks.
-
- * Education: For those who need a desktop distribution to make them more productive in an educational environment.
-
-
-
-
-Yes, there are more categories to be had, many of which can get very niche-y, but these five should fill most of your needs.
-
-### General Productivity
-
-For general productivity, you won’t get much more efficient than [Ubuntu][1]. The primary reason for choosing Ubuntu for this category is the seamless integration of apps, services, and desktop. You might be wondering why I didn’t choose Linux Mint for this category. It’s because Ubuntu now defaults to the GNOME desktop, so it gains the added advantage of GNOME Extensions (Figure 1).
-
-![GNOME Clipboard][3]
-
-Figure 1: The GNOME Clipboard Indicator extension in action.
-
-[Used with permission][4]
-
-These extensions go a very long way to aid in boosting productivity (so Ubuntu gets the nod over Mint). But Ubuntu didn’t just accept a vanilla GNOME desktop. Instead, Canonical tweaked it to make it slightly more efficient and user-friendly out of the box. And because Ubuntu contains just the right mixture of default apps that just work, it makes for a nearly perfect platform for productivity.
-
-Whether you need to write a paper, work on a spreadsheet, code a new app, work on your company website, create marketing images, administer a server or network, or manage human resources from within your company HR tool, Ubuntu has you covered. The Ubuntu desktop distribution also doesn’t require the user to jump through many hoops to get things working … it simply works (and quite well). Finally, thanks to its Debian base, Ubuntu makes installing third-party apps incredibly easy.
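-
-For instance, a downloaded third-party .deb package (the file name below is hypothetical) installs in a single step:
-
-```
-sudo apt install ./example-app_1.0_amd64.deb
-```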
-
-Although Ubuntu tends to be the go-to for nearly every list of “top distributions for X,” it’s very hard to argue against this particular distribution topping the list of general productivity distributions.
-
-### Graphic Design
-
-If you’re looking to up your graphic design productivity, you can’t go wrong with [Fedora Design Suite][5]. This Fedora respin was created by the team responsible for all Fedora-related artwork. Although the default selection of apps isn’t a massive collection of tools, those it does include are geared specifically for the creation and manipulation of images.
-
-With apps like GIMP, Inkscape, Darktable, Krita, Entangle, Blender, Pitivi, Scribus, and more (Figure 2), you’ll find everything you need to get your image editing jobs done and done well. But Fedora Design Suite doesn’t end there. This desktop platform also includes a bevy of tutorials that cover countless subjects for many of the installed applications. For anyone trying to be as productive as possible, this is some seriously handy information to have at the ready. I will say, however, the tutorial entry in the GNOME Favorites is nothing more than a link to [this page][6].
-
-![Fedora Design Suite Favorites][8]
-
-Figure 2: The Fedora Design Suite Favorites menu includes plenty of tools for getting your graphic design on.
-
-[Used with permission][4]
-
-Those who work with a digital camera will certainly appreciate the inclusion of the Entangle app, which allows you to control your DSLR from the desktop.
-
-### Development
-
-Nearly all Linux distributions are great platforms for programmers. However, one particular distribution stands out above the rest as one of the most productive tools you’ll find for the task. That OS comes from [System76][9] and it’s called [Pop!_OS][10]. Pop!_OS is tailored specifically for creators, but not of the artistic type. Instead, Pop!_OS is geared toward creators who specialize in developing, programming, and making. If you need an environment that is not only perfectly suited for your development work but also includes a desktop that’s sure to get out of your way, you won’t find a better option than Pop!_OS (Figure 3).
-
-What might surprise you (given how “young” this operating system is) is that Pop!_OS is also one of the single most stable GNOME-based platforms you’ll ever use. This means Pop!_OS isn’t just for creators and makers, but anyone looking for a solid operating system. One thing that many users will greatly appreciate with Pop!_OS is that you can download an ISO specifically for your video hardware. If you have Intel hardware, [download][10] the version for Intel/AMD. If your graphics card is NVIDIA, download that specific release. Either way, you are sure to get a solid platform on which to create your masterpiece.
-
-![Pop!_OS][12]
-
-Figure 3: The Pop!_OS take on GNOME Overview.
-
-[Used with permission][4]
-
-Interestingly enough, with Pop!_OS, you won’t find much in the way of pre-installed development tools. There’s no included IDE and few other dev tools out of the box. You can, however, find all the development tools you need in the Pop Shop.
-
-### Administration
-
-If you’re looking for one of the most productive distributions for admin tasks, look no further than [Debian][13]. Why? Because Debian is not only incredibly reliable, it’s one of those distributions that gets out of your way better than most others. Debian is the perfect combination of ease of use and unlimited possibility. On top of that, because it’s the distribution on which so many others are based, you can bet that if there’s an admin tool you need for a task, it’s available for Debian. Of course, we’re talking about general admin tasks, which means most of the time you’ll be using a terminal window to SSH into your servers (Figure 4) or a browser to work with web-based GUI tools on your network. Why bother with a desktop that’s going to add layers of complexity (such as SELinux in Fedora, or YaST in openSUSE)? Instead, choose simplicity.
-
-![Debian][15]
-
-Figure 4: SSH’ing into a remote server on Debian.
-
-[Used with permission][4]
-
-And because you can select which desktop you want (from GNOME, Xfce, KDE, Cinnamon, MATE, LXDE), you can be sure to have the interface that best matches your work habits.
-
-### Education
-
-If you are a teacher or student, or otherwise involved in education, you need the right tools to be productive. Once upon a time, there existed the likes of Edubuntu, a distribution that never failed to appear at the top of education-related lists. However, that distro hasn’t been updated since it was based on Ubuntu 14.04. Fortunately, there’s a new education-based distribution ready to take that title, based on openSUSE. This spin is called [openSUSE:Education-Li-f-e][16] (Linux For Education - Figure 5), and is based on openSUSE Leap 42.1 (so it is slightly out of date).
-
-openSUSE:Education-Li-f-e includes tools like:
-
- * Brain Workshop - A dual n-back brain exercise
-
- * GCompris - An educational software suite for young children
-
- * gElemental - A periodic table viewer
-
- * iGNUit - A general purpose flash card program
-
- * Little Wizard - Development environment for children based on Pascal
-
- * Stellarium - An astronomical sky simulator
-
- * TuxMath - A math tutor game
-
- * TuxPaint - A drawing program for young children
-
- * TuxType - An educational typing tutor for children
-
- * wxMaxima - A cross-platform GUI for the Maxima computer algebra system
-
- * Inkscape - Vector graphics program
-
- * GIMP - Graphic image manipulation program
-
- * Pencil - GUI prototyping tool
-
- * Hugin - Panorama photo stitching and HDR merging program
-
-
-![Education][18]
-
-Figure 5: The openSUSE:Education-Li-f-e distro has plenty of tools to help you be productive in or for school.
-
-[Used with permission][4]
-
-Also included with openSUSE:Education-Li-f-e is the [KIWI-LTSP Server][19]. The KIWI-LTSP Server is a flexible, cost-effective solution aimed at empowering schools, businesses, and organizations all over the world to easily install and deploy desktop workstations. Although this might not directly help students be more productive, it certainly enables educational institutions to be more productive in deploying desktops for students to use. For more information on setting up KIWI-LTSP, check out the openSUSE [KIWI-LTSP quick start guide][20].
-
-Learn more about Linux through the free ["Introduction to Linux" ][21]course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity
-
-作者:[Jack Wallen][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/users/jlwallen
-[b]: https://github.com/lujun9972
-[1]: https://www.ubuntu.com/
-[2]: /files/images/productivity1jpg
-[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_1.jpg?itok=yxez3X1w (GNOME Clipboard)
-[4]: /licenses/category/used-permission
-[5]: https://labs.fedoraproject.org/en/design-suite/
-[6]: https://fedoraproject.org/wiki/Design_Suite/Tutorials
-[7]: /files/images/productivity2jpg
-[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_2.jpg?itok=ke0b8qyH (Fedora Design Suite Favorites)
-[9]: https://system76.com/
-[10]: https://system76.com/pop
-[11]: /files/images/productivity3jpg-0
-[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_3_0.jpg?itok=8UkCUfsD (Pop!_OS)
-[13]: https://www.debian.org/
-[14]: /files/images/productivity4jpg
-[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_4.jpg?itok=c9yD3Xw2 (Debian)
-[16]: https://en.opensuse.org/openSUSE:Education-Li-f-e
-[17]: /files/images/productivity5jpg
-[18]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_5.jpg?itok=oAFtV8nT (Education)
-[19]: https://en.opensuse.org/Portal:KIWI-LTSP
-[20]: https://en.opensuse.org/SDB:KIWI-LTSP_quick_start
-[21]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20190115 Linux Desktop Setup - HookRace Blog.md b/sources/tech/20190115 Linux Desktop Setup - HookRace Blog.md
new file mode 100644
index 0000000000..29d5f63d2a
--- /dev/null
+++ b/sources/tech/20190115 Linux Desktop Setup - HookRace Blog.md
@@ -0,0 +1,514 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux Desktop Setup · HookRace Blog)
+[#]: via: (https://hookrace.net/blog/linux-desktop-setup/)
+[#]: author: (Dennis Felsing http://felsin9.de/nnis/)
+
+Linux Desktop Setup
+======
+
+
+My software setup has been surprisingly constant over the last decade, after a few years of experimentation since I initially switched to Linux in 2006. It might be interesting to look back in another 10 years and see what changed. A quick overview of what’s running as I’m writing this post:
+
+[![htop overview][1]][2]
+
+### Motivation
+
+My software priorities are, in no specific order:
+
+ * Programs should run on my local system so that I’m in control of them; this excludes cloud solutions.
+ * Programs should run in the terminal, so that they can be used consistently from anywhere, including weak computers or a phone.
+ * A keyboard focus comes nearly automatically with terminal software. I prefer to use the mouse only where it makes sense; reaching for the mouse all the time while typing feels like a waste of time. Occasionally it has taken me an hour to notice that the mouse wasn’t even plugged in.
+ * Ideally, use fast and efficient software; I don’t like hearing the fan and feeling the room heat up. I can also keep running older hardware for much longer: my 10-year-old ThinkPad X200s is still fine for all the software I use.
+ * Be composable. I don’t want to do every step manually; instead, automate more when it makes sense. This naturally favors the shell.
+
+
+
+### Operating Systems
+
+I had a hard start with Linux 12 years ago by removing Windows, armed with just the [Gentoo Linux][3] installation CD and a printed manual to get a functioning Linux system. It took me a few days of compiling and tinkering, but in the end I felt like I had learnt a lot.
+
+I haven’t looked back to Windows since then, but I switched to [Arch Linux][4] on my laptop after having the fan fail from the constant compilation stress. Later I also switched all my other computers and private servers to Arch Linux. As a rolling release distribution you get package upgrades all the time, but the most important breakages are nicely reported in the [Arch Linux News][5].
+
+One annoyance though is that Arch Linux removes the old kernel’s modules once you upgrade the kernel. I usually notice that once I try plugging in a USB flash drive and the kernel fails to load the relevant module. Instead you’re supposed to reboot after each kernel upgrade. There are a few [hacks][6] to get around the problem, but I haven’t been bothered enough to actually use them.
+
+Similar problems happen with other programs, commonly Firefox, cron or Samba requiring a restart after an upgrade, but annoyingly not warning you that that’s the case. [SUSE][7], which I use at work, nicely warns about such cases.
+
+For the [DDNet][8] production servers I prefer [Debian][9] over Arch Linux, so that I have a lower chance of breakage on each upgrade. For my firewall and router I used [OpenBSD][10] for its clean system, documentation and great [pf firewall][11], but right now I don’t have a need for a separate router anymore.
+
+### Window Manager
+
+Since I started out with Gentoo I quickly noticed the huge compile time of KDE, which made it a no-go for me. I looked around for more minimal solutions, and used [Openbox][12] and [Fluxbox][13] initially. At some point I jumped on the tiling window manager train in order to be more keyboard-focused and picked up [dwm][14] and [awesome][15] close to their initial releases.
+
+In the end I settled on [xmonad][16] thanks to its flexibility, extensibility, and being written and configured in pure [Haskell][17], a great functional programming language. One example of this is that at home I run a single 40” 4K screen, but often split it up into four virtual screens, each displaying a workspace on which my windows are automatically arranged. Of course xmonad has a [module][18] for that.
+
+[dzen][19] and [conky][20] function as a simple enough status bar for me. My entire conky config looks like this:
+
+```
+out_to_console yes
+update_interval 1
+total_run_times 0
+
+TEXT
+${downspeed eth0} ${upspeed eth0} | $cpu% ${loadavg 1} ${loadavg 2} ${loadavg 3} $mem/$memmax | ${time %F %T}
+```
+
+And gets piped straight into dzen2 with `conky | dzen2 -fn '-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*' -bg '#000000' -fg '#ffffff' -p -e '' -x 1000 -w 920 -xs 1 -ta r`.
+
+One important feature for me is to make the terminal emit a beep sound once a job is done. This is done simply by adding a `\a` character to the `PR_TITLEBAR` variable in zsh, which is shown whenever a job is done. Of course I disable the actual beep sound by blacklisting the `pcspkr` kernel module with `echo "blacklist pcspkr" > /etc/modprobe.d/nobeep.conf`. Instead the sound gets turned into an urgency by urxvt’s `URxvt.urgentOnBell: true` setting. Then xmonad has an urgency hook to capture this and I can automatically focus the currently urgent window with a key combination. In dzen I get the urgent workspaces displayed with a nice and bright `#ff0000`.
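+
+A rough sketch of the zsh side of this trick (the details here are assumed, not my exact config):
+
+```
+# ~/.zshrc: print the window title escape plus one extra BEL (\a) before
+# each prompt, so a finishing job rings the bell and urxvt turns it into
+# an urgency hint instead of a sound.
+precmd() {
+  print -Pn '\e]0;%n@%m: %~\a\a'
+}
+```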
+
+The final result in all its glory on my Laptop:
+
+[![Laptop screenshot][21]][22]
+
+I hear that [i3][23] has become quite popular in the last few years, but it requires more manual window alignment instead of specifying automated methods to do it.
+
+I realize that there are also terminal multiplexers like [tmux][24], but I still require a few graphical applications, so in the end I never used them productively.
+
+### Terminal Persistency
+
+In order to keep terminals alive I use [dtach][25], which is just the detach feature of screen. In order to make every terminal on my computer detachable I wrote a [small wrapper script][26]. This means that even if I had to restart my X server I could keep all my terminals running just fine, both local and remote.
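+
+A minimal version of such a wrapper might look like this (my actual script linked above does a bit more):
+
+```
+#!/bin/sh
+# Attach to a named dtach session, creating it if necessary. -z disables
+# dtach's suspend key so Ctrl-Z reaches the program inside the session.
+mkdir -p "$HOME/.dtach"
+exec dtach -A "$HOME/.dtach/${1:-default}" -z "$SHELL"
+```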
+
+### Shell & Programming
+
+Instead of [bash][27] I use [zsh][28] as my shell for its huge number of features.
+
+As a terminal emulator I found [urxvt][29] to be simple enough; it supports Unicode and 256 colors and has great performance. Another great feature is being able to run the urxvt client and daemon separately, so that even a large number of terminals barely takes up any memory (except for the scrollback buffer).
+
+There is only one font that looks absolutely clean and perfect to me: [Terminus][30]. Since it’s a bitmap font, everything is pixel perfect and renders extremely fast and at low CPU usage. In order to switch fonts on-demand in each terminal with `CTRL-WIN-[1-7]`, my ~/.Xdefaults contains:
+
+```
+URxvt.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*
+dzen2.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*
+
+URxvt.keysym.C-M-1: command:\033]50;-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*\007
+URxvt.keysym.C-M-2: command:\033]50;-xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*\007
+URxvt.keysym.C-M-3: command:\033]50;-xos4-terminus-medium-r-normal-*-18-*-*-*-*-*-*-*\007
+URxvt.keysym.C-M-4: command:\033]50;-xos4-terminus-medium-r-normal-*-22-*-*-*-*-*-*-*\007
+URxvt.keysym.C-M-5: command:\033]50;-xos4-terminus-medium-r-normal-*-24-*-*-*-*-*-*-*\007
+URxvt.keysym.C-M-6: command:\033]50;-xos4-terminus-medium-r-normal-*-28-*-*-*-*-*-*-*\007
+URxvt.keysym.C-M-7: command:\033]50;-xos4-terminus-medium-r-normal-*-32-*-*-*-*-*-*-*\007
+
+URxvt.keysym.C-M-n: command:\033]10;#ffffff\007\033]11;#000000\007\033]12;#ffffff\007\033]706;#00ffff\007\033]707;#ffff00\007
+URxvt.keysym.C-M-b: command:\033]10;#000000\007\033]11;#ffffff\007\033]12;#000000\007\033]706;#0000ff\007\033]707;#ff0000\007
+```
+
+For programming and writing I use [Vim][31] with syntax highlighting and [ctags][32] for indexing, as well as a few terminal windows with grep, sed and the other usual suspects for search and manipulation. This is probably not at the same level of comfort as an IDE, but allows me more automation.
+
+One problem with Vim is that you get so used to its key mappings that you’ll want to use them everywhere.
+
+[Python][33] and [Nim][34] do well as scripting languages where the shell is not powerful enough.
+
+### System Monitoring
+
+[htop][35] (look at the background of that site, it’s a live view of the server that’s hosting it) works great for getting a quick overview of what the software is currently doing. [lm_sensors][36] allows monitoring the hardware temperatures, fans and voltages. [powertop][37] is a great little tool by Intel to find power savings. [ncdu][38] lets you analyze disk usage interactively.
+
+[nmap][39], iptraf-ng, [tcpdump][40] and [Wireshark][41] are essential tools for analyzing network problems.
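+
+A few typical invocations of these tools (flags listed from memory; check the man pages):
+
+```
+ncdu /home                        # interactively browse disk usage
+sudo powertop --auto-tune         # apply suggested power savings
+sensors                           # temperatures, fans and voltages
+sudo tcpdump -i eth0 -n port 53   # watch DNS traffic on eth0
+nmap -sn 192.168.1.0/24           # ping-scan the local network
+```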
+
+There are of course many more great tools.
+
+### Mails & Synchronization
+
+On my home server I have a [fetchmail][42] daemon running for each email account that I have. Fetchmail just retrieves the incoming emails and invokes [procmail][43]:
+
+```
+#!/bin/sh
+for i in /home/deen/.fetchmail/*; do
+ FETCHMAILHOME=$i /usr/bin/fetchmail -m 'procmail -d %T' -d 60
+done
+```
+
+The configuration is as simple as it could be and waits for the server to inform us of fresh emails:
+
+```
+poll imap.1und1.de protocol imap timeout 120 user "dennis@felsin9.de" password "XXX" folders INBOX keep ssl idle
+```
+
+My `.procmailrc` config contains a few rules to back up all mails and sort them into the correct directories, for example based on the mailing list ID or the From field in the mail header:
+
+```
+MAILDIR=/home/deen/shared/Maildir
+LOGFILE=$HOME/.procmaillog
+LOGABSTRACT=no
+VERBOSE=off
+FORMAIL=/usr/bin/formail
+NL="
+"
+
+:0wc
+* ! ? test -d /media/mailarchive/`date +%Y`
+| mkdir -p /media/mailarchive/`date +%Y`
+
+# Make backups of all mail received in format YYYY/YYYY-MM
+:0c
+/media/mailarchive/`date +%Y`/`date +%Y-%m`
+
+:0
+* ^From: .*(.*@.*.kit.edu|.*@.*.uka.de|.*@.*.uni-karlsruhe.de)
+$MAILDIR/.uni/
+
+:0
+* ^list-Id:.*lists.kit.edu
+$MAILDIR/.uni-ml/
+
+[...]
+```
+
+To send emails I use [msmtp][44], which is also easy to configure:
+
+```
+account default
+host smtp.1und1.de
+tls on
+tls_trust_file /etc/ssl/certs/ca-certificates.crt
+auth on
+from dennis@felsin9.de
+user dennis@felsin9.de
+password XXX
+
+[...]
+```
+
+But so far the emails are still on the server. My documents are all stored in a directory that I synchronize between all computers using [Unison][45]. Think of Unison as a bidirectional interactive [rsync][46]. My emails are part of this documents directory and thus they end up on my desktop computers.
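+
+A hypothetical Unison profile for this setup could look as follows (the paths and host name are examples):
+
+```
+# ~/.unison/docs.prf
+root = /home/deen/shared
+root = ssh://server//media/shared
+batch = true
+# then synchronize with: unison docs
+```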
+
+This also means that while the emails reach my server immediately, I only fetch them on demand instead of getting instant notifications when an email comes in.
+
+From there I read the mails with [mutt][47], using the sidebar plugin to display my mail directories. The `/etc/mailcap` file is essential to display non-plaintext mails containing HTML, Word or PDF:
+
+```
+text/html;w3m -I %{charset} -T text/html; copiousoutput
+application/msword; antiword %s; copiousoutput
+application/pdf; pdftotext -layout /dev/stdin -; copiousoutput
+```
+
+### News & Communication
+
+[Newsboat][48] is a nice little RSS/Atom feed reader in the terminal. I have it running on the server in a `tach` session with about 150 feeds. Filtering feeds locally is also possible, for example:
+
+```
+ignore-article "https://forum.ddnet.tw/feed.php" "title =~ \"Map Testing •\" or title =~ \"Old maps •\" or title =~ \"Map Bugs •\" or title =~ \"Archive •\" or title =~ \"Waiting for mapper •\" or title =~ \"Other mods •\" or title =~ \"Fixes •\""
+```
+
+I use [Irssi][49] the same way for communication via IRC.
+
+### Calendar
+
+[remind][50] is a calendar that can be used from the command line. Setting new reminders is done by editing the `rem` files:
+
+```
+# One time events
+REM 2019-01-20 +90 Flight to China %b
+
+# Recurring Holidays
+REM 1 May +90 Holiday "Tag der Arbeit" %b
+REM [trigger(easterdate(year(today()))-2)] +90 Holiday "Karfreitag" %b
+
+# Time Change
+REM Nov Sunday 1 --7 +90 Time Change (03:00 -> 02:00) %b
+REM Apr Sunday 1 --7 +90 Time Change (02:00 -> 03:00) %b
+
+# Birthdays
+FSET birthday(x) "'s " + ord(year(trigdate())-x) + " birthday is %b"
+REM 16 Apr +90 MSG Andreas[birthday(1994)]
+
+# Sun
+SET $LatDeg 49
+SET $LatMin 19
+SET $LatSec 49
+SET $LongDeg -8
+SET $LongMin -40
+SET $LongSec -24
+
+MSG Sun from [sunrise(trigdate())] to [sunset(trigdate())]
+[...]
+```
+
+Unfortunately there is no Chinese Lunar calendar function in remind yet, so Chinese holidays can’t be calculated easily.
+
+I use two aliases for remind:
+
+```
+rem -m -b1 -q -g
+```
+
+to see a list of the next events in chronological order and
+
+```
+rem -m -b1 -q -cuc12 -w$(($(tput cols)+1)) | sed -e "s/\f//g" | less
+```
+
+to show a calendar fitting just the width of my terminal:
+
+![remcal][51]
+
+### Dictionary
+
+[rdictcc][52] is a little-known dictionary tool that uses the excellent dictionary files from [dict.cc][53] and turns them into a local database:
+
+```
+$ rdictcc rasch
+====================[ A => B ]====================
+rasch:
+ - apace
+ - brisk [speedy]
+ - cursory
+ - in a timely manner
+ - quick
+ - quickly
+ - rapid
+ - rapidly
+ - sharpish [Br.] [coll.]
+ - speedily
+ - speedy
+ - swift
+ - swiftly
+rasch [gehen]:
+ - smartly [quickly]
+Rasch {n} [Zittergras-Segge]:
+ - Alpine grass [Carex brizoides]
+ - quaking grass sedge [Carex brizoides]
+Rasch {m} [regional] [Putzrasch]:
+ - scouring pad
+====================[ B => A ]====================
+Rasch model:
+ - Rasch-Modell {n}
+```
+
+### Writing and Reading
+
+I have a simple todo file containing my tasks, which is basically always sitting open in a Vim session. For work I also use the todo file as a “done” file so that I can later check what tasks I finished on each day.
+
+For writing documents, letters and presentations I use [LaTeX][54] for its superior typesetting. A simple letter in German format can be set like this for example:
+
+```
+\documentclass[paper = a4, fromalign = right]{scrlttr2}
+\usepackage{german}
+\usepackage{eurosym}
+\usepackage[utf8]{inputenc}
+\setlength{\parskip}{6pt}
+\setlength{\parindent}{0pt}
+
+\setkomavar{fromname}{Dennis Felsing}
+\setkomavar{fromaddress}{Meine Str. 1\\69181 Leimen}
+\setkomavar{subject}{Titel}
+
+\setkomavar*{enclseparator}{Anlagen}
+
+\makeatletter
+\@setplength{refvpos}{89mm}
+\makeatother
+
+\begin{document}
+\begin{letter} {Herr Soundso\\Deine Str. 2\\69121 Heidelberg}
+\opening{Sehr geehrter Herr Soundso,}
+
+Sie haben bei mir seit dem Bla Bla Bla.
+
+Ich fordere Sie hiermit zu Bla Bla Bla auf.
+
+\closing{Mit freundlichen Grüßen}
+
+\end{letter}
+\end{document}
+```
+
+Further example documents and presentations can be found over at [my private site][55].
+
+To read PDFs [Zathura][56] is fast, has Vim-like controls and even supports two different PDF backends: Poppler and MuPDF. [Evince][57] on the other hand is more full-featured for the cases where I encounter documents that Zathura doesn’t like.
+
+### Graphical Editing
+
+[GIMP][58] and [Inkscape][59] are easy choices for photo editing and interactive vector graphics respectively.
+
+In some cases [ImageMagick][60] is good enough though, and it can be used straight from the command line and thus automated to edit images. Similarly [Graphviz][61] and [TikZ][62] can be used to draw graphs and other diagrams.
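+
+For example (hedged sketches, adjust as needed):
+
+```
+# ImageMagick: shrink a photo from the command line
+convert photo.jpg -resize 50% -quality 85 thumb.jpg
+# Graphviz: render a quick diagram without a GUI
+echo 'digraph G { build -> test -> deploy }' | dot -Tpng > pipeline.png
+```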
+
+### Web Browsing
+
+As a web browser I’ve always used [Firefox][63] for its extensibility and low resource usage compared to Chrome.
+
+Unfortunately the [Pentadactyl][64] extension development stopped after Firefox switched to Chrome-style extensions entirely, so I don’t have satisfying Vim-like controls in my browser anymore.
+
+### Media Players
+
+[mpv][65] with hardware decoding allows watching videos at 5% CPU load using the `vo=gpu` and `hwdec=vaapi` config settings. `audio-channels=2` in mpv seems to give me clearer downmixing to my stereo speakers / headphones than what PulseAudio does by default. A great little feature is exiting with `Shift-Q` instead of just `Q` to save the playback location. When watching with someone with another native tongue you can use `--secondary-sid=` to show two subtitles at once, the primary at the bottom, the secondary at the top of the screen.
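+
+Collected in `~/.config/mpv/mpv.conf`, those settings look like this:
+
+```
+vo=gpu
+hwdec=vaapi
+audio-channels=2
+```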
+
+My wireless mouse can easily be made into a remote control for mpv with a small `~/.config/mpv/input.conf`:
+
+```
+MOUSE_BTN5 run "mixer" "pcm" "-2"
+MOUSE_BTN6 run "mixer" "pcm" "+2"
+MOUSE_BTN1 cycle sub-visibility
+MOUSE_BTN7 add chapter -1
+MOUSE_BTN8 add chapter 1
+```
+
+[youtube-dl][66] works great for watching videos hosted online, best quality can be achieved with `-f bestvideo+bestaudio/best --all-subs --embed-subs`.
+
+As a music player [MOC][67] hasn’t been actively developed for a while, but it’s still a simple player that plays every format conceivable, including the strangest Chiptune formats. In the AUR there is a [patch][68] adding PulseAudio support as well. Even with the CPU clocked down to 800 MHz MOC barely uses 1-2% of a single CPU core.
+
+![moc][69]
+
+My music collection sits on my home server so that I can access it from anywhere. It is mounted using [SSHFS][70] and automount via `/etc/fstab`:
+
+```
+root@server:/media/media /mnt/media fuse.sshfs noauto,x-systemd.automount,idmap=user,IdentityFile=/root/.ssh/id_rsa,allow_other,reconnect 0 0
+```
+
+### Cross-Platform Building
+
+Linux is great for building packages for any major operating system except Linux itself! In the beginning I used [QEMU][71] with an old Debian, Windows and Mac OS X VM to build for these platforms.
+
+Nowadays I use a chroot with the old Debian distribution (for maximum Linux compatibility), [MinGW][72] to cross-compile for Windows and [OSXCross][73] to cross-compile for Mac OS X.
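+
+Cross-compiling for Windows then mostly comes down to using the right compiler prefix, roughly like this (the triplet name varies by distribution):
+
+```
+# MinGW-w64: build a 64-bit Windows binary on Linux
+x86_64-w64-mingw32-gcc -O2 -o hello.exe hello.c
+```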
+
+The script used to [build DDNet][74] as well as the [instructions for updating library builds][75] are based on this.
+
+### Backups
+
+As usual, we nearly forgot about backups. Even if this is the last chapter, it should not be an afterthought.
+
+I wrote [rrb][76] (reverse rsync backup) 10 years ago to wrap rsync so that I only need to give the backup server root SSH rights to the computers that it is backing up. Surprisingly rrb needed 0 changes in the last 10 years, even though I kept using it the entire time.
+
+The backups are stored straight on the filesystem. Incremental backups are implemented using hard links (`--link-dest`). A simple [config][77] defines how long backups are kept, which defaults to:
+
+```
+KEEP_RULES=( \
+ 7 7 \ # One backup a day for the last 7 days
+ 31 8 \ # 8 more backups for the last month
+ 365 11 \ # 11 more backups for the last year
+ 1825 4 \ # 4 more backups for the last 5 years
+)
+```
+
+Since some of my computers don’t have a static IP / DNS entry and I still want to back them up using rrb, I use a reverse SSH tunnel (as a systemd service) for them:
+
+```
+[Unit]
+Description=Reverse SSH Tunnel
+After=network.target
+
+[Service]
+ExecStart=/usr/bin/ssh -N -R 27276:localhost:22 -o "ExitOnForwardFailure yes" server
+KillMode=process
+Restart=always
+
+[Install]
+WantedBy=multi-user.target
+```
+
+Now the server can reach the client through `ssh -p 27276 localhost` while the tunnel is running to perform the backup, or in `.ssh/config` format:
+
+```
+Host cr-remote
+ HostName localhost
+ Port 27276
+```
+
+While talking about SSH hacks, sometimes a server is not easily reachable thanks to some bad routing. In that case you can route the SSH connection through another server to get better routing, in this case going through the USA to reach my Chinese server which had not been reliably reachable from Germany for a few weeks:
+
+```
+Host chn.ddnet.tw
+ ProxyCommand ssh -q usa.ddnet.tw nc -q0 chn.ddnet.tw 22
+ Port 22
+```
+
+### Final Remarks
+
+Thanks for reading my random collection of tools. I probably forgot many programs that I use so naturally every day that I don’t even think about them anymore. Let’s see how stable my software setup stays in the next years. If you have any questions, feel free to get in touch with me at [dennis@felsin9.de][78].
+
+Comments on [Hacker News][79].
+
+--------------------------------------------------------------------------------
+
+via: https://hookrace.net/blog/linux-desktop-setup/
+
+作者:[Dennis Felsing][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://felsin9.de/nnis/
+[b]: https://github.com/lujun9972
+[1]: https://hookrace.net/public/linux-desktop/htop_small.png
+[2]: https://hookrace.net/public/linux-desktop/htop.png
+[3]: https://gentoo.org/
+[4]: https://www.archlinux.org/
+[5]: https://www.archlinux.org/news/
+[6]: https://www.reddit.com/r/archlinux/comments/4zrsc3/keep_your_system_fully_functional_after_a_kernel/
+[7]: https://www.suse.com/
+[8]: https://ddnet.tw/
+[9]: https://www.debian.org/
+[10]: https://www.openbsd.org/
+[11]: https://www.openbsd.org/faq/pf/
+[12]: http://openbox.org/wiki/Main_Page
+[13]: http://fluxbox.org/
+[14]: https://dwm.suckless.org/
+[15]: https://awesomewm.org/
+[16]: https://xmonad.org/
+[17]: https://www.haskell.org/
+[18]: http://hackage.haskell.org/package/xmonad-contrib-0.15/docs/XMonad-Layout-LayoutScreens.html
+[19]: http://robm.github.io/dzen/
+[20]: https://github.com/brndnmtthws/conky
+[21]: https://hookrace.net/public/linux-desktop/laptop_small.png
+[22]: https://hookrace.net/public/linux-desktop/laptop.png
+[23]: https://i3wm.org/
+[24]: https://github.com/tmux/tmux/wiki
+[25]: http://dtach.sourceforge.net/
+[26]: https://github.com/def-/tach/blob/master/tach
+[27]: https://www.gnu.org/software/bash/
+[28]: http://www.zsh.org/
+[29]: http://software.schmorp.de/pkg/rxvt-unicode.html
+[30]: http://terminus-font.sourceforge.net/
+[31]: https://www.vim.org/
+[32]: http://ctags.sourceforge.net/
+[33]: https://www.python.org/
+[34]: https://nim-lang.org/
+[35]: https://hisham.hm/htop/
+[36]: http://lm-sensors.org/
+[37]: https://01.org/powertop/
+[38]: https://dev.yorhel.nl/ncdu
+[39]: https://nmap.org/
+[40]: https://www.tcpdump.org/
+[41]: https://www.wireshark.org/
+[42]: http://www.fetchmail.info/
+[43]: http://www.procmail.org/
+[44]: https://marlam.de/msmtp/
+[45]: https://www.cis.upenn.edu/~bcpierce/unison/
+[46]: https://rsync.samba.org/
+[47]: http://www.mutt.org/
+[48]: https://newsboat.org/
+[49]: https://irssi.org/
+[50]: https://www.roaringpenguin.com/products/remind
+[51]: https://hookrace.net/public/linux-desktop/remcal.png
+[52]: https://github.com/tsdh/rdictcc
+[53]: https://www.dict.cc/
+[54]: https://www.latex-project.org/
+[55]: http://felsin9.de/nnis/research/
+[56]: https://pwmt.org/projects/zathura/
+[57]: https://wiki.gnome.org/Apps/Evince
+[58]: https://www.gimp.org/
+[59]: https://inkscape.org/
+[60]: https://imagemagick.org/Usage/
+[61]: https://www.graphviz.org/
+[62]: https://sourceforge.net/projects/pgf/
+[63]: https://www.mozilla.org/en-US/firefox/new/
+[64]: https://github.com/5digits/dactyl
+[65]: https://mpv.io/
+[66]: https://rg3.github.io/youtube-dl/
+[67]: http://moc.daper.net/
+[68]: https://aur.archlinux.org/packages/moc-pulse/
+[69]: https://hookrace.net/public/linux-desktop/moc.png
+[70]: https://github.com/libfuse/sshfs
+[71]: https://www.qemu.org/
+[72]: http://www.mingw.org/
+[73]: https://github.com/tpoechtrager/osxcross
+[74]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-release.sh
+[75]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-lib-update.sh
+[76]: https://github.com/def-/rrb/blob/master/rrb
+[77]: https://github.com/def-/rrb/blob/master/config.example
+[78]: mailto:dennis@felsin9.de
+[79]: https://news.ycombinator.com/item?id=18979731
diff --git a/sources/tech/20190116 The Evil-Twin Framework- A tool for improving WiFi security.md b/sources/tech/20190116 The Evil-Twin Framework- A tool for improving WiFi security.md
deleted file mode 100644
index 81b5d2ddf1..0000000000
--- a/sources/tech/20190116 The Evil-Twin Framework- A tool for improving WiFi security.md
+++ /dev/null
@@ -1,236 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (The Evil-Twin Framework: A tool for improving WiFi security)
-[#]: via: (https://opensource.com/article/19/1/evil-twin-framework)
-[#]: author: (André Esser https://opensource.com/users/andreesser)
-
-The Evil-Twin Framework: A tool for improving WiFi security
-======
-Learn about a pen-testing tool intended to test the security of WiFi access points for all types of threats.
-
-
-The increasing number of devices that connect to the internet over-the-air and the wide availability of WiFi access points provide many opportunities for attackers to exploit users. By tricking users into connecting to [rogue access points][1], hackers gain full control over the users' network connection, which allows them to sniff and alter traffic, redirect users to malicious sites, and launch other attacks over the network.
-
-To protect users and teach them to avoid risky online behaviors, security auditors and researchers must evaluate users' security practices and understand the reasons they connect to WiFi access points without being confident they are safe. There are a significant number of tools that can conduct WiFi audits, but no single tool can test the many different attack scenarios and none of the tools integrate well with one another.
-
-The **Evil-Twin Framework** (ETF) aims to fix these problems in the WiFi auditing process by enabling auditors to examine multiple scenarios and integrate multiple tools. This article describes the framework and its functionalities, then provides some examples to show how it can be used.
-
-### The ETF architecture
-
-The ETF was written in [Python][2] because the language is very easy to read and contribute to. In addition, many of the libraries the ETF relies on, such as **[Scapy][3]**, were already developed for Python, making them easy to integrate.
-
-The ETF architecture (Figure 1) is divided into different modules that interact with each other. The framework's settings are all written in a single configuration file. The user can verify and edit the settings through the user interface via the **ConfigurationManager** class. Other modules can only read these settings and run according to them.
-
-![Evil-Twin Framework Architecture][5]
-
-Figure 1: Evil-Twin framework architecture
-
-The ETF supports multiple user interfaces that interact with the framework. The current default interface is an interactive console, similar to the one on [Metasploit][6]. A graphical user interface (GUI) and a command line interface (CLI) are under development for desktop/browser use, and mobile interfaces may be an option in the future. The user can edit the settings in the configuration file using the interactive console (and eventually with the GUI). The user interface can interact with every other module that exists in the framework.
-
-The WiFi module ( **AirCommunicator** ) was built to support a wide range of WiFi capabilities and attacks. The framework identifies three basic pillars of WiFi communication: **packet sniffing**, **custom packet injection**, and **access point creation**. The three main WiFi communication modules are **AirScanner**, **AirInjector**, and **AirHost**, which are responsible for packet sniffing, packet injection, and access point creation, respectively. The three classes are wrapped inside the main WiFi module, AirCommunicator, which reads the configuration file before starting the services. Any type of WiFi attack can be built using one or more of these core features.
-
-To enable man-in-the-middle (MITM) attacks, which are a common way to attack WiFi clients, the framework has an integrated module called ETFITM (Evil-Twin Framework-in-the-Middle). This module is responsible for the creation of a web proxy used to intercept and manipulate HTTP/HTTPS traffic.
-
-There are many other tools that can leverage the MITM position created by the ETF. Through its extensibility, ETF can support them—and, instead of having to call them separately, you can add the tools to the framework just by extending the Spawner class. This enables a developer or security auditor to call the program with a preconfigured argument string from within the framework.
-
-The other way to extend the framework is through plugins. There are two categories of plugins: **WiFi plugins** and **MITM plugins**. MITM plugins are scripts that can run while the MITM proxy is active. The proxy passes the HTTP(S) requests and responses through to the plugins where they can be logged or manipulated. WiFi plugins follow a more complex flow of execution but still expose a fairly simple API to contributors who wish to develop and use their own plugins. WiFi plugins can be further divided into three categories, one for each of the core WiFi communication modules.
-
-Each of the core modules has certain events that trigger the execution of a plugin. For instance, AirScanner has three defined events to which a response can be programmed. The events usually correspond to a setup phase before the service starts running, a mid-execution phase while the service is running, and a teardown or cleanup phase after a service finishes. Since Python allows multiple inheritance, one plugin can subclass more than one plugin class.
-
-Figure 1 above is a summary of the framework's architecture. Lines pointing away from the ConfigurationManager mean that the module reads information from it and lines pointing towards it mean that the module can write/edit configurations.
-
-### Examples of using the Evil-Twin Framework
-
-There are a variety of ways ETF can conduct penetration testing on WiFi network security or work on end users' awareness of WiFi security. The following examples describe some of the framework's pen-testing functionalities, such as access point and client detection, WPA and WEP access point attacks, and evil twin access point creation.
-
-These examples were devised using ETF with WiFi cards that allow WiFi traffic capture. They also utilize the following abbreviations for ETF setup commands:
-
- * **APS** access point SSID
- * **APB** access point BSSID
- * **APC** access point channel
- * **CM** client MAC address
-
-
-
-In a real testing scenario, make sure to replace these abbreviations with the correct information.
-
-#### Capturing a WPA 4-way handshake after a de-authentication attack
-
-This scenario (Figure 2) takes two aspects into consideration: the de-authentication attack and the possibility of catching a 4-way WPA handshake. The scenario starts with a running WPA/WPA2-enabled access point with one connected client device (in this case, a smartphone). The goal is to de-authenticate the client with a general de-authentication attack, then capture the WPA handshake once it tries to reconnect. The reconnection will be done manually immediately after being de-authenticated.
-
-![Scenario for capturing a WPA handshake after a de-authentication attack][8]
-
-Figure 2: Scenario for capturing a WPA handshake after a de-authentication attack
-
-The consideration in this example is the ETF's reliability. The goal is to find out if the tools can consistently capture the WPA handshake. The scenario will be performed multiple times with each tool to check its reliability when capturing the WPA handshake.
-
-There is more than one way to capture a WPA handshake using the ETF. One way is to use a combination of the AirScanner and AirInjector modules; another way is to just use the AirInjector. The following scenario uses a combination of both modules.
-
-The ETF launches the AirScanner module and analyzes the IEEE 802.11 frames to find a WPA handshake. Then the AirInjector can launch a de-authentication attack to force a reconnection. The following steps must be done to accomplish this on the ETF:
-
- 1. Enter the AirScanner configuration mode: **config airscanner**
- 2. Configure the AirScanner to not hop channels: **set hop_channels = false**
- 3. Set the channel to sniff the traffic on the access point channel (APC): **set fixed_sniffing_channel = <APC>**
- 4. Start the AirScanner module with the CredentialSniffer plugin: **start airscanner with credentialsniffer**
- 5. Add a target access point SSID (APS) from the sniffed access points list: **add aps where ssid = <APS>**
- 6. Start the AirInjector, which by default launches the de-authentication attack: **start airinjector**
-
-
-
-This simple set of commands enables the ETF to perform an efficient and successful de-authentication attack on every test run. The ETF can also capture the WPA handshake on every test run. The following console output shows a successful execution of the ETF.
-
-```
-███████╗████████╗███████╗
-██╔════╝╚══██╔══╝██╔════╝
-█████╗ ██║ █████╗
-██╔══╝ ██║ ██╔══╝
-███████╗ ██║ ██║
-╚══════╝ ╚═╝ ╚═╝
-
-
-[+] Do you want to load an older session? [Y/n]: n
-[+] Creating new temporary session on 02/08/2018
-[+] Enter the desired session name:
-ETF[etf/aircommunicator/]::> config airscanner
-ETF[etf/aircommunicator/airscanner]::> listargs
- sniffing_interface = wlan1; (var)
- probes = True; (var)
- beacons = True; (var)
- hop_channels = false; (var)
-fixed_sniffing_channel = 11; (var)
-ETF[etf/aircommunicator/airscanner]::> start airscanner with
-arpreplayer caffelatte credentialsniffer packetlogger selfishwifi
-ETF[etf/aircommunicator/airscanner]::> start airscanner with credentialsniffer
-[+] Successfully added credentialsniffer plugin.
-[+] Starting packet sniffer on interface 'wlan1'
-[+] Set fixed channel to 11
-ETF[etf/aircommunicator/airscanner]::> add aps where ssid = CrackWPA
-ETF[etf/aircommunicator/airscanner]::> start airinjector
-ETF[etf/aircommunicator/airscanner]::> [+] Starting deauthentication attack
- - 1000 bursts of 1 packets
- - 1 different packets
-[+] Injection attacks finished executing.
-[+] Starting post injection methods
-[+] Post injection methods finished
-[+] WPA Handshake found for client '70:3e:ac:bb:78:64' and network 'CrackWPA'
-```
-
-#### Launching an ARP replay attack and cracking a WEP network
-
-The next scenario (Figure 3) will also focus on the [Address Resolution Protocol][9] (ARP) replay attack's efficiency and the speed of capturing the WEP data packets containing the initialization vectors (IVs). The same network may require a different number of caught IVs to be cracked, so the limit for this scenario is 50,000 IVs. If the network is cracked during the first test with less than 50,000 IVs, that number will be the new limit for the following tests on the network. The cracking tool to be used will be **aircrack-ng**.
-
-The test scenario starts with an access point using WEP encryption and an offline client that knows the key—the key for testing purposes is 12345, but it can be a larger and more complex key. Once the client connects to the WEP access point, it will send out a gratuitous ARP packet; this is the packet that's meant to be captured and replayed. The test ends once the limit of packets containing IVs is captured.
-
-![Scenario for an ARP replay attack on a WEP network][11]
-
-Figure 3: Scenario for an ARP replay attack and cracking a WEP network
-
-ETF uses Python's Scapy library for packet sniffing and injection. To minimize known performance problems in Scapy, ETF tweaks some of its low-level libraries to significantly speed up packet injection. For this specific scenario, the ETF uses **tcpdump** as a background process instead of Scapy for more efficient packet sniffing, while Scapy is used to identify the encrypted ARP packet.
-
-This scenario requires the following commands and operations to be performed on the ETF:
-
- 1. Enter the AirScanner configuration mode: **config airscanner**
- 2. Configure the AirScanner to not hop channels: **set hop_channels = false**
- 3. Set the channel to sniff the traffic on the access point channel (APC): **set fixed_sniffing_channel = <APC>**
- 4. Enter the ARPReplayer plugin configuration mode: **config arpreplayer**
- 5. Set the target access point BSSID (APB) of the WEP network: **set target_ap_bssid <APB>**
- 6. Start the AirScanner module with the ARPReplayer plugin: **start airscanner with arpreplayer**
-
-
-
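-Based on the transcripts shown elsewhere in this article, this session would look roughly as follows (the prompt paths and the placeholder values are assumptions, not captured output):
-
-```
-ETF[etf/aircommunicator/]::> config airscanner
-ETF[etf/aircommunicator/airscanner]::> set hop_channels = false
-ETF[etf/aircommunicator/airscanner]::> set fixed_sniffing_channel = <APC>
-ETF[etf/aircommunicator/airscanner]::> config arpreplayer
-ETF[etf/aircommunicator/airscanner/arpreplayer]::> set target_ap_bssid <APB>
-ETF[etf/aircommunicator/airscanner/arpreplayer]::> start airscanner with arpreplayer
-```
-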
-After executing these commands, ETF correctly identifies the encrypted ARP packet, then successfully performs an ARP replay attack, which cracks the network.
-
-#### Launching a catch-all honeypot
-
-The scenario in Figure 4 creates multiple access points with the same SSID. This technique discovers the encryption type of a network that was probed for but is out of reach. When multiple access points with all possible security settings are launched, the client automatically connects to the one that matches the security settings of the locally cached access point information.
-
-![Scenario for launching a catch-all honeypot][13]
-
-Figure 4: Scenario for launching a catch-all honeypot
-
-Using the ETF, it is possible to write the **hostapd** configuration file and then launch the program in the background. Hostapd supports launching multiple access points on the same wireless card by configuring virtual interfaces, and since it supports all types of security configurations, a complete catch-all honeypot can be set up. For the WEP and WPA(2)-PSK networks, a default password is used, and for WPA(2)-EAP, an "accept all" policy is configured.
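-
-A hand-written sketch of such a configuration (hypothetical values, not the file the ETF generates) might look like the following, where each `bss=` line starts a new virtual interface on the same card:
-
-```
-interface=wlan0
-driver=nl80211
-hw_mode=g
-channel=6
-
-# First BSS: open network with the probed SSID
-ssid=CatchMe
-
-# Second BSS on a virtual interface: same SSID, but WPA2-PSK
-# with a default password
-bss=wlan0_0
-ssid=CatchMe
-wpa=2
-wpa_key_mgmt=WPA-PSK
-rsn_pairwise=CCMP
-wpa_passphrase=defaultpass
-```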
-
-For this scenario, the following commands and operations must be performed on the ETF:
-
- 1. Enter the APLauncher configuration mode: **config aplauncher**
- 2. Set the desired access point SSID (APS): **set ssid = <APS>**
- 3. Configure the APLauncher as a catch-all honeypot: **set catch_all_honeypot = true**
- 4. Start the AirHost module: **start airhost**
-
-
-
-With these commands, the ETF can launch a complete catch-all honeypot with all types of security configurations. ETF also automatically launches the DHCP and DNS servers that allow clients to stay connected to the internet. ETF offers a better, faster, and more complete solution to create catch-all honeypots. The following console output shows a successful ETF run.
-
-```
-███████╗████████╗███████╗
-██╔════╝╚══██╔══╝██╔════╝
-█████╗ ██║ █████╗
-██╔══╝ ██║ ██╔══╝
-███████╗ ██║ ██║
-╚══════╝ ╚═╝ ╚═╝
-
-
-[+] Do you want to load an older session? [Y/n]: n
-[+] Creating new temporary session on 03/08/2018
-[+] Enter the desired session name:
-ETF[etf/aircommunicator/]::> config aplauncher
-ETF[etf/aircommunicator/airhost/aplauncher]::> setconf ssid CatchMe
-ssid = CatchMe
-ETF[etf/aircommunicator/airhost/aplauncher]::> setconf catch_all_honeypot true
-catch_all_honeypot = true
-ETF[etf/aircommunicator/airhost/aplauncher]::> start airhost
-[+] Killing already started processes and restarting network services
-[+] Stopping dnsmasq and hostapd services
-[+] Access Point stopped...
-[+] Running airhost plugins pre_start
-[+] Starting hostapd background process
-[+] Starting dnsmasq service
-[+] Running airhost plugins post_start
-[+] Access Point launched successfully
-[+] Starting dnsmasq service
-```
-
-### Conclusions and future work
-
-These scenarios use common and well-known attacks to help validate the ETF's capabilities for testing WiFi networks and clients. The results also validate that the framework's architecture enables new attack vectors and features to be developed on top of it while taking advantage of the platform's existing capabilities. This should accelerate development of new WiFi penetration-testing tools, since a lot of the code is already written. Furthermore, the fact that complementary WiFi technologies are all integrated in a single tool will make WiFi pen-testing simpler and more efficient.
-
-The ETF's goal is not to replace existing tools but to complement them and offer a broader choice to security auditors when conducting WiFi pen-testing and improving user awareness.
-
-The ETF is an open source project [available on GitHub][14], and community contributions to its development are welcome. Following are some of the ways you can help.
-
-One of the limitations of current WiFi pen-testing is the inability to log important events during tests. This makes reporting identified vulnerabilities both more difficult and less accurate. The framework could implement a logger that can be accessed by every class to create a pen-testing session report.
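-
-As a sketch of what that might look like (purely illustrative, not part of the ETF codebase), a module-level logger that every class imports would be enough to start collecting report-worthy events:
-
-```
-# Illustrative only: one possible shape for the proposed session logger.
-import logging
-
-session_log = logging.getLogger("etf.session")
-session_log.setLevel(logging.INFO)
-handler = logging.FileHandler("etf-session.log")
-handler.setFormatter(logging.Formatter("%(asctime)s %(name)s: %(message)s"))
-session_log.addHandler(handler)
-
-# Any component could then record events for the final report:
-session_log.info("WPA handshake captured for network 'CrackWPA'")
-```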
-
-The ETF's capabilities already cover the reconnaissance, vulnerability-discovery, and attack phases of WiFi pen-testing, but it offers nothing for the reporting phase. Adding the concept of a session, along with a session reporting feature such as the logging of important events during a session, would greatly increase the value of the tool for real pen-testing scenarios.
-
-Another valuable contribution would be extending the framework to facilitate WiFi fuzzing. The IEEE 802.11 protocol is very complex, and considering there are multiple implementations of it, both on the client and access point side, it's safe to assume these implementations contain bugs and even security flaws. These bugs could be discovered by fuzzing IEEE 802.11 protocol frames. Since Scapy allows custom packet creation and injection, a fuzzer can be implemented through it.
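-
-A toy example of that idea (a sketch with placeholder addresses, not a feature the ETF currently ships): Scapy's **fuzz()** helper randomizes every field that isn't set explicitly, so malformed management frames can be generated and injected in a few lines:
-
-```
-from scapy.all import RadioTap, Dot11, Dot11Elt, fuzz, sendp
-
-AP = "00:11:22:33:44:55"   # placeholder target BSSID
-SRC = "aa:bb:cc:dd:ee:ff"  # placeholder source MAC
-
-# Keep the addresses fixed so frames reach the target, and let fuzz()
-# randomize the remaining 802.11 header fields and the appended
-# information element on each transmission.
-frame = RadioTap() \
-    / fuzz(Dot11(addr1=AP, addr2=SRC, addr3=AP)) \
-    / fuzz(Dot11Elt())
-sendp(frame, iface="wlan1", count=100, inter=0.1)
-```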
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/evil-twin-framework
-
-作者:[André Esser][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/andreesser
-[b]: https://github.com/lujun9972
-[1]: https://en.wikipedia.org/wiki/Rogue_access_point
-[2]: https://www.python.org/
-[3]: https://scapy.net
-[4]: /file/417776
-[5]: https://opensource.com/sites/default/files/uploads/pic1.png (Evil-Twin Framework Architecture)
-[6]: https://www.metasploit.com
-[7]: /file/417781
-[8]: https://opensource.com/sites/default/files/uploads/pic2.png (Scenario for capturing a WPA handshake after a de-authentication attack)
-[9]: https://en.wikipedia.org/wiki/Address_Resolution_Protocol
-[10]: /file/417786
-[11]: https://opensource.com/sites/default/files/uploads/pic3.png (Scenario for performing an ARP replay attack and cracking a WEP network)
-[12]: /file/417791
-[13]: https://opensource.com/sites/default/files/uploads/pic4.png (Scenario for launching a catch-all honeypot)
-[14]: https://github.com/Esser420/EvilTwinFramework
diff --git a/sources/tech/20190118 Secure Email Service Tutanota Has a Desktop App Now.md b/sources/tech/20190118 Secure Email Service Tutanota Has a Desktop App Now.md
deleted file mode 100644
index f56f1272f2..0000000000
--- a/sources/tech/20190118 Secure Email Service Tutanota Has a Desktop App Now.md
+++ /dev/null
@@ -1,119 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Secure Email Service Tutanota Has a Desktop App Now)
-[#]: via: (https://itsfoss.com/tutanota-desktop)
-[#]: author: (John Paul https://itsfoss.com/author/john/)
-
-Secure Email Service Tutanota Has a Desktop App Now
-======
-
-[Tutanota][1] recently [announced][2] the release of a desktop app for their email service. The beta is available for Linux, Windows, and macOS.
-
-### What is Tutanota?
-
-There are plenty of free, ad-supported email services available online. However, the majority of those email services are not exactly secure or privacy-minded. In this post-[Snowden][3] world, [Tutanota][4] offers a free, secure email service with a focus on privacy.
-
-Tutanota has a number of eye-catching features, such as:
-
- * End-to-end encrypted mailbox
- * End-to-end encrypted address book
- * Automatic end-to-end encrypted emails between users
- * End-to-end encrypted emails to any email address with a shared password
- * Secure password reset that gives Tutanota absolutely no access
- * Strips IP addresses from emails sent and received
- * The code that runs Tutanota is [open source][5]
- * Two-factor authentication
- * Focus on privacy
- * Passwords are salted and hashed locally with Bcrypt
- * Secure servers located in Germany
- * TLS with support for PFS, DMARC, DKIM, DNSSEC, and DANE
- * Full-text search of encrypted data executed locally
-
-
-
-![][6]
-Tutanota on the web
-
-You can [sign up for an account for free][7]. You can also upgrade your account to get extra features, such as custom domains, custom domain login, domain rules, extra storage, and aliases. They also have accounts available for businesses.
-
-Tutanota is also available on mobile devices. In fact, its [Android app is open source as well][8].
-
-This German company is planning to expand beyond email. They hope to offer an encrypted calendar and cloud storage. You can help them reach their goals by [donating][9] via PayPal and cryptocurrency.
-
-### The New Desktop App from Tutanota
-
-Tutanota announced the [beta release][2] of the desktop app right before Christmas. They based this app on [Electron][10].
-
-![][11]
-Tutanota desktop app
-
-They went the Electron route:
-
- * to support all three major operating systems with minimum effort.
- * to quickly adapt the new desktop clients so that they match new features added to the webmail client.
- * to allocate development time to particular desktop features, e.g. offline availability, email import, that will simultaneously be available in all three desktop clients.
-
-
-
-Because this is a beta, there are several features missing from the app. The development team at Tutanota is working to add the following features:
-
- * Email import and synchronization with external mailboxes. This will “enable Tutanota to import emails from external mailboxes and encrypt the data locally on your device before storing it on the Tutanota servers.”
- * Offline availability of emails
- * Two-factor authentication
-
-
-
-### How to Install the Tutanota desktop client?
-
-![][12]Composing email in Tutanota
-
-You can [download][2] the beta app directly from Tutanota’s website. They have an [AppImage file for Linux][13], a .exe file for Windows, and a .app file for macOS. You can post any bugs that you encounter to the Tutanota [GitHub account][14].
-
-To prove the security of the app, Tutanota signs each version. “The signatures make sure that the desktop clients as well as any updates come directly from us and have not been tampered with.” You can verify the signature using the signer script from Tutanota’s [GitHub page][15].
-
-Remember, you will need to create a Tutanota account before you can use it. This email client is designed to work solely with Tutanota.
-
-### Wrapping up
-
-I tested out the Tutanota email app on Linux Mint MATE. As expected, it was a mirror image of the web app. At this point in time, I don’t see any difference between the desktop app and the web app. The only reason I can see to use the app now is to have Tutanota in its own window.
-
-Have you ever used [Tutanota][16]? If not, what is your favorite privacy-conscious email service? Let us know in the comments below.
-
-If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][17].
-
-![][18]
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/tutanota-desktop
-
-作者:[John Paul][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/john/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/tutanota-review/
-[2]: https://tutanota.com/blog/posts/desktop-clients/
-[3]: https://en.wikipedia.org/wiki/Edward_Snowden
-[4]: https://tutanota.com/
-[5]: https://tutanota.com/blog/posts/open-source-email
-[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/tutanota2.jpg?resize=800%2C490&ssl=1
-[7]: https://tutanota.com/pricing
-[8]: https://itsfoss.com/tutanota-fdroid-release/
-[9]: https://tutanota.com/community
-[10]: https://electronjs.org/
-[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/tutanota-app1.png?fit=800%2C486&ssl=1
-[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/tutanota1.jpg?resize=800%2C405&ssl=1
-[13]: https://itsfoss.com/use-appimage-linux/
-[14]: https://github.com/tutao/tutanota
-[15]: https://github.com/tutao/tutanota/blob/master/buildSrc/installerSigner.js
-[16]: https://tutanota.com/polo/
-[17]: http://reddit.com/r/linuxusersgroup
-[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/02/tutanota-featured.png?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190121 Akira- The Linux Design Tool We-ve Always Wanted.md b/sources/tech/20190121 Akira- The Linux Design Tool We-ve Always Wanted.md
deleted file mode 100644
index bd58eca5bf..0000000000
--- a/sources/tech/20190121 Akira- The Linux Design Tool We-ve Always Wanted.md
+++ /dev/null
@@ -1,92 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Akira: The Linux Design Tool We’ve Always Wanted?)
-[#]: via: (https://itsfoss.com/akira-design-tool)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-Akira: The Linux Design Tool We’ve Always Wanted?
-======
-
-Let’s make it clear, I am not a professional designer – but I’ve used certain tools on Windows (like Photoshop, Illustrator, etc.) and [Figma][1] (which is a browser-based interface design tool). I’m sure there are a lot more design tools available for Mac and Windows.
-
-Even on Linux, there is a limited number of dedicated [graphic design tools][2]. A few of these tools like [GIMP][3] and [Inkscape][4] are used by professionals as well. But most of them are not considered professional grade, unfortunately.
-
-Even though a couple more solutions exist, I’ve never come across a native Linux application that could replace [Sketch][5], Figma, or Adobe XD. Any professional designer would agree with that, wouldn’t they?
-
-### Is Akira going to replace Sketch, Figma, and Adobe XD on Linux?
-
-Well, in order to develop something that could replace those awesome proprietary tools, [Alessandro Castellani][6] launched a [Kickstarter campaign][7], teaming up with a few experienced developers: [Alberto Fanjul][8], [Bilal Elmoussaoui][9], and [Felipe Escoto][10].
-
-So, yes, Akira is still pretty much just an idea, albeit one with a working prototype of its interface (as I observed in their recent [live stream session][11] via Kickstarter).
-
-### If it does not exist, why the Kickstarter campaign?
-
-![][12]
-
-The aim of the Kickstarter campaign is to gather funds so the developers can take a few months off and dedicate their time to making Akira possible.
-
-Nonetheless, if you want to support the project, you should know some details, right?
-
-Fret not, we asked a couple of questions in their livestream session – let’s get into it…
-
-### Akira: A few more details
-
-![Akira prototype interface][13]
-Image Credits: Kickstarter
-
-As the Kickstarter campaign describes:
-
-> The main purpose of Akira is to offer a fast and intuitive tool to **create Web and Mobile interfaces** , more like **Sketch** , **Figma** , or **Adobe XD** , with a completely native experience for Linux.
-
-They’ve also written a detailed description as to how the tool will be different from Inkscape, Glade, or QML Editor. Of course, if you want all the technical details, [Kickstarter][7] is the way to go. But, before that, let’s take a look at what they had to say when I asked some questions about Akira.
-
-Q: If you consider your project – similar to what Figma offers – why should one consider installing Akira instead of using the web-based tool? Is it just going to be a clone of those tools – offering a native Linux experience or is there something really interesting to encourage users to switch (except being an open source solution)?
-
-**Akira:** A native experience on Linux is always better and faster in comparison to a web-based Electron app. Also, the hardware configuration matters if you choose to use Figma, but Akira will be light on system resources, and you will still be able to do similar work without needing to go online.
-
-Q: Let’s assume that it becomes the open source solution that Linux users have been waiting for (with similar features offered by proprietary tools). What are your plans to sustain it? Do you plan to introduce any pricing plans – or rely on donations?
-
-**Akira** : The project will mostly rely on donations (something like the [Krita Foundation][14] could be an idea). But there will be no “pro” pricing plans; it will be available for free, and it will be an open source project.
-
-So, with the response I got, it definitely seems to be something promising that we should probably support.
-
-### Wrapping Up
-
-What do you think about Akira? Is it just going to remain a concept? Or do you hope to see it in action?
-
-Let us know your thoughts in the comments below.
-
-![][15]
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/akira-design-tool
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://www.figma.com/
-[2]: https://itsfoss.com/best-linux-graphic-design-software/
-[3]: https://itsfoss.com/gimp-2-10-release/
-[4]: https://inkscape.org/
-[5]: https://www.sketchapp.com/
-[6]: https://github.com/Alecaddd
-[7]: https://www.kickstarter.com/projects/alecaddd/akira-the-linux-design-tool/description
-[8]: https://github.com/albfan
-[9]: https://github.com/bilelmoussaoui
-[10]: https://github.com/Philip-Scott
-[11]: https://live.kickstarter.com/alessandro-castellani/live-stream/the-current-state-of-akira
-[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/akira-design-tool-kickstarter.jpg?resize=800%2C451&ssl=1
-[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/akira-mockup.png?ssl=1
-[14]: https://krita.org/en/about/krita-foundation/
-[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/akira-design-tool-kickstarter.jpg?fit=812%2C458&ssl=1
diff --git a/sources/tech/20190122 Get started with Go For It, a flexible to-do list application.md b/sources/tech/20190122 Get started with Go For It, a flexible to-do list application.md
deleted file mode 100644
index cd5d3c63ed..0000000000
--- a/sources/tech/20190122 Get started with Go For It, a flexible to-do list application.md
+++ /dev/null
@@ -1,60 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Get started with Go For It, a flexible to-do list application)
-[#]: via: (https://opensource.com/article/19/1/productivity-tool-go-for-it)
-[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
-
-Get started with Go For It, a flexible to-do list application
-======
-Go For It, the tenth in our series on open source tools that will make you more productive in 2019, builds on the Todo.txt system to help you get more things done.
-
-
-There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
-
-Here's the tenth of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
-
-### Go For It
-
-Sometimes what a person needs to be productive isn't a fancy kanban board or a set of notes, but a simple, straightforward to-do list. Something that is as basic as "add item to list, check it off when done." And for that, the [plain-text Todo.txt system][1] is possibly one of the easiest to use, and it's supported on almost every system out there.
-
-
-
-[Go For It][2] is a simple, easy-to-use graphical interface for Todo.txt. It can be used with an existing file, if you are already using Todo.txt, and will create both a to-do and a done file if you aren't. It allows drag-and-drop ordering of tasks, allowing users to organize to-do items in the order they want to execute them. It also supports priorities, projects, and contexts, as outlined in the [Todo.txt format guidelines][3]. And, it can filter tasks by context or project simply by clicking on the project or context in the task list.
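-
-If the format is new to you, a few illustrative entries (sample data, not from the article) show a priority, a creation date, projects, and contexts, plus a completed task marked with a leading "x":
-
-```
-(A) Finish the quarterly report +report @work
-(B) 2019-01-22 Buy groceries @errands
-x 2019-01-21 Call the plumber +house @phone
-```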
-
-
-
-At first, Go For It may look the same as just about any other Todo.txt program, but looks can be deceiving. The real feature that sets Go For It apart is that it includes a built-in [Pomodoro Technique][4] timer. Select the task you want to complete, switch to the Timer tab, and click Start. When the task is done, simply click Done, and it will automatically reset the timer and pick the next task on the list. You can pause and restart the timer as well as click Skip to jump to the next task (or break). It provides a warning when 60 seconds are left for the current task. The default time for tasks is set at 25 minutes, and the default time for breaks is set at five minutes. You can adjust this in the Settings screen, as well as the location of the directory containing your Todo.txt and done.txt files.
-
-
-
-Go For It's third tab, Done, allows you to look at the tasks you've completed and clean them out when you want. Being able to look at what you've accomplished can be very motivating and a good way to get a feel for where you are in a longer process.
-
-
-
-It also has all of Todo.txt's other advantages. Go For It's list is accessible by other programs that use the same format, including [Todo.txt's original command-line tool][5] and any [add-ons][6] you've installed.
-
-Go For It seeks to be a simple tool to help manage your to-do list and get those items done. If you already use Todo.txt, Go For It is a fantastic addition to your toolkit, and if you don't, it's a really good way to start using one of the simplest and most flexible systems available.
-
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/productivity-tool-go-for-it
-
-作者:[Kevin Sonney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/ksonney (Kevin Sonney)
-[b]: https://github.com/lujun9972
-[1]: http://todotxt.org/
-[2]: http://manuel-kehl.de/projects/go-for-it/
-[3]: https://github.com/todotxt/todo.txt
-[4]: https://en.wikipedia.org/wiki/Pomodoro_Technique
-[5]: https://github.com/todotxt/todo.txt-cli
-[6]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory
diff --git a/sources/tech/20190122 How To Copy A File-Folder From A Local System To Remote System In Linux.md b/sources/tech/20190122 How To Copy A File-Folder From A Local System To Remote System In Linux.md
deleted file mode 100644
index 6de6cd173f..0000000000
--- a/sources/tech/20190122 How To Copy A File-Folder From A Local System To Remote System In Linux.md
+++ /dev/null
@@ -1,398 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How To Copy A File/Folder From A Local System To Remote System In Linux?)
-[#]: via: (https://www.2daygeek.com/linux-scp-rsync-pscp-command-copy-files-folders-in-multiple-servers-using-shell-script/)
-[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/)
-
-How To Copy A File/Folder From A Local System To Remote System In Linux?
-======
-
-Copying a file from one server to another, or from a local system to a remote system, is one of the routine tasks for a Linux administrator.
-
-If anyone says otherwise, I won’t accept it, because this is a regular activity wherever you go.
-
-It can be done in many ways, and we will try to cover all the possible options.
-
-You can choose the one you prefer. Also, check out the other commands, as they may help you for some other purpose.
-
-I have tested all these commands and scripts in my test environment, so you can use them for your routine work.
-
-By default, everyone goes with SCP because it’s the native command that everyone uses for file copies. But the other commands listed in this article are worth knowing, so give them a try if you would like to try new things.
-
-This can easily be done with the four tools below.
-
- * **`SCP:`** scp copies files between hosts on a network. It uses ssh for data transfer, and uses the same authentication and provides the same security as ssh.
- * **`RSYNC:`** rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon.
- * **`PSCP:`** pscp is a program for copying files in parallel to a number of hosts. It provides features such as passing a password to scp, saving output to files, and timing out.
- * **`PRSYNC:`** prsync is a program for copying files in parallel to a number of hosts. It provides features such as passing a password to ssh, saving output to files, and timing out.
-
-
-
-### Method-1: Copy Files/Folders From A Local System To Remote System In Linux Using SCP Command?
-
-The scp command allows us to copy files/folders from a local system to a remote system.
-
-We are going to copy the `output.txt` file from my local system to `2g.CentOS.com` remote system under `/opt/backup` directory.
-
-```
-# scp output.txt root@2g.CentOS.com:/opt/backup
-
-output.txt 100% 2468 2.4KB/s 00:00
-```
-
-We are going to copy two files `output.txt` and `passwd-up.sh` files from my local system to `2g.CentOS.com` remote system under `/opt/backup` directory.
-
-```
-# scp output.txt passwd-up.sh root@2g.CentOS.com:/opt/backup
-
-output.txt 100% 2468 2.4KB/s 00:00
-passwd-up.sh 100% 877 0.9KB/s 00:00
-```
-
-We are going to copy the `shell-script` directory from my local system to `2g.CentOS.com` remote system under `/opt/backup` directory.
-
-This will copy the `shell-script` directory and associated files under `/opt/backup` directory.
-
-```
-# scp -r /home/daygeek/2g/shell-script/ root@2g.CentOS.com:/opt/backup/
-
-output.txt 100% 2468 2.4KB/s 00:00
-ovh.sh 100% 76 0.1KB/s 00:00
-passwd-up.sh 100% 877 0.9KB/s 00:00
-passwd-up1.sh 100% 7 0.0KB/s 00:00
-server-list.txt 100% 23 0.0KB/s 00:00
-```
-
-### Method-2: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using Shell Script with scp Command?
-
-If you would like to copy the same file to multiple remote servers, create the following small shell script to achieve this.
-
-To do so, get the server list and add the servers to the `server-list.txt` file, one per line.
-
-Finally, mention the location of the file you want to copy, as shown below.
-
-```
-# file-copy.sh
-
-#!/bin/sh
-for server in `more server-list.txt`
-do
- scp /home/daygeek/2g/shell-script/output.txt root@$server:/opt/backup
-done
-```
-
-Once you are done, set executable permission on the file-copy.sh file.
-
-```
-# chmod +x file-copy.sh
-```
-
-Finally run the script to achieve this.
-
-```
-# ./file-copy.sh
-
-output.txt 100% 2468 2.4KB/s 00:00
-output.txt 100% 2468 2.4KB/s 00:00
-```
-
-Use the following script to copy the multiple files into multiple remote servers.
-
-```
-# file-copy.sh
-
-#!/bin/sh
-for server in `more server-list.txt`
-do
- scp /home/daygeek/2g/shell-script/output.txt passwd-up.sh root@$server:/opt/backup
-done
-```
-
-The output below shows all the files twice, as they were copied to two servers.
-
-```
-# ./file-copy.sh
-
-output.txt 100% 2468 2.4KB/s 00:00
-passwd-up.sh 100% 877 0.9KB/s 00:00
-output.txt 100% 2468 2.4KB/s 00:00
-passwd-up.sh 100% 877 0.9KB/s 00:00
-```
-
-Use the following script to copy the directory recursively into multiple remote servers.
-
-```
-# file-copy.sh
-
-#!/bin/sh
-for server in `more server-list.txt`
-do
- scp -r /home/daygeek/2g/shell-script/ root@$server:/opt/backup
-done
-```
-
-Output for the above script.
-
-```
-# ./file-copy.sh
-
-output.txt 100% 2468 2.4KB/s 00:00
-ovh.sh 100% 76 0.1KB/s 00:00
-passwd-up.sh 100% 877 0.9KB/s 00:00
-passwd-up1.sh 100% 7 0.0KB/s 00:00
-server-list.txt 100% 23 0.0KB/s 00:00
-
-output.txt 100% 2468 2.4KB/s 00:00
-ovh.sh 100% 76 0.1KB/s 00:00
-passwd-up.sh 100% 877 0.9KB/s 00:00
-passwd-up1.sh 100% 7 0.0KB/s 00:00
-server-list.txt 100% 23 0.0KB/s 00:00
-```
-
-### Method-3: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using PSCP Command?
-
-The pscp command allows us to copy files directly to multiple remote servers.
-
-Use the following pscp command to copy a single file to a remote server.
-
-```
-# pscp.pssh -H 2g.CentOS.com /home/daygeek/2g/shell-script/output.txt /opt/backup
-
-[1] 18:46:11 [SUCCESS] 2g.CentOS.com
-```
-
-Use the following pscp command to copy multiple files to a remote server.
-
-```
-# pscp.pssh -H 2g.CentOS.com /home/daygeek/2g/shell-script/output.txt ovh.sh /opt/backup
-
-[1] 18:47:48 [SUCCESS] 2g.CentOS.com
-```
-
-Use the following pscp command to copy a directory recursively to a remote server.
-
-```
-# pscp.pssh -H 2g.CentOS.com -r /home/daygeek/2g/shell-script/ /opt/backup
-
-[1] 18:48:46 [SUCCESS] 2g.CentOS.com
-```
-
-Use the following pscp command to copy a single file to multiple remote servers.
-
-```
-# pscp.pssh -h server-list.txt /home/daygeek/2g/shell-script/output.txt /opt/backup
-
-[1] 18:49:48 [SUCCESS] 2g.CentOS.com
-[2] 18:49:48 [SUCCESS] 2g.Debian.com
-```
-
-Use the following pscp command to copy multiple files to multiple remote servers.
-
-```
-# pscp.pssh -h server-list.txt /home/daygeek/2g/shell-script/output.txt passwd-up.sh /opt/backup
-
-[1] 18:50:30 [SUCCESS] 2g.Debian.com
-[2] 18:50:30 [SUCCESS] 2g.CentOS.com
-```
-
-Use the following pscp command to copy a directory recursively to multiple remote servers.
-
-```
-# pscp.pssh -h server-list.txt -r /home/daygeek/2g/shell-script/ /opt/backup
-
-[1] 18:51:31 [SUCCESS] 2g.Debian.com
-[2] 18:51:31 [SUCCESS] 2g.CentOS.com
-```
-
-### Method-4: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using rsync Command?
-
-Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon.
-
-Use the following rsync command to copy a single file to a remote server.
-
-```
-# rsync -avz /home/daygeek/2g/shell-script/output.txt root@2g.CentOS.com:/opt/backup
-
-sending incremental file list
-output.txt
-
-sent 598 bytes received 31 bytes 1258.00 bytes/sec
-total size is 2468 speedup is 3.92
-```
-
-Use the following rsync command to copy multiple files to a remote server.
-
-```
-# rsync -avz /home/daygeek/2g/shell-script/output.txt passwd-up.sh root@2g.CentOS.com:/opt/backup
-
-sending incremental file list
-output.txt
-passwd-up.sh
-
-sent 737 bytes received 50 bytes 1574.00 bytes/sec
-total size is 2537 speedup is 3.22
-```
-
-Use the following rsync command to copy a single file to a remote server over SSH.
-
-```
-# rsync -avzhe ssh /home/daygeek/2g/shell-script/output.txt root@2g.CentOS.com:/opt/backup
-
-sending incremental file list
-output.txt
-
-sent 598 bytes received 31 bytes 419.33 bytes/sec
-total size is 2.47K speedup is 3.92
-```
-
-Use the following rsync command to copy a directory recursively to a remote server over SSH. This copies only the directory’s contents, not the base directory itself.
-
-```
-# rsync -avzhe ssh /home/daygeek/2g/shell-script/ root@2g.CentOS.com:/opt/backup
-
-sending incremental file list
-./
-output.txt
-ovh.sh
-passwd-up.sh
-passwd-up1.sh
-server-list.txt
-
-sent 3.85K bytes received 281 bytes 8.26K bytes/sec
-total size is 9.12K speedup is 2.21
-```
-
-### Method-5: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using Shell Script with rsync Command?
-
-If you would like to copy the same files to multiple remote servers with rsync, create the following small shell script to achieve this.
-
-```
-# file-copy.sh
-
-#!/bin/sh
-for server in `more server-list.txt`
-do
- rsync -avzhe ssh /home/daygeek/2g/shell-script/ root@$server:/opt/backup
-done
-```
-
-Output for the above shell script.
-
-```
-# ./file-copy.sh
-
-sending incremental file list
-./
-output.txt
-ovh.sh
-passwd-up.sh
-passwd-up1.sh
-server-list.txt
-
-sent 3.86K bytes received 281 bytes 8.28K bytes/sec
-total size is 9.13K speedup is 2.21
-
-sending incremental file list
-./
-output.txt
-ovh.sh
-passwd-up.sh
-passwd-up1.sh
-server-list.txt
-
-sent 3.86K bytes received 281 bytes 2.76K bytes/sec
-total size is 9.13K speedup is 2.21
-```
-
-### Method-6: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using Shell Script with scp Command?
-
-In the above two shell scripts, we had to mention the file and folder locations as a prerequisite; here I made a small modification that allows the script to take a file or folder as input. It can be very useful when you want to perform the copy multiple times in a day.
-
-```
-# file-copy.sh
-
-#!/bin/sh
-for server in `more server-list.txt`
-do
-scp -r $1 root@$server:/opt/backup
-done
-```
-
-Run the shell script and give the file name as input.
-
-```
-# ./file-copy.sh output1.txt
-
-output1.txt 100% 3558 3.5KB/s 00:00
-output1.txt 100% 3558 3.5KB/s 00:00
-```
-
-### Method-7: Copy Files/Folders From A Local System To Multiple Remote System In Linux With Non-Standard Port Number?
-
-Use the shell script below to copy a file or folder if you are using a non-standard port.
-
-If you are using a non-standard port, make sure to mention the port number as follows for the scp command.
-
-```
-# file-copy-scp.sh
-
-#!/bin/sh
-for server in `more server-list.txt`
-do
-scp -P 2222 -r $1 root@$server:/opt/backup
-done
-```
-
-Run the shell script and give the file name as input.
-
-```
-# ./file-copy-scp.sh ovh.sh
-
-ovh.sh 100% 3558 3.5KB/s 00:00
-ovh.sh 100% 3558 3.5KB/s 00:00
-```
-
-If you are using a non-standard port, make sure to mention the port number as follows for the rsync command.
-
-```
-# file-copy-rsync.sh
-
-#!/bin/sh
-for server in `more server-list.txt`
-do
-rsync -avzhe 'ssh -p 2222' $1 root@$server:/opt/backup
-done
-```
-
-Run the shell script and give the file name as input.
-
-```
-# ./file-copy-rsync.sh passwd-up.sh
-sending incremental file list
-passwd-up.sh
-
-sent 238 bytes received 35 bytes 26.00 bytes/sec
-total size is 159 speedup is 0.58
-
-sending incremental file list
-passwd-up.sh
-
-sent 238 bytes received 35 bytes 26.00 bytes/sec
-total size is 159 speedup is 0.58
-```
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/linux-scp-rsync-pscp-command-copy-files-folders-in-multiple-servers-using-shell-script/
-
-作者:[Prakash Subramanian][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/prakash/
-[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190123 Mind map yourself using FreeMind and Fedora.md b/sources/tech/20190123 Mind map yourself using FreeMind and Fedora.md
deleted file mode 100644
index 146f95752a..0000000000
--- a/sources/tech/20190123 Mind map yourself using FreeMind and Fedora.md
+++ /dev/null
@@ -1,81 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Mind map yourself using FreeMind and Fedora)
-[#]: via: (https://fedoramagazine.org/mind-map-yourself-using-freemind-and-fedora/)
-[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
-
-Mind map yourself using FreeMind and Fedora
-======
-
-
-A mind map of yourself sounds a little far-fetched at first. Is this process about neural pathways? Or telepathic communication? Not at all. Instead, a mind map of yourself is a way to describe yourself to others visually. It also shows connections among the characteristics you use to describe yourself. It’s a useful way to share information with others in a clever but also controllable way. You can use any mind map application for this purpose. This article shows you how to get started using [FreeMind][1], available in Fedora.
-
-### Get the application
-
-The FreeMind application has been around a while. While the UI is a bit dated and could use a refresh, it’s a powerful app that offers many options for building mind maps. And of course it’s 100% open source. There are other mind mapping apps available for Fedora and Linux users, as well. Check out [this previous article that covers several mind map options][2].
-
-Install FreeMind from the Fedora repositories using the Software app if you’re running Fedora Workstation. Or use this [sudo][3] command in a terminal:
-
-```
-$ sudo dnf install freemind
-```
-
-You can launch the app from the GNOME Shell Overview in Fedora Workstation. Or use the application start service your desktop environment provides. FreeMind shows you a new, blank map by default:
-
-![][4]
-FreeMind initial (blank) mind map
-
-A map consists of linked items or descriptions — nodes. When you think of something related to a node you want to capture, simply create a new node connected to it.
-
-### Mapping yourself
-
-Click in the initial node. Replace it with your name by editing the text and hitting **Enter**. You’ve just started your mind map.
-
-What would you think of if you had to fully describe yourself to someone? There are probably many things to cover. How do you spend your time? What do you enjoy? What do you dislike? What do you value? Do you have a family? All of this can be captured in nodes.
-
-To add a node connection, select the existing node, and hit **Insert** , or use the “light bulb” icon for a new child node. To add another node at the same level as the new child, use **Enter**.
-
-Don’t worry if you make a mistake. You can use the **Delete** key to remove an unwanted node. There are no rules about content. Short nodes are best, though. They allow your mind to move quickly when creating the map. Concise nodes also let viewers scan and understand the map easily later.
-
-This example uses nodes to explore each of these major categories:
-
-![][5]
-Personal mind map, first level
-
-You could do another round of iteration for each of these areas. Let your mind freely connect ideas to generate the map. Don’t worry about “getting it right.” It’s better to get everything out of your head and onto the display. Here’s what a next-level map might look like.
-
-![][6]
-Personal mind map, second level
-
-You could expand on any of these nodes in the same way. Notice how much information you can quickly understand about John Q. Public in the example.
-
-### How to use your personal mind map
-
-This is a great way to have team or project members introduce themselves to each other. You can apply all sorts of formatting and color to the map to give it personality. These are fun to do on paper, of course. But having one on your Fedora system means you can always fix mistakes, or even update the map as you yourself change.
-
-Have fun exploring your personal mind map!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/mind-map-yourself-using-freemind-and-fedora/
-
-作者:[Paul W. Frields][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/author/pfrields/
-[b]: https://github.com/lujun9972
-[1]: http://freemind.sourceforge.net/wiki/index.php/Main_Page
-[2]: https://fedoramagazine.org/three-mind-mapping-tools-fedora/
-[3]: https://fedoramagazine.org/howto-use-sudo/
-[4]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-19-15-17-04-1024x736.png
-[5]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-19-15-32-38-1024x736.png
-[6]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-19-15-38-00-1024x736.png
diff --git a/sources/tech/20190124 Get started with LogicalDOC, an open source document management system.md b/sources/tech/20190124 Get started with LogicalDOC, an open source document management system.md
deleted file mode 100644
index 21687c0ce3..0000000000
--- a/sources/tech/20190124 Get started with LogicalDOC, an open source document management system.md
+++ /dev/null
@@ -1,62 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Get started with LogicalDOC, an open source document management system)
-[#]: via: (https://opensource.com/article/19/1/productivity-tool-logicaldoc)
-[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
-
-Get started with LogicalDOC, an open source document management system
-======
-Keep better track of document versions with LogicalDOC, the 12th in our series on open source tools that will make you more productive in 2019.
-
-
-
-There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
-
-Here's the 12th of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
-
-### LogicalDOC
-
-Part of being productive is being able to find what you need when you need it. We've all seen directories full of similar files with similar names, a result of renaming them every time a document changes to keep track of all the versions. For example, my wife is a writer, and she often saves document revisions with new names before she sends them to reviewers.
-
-
-
-A coder's natural solution to this problem—Git or another version control tool—won't work for document creators because the systems used for code often don't play nice with the formats used by commercial text editors. And before someone says, "just change formats," [that isn't an option for everyone][1]. Also, many version control tools are not very friendly for the less technically inclined. In large organizations, there are tools to solve this problem, but they also require the resources of a large organization to run, manage, and support them.
-
-
-
-[LogicalDOC CE][2] is an open source document management system built to solve this problem. It allows users to check in, check out, version, search, and lock document files and keeps a history of versions, similar to the version control tools used by coders.
-
-LogicalDOC can be [installed][3] on Linux, MacOS, and Windows using a Java-based installer. During installation, you'll be prompted for details on the database where its data will be stored and have an option for a local-only file store. You'll get the URL and a default username and password to access the server as well as an option to save a script to automate future installations.
-
-After you log in, LogicalDOC's default screen lists the documents you have tagged, checked out, and any recent notes on them. Switching to the Documents tab will show the files you have access to. You can upload documents by selecting a file through the interface or using drag and drop. If you upload a ZIP file, LogicalDOC will expand it and add its individual files to the repository.
-
-
-
-Right-clicking on a file will bring up a menu of options to check out files, lock files against changes, and do a whole host of other things. Checking out a file downloads it to your local machine where it can be edited. A checked-out file cannot be modified by anyone else until it's checked back in. When the file is checked back in (using the same menu), the user can add tags to the version and is required to comment on what was done to it.
-
-
-
-Going back and looking at earlier versions is as easy as downloading them from the Versions page. There are also import and export options for some third-party services, with [Dropbox][4] support built-in.
-
-Document management is not just for big companies that can afford expensive solutions. LogicalDOC helps you keep track of the documents you're using with a revision history and a safe repository for documents that are otherwise difficult to manage.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/productivity-tool-logicaldoc
-
-作者:[Kevin Sonney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/ksonney (Kevin Sonney)
-[b]: https://github.com/lujun9972
-[1]: http://www.antipope.org/charlie/blog-static/2013/10/why-microsoft-word-must-die.html
-[2]: https://www.logicaldoc.com/download-logicaldoc-community
-[3]: https://docs.logicaldoc.com/en/installation
-[4]: https://dropbox.com
diff --git a/sources/tech/20190124 ODrive (Open Drive) - Google Drive GUI Client For Linux.md b/sources/tech/20190124 ODrive (Open Drive) - Google Drive GUI Client For Linux.md
deleted file mode 100644
index 71a91ec3d8..0000000000
--- a/sources/tech/20190124 ODrive (Open Drive) - Google Drive GUI Client For Linux.md
+++ /dev/null
@@ -1,127 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (ODrive (Open Drive) – Google Drive GUI Client For Linux)
-[#]: via: (https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-ODrive (Open Drive) – Google Drive GUI Client For Linux
-======
-
-We have discussed this topic many times before. However, I will give a small introduction to it.
-
-As of now, there is no official Google Drive client for Linux, so we need to use unofficial clients.
-
-There are many applications available on Linux for Google Drive integration.
-
-Each application comes with its own set of features.
-
-We have written a few articles about this on our website in the past.
-
-Those are **[DriveSync][1]**, **[Google Drive Ocamlfuse Client][2]**, and **[Mount Google Drive in Linux Using Nautilus File Manager][3]**.
-
-Today we are going to discuss the same topic, and the utility’s name is ODrive.
-
-### What’s ODrive?
-
-ODrive stands for Open Drive. It’s a GUI client for Google Drive written with the Electron framework.
-
-Its simple GUI allows users to integrate Google Drive in a few steps.
-
-### How To Install & Setup ODrive on Linux?
-
-Since the developer offers an AppImage package, there is no difficulty installing ODrive on Linux.
-
-Simply download the latest ODrive AppImage package from the developer’s GitHub page using the **wget command**.
-
-```
-$ wget https://github.com/liberodark/ODrive/releases/download/0.1.3/odrive-0.1.3-x86_64.AppImage
-```
-
-Set executable permission on the ODrive AppImage file.
-
-```
-$ chmod +x odrive-0.1.3-x86_64.AppImage
-```
-
-Simply run the ODrive AppImage file to launch the ODrive GUI for further setup.
-
-```
-$ ./odrive-0.1.3-x86_64.AppImage
-```
-
-You should get a window like the one below when you run the above command. Just hit the **`Next`** button to continue the setup.
-![][5]
-
-Click the **`Connect`** link to add a Google Drive account.
-![][6]
-
-Enter the email ID of the Google account you want to set up.
-![][7]
-
-Enter the password for the given email ID.
-![][8]
-
-Allow ODrive (Open Drive) to access your Google account.
-![][9]
-
-By default, it will choose a folder location. You can change it if you want to use a specific one.
-![][10]
-
-Finally, hit the **`Synchronize`** button to start downloading the files from Google Drive to your local system.
-![][11]
-
-Synchronizing is in progress.
-![][12]
-
-Once synchronizing is complete, it shows you that all the files have been downloaded.
-![][13]
-
-I can see that all the files were downloaded to the mentioned directory.
-![][14]
-
-If you want to sync any new files from your local system to Google Drive, just start `ODrive` from the application menu. It won’t actually launch the application window, but it will run in the background, as we can see by using the ps command.
-
-```
-$ ps -df | grep odrive
-```
-
-![][15]
-
-It will automatically sync once you add a new file into the Google Drive folder. The same can be verified through the notification menu. Yes, I can see that one file was synced to Google Drive.
-![][16]
-
-The GUI does not load after the sync, and I’m not sure about this behavior. I will check with the developer and add an update based on his input.
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/
-[2]: https://www.2daygeek.com/mount-access-google-drive-on-linux-with-google-drive-ocamlfuse-client/
-[3]: https://www.2daygeek.com/mount-access-setup-google-drive-in-linux/
-[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[5]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-1.png
-[6]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-2.png
-[7]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-3.png
-[8]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-4.png
-[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-5.png
-[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-6.png
-[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-7.png
-[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-8a.png
-[13]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9.png
-[14]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-11.png
-[15]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9b.png
-[16]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-10.png
diff --git a/sources/tech/20190124 What does DevOps mean to you.md b/sources/tech/20190124 What does DevOps mean to you.md
deleted file mode 100644
index c62f0f83ba..0000000000
--- a/sources/tech/20190124 What does DevOps mean to you.md
+++ /dev/null
@@ -1,143 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (What does DevOps mean to you?)
-[#]: via: (https://opensource.com/article/19/1/what-does-devops-mean-you)
-[#]: author: (Girish Managoli https://opensource.com/users/gammay)
-
-What does DevOps mean to you?
-======
-6 experts break down DevOps and the practices and philosophies key to making it work.
-
-
-
-It's said if you ask 10 people about DevOps, you will get 12 answers. This is a result of the diversity in opinions and expectations around DevOps—not to mention the disparity in its practices.
-
-To decipher the paradoxes around DevOps, we went to the people who know it the best—its top practitioners around the industry. These are people who have been around the horn, who know the ins and outs of technology, and who have practiced DevOps for years. Their viewpoints should encourage, stimulate, and provoke your thoughts around DevOps.
-
-### What does DevOps mean to you?
-
-Let's start with the fundamentals. We're not looking for textbook answers, rather we want to know what the experts say.
-
-In short, the experts say DevOps is about principles, practices, and tools.
-
-[Ann Marie Fred][1], DevOps lead for IBM Digital Business Group's Commerce Platform, says, "to me, DevOps is a set of principles and practices designed to make teams more effective in designing, developing, delivering, and operating software."
-
-According to [Daniel Oh][2], senior DevOps evangelist at Red Hat, "in general, DevOps is compelling for enterprises to evolve current IT-based processes and tools related to app development, IT operations, and security protocol."
-
-[Brent Reed][3], founder of Tactec Strategic Solutions, talks about continuous improvement for the stakeholders. "DevOps means to me a way of working that includes a mindset that allows for continuous improvement for operational performance, maturing to organizational performance, resulting in delighted stakeholders."
-
-Many of the experts also emphasize culture. Ann Marie says, "it's also about continuous improvement and learning. It's about people and culture as much as it is about tools and technology."
-
-To [Dan Barker][4], chief architect and DevOps leader at the National Association of Insurance Commissioners (NAIC), "DevOps is primarily about culture. … It has brought several independent areas together like lean, [just culture][5], and continuous learning. And I see culture as being the most critical and the hardest to execute on."
-
-[Chris Baynham-Hughes][6], head of DevOps at Atos, says, "[DevOps] practice is adopted through the evolution of culture, process, and tooling within an organization. The key focus is culture change, and the key tenets of DevOps culture are collaboration, experimentation, fast-feedback, and continuous improvement."
-
-[Geoff Purdy][7], cloud architect, talks about agility and feedback: "shortening and amplifying feedback loops. We want teams to get feedback in minutes rather than weeks."
-
-But in the end, Daniel nails it by explaining how open source and open culture allow him to achieve his goals "in easy and quick ways. In DevOps initiatives, the most important thing for me should be open culture rather than useful tools, multiple solutions."
-
-### What DevOps practices have you found effective?
-
-"Picking one, automated provisioning has been hugely effective for my team. "
-
-The most effective practices cited by the experts are pervasive yet disparate.
-
-According to Ann Marie, "some of the most powerful [practices] are agile project management; breaking down silos between cross-functional, autonomous squads; fully automated continuous delivery; green/blue deploys for zero downtime; developers setting up their own monitoring and alerting; blameless post-mortems; automating security and compliance."
-
-Chris says, "particular breakthroughs have been empathetic collaboration; continuous improvement; open leadership; reducing distance to the business; shifting from vertical silos to horizontal, cross-functional product teams; work visualization; impact mapping; Mobius loop; shortening of feedback loops; automation (from environments to CI/CD)."
-
-Brent supports "evolving a learning culture that includes TDD [test-driven development] and BDD [behavior-driven development] capturing of a story and automating the sequences of events that move from design, build, and test through implementation and production with continuous integration and delivery pipelines. A fail-first approach to testing, the ability to automate integration and delivery processes and include fast feedback throughout the lifecycle."
-
-Geoff highlights automated provisioning. "Picking one, automated provisioning has been hugely effective for my team. More specifically, automated provisioning from a versioned Infrastructure-as-Code codebase."
-
-Dan uses fun. "We do a lot of different things to create a DevOps culture. We hold 'lunch and learns' with free food to encourage everyone to come and learn together; we buy books and study in groups."
-
-### How do you motivate your team to achieve DevOps goals?
-
-```
-"Celebrate wins and visualize the progress made."
-```
-
-Daniel emphasizes "automation that matters. In order to minimize objection from multiple teams in a DevOps initiative, you should encourage your team to increase the automation capability of development, testing, and IT operations along with new processes and procedures. For example, a Linux container is the key tool to achieve the automation capability of DevOps."
-
-Geoff agrees, saying, "automate the toil. Are there tasks you hate doing? Great. Engineer them out of existence if possible. Otherwise, automate them. It keeps the job from becoming boring and routine because the job constantly evolves."
-
-Dan, Ann Marie, and Brent stress team motivation.
-
-Dan says, "at the NAIC, we have a great awards system for encouraging specific behaviors. We have multiple tiers of awards, and two of them can be given to anyone by anyone. We also give awards to teams after they complete something significant, but we often award individual contributors."
-
-According to Ann Marie, "the biggest motivator for teams in my area is seeing the success of others. We have a weekly playback for each other, and part of that is sharing what we've learned from trying out new tools or practices. When teams are enthusiastic about something they're doing and willing to help others get started, more teams will quickly get on board."
-
-Brent agrees. "Getting everyone educated and on the same baseline of knowledge is essential ... assessing what helps the team achieve [and] what it needs to deliver with the product owner and users is the first place I like to start."
-
-Chris recommends a two-pronged approach. "Run small, weekly goals that are achievable and agreed by the team as being important and [where] they can see progress outside of the feature work they are doing. Celebrate wins and visualize the progress made."
-
-### How do DevOps and agile work together?
-
-```
-"DevOps != Agile, second Agile != Scrum."
-```
-
-This is an important question because both DevOps and agile are cornerstones of modern software development.
-
-DevOps is a process of software development focusing on communication and collaboration to facilitate rapid application and product deployment, whereas agile is a development methodology involving continuous development, continuous iteration, and continuous testing to achieve predictable and quality deliverables.
-
-So, how do they relate? Let's ask the experts.
-
-In Brent's view, "DevOps != Agile, second Agile != Scrum. … Agile tools and ways of working—that support DevOps strategies and goals—are how they mesh together."
-
-Chris says, "agile is a fundamental component of DevOps for me. Sure, we could talk about how we adopt DevOps culture in a non-agile environment, but ultimately, improving agility in the way software is engineered is a key indicator as to the maturity of DevOps adoption within the organization."
-
-Dan relates DevOps to the larger [Agile Manifesto][8]. "I never talk about agile without referencing the Agile Manifesto in order to set the baseline. There are many implementations that don't focus on the Manifesto. When you read the Manifesto, they've really described DevOps from a development perspective. Therefore, it is very easy to fit agile into a DevOps culture, as agile is focused on communication, collaboration, flexibility to change, and getting to production quickly."
-
-Geoff sees "DevOps as one of many implementations of agile. Agile is essentially a set of principles, while DevOps is a culture, process, and toolchain that embodies those principles."
-
-Ann Marie keeps it succinct, saying, "agile is a prerequisite for DevOps. DevOps makes agile more effective."
-
-### Has DevOps benefited from open source?
-
-```
-"Open source done well requires a DevOps culture."
-```
-
-This question receives a fervent "yes" from all participants, followed by an explanation of the benefits they've seen.
-
-Ann Marie says, "we get to stand on the shoulders of giants and build upon what's already available. The open source model of maintaining software, with pull requests and code reviews, also works very well for DevOps teams."
-
-Chris agrees that DevOps has "undoubtedly" benefited from open source. "From the engineering and tooling side (e.g., Ansible), to the process and people side, through the sharing of stories within the industry and the open leadership community."
-
-A benefit Geoff cites is "grassroots adoption. Nobody had to sign purchase requisitions for free (as in beer) software. Teams found tooling that met their needs, were free (as in freedom) to modify, [then] built on top of it, and contributed enhancements back to the larger community. Rinse, repeat."
-
-Open source has shown DevOps "better ways you can adopt new changes and overcome challenges, just like open source software developers are doing it," says Daniel.
-
-Brent concurs. "DevOps has benefited in many ways from open source. One way is the ability to use the tools to understand how they can help accelerate DevOps goals and strategies. Educating the development and operations folks on crucial things like automation, virtualization and containerization, auto-scaling, and many of the qualities that are difficult to achieve without introducing technology enablers that make DevOps easier."
-
-Dan notes the two-way, symbiotic relationship between DevOps and open source. "Open source done well requires a DevOps culture. Most open source projects have very open communication structures with very little obscurity. This has actually been a great learning opportunity for DevOps practitioners around what they might bring into their own organizations. Also, being able to use tools from a community that is similar to that of your own organization only encourages your own culture growth. I like to use GitLab as an example of this symbiotic relationship. When I bring [GitLab] into a company, we get a great tool, but what I'm really buying is their unique culture. That brings substantial value through our interactions with them and our ability to contribute back. Their tool also has a lot to offer for a DevOps organization, but their culture has inspired awe in the companies where I've introduced it."
-
-Now that our DevOps experts have weighed in, please share your thoughts on what DevOps means—as well as the other questions we posed—in the comments.
-
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/what-does-devops-mean-you
-
-作者:[Girish Managoli][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/gammay
-[b]: https://github.com/lujun9972
-[1]: https://twitter.com/DukeAMO
-[2]: https://twitter.com/danieloh30?lang=en
-[3]: https://twitter.com/brentareed
-[4]: https://twitter.com/barkerd427
-[5]: https://psnet.ahrq.gov/resources/resource/1582
-[6]: https://twitter.com/onlychrisbh?lang=en
-[7]: https://twitter.com/geoff_purdy
-[8]: https://agilemanifesto.org/
diff --git a/sources/tech/20190127 Eliminate error handling by eliminating errors.md b/sources/tech/20190127 Eliminate error handling by eliminating errors.md
new file mode 100644
index 0000000000..6eac4740eb
--- /dev/null
+++ b/sources/tech/20190127 Eliminate error handling by eliminating errors.md
@@ -0,0 +1,204 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Eliminate error handling by eliminating errors)
+[#]: via: (https://dave.cheney.net/2019/01/27/eliminate-error-handling-by-eliminating-errors)
+[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
+
+Eliminate error handling by eliminating errors
+======
+
+Go 2 aims to improve the overhead of [error handling][1], but do you know what is better than an improved syntax for handling errors? Not needing to handle errors at all. Now, I’m not saying “delete your error handling code”; instead, I’m suggesting you change your code so you don’t have as many errors to handle.
+
+This article draws inspiration from a chapter in John Ousterhout’s _[A Philosophy of Software Design][2]_, “Define Errors Out of Existence”. I’m going to try to apply his advice to Go.
+
+* * *
+
+Here’s a function to count the number of lines in a file,
+
+```
+func CountLines(r io.Reader) (int, error) {
+ var (
+ br = bufio.NewReader(r)
+ lines int
+ err error
+ )
+
+ for {
+ _, err = br.ReadString('\n')
+ lines++
+ if err != nil {
+ break
+ }
+ }
+
+ if err != io.EOF {
+ return 0, err
+ }
+ return lines, nil
+}
+```
+
+We construct a `bufio.Reader`, then sit in a loop calling the `ReadString` method, incrementing a counter until we reach the end of the file, then we return the number of lines read. That’s the code we _wanted_ to write; instead, `CountLines` is made more complicated by its error handling. For example, there is this strange construction:
+
+```
+_, err = br.ReadString('\n')
+lines++
+if err != nil {
+ break
+}
+```
+
+We increment the count of lines _before_ checking the error—that looks odd. The reason we have to write it this way is `ReadString` will return an error if it encounters an end-of-file—`io.EOF`—before hitting a newline character. This can happen if there is no trailing newline.
+
+To address this problem, we rearrange the logic to increment the line count, then see if we need to exit the loop.[1]
+
+But we’re not done checking errors yet. `ReadString` will return `io.EOF` when it hits the end of the file. This is expected: `ReadString` needs some way of saying _stop, there is nothing more to read_. So before we return the error to the caller of `CountLines`, we need to check if the error was _not_ `io.EOF`, and in that case propagate it up, otherwise we return `nil` to say that everything worked fine. This is why the final line of the function is not simply
+
+```
+return lines, err
+```
+
+I think this is a good example of Russ Cox’s [observation that error handling can obscure the operation of the function][3]. Let’s look at an improved version.
+
+```
+func CountLines(r io.Reader) (int, error) {
+ sc := bufio.NewScanner(r)
+ lines := 0
+
+ for sc.Scan() {
+ lines++
+ }
+
+ return lines, sc.Err()
+}
+```
+
+This improved version switches from using `bufio.Reader` to `bufio.Scanner`. Under the hood, `bufio.Scanner` uses `bufio.Reader`, adding a layer of abstraction which helps remove the error handling that obscured the operation of our previous version of `CountLines`.[2]
+
+The method `sc.Scan()` returns `true` if the scanner _has_ matched a line of text and _has not_ encountered an error. So, the body of our `for` loop will be called only when there is a line of text in the scanner’s buffer. This means our revised `CountLines` correctly handles the case where there is no trailing newline. It also correctly handles the case where the file is empty.
+
+Secondly, as `sc.Scan` returns `false` once an error is encountered, our `for` loop will exit when the end-of-file is reached or an error is encountered. The `bufio.Scanner` type memoises the first error it encounters and we recover that error once we’ve exited the loop using the `sc.Err()` method.
+
+Lastly, `bufio.Scanner` takes care of handling `io.EOF` and will convert it to `nil` if the end of the file was reached without encountering another error.
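+
+A quick harness makes these claims easy to check. The `main` driver below is just an illustration (not part of the original post); it feeds the Scanner-based `CountLines` an empty input, an input with a trailing newline, and one without:
+
+```
+package main
+
+import (
+    "bufio"
+    "fmt"
+    "io"
+    "strings"
+)
+
+func CountLines(r io.Reader) (int, error) {
+    sc := bufio.NewScanner(r)
+    lines := 0
+    for sc.Scan() {
+        lines++
+    }
+    return lines, sc.Err()
+}
+
+func main() {
+    // Expected output: 0 lines, 2 lines, 2 lines, each with err = <nil>.
+    for _, input := range []string{"", "a\nb\n", "a\nb"} {
+        n, err := CountLines(strings.NewReader(input))
+        fmt.Printf("%q: %d lines, err = %v\n", input, n, err)
+    }
+}
+```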
+
+* * *
+
+My second example is inspired by Rob Pike’s _[Errors are values][4]_ blog post.
+
+When dealing with opening, writing, and closing files, the error handling is present but not overwhelming, as the operations can be encapsulated in helpers like `ioutil.ReadFile` and `ioutil.WriteFile`. However, when dealing with low-level network protocols, it often becomes necessary to build the response directly using I/O primitives, so the error handling can become repetitive. Consider this fragment of an HTTP server which is constructing an HTTP/1.1 response.
+
+```
+type Header struct {
+ Key, Value string
+}
+
+type Status struct {
+ Code int
+ Reason string
+}
+
+func WriteResponse(w io.Writer, st Status, headers []Header, body io.Reader) error {
+ _, err := fmt.Fprintf(w, "HTTP/1.1 %d %s\r\n", st.Code, st.Reason)
+ if err != nil {
+ return err
+ }
+
+ for _, h := range headers {
+ _, err := fmt.Fprintf(w, "%s: %s\r\n", h.Key, h.Value)
+ if err != nil {
+ return err
+ }
+ }
+
+ if _, err := fmt.Fprint(w, "\r\n"); err != nil {
+ return err
+ }
+
+ _, err = io.Copy(w, body)
+ return err
+}
+```
+
+First we construct the status line using `fmt.Fprintf`, and check the error. Then for each header we write the header key and value, checking the error each time. Lastly we terminate the header section with an additional `\r\n`, check the error, and copy the response body to the client. Finally, although we don’t need to check the error from `io.Copy`, we do need to translate it from the two return value form that `io.Copy` returns into the single return value that `WriteResponse` expects.
+
+Not only is this a lot of repetitive work, but each operation—fundamentally writing bytes to an `io.Writer`—has a different form of error handling. We can make it easier on ourselves, though, by introducing a small wrapper type.
+
+```
+type errWriter struct {
+ io.Writer
+ err error
+}
+
+func (e *errWriter) Write(buf []byte) (int, error) {
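+ // Once a write has failed, discard subsequent writes and return the saved error.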
+ if e.err != nil {
+ return 0, e.err
+ }
+
+ var n int
+ n, e.err = e.Writer.Write(buf)
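+ // The error, if any, is memoised in e.err rather than returned to the caller.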
+ return n, nil
+}
+```
+
+`errWriter` fulfils the `io.Writer` contract so it can be used to wrap an existing `io.Writer`. `errWriter` passes writes through to its underlying writer until an error is detected. From that point on, it discards any writes and returns the previous error.
+
+```
+func WriteResponse(w io.Writer, st Status, headers []Header, body io.Reader) error {
+ ew := &errWriter{Writer: w}
+ fmt.Fprintf(ew, "HTTP/1.1 %d %s\r\n", st.Code, st.Reason)
+
+ for _, h := range headers {
+ fmt.Fprintf(ew, "%s: %s\r\n", h.Key, h.Value)
+ }
+
+ fmt.Fprint(ew, "\r\n")
+ io.Copy(ew, body)
+
+ return ew.err
+}
+```
+
+Applying `errWriter` to `WriteResponse` dramatically improves the clarity of the code. Each of the operations no longer needs to bracket itself with an error check. Reporting the error is moved to the end of the function by inspecting the `ew.err` field, avoiding the annoying translation from `io.Copy`’s return values.
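+
+As a usage sketch, assuming the `Status`, `Header`, `errWriter`, and `WriteResponse` definitions above, the caller sees at most one error for the whole response:
+
+```
+package main
+
+import (
+    "bytes"
+    "fmt"
+    "log"
+    "strings"
+)
+
+func main() {
+    var buf bytes.Buffer
+    headers := []Header{{Key: "Content-Type", Value: "text/plain"}}
+    body := strings.NewReader("hello\n")
+
+    // Any failure during the four writes surfaces here, exactly once.
+    if err := WriteResponse(&buf, Status{Code: 200, Reason: "OK"}, headers, body); err != nil {
+        log.Fatal(err)
+    }
+    fmt.Print(buf.String())
+}
+```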
+
+* * *
+
+When you find yourself faced with overbearing error handling, try to extract some of the operations into a helper type.
+
+ 1. This logic _still_ isn’t correct, can you spot the bug?
+ 2. `bufio.Scanner` can scan for any pattern, by default it looks for newlines.
+
+
+
+### Related posts:
+
+ 1. [Error handling vs. exceptions redux][5]
+ 2. [Stack traces and the errors package][6]
+ 3. [Subcommand handling in Go][7]
+ 4. [Constant errors][8]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://dave.cheney.net/2019/01/27/eliminate-error-handling-by-eliminating-errors
+
+作者:[Dave Cheney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://dave.cheney.net/author/davecheney
+[b]: https://github.com/lujun9972
+[1]: https://go.googlesource.com/proposal/+/master/design/go2draft-error-handling-overview.md
+[2]: https://www.amazon.com/Philosophy-Software-Design-John-Ousterhout/dp/1732102201
+[3]: https://www.youtube.com/watch?v=6wIP3rO6On8
+[4]: https://blog.golang.org/errors-are-values
+[5]: https://dave.cheney.net/2014/11/04/error-handling-vs-exceptions-redux (Error handling vs. exceptions redux)
+[6]: https://dave.cheney.net/2016/06/12/stack-traces-and-the-errors-package (Stack traces and the errors package)
+[7]: https://dave.cheney.net/2013/11/07/subcommand-handling-in-go (Subcommand handling in Go)
+[8]: https://dave.cheney.net/2016/04/07/constant-errors (Constant errors)
diff --git a/sources/tech/20190128 Top Hex Editors for Linux.md b/sources/tech/20190128 Top Hex Editors for Linux.md
deleted file mode 100644
index 5cd47704b4..0000000000
--- a/sources/tech/20190128 Top Hex Editors for Linux.md
+++ /dev/null
@@ -1,146 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Top Hex Editors for Linux)
-[#]: via: (https://itsfoss.com/hex-editors-linux)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-Top Hex Editors for Linux
-======
-
-A hex editor lets you view and edit the binary data of a file, which is presented in the form of “hexadecimal” values, hence the name “hex” editor. Let’s be frank: not everyone needs it. Only a specific group of users who have to deal with binary data use it.
-
-If you have no idea what it is, let me give you an example. Suppose you have the configuration files of a game; you can open them using a hex editor and change certain values to get more ammo, a higher score, and so on. To learn more about hex editors, you should start with the [Wikipedia page][1].
-
-In case you already know what it’s used for, let us take a look at the best hex editors available for Linux.
-
-### 5 Best Hex Editors Available
-
-![Best Hex Editors for Linux][2]
-
-**Note:** The hex editors mentioned are in no particular order.
-
-#### 1\. Bless Hex Editor
-
-![bless hex editor][3]
-
-**Key Features** :
-
- * Raw disk editing
- * Multilevel undo/redo operations
- * Multiple tabs
- * Conversion table
- * Plugin support to extend the functionality
-
-
-
-Bless is one of the most popular hex editors available for Linux. You can find it listed in your AppCenter or Software Center. If that is not the case, you can check out its [GitHub page][4] for builds and the associated instructions.
-
-It can easily handle editing big files without slowing down, so it’s a fast hex editor.
-
-#### 2\. GNOME Hex Editor
-
-![gnome hex editor][5]
-
-**Key Features:**
-
- * View/edit in either hex or ASCII
- * Edit large files
-
-
-Yet another amazing hex editor, specifically tailored for GNOME. I personally use Elementary OS, so I find it listed in the App Center. You should find it in the Software Center as well. If not, refer to the [GitHub page][6] for the source.
-
-You can use this editor to view/edit in either hex or ASCII. The user interface is quite simple – as you can see in the image above.
-
-#### 3\. Okteta
-
-![okteta][7]
-
-**Key Features:**
-
- * Customizable data views
- * Multiple tabs
- * Character encodings: All 8-bit encodings as supplied by Qt, EBCDIC
- * Decoding table listing common simple data types
-
-
-
-Okteta is a simple hex editor without fancy features, although it can handle most tasks. There’s a separate module that you can use to embed it in other programs to view/edit files.
-
-Similar to all the above-mentioned editors, you can find this listed in your AppCenter and Software Center as well.
-
-#### 4\. wxHexEditor
-
-![wxhexeditor][8]
-
-**Key Features:**
-
- * Easily handle big files
- * Has x86 disassembly support
- * **Sector Indication** on disk devices
- * Supports customizable hex panel formatting and colors
-
-
-
-This is something interesting. It is primarily a hex editor, but you can also use it as a low-level disk editor. For example, if you have a problem with your HDD, you can use this editor to edit the sectors in raw hex and fix it.
-
-You can find it listed on your App Center and Software Center. If not, [Sourceforge][9] is the way to go.
-
-#### 5\. Hexedit (Command Line)
-
-![hexedit][10]
-
-**Key Features** :
-
- * Works via terminal
- * It’s fast and simple
-
-
-
-If you want something that works in your terminal, you can go ahead and install Hexedit via the console. It’s my favorite command-line hex editor on Linux.
-
-When you launch it, you will have to specify the location of the file, and it’ll then open it for you.
-
-To install it, just type in:
-
-```
-sudo apt install hexedit
-```
-
-### Wrapping Up
-
-Hex editors can come in handy for experimenting and learning. If you are experienced, you should opt for one with more features and a GUI. Although, in the end, it all comes down to personal preference.
-
-What do you think about the usefulness of Hex editors? Which one do you use? Did we miss listing your favorite? Let us know in the comments!
-
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/hex-editors-linux
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://en.wikipedia.org/wiki/Hex_editor
-[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-hex-editors-800x450.jpeg?resize=800%2C450&ssl=1
-[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/bless-hex-editor.jpg?ssl=1
-[4]: https://github.com/bwrsandman/Bless
-[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/ghex-hex-editor.jpg?ssl=1
-[6]: https://github.com/GNOME/ghex
-[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/okteta-hex-editor-800x466.jpg?resize=800%2C466&ssl=1
-[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/wxhexeditor.jpg?ssl=1
-[9]: https://sourceforge.net/projects/wxhexeditor/
-[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/hexedit-console.jpg?resize=800%2C566&ssl=1
-[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-hex-editors.jpeg?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md b/sources/tech/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md
deleted file mode 100644
index 366e75846d..0000000000
--- a/sources/tech/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md
+++ /dev/null
@@ -1,159 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (7 Methods To Identify Disk Partition/FileSystem UUID On Linux)
-[#]: via: (https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-7 Methods To Identify Disk Partition/FileSystem UUID On Linux
-======
-
-As a Linux administrator, you should know how to check a partition UUID or filesystem UUID, because most Linux systems mount partitions using UUIDs. You can verify this in the `/etc/fstab` file.
-
-There are many utilities available to check a UUID. In this article, we will show you how to check a UUID in several ways, and you can choose the one that suits you.
-
-### What Is UUID?
-
-UUID stands for Universally Unique Identifier, which helps a Linux system identify a disk partition instead of relying on the block device file.
-
-libuuid has been part of the util-linux-ng package since version 2.15.1, and it’s installed by default on Linux systems.
-
-The UUIDs generated by this library can be reasonably expected to be unique within a system, and unique across all systems.
-
-It’s a 128-bit number used to identify information in computer systems. UUIDs were originally used in the Apollo Network Computing System (NCS) and were later standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE).
-
-UUIDs are represented as 32 hexadecimal (base 16) digits, displayed in five groups separated by hyphens, in the form 8-4-4-4-12 for a total of 36 characters (32 alphanumeric characters and four hyphens).
-
-For example: d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
-
-Here is a sample of my `/etc/fstab` file:
-
-```
-# cat /etc/fstab
-
-# /etc/fstab: static file system information.
-#
-# Use 'blkid' to print the universally unique identifier for a device; this may
-# be used with UUID= as a more robust way to name devices that works even if
-# disks are added and removed. See fstab(5).
-#
-#
-UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f / ext4 defaults,noatime 0 1
-UUID=a2092b92-af29-4760-8e68-7a201922573b swap swap defaults,noatime 0 2
-```
-
-We can check this using the following seven commands.
-
- * **`blkid Command:`** locate/print block device attributes.
- * **`lsblk Command:`** list information about all available or the specified block devices.
- * **`hwinfo Command:`** the hardware information tool, another great utility used to probe for the hardware present in the system.
- * **`udevadm Command:`** udev management tool.
- * **`tune2fs Command:`** adjust tunable filesystem parameters on ext2/ext3/ext4 filesystems.
- * **`dumpe2fs Command:`** dump ext2/ext3/ext4 filesystem information.
- * **`Using by-uuid Path:`** the `/dev/disk/by-uuid` directory contains UUIDs symlinked to the real block device files.
-
-
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using blkid Command?
-
-blkid is a command-line utility to locate/print block device attributes. It uses the libblkid library to get the disk partition UUID on a Linux system.
-
-```
-# blkid
-/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01"
-/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01"
-/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03"
-/dev/sdc5: PARTUUID="8cc8f9e5-05"
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using lsblk Command?
-
-lsblk lists information about all available or the specified block devices. The lsblk command reads the sysfs filesystem and udev db to gather information.
-
-If the udev db is not available, or lsblk is compiled without udev support, then it tries to read LABELs, UUIDs, and filesystem types from the block device. In this case, root permissions are necessary. The command prints all block devices (except RAM disks) in a tree-like format by default.
-
-```
-# lsblk -o name,mountpoint,size,uuid
-NAME MOUNTPOINT SIZE UUID
-sda 30G
-└─sda1 / 20G d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
-sdb 10G
-sdc 10G
-├─sdc1 1G d17e3c31-e2c9-4f11-809c-94a549bc43b7
-├─sdc3 1G ca307aa4-0866-49b1-8184-004025789e63
-├─sdc4 1K
-└─sdc5 1G
-sdd 10G
-sde 10G
-sr0 1024M
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using by-uuid Path?
-
-The `/dev/disk/by-uuid` directory contains UUIDs symlinked to the real block device files.
-
-```
-# ls -lh /dev/disk/by-uuid/
-total 0
-lrwxrwxrwx 1 root root 10 Jan 29 08:34 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3
-lrwxrwxrwx 1 root root 10 Jan 29 08:34 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1
-lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using hwinfo Command?
-
-**[hwinfo][1]**, the hardware information tool, is another great utility used to probe for the hardware present in the system and display detailed information about various hardware components in human-readable format.
-
-```
-# hwinfo --block | grep by-uuid | awk '{print $3,$7}'
-/dev/sdc1, /dev/disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
-/dev/sdc3, /dev/disk/by-uuid/ca307aa4-0866-49b1-8184-004025789e63
-/dev/sda1, /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using udevadm Command?
-
-udevadm expects a command and command specific options. It controls the runtime behavior of systemd-udevd, requests kernel events, manages the event queue, and provides simple debugging mechanisms.
-
-```
-# udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1
-S: disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using tune2fs Command?
-
-tune2fs allows the system administrator to adjust various tunable filesystem parameters on Linux ext2, ext3, or ext4 filesystems. The current values of these options can be displayed by using the -l option.
-
-```
-# tune2fs -l /dev/sdc1 | grep UUID
-Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using dumpe2fs Command?
-
-dumpe2fs prints the super block and block group information for the filesystem present on the device.
-
-```
-# dumpe2fs /dev/sdc1 | grep UUID
-dumpe2fs 1.43.5 (04-Aug-2017)
-Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
diff --git a/sources/tech/20190129 Get started with gPodder, an open source podcast client.md b/sources/tech/20190129 Get started with gPodder, an open source podcast client.md
deleted file mode 100644
index ca1556e16d..0000000000
--- a/sources/tech/20190129 Get started with gPodder, an open source podcast client.md
+++ /dev/null
@@ -1,64 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Get started with gPodder, an open source podcast client)
-[#]: via: (https://opensource.com/article/19/1/productivity-tool-gpodder)
-[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
-
-Get started with gPodder, an open source podcast client
-======
-Keep your podcasts synced across your devices with gPodder, the 17th in our series on open source tools that will make you more productive in 2019.
-
-
-
-There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
-
-Here's the 17th of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
-
-### gPodder
-
-I like podcasts. Heck, I like them so much I record three of them (you can find links to them in [my profile][1]). I learn a lot from podcasts and play them in the background when I'm working. But keeping them in sync between multiple desktops and mobile devices can be a bit of a challenge.
-
-[gPodder][2] is a simple, cross-platform podcast downloader, player, and sync tool. It supports RSS feeds, [FeedBurner][3], [YouTube][4], and [SoundCloud][5], and it also has an open source sync service that you can run if you want. gPodder doesn't do podcast playback; instead, it uses your audio or video player of choice.
-
-
-
-Installing gPodder is very straightforward. Installers are available for Windows and MacOS, and packages are available for major Linux distributions. If it isn't available in your distribution, you can run it directly from a Git checkout. With the "Add Podcasts via URL" menu option, you can enter a podcast's RSS feed URL or one of the "special" URLs for the other services. gPodder will fetch a list of episodes and present a dialog where you can select which episodes to download or mark old episodes on the list.
-
-
-
-One of its nicer features is that if a URL is already in your clipboard, gPodder will automatically place it in its URL field, which makes it really easy to add a new podcast to your list. If you already have an OPML file of podcast feeds, you can upload and import it. There is also a discovery option that allows you to search for podcasts on [gPodder.net][6], the free and open source podcast listing site by the people who write and maintain gPodder.
-
-
-
-A [mygpo][7] server synchronizes podcasts between devices. By default, gPodder uses [gPodder.net][8]'s servers, but you can change this in the configuration files if you want to run your own (be aware that you'll have to modify the configuration file directly). Syncing allows you to keep your lists consistent between desktops and mobile devices. This is very useful if you listen to podcasts on multiple devices (for example, I listen on my work computer, home computer, and mobile phone), as it means no matter where you are, you have the most recent lists of podcasts and episodes without having to set things up again and again.
-
-
-
-Clicking on a podcast episode will bring up the text post associated with it, and clicking "Play" will launch your device's default audio or video player. If you want to use something other than the default, you can change this in gPodder's configuration settings.
-
-gPodder makes it simple to find, download, and listen to podcasts, synchronize them across devices, and access a lot of other features in an easy-to-use interface.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/productivity-tool-gpodder
-
-作者:[Kevin Sonney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/ksonney (Kevin Sonney)
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/users/ksonney
-[2]: https://gpodder.github.io/
-[3]: https://feedburner.google.com/
-[4]: https://youtube.com
-[5]: https://soundcloud.com/
-[6]: http://gpodder.net
-[7]: https://github.com/gpodder/mygpo
-[8]: http://gPodder.net
diff --git a/sources/tech/20190129 You shouldn-t name your variables after their types for the same reason you wouldn-t name your pets -dog- or -cat.md b/sources/tech/20190129 You shouldn-t name your variables after their types for the same reason you wouldn-t name your pets -dog- or -cat.md
new file mode 100644
index 0000000000..75ad9e93c6
--- /dev/null
+++ b/sources/tech/20190129 You shouldn-t name your variables after their types for the same reason you wouldn-t name your pets -dog- or -cat.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets “dog” or “cat”)
+[#]: via: (https://dave.cheney.net/2019/01/29/you-shouldnt-name-your-variables-after-their-types-for-the-same-reason-you-wouldnt-name-your-pets-dog-or-cat)
+[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
+
+You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets “dog” or “cat”
+======
+
+The name of a variable should describe its contents, not the _type_ of the contents. Consider this example:
+
+```
+var usersMap map[string]*User
+```
+
+What are some good properties of this declaration? We can see that it’s a map, and it has something to do with the `*User` type, so that’s probably good. But `usersMap` _is_ a map and Go, being a statically typed language, won’t let us accidentally use a map where a different type is required, so the `Map` suffix as a safety precaution is redundant.
+
+Now, consider what happens if we declare other variables using this pattern:
+
+```
+var (
+ companiesMap map[string]*Company
+ productsMap map[string]*Products
+)
+```
+
+Now we have three map type variables in scope, `usersMap`, `companiesMap`, and `productsMap`, all mapping `string`s to different `struct` types. We know they are maps, and we also know that their declarations prevent us from using one in place of another—the compiler will throw an error if we try to use `companiesMap` where the code is expecting a `map[string]*User`. In this situation it’s clear that the `Map` suffix does not improve the clarity of the code; it’s just extra boilerplate to type.
+
+My suggestion is to avoid any suffix that resembles the _type_ of the variable. Said another way, if `users` isn’t descriptive enough, then `usersMap` won’t be either.
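+
+Dropping the suffixes, one possible rewrite of the earlier declarations is simply:
+
+```
+var (
+    users     map[string]*User
+    companies map[string]*Company
+    products  map[string]*Products
+)
+```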
+
+This advice also applies to function parameters. For example:
+
+```
+type Config struct {
+ //
+}
+
+func WriteConfig(w io.Writer, config *Config)
+```
+
+Naming the `*Config` parameter `config` is redundant. We know it’s a pointer to a `Config`; it says so right there in the declaration. Instead, consider if `conf` will do, or maybe just `c` if the lifetime of the variable is short enough.
+
+This advice is more than just a desire for brevity. If there is more than one `*Config` in scope at any one time, calling them `config1` and `config2` is less descriptive than calling them `original` and `updated`. The latter are less likely to be accidentally transposed—something the compiler won’t catch—while the former differ only in a one-character suffix.
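+
+As a small sketch of that point (the `merge` helper and its body are hypothetical, purely for illustration):
+
+```
+// merge returns updated when it is set, falling back to original.
+// The parameter names describe each value's role; config1 and config2
+// would differ only by a one-character suffix and are easy to transpose.
+func merge(original, updated *Config) *Config {
+    if updated != nil {
+        return updated
+    }
+    return original
+}
+```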
+
+Finally, don’t let package names steal good variable names. The name of an imported identifier includes its package name. For example, the `Context` type in the `context` package will be known as `context.Context` when imported into another package. This makes it impossible to use `context` as a variable or type, unless of course you rename the import, but that’s throwing good after bad. This is why the local declaration for `context.Context` types is traditionally `ctx`, e.g.:
+
+```
+func WriteLog(ctx context.Context, message string)
+```
+
+* * *
+
+A variable’s name should be independent of its type. You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets “dog” or “cat”. You shouldn’t include the name of your type in the name of your variable for the same reason.
+
+### Related posts:
+
+ 1. [On declaring variables][1]
+ 2. [Go, without package scoped variables][2]
+ 3. [A whirlwind tour of Go’s runtime environment variables][3]
+ 4. [Declaration scopes in Go][4]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://dave.cheney.net/2019/01/29/you-shouldnt-name-your-variables-after-their-types-for-the-same-reason-you-wouldnt-name-your-pets-dog-or-cat
+
+作者:[Dave Cheney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://dave.cheney.net/author/davecheney
+[b]: https://github.com/lujun9972
+[1]: https://dave.cheney.net/2014/05/24/on-declaring-variables (On declaring variables)
+[2]: https://dave.cheney.net/2017/06/11/go-without-package-scoped-variables (Go, without package scoped variables)
+[3]: https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables (A whirlwind tour of Go’s runtime environment variables)
+[4]: https://dave.cheney.net/2016/12/15/declaration-scopes-in-go (Declaration scopes in Go)
diff --git a/sources/tech/20190130 Get started with Budgie Desktop, a Linux environment.md b/sources/tech/20190130 Get started with Budgie Desktop, a Linux environment.md
deleted file mode 100644
index 9dceb60f1d..0000000000
--- a/sources/tech/20190130 Get started with Budgie Desktop, a Linux environment.md
+++ /dev/null
@@ -1,60 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Get started with Budgie Desktop, a Linux environment)
-[#]: via: (https://opensource.com/article/19/1/productivity-tool-budgie-desktop)
-[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
-
-Get started with Budgie Desktop, a Linux environment
-======
-Configure your desktop as you want with Budgie, the 18th in our series on open source tools that will make you more productive in 2019.
-
-
-
-There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
-
-Here's the 18th of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
-
-### Budgie Desktop
-
-There are many, many desktop environments for Linux. From the easy to use and graphically stunning [GNOME desktop][1] (default on most major Linux distributions) and [KDE][2], to the minimalist [Openbox][3], to the highly configurable tiling [i3][4], there are a lot of options. What I look for in a good desktop environment is speed, unobtrusiveness, and a clean user experience. It is hard to be productive when a desktop works against you, not with or for you.
-
-
-
-[Budgie Desktop][5] is the default desktop on the [Solus][6] Linux distribution and is available as an add-on package for most of the major Linux distributions. It is based on GNOME and uses many of the same tools and libraries you likely already have on your computer.
-
-The default desktop is exceptionally minimalistic, with just the panel and a blank desktop. Budgie includes an integrated sidebar (called Raven) that gives quick access to the calendar, audio controls, and settings menu. Raven also contains an integrated notification area with a unified display of system messages similar to MacOS's.
-
-
-
-Clicking on the gear icon in Raven brings up Budgie's control panel with its configuration settings. Since Budgie is still in development, it is a little bare-bones compared to GNOME or KDE, and I hope it gets more options over time. The Top Panel option, which allows the user to configure the ordering, positioning, and contents of the top panel, is nice.
-
-
-
-The Budgie Welcome application (presented at first login) contains options to install additional software, panel applets, snaps, and Flatpak packages. There are applets to handle networking, screenshots, additional clocks and timers, and much, much more.
-
-
-
-Budgie provides a desktop that is clean and stable. It responds quickly and has many options that allow you to customize it as you see fit.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/productivity-tool-budgie-desktop
-
-作者:[Kevin Sonney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/ksonney (Kevin Sonney)
-[b]: https://github.com/lujun9972
-[1]: https://www.gnome.org/
-[2]: https://www.kde.org/
-[3]: http://openbox.org/wiki/Main_Page
-[4]: https://i3wm.org/
-[5]: https://getsol.us/solus/experiences/
-[6]: https://getsol.us/home/
diff --git a/sources/tech/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md b/sources/tech/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md
deleted file mode 100644
index 989cd0d60f..0000000000
--- a/sources/tech/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md
+++ /dev/null
@@ -1,102 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro)
-[#]: via: (https://itsfoss.com/olive-video-editor)
-[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-
-Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro
-======
-
-[Olive][1] is a new open source video editor under development. This non-linear video editor aims to provide a free alternative to high-end professional video editing software. Too high an aim? I think so.
-
-If you have read our [list of best video editors for Linux][2], you might have noticed that most of the ‘professional-grade’ video editors such as [Lightworks][3] or DaVinciResolve are neither free nor open source.
-
-[Kdenlive][4] and Shotcut are there, but they often don’t meet the standards of professional video editing (at least, that’s what many Linux users have expressed).
-
-This gap between the hobbyist and professional video editors prompted the developer(s) of Olive to start this project.
-
-![Olive Video Editor][5]
-
-Olive Video Editor Interface
-
-There is a detailed [review of Olive on Libre Graphics World][6]. Actually, this is where I came to know about Olive first. You should read the article if you are interested in knowing more about it.
-
-### Installing Olive Video Editor in Linux
-
-Let me remind you. Olive is in the early stages of development. You’ll find plenty of bugs and missing/incomplete features. You should not treat it as your main video editor just yet.
-
-If you want to test Olive, there are several ways to install it on Linux.
-
-#### Install Olive in Ubuntu-based distributions via PPA
-
-You can install Olive via its official PPA in Ubuntu, Mint and other Ubuntu-based distributions.
-
-```
-sudo add-apt-repository ppa:olive-editor/olive-editor
-sudo apt-get update
-sudo apt-get install olive-editor
-```
-
-#### Install Olive via Snap
-
-If your Linux distribution supports Snap, you can use the command below to install it.
-
-```
-sudo snap install --edge olive-editor
-```
-
-#### Install Olive via Flatpak
-
-If your [Linux distribution supports Flatpak][7], you can install Olive video editor via Flatpak.
-
-#### Use Olive via AppImage
-
-Don’t want to install it? Download the [AppImage][8] file, set it as executable and run it.
-
-Both 32-bit and 64-bit AppImage files are available. You should download the appropriate file.
-
-Olive is also available for Windows and macOS. You can get it from their [download page][9].
-
-### Want to support the development of Olive video editor?
-
-If you like what Olive is trying to achieve and want to support it, here are a few ways you can do that.
-
-If you are testing Olive and find some bugs, please report it on their GitHub repository.
-
-If you are a programmer, go and check out the source code of Olive and see if you could help the project with your coding skills.
-
-Contributing to projects financially is another way you can help the development of open source software. You can support Olive monetarily by becoming a patron.
-
-If you don’t have either the money or coding skills to support Olive, you could still help it. Share this article or Olive’s website on social media or in Linux/software related forums and groups you frequent. A little word of mouth should help it indirectly.
-
-### What do you think of Olive?
-
-It’s too early to judge Olive. I hope that the development continues rapidly and we have a stable release of Olive by the end of the year (if I am not being overly optimistic).
-
-What do you think of Olive? Do you agree with the developer’s aim of targeting the pro-users? What features would you like Olive to have?
-
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/olive-video-editor
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[b]: https://github.com/lujun9972
-[1]: https://www.olivevideoeditor.org/
-[2]: https://itsfoss.com/best-video-editing-software-linux/
-[3]: https://www.lwks.com/
-[4]: https://kdenlive.org/en/
-[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/olive-video-editor-interface.jpg?resize=800%2C450&ssl=1
-[6]: http://libregraphicsworld.org/blog/entry/introducing-olive-new-non-linear-video-editor
-[7]: https://itsfoss.com/flatpak-guide/
-[8]: https://itsfoss.com/use-appimage-linux/
-[9]: https://www.olivevideoeditor.org/download.php
-[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/olive-video-editor-interface.jpg?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190131 VA Linux- The Linux Company That Once Ruled NASDAQ.md b/sources/tech/20190131 VA Linux- The Linux Company That Once Ruled NASDAQ.md
new file mode 100644
index 0000000000..78e0d0ecfd
--- /dev/null
+++ b/sources/tech/20190131 VA Linux- The Linux Company That Once Ruled NASDAQ.md
@@ -0,0 +1,147 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (VA Linux: The Linux Company That Once Ruled NASDAQ)
+[#]: via: (https://itsfoss.com/story-of-va-linux/)
+[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
+
+VA Linux: The Linux Company That Once Ruled NASDAQ
+======
+
+This is our first article in the Linux and open source history series. We will be covering more trivia, anecdotes and other nostalgic events from the past.
+
+In its time, _VA Linux_ was indeed a crusade to free the world from Microsoft’s domination.
+
+In a historic incident in December 1999, the shares of a private firm skyrocketed from just $30 to a whopping $239 within just a day of its [IPO][1]! It was a record-breaking development that day.
+
+The company was _VA Linux_, a firm with only 200 employees built on the idea of deploying Intel hardware with Linux and FOSS, and it had begun a fantastic journey [taking on the likes of Sun and Dell][2].
+
+It traded under the symbol LNUX and gained around 700 percent on its first day of trading. But hardly a year later, the [LNUX stocks were selling below $9 per share][3].
+
+How did a successful Linux-based company become a subsidiary of [Gamestop][4], a gaming company?
+
+Let us look back at the highs and lows of this record-breaking Linux corporation with a brief review of its history.
+
+### How did it all actually begin?
+
+In 1993, a graduate student at Stanford University wanted to own a powerful workstation but could not afford to buy the expensive [Sun][5] workstations, which sold at extremely high prices of $7,000 per system at that time.
+
+So, he decided to build one on his own ([DIY][6] [FTW][7]!). Using an Intel 486 chip running at just 33 megahertz, he installed Linux and finally had a machine that was twice as fast as Sun’s but at a much lower price tag: $2,000.
+
+That student was none other than _VA Research_ founder [Larry Augustin][8], whose idea was loved by many at that exciting time in the Stanford campus. People started buying machines with similar configurations from him and his friend and co-founder, James Vera. This is how _VA Research_ was formed.
+
+![VA Linux founder, Larry Augustin][9]
+
+> Once software goes into the GPL, you can’t take it back. People can stop contributing, but the code that exists, people can continue to develop on it.
+>
+> Without a doubt, a futuristic quote from VA Linux founder, Larry Augustin, 10 years ago | Read the whole interview [here][10]
+
+#### Some screenshots of their web domains from the early days
+
+![Linux Powered Machines on sale on varesearch.com | July 15, 1997][11]
+
+![varesearch.com reveals emerging growth | February 16, 1998][12]
+
+![On June 26, 2001, they transitioned from hardware to software | valinux.com as on June 22, 2001][13]
+
+### The spectacular rise and the devastating fall of VA Linux
+
+VA Research had a big year in 1999, perhaps its biggest, as it acquired many growing companies and competitors at that time, along with starting many innovative initiatives. The next year, 2000, it created a subsidiary in Japan named _VA Linux Systems Japan K.K._ The company was at its peak that year.
+
+After the company transitioned completely from hardware to software, stock prices started to fall drastically from 2002 onward. It all happened because of slower-than-expected sales growth from new customers in the dot-com sector. In later years, the company sold off a few brands, and top employees resigned in 2010.
+
+Gamestop finally [acquired][14] Geeknet Inc. (the new name of VA Linux) for $140 million on June 2, 2015.
+
+In case you’re curious for a detailed chronicle, I have separately created this [timeline][15], highlighting events year-wise.
+
+![Image Credit: Wikipedia][16]
+
+### What happened to VA Linux afterward?
+
+Geeknet owned by Gamestop is now an online retailer for the global geek community as [ThinkGeek][17].
+
+SourceForge and Slashdot were what still kept them linked with Linux and Open Source until _Dice Holdings_ acquired Slashdot, SourceForge, and Freecode.
+
+An [article][18] from 2016 sadly quotes in its final paragraph:
+
+> “Being acquired by a company that caters to gamers and does not have anything in particular to do with open source software may be a lackluster ending for what was once a spectacularly valuable Linux business.”
+
+Did we note Linux and Gamers? Does Linux really not have anything to do with Gaming? Are these two terms really so far apart? What about [Gaming on Linux][19]? What about [Open Source Games][20]?
+
+How could the stalwarts from _VA Linux_, with years and years of experience in the Linux arena, have contributed to the Linux gaming community? What could have happened had [Valve][21] (who are currently so [dedicated][22] to Linux gaming) acquired _VA Linux_ instead of Gamestop? Can we ponder?
+
+The seeds of ideas that were planted by _VA Research_ will continue to inspire the Linux and FOSS community because of its significant contributions in the world of Open Source. At _It’s FOSS,_ our heartfelt salute goes out to those noble ideas!
+
+Want to feel the nostalgia? Use the [timeline][15] dates with the [Way Back Machine][23] to check out previously owned _VA_ domains like _valinux.com_ or _varesearch.com_ in the past three decades! You can even check _linux.com_ that was once owned by _VA Linux Systems_.
+
+But wait, are we really done here? What happened to the subsidiary named _VA Linux Systems Japan K.K._? Well, it’s [a different story there][24] and still going strong with the original ideologies of _VA Linux_!
+
+![VA Linux booth circa 2000 | Image Credit: Storem][25]
+
+#### _VA Linux_ Subsidiary Still Operational in Japan!
+
+VA Linux is still operational through its [Japanese subsidiary][26]. It provides the following services:
+
+ * Failure Analysis and Support Services: [_VA Quest_][27]
+ * Entrusted Development Service
+ * Consulting Service
+
+
+
+_VA Quest_, in particular, has continued its services since 2005 as a failure-analysis solution for tracking down and dealing with kernel bugs that might be getting in its customers’ way. [Tetsuro Yogo][28] took over as the new President and CEO on April 3, 2017. Check out their timeline [here][29]! They are also [on GitHub][30]!
+
+You can also read about a recent development reported on August 2 last year on this [translated][31] version of a Japanese IT news page. It’s an update about _VA Linux_ providing a technical support service for the “[Kubernetes][32]” container management software in Japan.
+
+It’s good to know that their 18-year-old subsidiary is still doing well in Japan, and the name of _VA Linux_ continues to flourish there even today!
+
+What are your views? Do you want to share anything on _VA Linux_? Please let us know in the comments section below.
+
+I hope you liked this first article in the Linux history series. If you know such interesting facts from the past that you would like us to cover here, please let us know.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/story-of-va-linux/
+
+作者:[Avimanyu Bandyopadhyay][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/avimanyu/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Initial_public_offering
+[2]: https://www.forbes.com/1999/05/03/feat.html
+[3]: https://www.channelfutures.com/open-source/open-source-history-the-spectacular-rise-and-fall-of-va-linux
+[4]: https://www.gamestop.com/
+[5]: http://www.sun.com/
+[6]: https://en.wikipedia.org/wiki/Do_it_yourself
+[7]: https://www.urbandictionary.com/define.php?term=FTW
+[8]: https://www.linkedin.com/in/larryaugustin/
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-Founder-Larry-Augustin.jpg?ssl=1
+[10]: https://www.linuxinsider.com/story/SourceForges-Larry-Augustin-A-Better-Way-to-Build-Web-Apps-62155.html
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-July-15-1997.jpg?ssl=1
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-Feb-16-1998.jpg?ssl=1
+[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-com-Snapshot-June-22-2001.jpg?ssl=1
+[14]: http://geekgirlpenpals.com/geeknet-parent-company-to-thinkgeek-entered-agreement-with-gamestop/
+[15]: https://medium.com/@avimanyu786/a-timeline-of-va-linux-through-the-years-6813e2bd4b13
+[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/LNUX-stock-fall.png?ssl=1
+[17]: https://www.thinkgeek.com/
+[18]: https://www.channelfutures.com/open-source/open-source-history-spectacular-rise-and-fall-va-linux
+[19]: https://itsfoss.com/linux-gaming-distributions/
+[20]: https://en.wikipedia.org/wiki/Open-source_video_game
+[21]: https://www.valvesoftware.com/
+[22]: https://itsfoss.com/steam-play-proton/
+[23]: https://archive.org/web/web.php
+[24]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fwww.valinux.co.jp%2Fcorp%2Fstatement%2F&edit-text=
+[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/va-linux-team-booth.jpg?resize=800%2C600&ssl=1
+[26]: https://www.valinux.co.jp/english/
+[27]: https://www.linux.com/news/va-linux-announces-linux-failure-analysis-service
+[28]: https://www.linkedin.com/in/yogo45/
+[29]: https://www.valinux.co.jp/english/about/timeline/
+[30]: https://github.com/vaj
+[31]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fit.impressbm.co.jp%2Farticles%2F-%2F16499
+[32]: https://en.wikipedia.org/wiki/Kubernetes
diff --git a/sources/tech/20190131 Will quantum computing break security.md b/sources/tech/20190131 Will quantum computing break security.md
deleted file mode 100644
index af374408dc..0000000000
--- a/sources/tech/20190131 Will quantum computing break security.md
+++ /dev/null
@@ -1,106 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (HankChow)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Will quantum computing break security?)
-[#]: via: (https://opensource.com/article/19/1/will-quantum-computing-break-security)
-[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
-
-Will quantum computing break security?
-======
-
-Do you want J. Random Hacker to be able to pretend they're your bank?
-
-
-
-Over the past few years, a new type of computer has arrived on the block: the quantum computer. It's arguably the sixth type of computer:
-
- 1. **Humans:** Before there were artificial computers, people used, well, people. And people with this job were called "computers."
-
- 2. **Mechanical analogue:** These are devices such as the [Antikythera mechanism][1], astrolabes, or slide rules.
-
- 3. **Mechanical digital:** In this category, I'd count anything that allowed discrete mathematics but didn't use electronics for the actual calculation: the abacus, Babbage's Difference Engine, etc.
-
- 4. **Electronic analogue:** Many of these were invented for military uses such as bomb sights, gun aiming, etc.
-
- 5. **Electronic digital:** I'm going to go out on a limb here and characterise Colossus as the first electronic digital computer[1]: these are basically what we use today for anything from mobile phones to supercomputers.
-
- 6. **Quantum computers:** These are coming and are fundamentally different from all of the previous generations.
-
-
-
-
-### What is quantum computing?
-
-Quantum computing uses concepts from quantum mechanics to allow very different types of calculations from what we're used to in "classical computing." I'm not even going to try to explain, because I know I'd do a terrible job, so I suggest you try something like [Wikipedia's definition][2] as a starting point. What's important for our purposes is to understand that quantum computers use qubits to do calculations, and for quite a few types of mathematical algorithms—and therefore computing operations—they can solve problems much faster than classical computers.
-
-What's "much faster"? Much, much faster: orders of magnitude faster. A calculation that might take years or decades with a classical computer could, in certain circumstances, take seconds. Impressive, yes? And scary. Because one of the types of problems that quantum computers should be good at solving is decrypting encrypted messages, even without the keys.
-
-This means that someone with a sufficiently powerful quantum computer should be able to read all of your current and past messages, decrypt any stored data, and maybe fake digital signatures. Is this a big thing? Yes. Do you want J. Random Hacker to be able to pretend they're your bank?[2] Do you want that transaction on the blockchain where you were sold a 10-bedroom mansion in Mayfair to be "corrected" to be a bedsit in Weston-super-Mare?[3]
-
-### Some good news
-
-This is all scary stuff, but there's good news of various types.
-
-The first is that, in order to make any of this work at all, you need a quantum computer with a good number of qubits operating, and this is turning out to be hard.[4] The general consensus is that we've got a few years before anybody has a "big" enough quantum computer to do serious damage to classical encryption algorithms.
-
-The second is that, even with a sufficient number of qubits to attack our existing algorithms, you still need even more to allow for error correction.
-
-The third is that, although there are theoretical models to show how to attack some of our existing algorithms, actually making them work is significantly harder than you or I[5] might expect. In fact, some of the attacks may turn out to be infeasible, or may simply take more years to perfect than we fear.
-
-The fourth is that there are clever people out there who are designing quantum-computation-resistant algorithms (sometimes referred to as "post-quantum algorithms") that we can use, at least for new encryption, once they've been tested and become widely available.
-
-All in all, in fact, there's a strong body of expert opinion that says we shouldn't be overly worried about quantum computing breaking our encryption in the next five or even 10 years.
-
-### And some bad news
-
-It's not all rosy, however. Two issues stick out to me as areas of concern.
-
- 1. People are still designing and rolling out systems that don't consider the issue. If you're coming up with a system that is likely to be in use for 10 or more years or will be encrypting or signing data that must remain confidential or attributable over those sorts of periods, then you should be considering the possible impact of quantum computing on your system.
-
- 2. Some of the new, quantum-computing-resistant algorithms are proprietary. This means that when you and I want to start implementing systems that are designed to be quantum-computing resistant, we'll have to pay to do so. I'm a big proponent of open source, and particularly of [open source cryptography][3], and my big worry is that we just won't be able to open source these things, and worse, that when new protocol standards are created—either de facto or through standards bodies—they will choose proprietary algorithms that exclude the use of open source, whether on purpose, through ignorance, or because few good alternatives are available.
-
-
-
-
-### What to do?
-
-Luckily, there are things you can do to address both of the issues above. The first is to think and plan, when designing a system, about what the impact of quantum computing on it might be. Often—very often—you won't need to implement anything explicit now (and it could be hard to, given the current state of the art), but you should at least embrace [the concept of crypto-agility][4]: designing protocols and systems so you can swap out algorithms if required.[7]
-
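-To make that first point concrete, here is a minimal sketch of crypto-agility (my own illustration in Go, not any particular standard): calling code depends on an interface rather than on a concrete algorithm, so the scheme can be swapped later without touching the callers.
-
-```
-package main
-
-import (
-	"crypto/ed25519"
-	"crypto/rand"
-	"fmt"
-)
-
-// Signer is the seam: callers never name an algorithm directly,
-// so a post-quantum implementation can be dropped in later.
-type Signer interface {
-	Sign(message []byte) []byte
-	AlgorithmName() string
-}
-
-// ed25519Signer is today's classical choice.
-type ed25519Signer struct{ priv ed25519.PrivateKey }
-
-func (s ed25519Signer) Sign(message []byte) []byte {
-	return ed25519.Sign(s.priv, message)
-}
-
-func (s ed25519Signer) AlgorithmName() string { return "Ed25519" }
-
-func main() {
-	_, priv, err := ed25519.GenerateKey(rand.Reader)
-	if err != nil {
-		panic(err)
-	}
-	var signer Signer = ed25519Signer{priv: priv}
-	sig := signer.Sign([]byte("hello"))
-	fmt.Printf("%s signature: %d bytes\n", signer.AlgorithmName(), len(sig))
-}
-```
-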
-The second is a call to arms: Get involved in the open source movement and encourage everybody you know who has anything to do with cryptography to rally for open standards and for research into non-proprietary, quantum-computing-resistant algorithms. This is something that's very much on my to-do list, and an area where pressure and lobbying is just as important as the research itself.
-
-1\. I think it's fair to call it the first electronic, programmable computer. I know there were earlier non-programmable ones, and that some claim ENIAC, but I don't have the space or the energy to argue the case here.
-
-2\. No.
-
-3\. See 2. Don't get me wrong, by the way—I grew up near Weston-super-Mare, and it's got things going for it, but it's not Mayfair.
-
-4\. And if a quantum physicist says something's hard, then to my mind, it's hard.
-
-5\. And I'm assuming that neither of us is a quantum physicist or mathematician.[6]
-
-6\. I'm definitely not.
-
-7\. And not just for quantum-computing reasons: There's a good chance that some of our existing classical algorithms may just fall to other, non-quantum attacks such as new mathematical approaches.
-
-This article was originally published on [Alice, Eve, and Bob][5] and is reprinted with the author's permission.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/will-quantum-computing-break-security
-
-作者:[Mike Bursell][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/mikecamel
-[b]: https://github.com/lujun9972
-[1]: https://en.wikipedia.org/wiki/Antikythera_mechanism
-[2]: https://en.wikipedia.org/wiki/Quantum_computing
-[3]: https://opensource.com/article/17/10/many-eyes
-[4]: https://aliceevebob.com/2017/04/04/disbelieving-the-many-eyes-hypothesis/
-[5]: https://aliceevebob.com/2019/01/08/will-quantum-computing-break-security/
diff --git a/sources/tech/20190201 Top 5 Linux Distributions for New Users.md b/sources/tech/20190201 Top 5 Linux Distributions for New Users.md
deleted file mode 100644
index 6b6985bf0a..0000000000
--- a/sources/tech/20190201 Top 5 Linux Distributions for New Users.md
+++ /dev/null
@@ -1,121 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Top 5 Linux Distributions for New Users)
-[#]: via: (https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users)
-[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
-
-Top 5 Linux Distributions for New Users
-======
-
-
-
-Linux has come a long way from its original offering. But, no matter how often you hear how easy Linux is now, there are still skeptics. To back up this claim, the desktop must be simple enough for those unfamiliar with Linux to be able to make use of it. And, the truth is that plenty of desktop distributions make this a reality.
-
-### No Linux knowledge required
-
-It might be simple to misconstrue this as yet another “best user-friendly Linux distributions” list. That is not what we’re looking at here. What’s the difference? For my purposes, the defining line is whether or not Linux actually plays into the usage. In other words, could you set a user in front of a desktop operating system and have them be instantly at home with its usage? No Linux knowledge required.
-
-Believe it or not, some distributions do just that. I have five I’d like to present to you here. You’ve probably heard of all of them. They might not be your distribution of choice, but you can be certain that they slide Linux out of the spotlight and place the user front and center.
-
-Let’s take a look at the chosen few.
-
-### Elementary OS
-
-The very philosophy of Elementary OS is centered around how people actually use their desktops. The developers and designers have gone out of their way to create a desktop that is as simple as possible. In the process, they’ve de-Linux’d Linux. That is not to say they’ve removed Linux from the equation. No. Instead, what they’ve done is create an operating system that is about as neutral as you’ll find. Elementary OS is streamlined in such a way as to make sure everything is perfectly logical. From the single Dock to the clear-to-anyone Applications menu, this is a desktop that doesn’t say to the user, “You’re using Linux!” In fact, the layout itself is reminiscent of Mac, but with the addition of a simple app menu (Figure 1).
-
-![Elementary OS Juno][2]
-
-Figure 1: The Elementary OS Juno Application menu in action.
-
-[Used with permission][3]
-
-Another important aspect of Elementary OS that places it on this list is that it’s not nearly as flexible as some other desktop distributions. Sure, some users would balk at that, but having a desktop that doesn’t throw every bell and whistle at the user makes for a very familiar environment -- one that neither requires nor allows a lot of tinkering. That aspect of the OS goes a long way toward making the platform familiar to new users.
-
-And like any modern Linux desktop distribution, Elementary OS includes an app store, called AppCenter, where users can install all the applications they need, without ever having to touch the command line.
-
-### Deepin
-
-Deepin not only gets my nod for one of the most beautiful desktops on the market, it’s also just as easy to adopt as any desktop operating system available. With a very simplistic take on the desktop interface, there’s very little in the way of users with zero Linux experience getting up to speed on its usage. In fact, you’d be hard-pressed to find a user who couldn’t instantly start using the Deepin desktop. The only possible hitch in the works might be the sidebar control center (Figure 2).
-
-![][5]
-
-Figure 2: The Deepin sidebar control panel.
-
-[Used with permission][3]
-
-But even that sidebar control panel is as intuitive as any other configuration tool on the market. And anyone that has used a mobile device will be instantly at home with the layout. As for opening applications, Deepin takes a macOS Launchpad approach with the Launcher. This button is in the usual far right position on the desktop dock, so users will immediately gravitate to that, understanding that it is probably akin to the standard “Start” menu.
-
-In similar fashion as Elementary OS (and most every Linux distribution on the market), Deepin includes an app store (simply called “Store”), where plenty of apps can be installed with ease.
-
-### Ubuntu
-
-You knew it was coming. Ubuntu is most often ranked at the top of most user-friendly Linux lists. Why? Because it’s one of the chosen few where a knowledge of Linux simply isn’t necessary to get by on the desktop. Prior to the adoption of GNOME (and the ousting of Unity), that wouldn’t have been the case. Why? Because Unity often needed a bit of tweaking to get it to the point where a tiny bit of Linux knowledge wasn’t necessary (Figure 3). Now that Ubuntu has adopted GNOME, and tweaked it to the point where an understanding of GNOME isn’t even necessary, this desktop makes Linux take a back seat to simplicity and usability.
-
-![Ubuntu 18.04][7]
-
-Figure 3: The Ubuntu 18.04 desktop is instantly familiar.
-
-[Used with permission][3]
-
-Unlike Elementary OS, Ubuntu doesn’t hold the user back. So anyone who wants more from their desktop can have it. However, the out-of-the-box experience is enough for just about any user type. Anyone looking for a desktop that makes the user unaware of just how much power they have at their fingertips could certainly do worse than Ubuntu.
-
-### Linux Mint
-
-I will preface this by saying I’ve never been the biggest fan of Linux Mint. It’s not that I don’t respect what the developers are doing; it’s more a matter of aesthetics. I prefer modern-looking desktop environments. But that old-school desktop metaphor (found in the default Cinnamon desktop) is perfectly familiar to nearly anyone who uses it. With a taskbar, start button, system tray, and desktop icons (Figure 4), Linux Mint offers an interface that requires zero learning curve. In fact, some users might be initially fooled into thinking they are working with a Windows 7 clone. Even the updates warning icon will look instantly familiar to users.
-
-![Linux Mint ][9]
-
-Figure 4: The Linux Mint Cinnamon desktop is very Windows 7-ish.
-
-[Used with permission][3]
-
-Because Linux Mint benefits from being based on Ubuntu, it enjoys not only immediate familiarity but also high usability. Even without the slightest understanding of the underlying platform, users will feel instantly at home on Linux Mint.
-
-### Ubuntu Budgie
-
-Our list concludes with a distribution that also does a fantastic job of making the user forget they are using Linux, and makes working with the usual tools a simple, beautiful thing. Melding the Budgie Desktop with Ubuntu makes for an impressively easy-to-use distribution. And although the layout of the desktop (Figure 5) might not be the standard fare, there is no doubt the acclimation takes no time. In fact, outside of the Dock defaulting to the left side of the desktop, Ubuntu Budgie has a decidedly Elementary OS look to it.
-
-![Budgie][11]
-
-Figure 5: The Budgie desktop is as beautiful as it is simple.
-
-[Used with permission][3]
-
-The System Tray/Notification area in Ubuntu Budgie offers a few more features than the usual fare: Features such as quick access to Caffeine (a tool to keep your desktop awake), a Quick Notes tool (for taking simple notes), Night Lite switch, a Places drop-down menu (for quick access to folders), and of course the Raven applet/notification sidebar (which is similar to, but not quite as elegant as, the Control Center sidebar in Deepin). Budgie also includes an application menu (top left corner), which gives users access to all of their installed applications. Open an app and the icon will appear in the Dock. Right-click that app icon and select Keep in Dock for even quicker access.
-
-Everything about Ubuntu Budgie is intuitive, so there’s practically zero learning curve involved. It doesn’t hurt that this distribution is as elegant as it is easy to use.
-
-### Give One A Chance
-
-And there you have it, five Linux distributions that, each in their own way, offer a desktop experience that any user would be instantly familiar with. Although none of these might be your choice for top distribution, it’s hard to argue against their value when it comes to users who have no familiarity with Linux.
-
-Learn more about Linux through the free ["Introduction to Linux"][12] course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users
-
-作者:[Jack Wallen][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/users/jlwallen
-[b]: https://github.com/lujun9972
-[1]: https://www.linux.com/files/images/elementaryosjpg-2
-[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaryos_0.jpg?itok=KxgNUvMW (Elementary OS Juno)
-[3]: https://www.linux.com/licenses/category/used-permission
-[4]: https://www.linux.com/files/images/deepinjpg
-[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/deepin.jpg?itok=VV381a9f
-[6]: https://www.linux.com/files/images/ubuntujpg-1
-[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_1.jpg?itok=bax-_Tsg (Ubuntu 18.04)
-[8]: https://www.linux.com/files/images/linuxmintjpg
-[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linuxmint.jpg?itok=8sPon0Cq (Linux Mint )
-[10]: https://www.linux.com/files/images/budgiejpg-0
-[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/budgie_0.jpg?itok=zcf-AHmj (Budgie)
-[12]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20190202 CrossCode is an Awesome 16-bit Sci-Fi RPG Game.md b/sources/tech/20190202 CrossCode is an Awesome 16-bit Sci-Fi RPG Game.md
new file mode 100644
index 0000000000..15349fbf32
--- /dev/null
+++ b/sources/tech/20190202 CrossCode is an Awesome 16-bit Sci-Fi RPG Game.md
@@ -0,0 +1,98 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (CrossCode is an Awesome 16-bit Sci-Fi RPG Game)
+[#]: via: (https://itsfoss.com/crosscode-game/)
+[#]: author: (Phillip Prado https://itsfoss.com/author/phillip/)
+
+CrossCode is an Awesome 16-bit Sci-Fi RPG Game
+======
+
+What starts off as an obvious sci-fi 16-bit 2D action RPG quickly turns into a JRPG inspired pseudo-MMO open-world puzzle platformer. Though at first glance this sounds like a jumbled mess, [CrossCode][1] manages to bundle all of its influences into a seamless gaming experience that feels nothing shy of excellent.
+
+Note: CrossCode is not open source software. We have covered it because it is Linux specific.
+
+![][2]
+
+### Story
+
+You play as Lea, a girl who has forgotten her identity, where she comes from, and how to speak. As you walk through the early parts of the story, you come to find that you are a character in a digital world — a video game. But not just any video game — an MMO. And you, Lea, must venture into the digital world known as CrossWorlds in order to unravel the secrets of your past.
+
+As you progress through the game, you unveil more and more about yourself, learning how you got to this point in the first place. This doesn’t sound too crazy of a story, but the gameplay implementation and appropriately paced storyline make for quite a captivating experience.
+
+The story unfolds at a satisfying speed and the character’s development is genuinely gratifying — both fictionally and mechanically. The only critique I had was that it felt like the introductory segment took a little too long — dragging the tutorial into the gameplay for quite some time, and keeping the player from getting into the real meat of the game.
+
+All in all, CrossCode’s story did not leave me wanting, not even in the slightest. It’s deep, fun, heartwarming, and intelligent, all while never sacrificing great character development. Without spoiling anything, I will say that if you are someone who enjoys a good story, you will need to give CrossCode a look.
+
+![][3]
+
+### Gameplay
+
+Yes, the story is great and all, but if there is one place that CrossCode truly shines, it has to be its gameplay. The game’s mechanics are fast-paced, challenging, intuitive, and downright fun!
+
+You start off with a dodge, block, melee, and ranged attack, each slowly developing over time as the character tree is unlocked. This all-too-familiar mix of combat elements balances skill-based and hack-n-slash mechanics in a way that keeps them from conflicting with one another.
+
+The game utilizes this mix of skills to create some amazing puzzle solving and combat that helps CrossCode’s gameplay truly stand out. Whether you are making your way through one of the four main dungeons, or you are taking a boss head on, you can’t help but periodically stop and think “wow, this game is great!”
+
+Though this has to be the game’s strongest feature, it can also be the game’s biggest downfall. Part of the reason that the story and character progression is so satisfying is because the combat and puzzle mechanics can be incredibly challenging, and that’s putting it lightly.
+
+There are times when CrossCode’s gameplay feels downright impossible. Bosses demand an expert level of focus, and dungeons require all the patience you can muster just to finish them.
+
+![][4]
+
+The game requires a type of dexterity I have not quite had to master yet. I mean, sure there are more challenging puzzle games out there, yes there are more difficult platformers, and of course there are more grueling RPGs, but adding all of these elements into one game while spurring the player along with an alluring story requires a level of mechanical balance that I haven’t found in many other games.
+
+And though there were times I felt the gameplay was flat out punishing, I was constantly reminded that this is simply not the case. Death doesn’t cause serious character regression, you can take a break from dungeons when you feel overwhelmed, and there is a plethora of checkpoints throughout the game’s most difficult parts to help the player along.
+
+Where other games fall short by giving the player nothing to lose, this reality redeems CrossCode amid its rigorous gameplay. CrossCode may be one of the only games I know that takes two common flaws in games and holds the tension between them so well that it becomes one of the game’s best strengths.
+
+![][5]
+
+### Design
+
+One of the things that surprised me most about CrossCode was how well its world and sound design come together. Right off the bat, from the moment you boot the game up, it is clear the developers meant business when designing CrossCode.
+
+Being set in a fictional MMO world, the game’s character ensemble is vibrant and distinctive, each character having their own tone and personality. The game’s sound and motion graphics are tactile and responsive, giving the player a healthy amount of feedback during gameplay. And the soundtrack behind the game is simply beautiful, ebbing and flowing between intense moments of combat and blissful moments of exploration.
+
+If I had to fault CrossCode in this category, it would have to be in the size of the map. Yes, the dungeons are long, and yes, the CrossWorlds map looks gigantic, but I still wanted more to explore outside of the crippling dungeons. The game is beautiful and fluid, but akin to RPGs of yore — think Zelda games pre-Breath of the Wild — I wish there were just a little more for me to freely explore.
+
+It is obvious that the developers really cared about this aspect of the game, and you can tell they spent an incredible amount of time developing its design. CrossCode set itself up for success here in its plot and content, and the developers capitalize on the opportunity, knocking another category out of the park.
+
+![][6]
+
+### Conclusion
+
+In the end, it is obvious how I feel about this game. And just in case you haven’t caught on yet…I love it. It holds a near perfect balance between being difficult and rewarding, simple and complex, linear and open, making CrossCode one of [the best Linux games][7] out there.
+
+Developed by [Radical Fish Games][8], CrossCode was officially released for Linux on September 21, 2018, seven years after development began. You can pick up the game over on [Steam][9], [GOG][10], or [Humble Bundle][11].
+
+If you play games regularly, you may want to [subscribe to Humble Monthly][12] ([affiliate][13] link). For $12 per month, you’ll get games worth over $100 (not all for Linux). Over 450,000 gamers worldwide use Humble Monthly.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/crosscode-game/
+
+作者:[Phillip Prado][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/phillip/
+[b]: https://github.com/lujun9972
+[1]: http://www.cross-code.com/en/home
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Level-up.png?fit=800%2C451&ssl=1
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Equpiment.png?fit=800%2C451&ssl=1
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-character-development.png?fit=800%2C451&ssl=1
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Environment.png?fit=800%2C451&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-dungeon.png?fit=800%2C451&ssl=1
+[7]: https://itsfoss.com/free-linux-games/
+[8]: http://www.radicalfishgames.com/
+[9]: https://store.steampowered.com/app/368340/CrossCode/
+[10]: https://www.gog.com/game/crosscode
+[11]: https://www.humblebundle.com/store/crosscode
+[12]: https://www.humblebundle.com/monthly?partner=itsfoss
+[13]: https://itsfoss.com/affiliate-policy/
diff --git a/sources/tech/20190204 7 Best VPN Services For 2019.md b/sources/tech/20190204 7 Best VPN Services For 2019.md
deleted file mode 100644
index e72d7de3df..0000000000
--- a/sources/tech/20190204 7 Best VPN Services For 2019.md
+++ /dev/null
@@ -1,77 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (7 Best VPN Services For 2019)
-[#]: via: (https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/)
-[#]: author: (Editor https://www.ostechnix.com/author/editor/)
-
-7 Best VPN Services For 2019
-======
-
-At least 67 percent of global businesses have faced a data breach in the past three years. These breaches have been reported to expose hundreds of millions of customers. Studies show that an estimated 93 percent of these breaches would have been avoided had data security fundamentals been considered beforehand.
-
-Understand that poor data security can be extremely costly, especially to a business, and could quickly lead to widespread disruption and possible harm to your brand reputation. Although some businesses can pick up the pieces the hard way, there are still those that fail to recover. Today, however, you are fortunate to have access to data and network security software.
-
-
-
-As you start 2019, ward off cyber-attacks by investing in a **V**irtual **P**rivate **N**etwork, commonly known as a **VPN**. When it comes to online privacy and security, there are many uncertainties. There are hundreds of different VPN providers, and picking the right one means striking just the right balance between pricing, services, and ease of use.
-
-If you are looking for a solid, 100 percent tested and secure VPN, you might want to do your due diligence and identify the best match. Here are the 7 best tried-and-tested VPN services for 2019.
-
-### 1. VPN Unlimited
-
-With VPN Unlimited, you have total security. This VPN allows you to use any Wi-Fi network without worrying that your personal data can be leaked. With AES-256, your data is encrypted and protected against prying third parties and hackers. This VPN ensures you stay anonymous and untracked on all websites no matter the location. It offers a 7-day trial and a variety of protocol options: OpenVPN, IKEv2, and KeepSolid Wise. Demanding users are entitled to special extras such as a personal server, lifetime VPN subscription, and personal IP options.
-
-### 2. VPN Lite
-
-VPN Lite is an easy-to-use and **free VPN service** that allows you to browse the internet at no charge. You remain anonymous and your privacy is protected. It obscures your IP and encrypts your data, meaning third parties are not able to track your activities on online platforms. You also get to access all online content, including sites blocked in your state. You can also use public Wi-Fi without the worry of having sensitive information tracked and hacked by spyware and hackers.
-
-### 3. HotSpot Shield
-
-Launched in 2005, this is a popular VPN embraced by the majority of users. The VPN protocol here is integrated by at least 70 percent of the largest security companies globally. It is also known to have thousands of servers across the globe. It comes with two free options. One is completely free but supported by online advertisements, and the second one is a 7-day trial of the flagship product. It contains military-grade data encryption and protects against malware. HotSpot Shield guarantees secure browsing and offers lightning-fast speeds.
-
-### 4. TunnelBear
-
-This is the best place to start if you are new to VPNs. It comes to you with a user-friendly interface complete with animated bears. With the help of TunnelBear, users are able to connect to servers in at least 22 countries at great speeds. It uses **AES 256-bit encryption** and guarantees no data logging, meaning your data stays protected. You also get unlimited data for up to five devices.
-
-### 5. ProtonVPN
-
-This VPN offers you a strong premium service. You may suffer from reduced connection speeds, but you also get to enjoy its unlimited data. It features an intuitive, easy-to-use interface and comes with multi-platform compatibility. Proton’s servers are said to be specifically optimized for torrenting and thus cannot give access to Netflix. You get strong security features such as secure protocols and encryption, meaning your browsing activities remain secure.
-
-### 6. ExpressVPN
-
-This is known as the best offshore VPN for unblocking and privacy. It has gained recognition for being the top VPN service globally, resulting from solid customer support and fast speeds. It offers browser extensions and custom firmware for routers. ExpressVPN also has an admirable scope of quality apps and plenty of servers, but it can only support up to three devices.
-
-It’s not entirely free, and happens to be one of the most expensive VPNs on the market today because it is fully packed with the most advanced features. With it comes a 30-day money-back guarantee, meaning you can freely test this VPN for a month. The good thing is, it is completely risk-free. If you need a VPN for a short duration to bypass online censorship, for instance, this could be your go-to solution. You don’t want to give trials to a spammy, slow, free program.
-
-It is also one of the best ways to enjoy online streaming as well as outdoor security. Should you need to continue using it, you only have to renew or cancel your free trial as needed. ExpressVPN has over 2,000 servers across 90 countries, unblocks Netflix, gives lightning-fast connections, and gives users total privacy.
-
-### 7. PureVPN
-
-While this VPN may not be completely free, it falls under the most budget-friendly services on this list. Users can sign up for a free seven-day trial and later choose one of its paid plans. With this VPN, you get to access 750-plus servers in at least 140 countries. It also installs easily on almost all devices. All its paid features can still be accessed within the free trial window. That includes unlimited data transfers, IP leakage protection, and ISP invisibility. The supported operating systems are iOS, Android, Windows, Linux, and macOS.
-
-### Summary
-
-With the large variety of freemium VPN services available today, why not take the opportunity to protect yourself and your customers? Understand that there are some great VPN services out there. Even the most secure free service, however, cannot be touted as risk-free. You might want to upgrade to a premium one for increased protection. Premium VPNs allow you to test freely, offering a risk-free money-back guarantee. Whether you plan to sign up for a paid VPN or commit to a free one, it is highly advisable to have a VPN.
-
-**About the author:**
-
-**Renetta K. Molina** is a tech enthusiast and fitness enthusiast. She writes about technology, apps, WordPress and a variety of other topics. In her free time, she likes to play golf and read books. She loves to learn and try new things.
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/
-
-作者:[Editor][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/editor/
-[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190205 5 Linux GUI Cloud Backup Tools.md b/sources/tech/20190205 5 Linux GUI Cloud Backup Tools.md
new file mode 100644
index 0000000000..45e0bf1342
--- /dev/null
+++ b/sources/tech/20190205 5 Linux GUI Cloud Backup Tools.md
@@ -0,0 +1,251 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 Linux GUI Cloud Backup Tools)
+[#]: via: (https://www.linux.com/blog/learn/2019/2/5-linux-gui-cloud-backup-tools)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+5 Linux GUI Cloud Backup Tools
+======
+
+
+We have reached a point in time where almost every computer user depends upon the cloud … even if only as a storage solution. What makes the cloud really important to users is when it’s employed as a backup. Why is that such a game changer? By backing up to the cloud, you have access to those files from any computer you have associated with your cloud account. And because Linux powers the cloud, many services offer Linux tools.
+
+Let’s take a look at five such tools. I will focus on GUI tools, because they offer a much lower barrier to entry to many of the CLI tools. I’ll also be focusing on various, consumer-grade cloud services (e.g., [Google Drive][1], [Dropbox][2], [Wasabi][3], and [pCloud][4]). And, I will be demonstrating on the Elementary OS platform, but all of the tools listed will function on most Linux desktop distributions.
+
+Note: Of the following backup solutions, only Duplicati is licensed as open source. With that said, let’s see what’s available.
+
+### Insync
+
+I must confess, [Insync][5] has been my cloud backup of choice for a very long time. Since Google refuses to release a Linux desktop client for Google Drive (and I depend upon Google Drive daily), I had to turn to a third-party solution. Said solution is Insync. This particular take on syncing the desktop to Drive has not only been seamless, but faultless since I began using the tool.
+
+The cost of Insync is a one-time $29.99 fee (per Google account). Trust me when I say this tool is worth the price of entry. With Insync you not only get an easy-to-use GUI for managing your Google Drive backup and sync, you get a tool (Figure 1) that gives you complete control over what is backed up and how it is backed up. Not only that, but you can also install Nautilus integration (which also allows you to easily add folders outside of the configured Drive sync destination).
+
+![Insync app][7]
+
+Figure 1: The Insync app window on Elementary OS.
+
+[Used with permission][8]
+
+You can download Insync for Ubuntu (or its derivatives), Linux Mint, Debian, and Fedora from the [Insync download page][9]. Once you’ve installed Insync (and associated it with your account), you can then install Nautilus integration with these steps (demonstrating on Elementary OS):
+
+ 1. Open a terminal window and issue the command `sudo nano /etc/apt/sources.list.d/insync.list`.
+
+ 2. Paste the Insync apt repository line into the new file; it takes the form `deb <repository URL> precise non-free contrib` (the URL itself is missing from this article, so take it from Insync’s own instructions).
+
+ 3. Save and close the file.
+
+ 4. Update apt with the command `sudo apt-get update`.
+
+ 5. Install the necessary package with the command `sudo apt-get install insync-nautilus` (these steps are consolidated into a short script at the end of this section).
+
+
+
+
+Allow the installation to complete. Once finished, restart Nautilus with the command `nautilus -q` (or log out and back into the desktop). You should now see an Insync entry in the Nautilus right-click context menu (Figure 2).
+
+
+
+Figure 2: Insync/Nautilus integration in action.
+
+[Used with permission][8]
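+
+The steps above can be consolidated into a short script. This is a sketch only: the repository URL is missing from the article text, so substitute the address from Insync’s own instructions where REPO_URL appears.
+
+```
+# Add the Insync apt repository (REPO_URL is a placeholder).
+echo "deb REPO_URL precise non-free contrib" | \
+  sudo tee /etc/apt/sources.list.d/insync.list
+sudo apt-get update
+sudo apt-get install insync-nautilus
+# Restart Nautilus to pick up the integration.
+nautilus -q
+```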
+
+### Dropbox
+
+Although [Dropbox][2] drew the ire of many in the Linux community (by dropping support for all filesystems but unencrypted ext4), it still supports a great deal of Linux desktop deployments. In other words, if your distribution still uses the ext4 file system (and you do not opt to encrypt your full drive), you’re good to go.
+
+The good news is the Dropbox Linux desktop client is quite good. The tool offers a system tray icon that allows you to easily interact with your cloud syncing. Dropbox also includes CLI tools and a Nautilus integration (by way of an additional addon found [here][10]).
+
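+Here is a hedged sketch of that CLI, assuming the dropbox.py helper script that Dropbox distributes for Linux is installed as the dropbox command:
+
+```
+dropbox status                             # report the current sync state
+dropbox start                              # start the sync daemon
+dropbox exclude add ~/Dropbox/big-folder   # leave a folder out of the sync
+```
+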
+The Linux Dropbox desktop sync tool works exactly as you’d expect. From the Dropbox system tray drop-down (Figure 3) you can open the Dropbox folder, launch the Dropbox website, view recently changed files, get more space, pause syncing, open the preferences window, find help, and quit Dropbox.
+
+![Dropbox][12]
+
+Figure 3: The Dropbox system tray drop-down on Elementary OS.
+
+[Used with permission][8]
+
+The Dropbox/Nautilus integration is an important component, as it makes quickly adding to your cloud backup seamless and fast. From the Nautilus file manager, locate and right-click the folder to be added, and select Dropbox > Move to Dropbox (Figure 4).
+
+The only caveat to the Dropbox/Nautilus integration is that the only option is to move a folder to Dropbox. For some users, this might not be acceptable. The developers of this package would be wise to have the action create a link instead of actually moving the folder.
+
+Outside of that one issue, the Dropbox cloud sync/backup solution for Linux is a great route to go.
+
+### pCloud
+
+pCloud might well be one of the finest cloud backup solutions you’ve never heard of. This take on cloud storage/backup includes features like:
+
+ * Encryption (subscription service required for this feature);
+
+ * Mobile apps for Android and iOS;
+
+ * Linux, Mac, and Windows desktop clients;
+
+ * Easy file/folder sharing;
+
+ * Built-in audio/video players;
+
+ * No file size limitation;
+
+ * Sync any folder from the desktop;
+
+ * Panel integration for most desktops; and
+
+ * Automatic file manager integration.
+
+
+
+
+pCloud offers both Linux desktop and CLI tools that function quite well. pCloud offers a free plan (with 10GB of storage), a Premium Plan (with 500GB of storage for a one-time fee of $175.00), and a Premium Plus Plan (with 2TB of storage for a one-time fee of $350.00). Both non-free plans can also be paid on a yearly basis (instead of the one-time fee).
+
+The pCloud desktop client is quite user-friendly. Once installed, you have access to your account information (Figure 5), the ability to create sync pairs, create shares, enable crypto (which requires an added subscription), and general settings.
+
+![pCloud][14]
+
+Figure 5: The pCloud desktop client is incredibly easy to use.
+
+[Used with permission][8]
+
+The one caveat to pCloud is there’s no file manager integration for Linux. That’s overcome by the Sync folder in the pCloud client.
+
+### CloudBerry
+
+The primary focus of [CloudBerry][15] is Managed Service Providers. The business side of CloudBerry does have an associated cost (one that is probably well out of the price range of the average user looking for a simple cloud backup solution). However, for home usage, CloudBerry is free.
+
+What makes CloudBerry different than the other tools is that it’s not a backup/storage solution in and of itself. Instead, CloudBerry serves as a link between your desktop and the likes of:
+
+ * AWS
+
+ * Microsoft Azure
+
+ * Google Cloud
+
+ * BackBlaze
+
+ * OpenStack
+
+ * Wasabi
+
+ * Local storage
+
+ * External drives
+
+ * Network Attached Storage
+
+ * Network Shares
+
+ * And more
+
+
+
+
+In other words, you use CloudBerry as the interface between the files/folders you want to share and the destination to which you want to send them. This also means you must have an account with one of the many supported solutions.
+
+Once you’ve installed CloudBerry, you create a new Backup plan for the target storage solution. For that configuration, you’ll need such information as:
+
+ * Access Key
+
+ * Secret Key
+
+ * Bucket
+
+
+
+
+What you’ll need for the configuration will depend on the account you’re connecting to (Figure 6).
+
+![CloudBerry][17]
+
+Figure 6: Setting up a CloudBerry backup for Wasabi.
+
+[Used with permission][8]
+
+The one caveat to CloudBerry is that it does not integrate with any file manager, nor does it include a system tray icon for interaction with the service.
+
+### Duplicati
+
+[Duplicati][18] is another option that allows you to sync your local directories with either locally attached drives, network attached storage, or a number of cloud services. The options supported include:
+
+ * Local folders
+
+ * Attached drives
+
+ * FTP/SFTP
+
+ * OpenStack
+
+ * WebDAV
+
+ * Amazon Cloud Drive
+
+ * Amazon S3
+
+ * Azure Blob
+
+ * Box.com
+
+ * Dropbox
+
+ * Google Cloud Storage
+
+ * Google Drive
+
+ * Microsoft OneDrive
+
+ * And many more
+
+
+
+
+Once you install Duplicati (download the installer for Debian, Ubuntu, Fedora, or RedHat from the [Duplicati downloads page][19]), click on the entry in your desktop menu, which will open a web page to the tool (Figure 7), where you can configure the app settings, create a new backup, restore from a backup, and more.
+
+
+
+To create a backup, click Add backup and walk through the easy-to-use wizard (Figure 8). The backup service you choose will dictate what you need for a successful configuration.
+
+![Duplicati backup][21]
+
+Figure 8: Creating a new Duplicati backup for Google Drive.
+
+[Used with permission][8]
+
+For example, in order to create a backup to Google Drive, you’ll need an AuthID. For that, click the AuthID link in the Destination section of the setup, where you’ll be directed to select the Google Account to associate with the backup. Once you’ve allowed Duplicati access to the account, the AuthID will fill in and you’re ready to continue. Click Test connection and you’ll be asked to okay the creation of a new folder (if necessary). Click Next to complete the setup of the backup.
+
+### More Where That Came From
+
+These five cloud backup tools aren’t the end of this particular rainbow. There are plenty more options where these came from (including CLI-only tools). But any of these backup clients will do a great job of serving your Linux desktop-to-cloud backup needs.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/2/5-linux-gui-cloud-backup-tools
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://www.google.com/drive/
+[2]: https://www.dropbox.com/
+[3]: https://wasabi.com/
+[4]: https://www.pcloud.com/
+[5]: https://www.insynchq.com/
+[6]: /files/images/insync1jpg
+[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/insync_1.jpg?itok=_SDP77uE (Insync app)
+[8]: /licenses/category/used-permission
+[9]: https://www.insynchq.com/downloads
+[10]: https://www.dropbox.com/install-linux
+[11]: /files/images/dropbox1jpg
+[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dropbox_1.jpg?itok=BYbg-sKB (Dropbox)
+[13]: /files/images/pcloud1jpg
+[14]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/pcloud_1.jpg?itok=cAUz8pya (pCloud)
+[15]: https://www.cloudberrylab.com
+[16]: /files/images/cloudberry1jpg
+[17]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloudberry_1.jpg?itok=s0aP5xuN (CloudBerry)
+[18]: https://www.duplicati.com/
+[19]: https://www.duplicati.com/download
+[20]: /files/images/duplicati2jpg
+[21]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/duplicati_2.jpg?itok=Xkn8s3jg (Duplicati backup)
diff --git a/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md b/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md
index 54e4ce314c..e8722c63cc 100644
--- a/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md
+++ b/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md
@@ -4,13 +4,13 @@
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installing Kali Linux on VirtualBox: Quickest & Safest Way)
-[#]: via: (https://itsfoss.com/install-kali-linux-virtualbox)
+[#]: via: (https://itsfoss.com/install-kali-linux-virtualbox/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Installing Kali Linux on VirtualBox: Quickest & Safest Way
======
-**This tutorial shows you how to install Kali Linux on Virtual Box in Windows and Linux in the quickest way possible.**
+_**This tutorial shows you how to install Kali Linux on Virtual Box in Windows and Linux in the quickest way possible.**_
[Kali Linux][1] is one of the [best Linux distributions for hacking][2] and security enthusiasts.
@@ -22,7 +22,7 @@ With Virtual Box, you can use Kali Linux as a regular application in your Window
Using Kali Linux in a virtual machine is also safe. Whatever you do inside Kali Linux will NOT impact your ‘host system’ (i.e. your original Windows or Linux operating system). Your actual operating system will be untouched and your data in the host system will be safe.
-![Kali Linux on Virtual Box][3]
+![][3]
### How to install Kali Linux on VirtualBox
@@ -30,7 +30,7 @@ I’ll be using [VirtualBox][4] here. It is a wonderful open source virtualizati
In this tutorial, we will talk about Kali Linux in particular but you can install almost any other OS whose ISO file exists or a pre-built virtual machine save file is available.
-**Note:** The same steps apply for Windows/Linux running VirtualBox.
+**Note:** _The same steps apply for Windows/Linux running VirtualBox._
As I already mentioned, you can have either Windows or Linux installed as your host. But, in this case, I have Windows 10 installed (don’t hate me!) where I try to install Kali Linux in VirtualBox step by step.
@@ -38,25 +38,29 @@ And, the best part is – even if you happen to use a Linux distro as your prima
Wondering, how? Let’s see…
+[Subscribe to Our YouTube Channel for More Linux Videos][5]
+
### Step by Step Guide to install Kali Linux on VirtualBox
-We are going to use a custom Kali Linux image made for VirtualBox specifically. You can also download the ISO file for Kali Linux and create a new virtual machine – but why do that when you have an easy alternative?
+_We are going to use a custom Kali Linux image made for VirtualBox specifically. You can also download the ISO file for Kali Linux and create a new virtual machine – but why do that when you have an easy alternative?_
#### 1\. Download and install VirtualBox
The first thing you need to do is to download and install VirtualBox from Oracle’s official website.
-[Download VirtualBox](https://www.virtualbox.org/wiki/Downloads)
+[Download VirtualBox][6]
-Once you download the installer, just double click on it to install VirtualBox. It’s the same for installing VirtualBox on Ubuntu/Fedora Linux as well.
+Once you download the installer, just double click on it to install VirtualBox. It’s the same for [installing VirtualBox on Ubuntu][7]/Fedora Linux as well.
#### 2\. Download ready-to-use virtual image of Kali Linux
-After installing it successfully, head to [Offensive Security’s download page][5] to download the VM image for VirtualBox. If you change your mind to utilize [VMware][6], that is available too.
+After installing it successfully, head to [Offensive Security’s download page][8] to download the VM image for VirtualBox. If you change your mind to utilize [VMware][9], that is available too.
-![Kali Linux Virtual Box Image][7]
+![][10]
-As you can see the file size is well over 3 GB, you should either use the torrent option or download it using a [download manager][8].
+As you can see the file size is well over 3 GB, you should either use the torrent option or download it using a [download manager][11].
+
+[Kali Linux Virtual Image][8]
#### 3\. Install Kali Linux on Virtual Box
@@ -66,11 +70,11 @@ Here’s how to import the VirtualBox image for Kali Linux:
**Step 1** : Launch VirtualBox. You will notice an **Import** button – click on it
-![virtualbox import][9] Click on Import button
+![Click on Import button][12]
**Step 2:** Next, browse the file you just downloaded and choose it to be imported (as you can see in the image below). The file name should start with ‘kali linux‘ and end with . **ova** extension.
-![virtualbox import file][10] Importing Kali Linux image
+![Importing Kali Linux image][13]
**S** Once selected, proceed by clicking on **Next**.
@@ -78,7 +82,7 @@ Here’s how to import the VirtualBox image for Kali Linux:
You need to select a path where you have sufficient storage available. I would never recommend the **C:** drive on Windows.
-![virtualbox kali linux settings][11] Import hard drives as VDI
+![Import hard drives as VDI][14]
Here, the hard drives as VDI refer to virtually mount the hard drives by allocating the storage space set.
@@ -88,7 +92,11 @@ After you are done with the settings, hit **Import** and wait for a while.
You might get an error at first for USB port 2.0 controller support, you can disable it to resolve it or just follow the on-screen instruction of installing an additional package to fix it. And, you are done!
-![kali linux on windows virtual box][12]Kali Linux running in VirtualBox
+![Kali Linux running in VirtualBox][15]
+
+The default username in Kali Linux is root and the default password is toor. You should be able to log in to the system with these credentials.
+
+Do note that you should [update Kali Linux][16] before trying to install new applications or trying to hack your neighbor’s WiFi.
I hope this guide helps you easily install Kali Linux on Virtual Box. Of course, Kali Linux has a lot of useful tools in it for penetration testing – good luck with that!
@@ -102,12 +110,13 @@ Offensive Security, the company behind Kali Linux, has created a guide book that
Basically, it has everything you need to get started with Kali Linux. And the best thing is that the book is available to download for free.
-Let us know in the comments below if you face an issue or simply share your experience with Kali Linux on VirtualBox.
+[Download Kali Linux Revealed for FREE][17]
+Let us know in the comments below if you face an issue or simply share your experience with Kali Linux on VirtualBox.
--------------------------------------------------------------------------------
-via: https://itsfoss.com/install-kali-linux-virtualbox
+via: https://itsfoss.com/install-kali-linux-virtualbox/
作者:[Ankush Das][a]
选题:[lujun9972][b]
@@ -122,12 +131,16 @@ via: https://itsfoss.com/install-kali-linux-virtualbox
[2]: https://itsfoss.com/linux-hacking-penetration-testing/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?resize=800%2C450&ssl=1
[4]: https://www.virtualbox.org/
-[5]: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/
-[6]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
-[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box-image.jpg?resize=800%2C347&ssl=1
-[8]: https://itsfoss.com/4-best-download-managers-for-linux/
-[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-import-kali-linux.jpg?ssl=1
-[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-linux-next.jpg?ssl=1
-[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-kali-linux-settings.jpg?ssl=1
-[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-on-windows-virtualbox.jpg?resize=800%2C429&ssl=1
-[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?fit=800%2C450&ssl=1
+[5]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[6]: https://www.virtualbox.org/wiki/Downloads
+[7]: https://itsfoss.com/install-virtualbox-ubuntu/
+[8]: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/
+[9]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box-image.jpg?resize=800%2C347&ssl=1
+[11]: https://itsfoss.com/4-best-download-managers-for-linux/
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-import-kali-linux.jpg?ssl=1
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-linux-next.jpg?ssl=1
+[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-kali-linux-settings.jpg?ssl=1
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-on-windows-virtualbox.jpg?resize=800%2C429&ssl=1
+[16]: https://linuxhandbook.com/update-kali-linux/
+[17]: https://kali.training/downloads/Kali-Linux-Revealed-1st-edition.pdf
diff --git a/sources/tech/20190206 4 cool new projects to try in COPR for February 2019.md b/sources/tech/20190206 4 cool new projects to try in COPR for February 2019.md
deleted file mode 100644
index b63cf4da75..0000000000
--- a/sources/tech/20190206 4 cool new projects to try in COPR for February 2019.md
+++ /dev/null
@@ -1,95 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (4 cool new projects to try in COPR for February 2019)
-[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-february-2019/)
-[#]: author: (Dominik Turecek https://fedoramagazine.org)
-
-4 cool new projects to try in COPR for February 2019
-======
-
-
-
-COPR is a [collection][1] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
-
-Here’s a set of new and interesting projects in COPR.
-
-### CryFS
-
-[CryFS][2] is a cryptographic filesystem. It is designed for use with cloud storage, mainly Dropbox, although it works with other storage providers as well. CryFS encrypts not only the files in the filesystem, but also metadata, file sizes and directory structure.
-
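-A typical session mounts an encrypted base directory onto a cleartext mount point. Here is a minimal sketch; the paths are illustrative:
-
-```
-$ cryfs ~/Dropbox/encrypted ~/cleartext    # the first run creates the encrypted filesystem
-$ fusermount -u ~/cleartext                # unmount the cleartext view when done
-```
-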
-#### Installation instructions
-
-The repo currently provides CryFS for Fedora 28 and 29, and for EPEL 7. To install CryFS, use these commands:
-
-```
-sudo dnf copr enable fcsm/cryfs
-sudo dnf install cryfs
-```
-
-### Cheat
-
-[Cheat][3] is a utility for viewing various cheatsheets on the command line, aiming to help you remember how to use programs that you use only occasionally. For many Linux utilities, cheat provides cheatsheets containing condensed information from man pages, focusing mainly on the most-used examples. In addition to the built-in cheatsheets, cheat allows you to edit the existing ones or create new ones from scratch.
-
-![][4]
-
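-For example, to consult and then customize the tar cheatsheet (a sketch; check cheat’s help output for the exact flags in your version):
-
-```
-$ cheat tar     # show the built-in cheatsheet for tar
-$ cheat -e tar  # open the tar cheatsheet in an editor to customize it
-```
-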
-#### Installation instructions
-
-The repo currently provides cheat for Fedora 28, 29 and Rawhide, and for EPEL 7. To install cheat, use these commands:
-
-```
-sudo dnf copr enable tkorbar/cheat
-sudo dnf install cheat
-```
-
-### Setconf
-
-[Setconf][5] is a simple program for making changes in configuration files, serving as an alternative to sed. The only thing setconf does is find the given key in the specified file and change its value. Setconf provides only a few options to change its behavior, such as uncommenting the line that is being changed.
-
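-Usage follows the pattern `setconf filename key value`. For example, assuming a file test.conf that contains the line `interval=1` (file and key are illustrative):
-
-```
-$ setconf test.conf interval 2    # changes the line to interval=2 in place
-```
-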
-#### Installation instructions
-
-The repo currently provides setconf for Fedora 27, 28 and 29. To install setconf, use these commands:
-
-```
-sudo dnf copr enable jamacku/setconf
-sudo dnf install setconf
-```
-
-### Reddit Terminal Viewer
-
-[Reddit Terminal Viewer][6], or rtv, is an interface for browsing Reddit from the terminal. It provides the basic functionality of Reddit, so you can log in to your account, view subreddits, comment, upvote and discover new topics. However, rtv currently doesn’t support Reddit tags.
-
-![][7]
-
-#### Installation instructions
-
-The repo currently provides Reddit Terminal Viewer for Fedora 29 and Rawhide. To install Reddit Terminal Viewer, use these commands:
-
-```
-sudo dnf copr enable tc01/rtv
-sudo dnf install rtv
-```
-
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-february-2019/
-
-作者:[Dominik Turecek][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org
-[b]: https://github.com/lujun9972
-[1]: https://copr.fedorainfracloud.org/
-[2]: https://www.cryfs.org/
-[3]: https://github.com/chrisallenlane/cheat
-[4]: https://fedoramagazine.org/wp-content/uploads/2019/01/cheat.png
-[5]: https://setconf.roboticoverlords.org/
-[6]: https://github.com/michael-lazar/rtv
-[7]: https://fedoramagazine.org/wp-content/uploads/2019/01/rtv.png
diff --git a/sources/tech/20190206 And, Ampersand, and - in Linux.md b/sources/tech/20190206 And, Ampersand, and - in Linux.md
deleted file mode 100644
index 88a0458539..0000000000
--- a/sources/tech/20190206 And, Ampersand, and - in Linux.md
+++ /dev/null
@@ -1,211 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (And, Ampersand, and & in Linux)
-[#]: via: (https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux)
-[#]: author: (Paul Brown https://www.linux.com/users/bro66)
-
-And, Ampersand, and & in Linux
-======
-
-
-Take a look at the tools covered in the [three][1] [previous][2] [articles][3], and you will see that understanding the glue that joins them together is as important as recognizing the tools themselves. Indeed, tools tend to be simple, and understanding what _mkdir_ , _touch_ , and _find_ do (make a new directory, update a file, and find a file in the directory tree, respectively) in isolation is easy.
-
-But understanding what
-
-```
-mkdir test_dir 2>/dev/null || touch images.txt && find . -iname "*jpg" > backup/dir/images.txt &
-```
-
-does, and why we would write a command line like that is a whole different story.
-
-It pays to look more closely at the signs and symbols that live between the commands. It will not only help you better understand how things work, but will also make you more proficient in chaining commands together to create compound instructions that will help you work more efficiently.
-
-In this article and the next, we'll be looking at the ampersand (`&`) and its close friend, the pipe (`|`), and see how they can mean different things in different contexts.
-
-### Behind the Scenes
-
-Let's start simple and see how you can use `&` as a way of pushing a command to the background. The instruction:
-
-```
-cp -R original/dir/ backup/dir/
-```
-
-copies all the files and subdirectories in _original/dir/_ into _backup/dir/_. So far so simple. But if that turns out to be a lot of data, it could tie up your terminal for hours.
-
-However, using:
-
-```
-cp -R original/dir/ backup/dir/ &
-```
-
-pushes the process to the background courtesy of the final `&`. This frees you to continue working on the same terminal or even to close the terminal and still let the process finish up. Do note, however, that if the process is asked to print stuff out to the standard output (like in the case of `echo` or `ls`), it will continue to do so, even though it is being executed in the background.
-
-When you push a process into the background, Bash will print out a number. This number is the PID, or _Process ID_. Every process running on your Linux system has a unique process ID, and you can use this ID to pause, resume, and terminate the process it refers to. This will become useful later.
-
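-For example, backgrounding the copy from above might look like this; the job number appears in brackets, followed by the PID, and the exact values are whatever your system assigns:
-
-```
-$ cp -R original/dir/ backup/dir/ &
-[1] 14444
-```
-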
-In the meantime, there are a few tools you can use to manage your processes as long as you remain in the terminal from which you launched them:
-
 * `jobs` shows you the processes running in your current terminal, whether in the background or the foreground. It also shows you a number associated with each job (different from the PID) that you can use to refer to each process:
-
-```
- $ jobs
-[1]- Running cp -i -R original/dir/* backup/dir/ &
-[2]+ Running find . -iname "*jpg" > backup/dir/images.txt &
-```
-
- * `fg` brings a job from the background to the foreground so you can interact with it. You tell `fg` which process you want to bring to the foreground with a percentage symbol (`%`) followed by the number associated with the job that `jobs` gave you:
-
-```
- $ fg %1 # brings the cp job to the foreground
-cp -i -R original/dir/* backup/dir/
-```
-
-If the job was stopped (see below), `fg` will start it again.
-
 * You can stop a job in the foreground by holding down [Ctrl] and pressing [Z]. This doesn't abort the action, it pauses it. When you start it again (with `fg` or `bg`) it will continue from where it left off...
-
-...Except for [`sleep`][4]: the time a `sleep` job is paused still counts once `sleep` is resumed. This is because `sleep` takes note of the clock time when it was started, not how long it was running. This means that if you run `sleep 30` and pause it for more than 30 seconds, once you resume, `sleep` will exit immediately.
-
- * The `bg` command pushes a job to the background and resumes it again if it was paused:
-
-```
- $ bg %1
-[1]+ cp -i -R original/dir/* backup/dir/ &
-```
-
-
-
-
-As mentioned above, you won't be able to use any of these commands if you close the terminal from which you launched the process or if you change to another terminal, even though the process will still continue working.
-
-To manage background processes from another terminal, you need another set of tools. For example, you can tell a process to stop from a different terminal with the [`kill`][5] command:
-
-```
-kill -s STOP <PID>
-```
-
-And you know the PID because that is the number Bash gave you when you started the process with `&`, remember? Oh! You didn't write it down? No problem. You can get the PID of any running process with the `ps` (short for _processes_ ) command. So, using
-
-```
-ps | grep cp
-```
-
-will show you all the processes containing the string " _cp_ ", including the copying job we are using for our example. It will also show you the PID:
-
-```
-$ ps | grep cp
-14444 pts/3 00:00:13 cp
-```
-
-In this case, the PID is _14444_, which means you can stop the background copying with:
-
-```
-kill -s STOP 14444
-```
-
-Note that `STOP` here does the same thing as [Ctrl] + [Z] above, that is, it pauses the execution of the process.
-
-To start the paused process again, you can use the `CONT` signal:
-
-```
-kill -s CONT 14444
-```
-
-There is a good list of many of [the main signals you can send a process here][6]. According to that, if you wanted to terminate the process, not just pause it, you could do this:
-
-```
-kill -s TERM 14444
-```
-
-If the process refuses to exit, you can force it with:
-
-```
-kill -s KILL 14444
-```
-
-This is a bit dangerous, but very useful if a process has gone crazy and is eating up all your resources.
-
-In any case, if you are not sure you have the correct PID, add the `x` option to `ps`:
-
-```
-$ ps x| grep cp
-14444 pts/3 D 0:14 cp -i -R original/dir/Hols_2014.mp4
- original/dir/Hols_2015.mp4 original/dir/Hols_2016.mp4
- original/dir/Hols_2017.mp4 original/dir/Hols_2018.mp4 backup/dir/
-```
-
-And you should be able to see what process you need.
-
-Finally, there is a nifty tool that combines `ps` and `grep` all into one:
-
-```
-$ pgrep cp
-8
-18
-19
-26
-33
-40
-47
-54
-61
-72
-88
-96
-136
-339
-6680
-13735
-14444
-```
-
-This lists all the PIDs of processes that contain the string " _cp_ ".
-
-In this case, it isn't very helpful, but this...
-
-```
-$ pgrep -lx cp
-14444 cp
-```
-
-... is much better.
-
-In this case, `-l` tells `pgrep` to show you the name of the process and `-x` tells `pgrep` you want an exact match for the name of the command. If you want even more details, try `pgrep -ax command`.
-
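-For instance, `pgrep -a cp` prints each matching PID together with its full command line, making it easy to confirm you have the right process (a sketch using the copy job from earlier):
-
-```
-$ pgrep -a cp
-14444 cp -i -R original/dir/ backup/dir/
-```
-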
-### Next time
-
-Putting an `&` at the end of commands has helped us explain the rather useful concept of processes working in the background and foreground and how to manage them.
-
-One last thing before we leave: long-running processes that work in the background like this, detached from any terminal, are what are known as _daemons_ in UNIX/Linux parlance. So, if you had heard the term before and wondered what they were, there you go.
-
-As usual, there are more ways to use the ampersand within a command line, many of which have nothing to do with pushing processes into the background. To see what those uses are, we'll be back next week with more on the matter.
-
-Read more:
-
-[Linux Tools: The Meaning of Dot][1]
-
-[Understanding Angle Brackets in Bash][2]
-
-[More About Angle Brackets in Bash][3]
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
-
-作者:[Paul Brown][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/users/bro66
-[b]: https://github.com/lujun9972
-[1]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
-[2]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
-[3]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
-[4]: https://ss64.com/bash/sleep.html
-[5]: https://bash.cyberciti.biz/guide/Sending_signal_to_Processes
-[6]: https://www.computerhope.com/unix/signals.htm
diff --git a/sources/tech/20190206 Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI.md b/sources/tech/20190206 Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI.md
new file mode 100644
index 0000000000..603ae570eb
--- /dev/null
+++ b/sources/tech/20190206 Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI.md
@@ -0,0 +1,96 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI)
+[#]: via: (https://itsfoss.com/flowblade-video-editor-release/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI
+======
+
+[Flowblade][1] is one of the rare [video editors that are only available for Linux][2]. It is not the feature set that counts but the simplicity, the flexibility, and the fact that it is an open source project.
+
+However, with Flowblade 2.0, released recently, it is now more powerful and useful. It brings a lot of new tools along with a complete overhaul of the workflow.
+
+In this article, we shall take a look at what’s new with Flowblade 2.0.
+
+### New Features in Flowblade 2.0
+
+Here are some of the major new changes in the latest release of Flowblade.
+
+#### GUI Updates
+
+![Flowblade 2.0][3]
+
+This was a much-needed change. I’m always looking for open source solutions that work as expected along with a great GUI.
+
+So, in this update, you will notice a new custom theme set as the default, and it looks good.
+
+Overall, the panel design and the toolbox have been reworked to look modern. The overhaul extends to small touches like the cursor icon changing upon tool selection.
+
+#### Workflow Overhaul
+
+No matter what features you get to use, the workflow matters to people who regularly edit videos, so it has to be intuitive.
+
+With this release, they have made sure that you can configure and set the workflow to your preference. That flexibility is welcome because not everyone has the same requirements.
+
+#### New Tools
+
+![Flowblade Video Editor Interface][4]
+
+**Keyframe tool** : This enables editing and adjusting the Volume and Brightness [keyframes][5] on the timeline.
+
+**Multitrim** : A combination of the trim, roll, and slip tools.
+
+**Cut:** Available now as a tool in addition to the traditional cut at the playhead.
+
+**Ripple trim:** A mode of the Trim tool – not often used by many – that is now available as a separate tool.
+
+#### More changes?
+
+In addition to these major changes listed above, they have added some keyframe editing updates and new compositors ( _AlphaXOR, Alpha Out, and Alpha_ ) that utilize alpha channel data to combine images.
+
+A lot of other tiny changes have taken place as well – you can check those out in the official [changelog][6] on GitHub.
+
+### Installing Flowblade 2.0
+
+If you use a Debian or Ubuntu based Linux distribution, there are .deb binaries available for easily installing Flowblade 2.0.
+
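+Installing a downloaded .deb package is a matter of handing it to dpkg and letting apt pull in any missing dependencies. The filename below is illustrative; check the release page for the actual one:
+
+```
+sudo dpkg -i flowblade-2.0-1_all.deb   # illustrative filename from the release page
+sudo apt-get install -f                # fix up any missing dependencies
+```
+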
+For the rest, you’ll have to [install it using the source code][7].
+
+All the files are available on its GitHub page. You can download them from the page below.
+
+[Download Flowblade 2.0][8]
+
+### Wrapping Up
+
+If you are interested in video editing, perhaps you would like to follow the progress of [Olive][9], a new open source video editor under development.
+
+Now that you know about the latest changes and additions, what do you think about Flowblade 2.0 as a video editor? Is it good enough for you?
+
+Let us know your thoughts in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/flowblade-video-editor-release/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/jliljebl/flowblade
+[2]: https://itsfoss.com/best-video-editing-software-linux/
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2.jpg?ssl=1
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2-1.jpg?resize=800%2C450&ssl=1
+[5]: https://en.wikipedia.org/wiki/Key_frame
+[6]: https://github.com/jliljebl/flowblade/blob/master/flowblade-trunk/docs/RELEASE_NOTES.md
+[7]: https://itsfoss.com/install-software-from-source-code/
+[8]: https://github.com/jliljebl/flowblade/releases/tag/v2.0
+[9]: https://itsfoss.com/olive-video-editor/
diff --git a/sources/tech/20190206 Getting started with Vim visual mode.md b/sources/tech/20190206 Getting started with Vim visual mode.md
deleted file mode 100644
index e6b9b1da9b..0000000000
--- a/sources/tech/20190206 Getting started with Vim visual mode.md
+++ /dev/null
@@ -1,126 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Getting started with Vim visual mode)
-[#]: via: (https://opensource.com/article/19/2/getting-started-vim-visual-mode)
-[#]: author: (Susan Lauber https://opensource.com/users/susanlauber)
-
-Getting started with Vim visual mode
-======
-Visual mode makes it easier to highlight and manipulate text in Vim.
-
-
-Ansible playbook files are text files in a YAML format. People who work regularly with them have their favorite editors and plugin extensions to make the formatting easier.
-
-When I teach Ansible with the default editor available in most Linux distributions, I use Vim's visual mode a lot. It allows me to highlight my actions on the screen—what I am about to edit and the text manipulation task I'm doing—to make it easier for my students to learn.
-
-### Vim's visual mode
-
-When editing text with Vim, visual mode can be extremely useful for identifying chunks of text to be manipulated.
-
-Vim's visual mode has three versions: character, line, and block. The keystrokes to enter each mode are:
-
- * Character mode: **v** (lower-case)
- * Line mode: **V** (upper-case)
- * Block mode: **Ctrl+v**
-
-
-
-Here are some ways to use each mode to simplify your work.
-
-### Character mode
-
-Character mode can highlight a sentence in a paragraph or a phrase in a sentence. Then the visually identified text can be deleted, copied, changed, or modified with any other Vim editing command.
-
-#### Move a sentence
-
-To move a sentence from one place to another, start by opening the file and moving the cursor to the first character in the sentence you want to move.
-
-
-
- * Press the **v** key to enter visual character mode. The word **VISUAL** will appear at the bottom of the screen.
- * Use the Arrow keys to highlight the desired text. You can use other navigation commands, such as **w** to highlight to the beginning of the next word or **$** to include the rest of the line.
- * Once the text is highlighted, press the **d** key to delete the text.
- * If you deleted too much or not enough, press **u** to undo and start again.
- * Move your cursor to the new location and press **p** to paste the text.
-
-
-
-#### Change a phrase
-
-You can also highlight a chunk of text that you want to replace.
-
-
-
- * Place the cursor at the first character you want to change.
- * Press **v** to enter visual character mode.
- * Use navigation commands, such as the Arrow keys, to highlight the phrase.
- * Press **c** to change the highlighted text.
- * The highlighted text will disappear, and you will be in Insert mode where you can add new text.
- * After you finish typing the new text, press **Esc** to return to command mode and save your work.
-
-
-
-### Line mode
-
-When working with Ansible playbooks, the order of tasks can matter. Use visual line mode to move a task to a different location in the playbook.
-
-#### Manipulate multiple lines of text
-
-
-
- * Place your cursor anywhere on the first or last line of the text you want to manipulate.
- * Press **Shift+V** to enter line mode. The words **VISUAL LINE** will appear at the bottom of the screen.
- * Use navigation commands, such as the Arrow keys, to highlight multiple lines of text.
- * Once the desired text is highlighted, use commands to manipulate it. Press **d** to delete, then move the cursor to the new location, and press **p** to paste the text.
- * **y** (yank) can be used instead of **d** (delete) if you want to copy the task.
-
-
-
-#### Indent a set of lines
-
-When working with Ansible playbooks or YAML files, indentation matters. A highlighted block can be shifted right or left with the **>** and **<** keys.
-
-![](https://opensource.com/sites/default/files/uploads/vim-visual-line2.png)
-
- * Press **>** to increase the indentation of all the lines.
- * Press **<** to decrease the indentation of all the lines.
-
-
-
-Try other Vim commands to apply them to the highlighted text.
-
-### Block mode
-
-Visual block mode is useful for manipulating tabular data files, but it can also be extremely helpful as a tool to verify the indentation of an Ansible playbook.
-
-Tasks are a list of items, and in YAML each list item starts with a dash followed by a space. The dashes must line up in the same column to be at the same indentation level. This can be difficult to verify by eye alone. Indentation of the other lines within a task is also important.
-
-#### Verify tasks lists are indented the same
-
-
-
- * Place your cursor on the first character of the list item.
- * Press **Ctrl+v** to enter visual block mode. The words **VISUAL BLOCK** will appear at the bottom of the screen.
- * Use the Arrow keys to highlight the single character column. You can verify that each task is indented the same amount.
- * Use the Arrow keys to expand the block right or left to check whether the other indentation is correct.
-
-
-
-Even though I am comfortable with other Vim editing shortcuts, I still like to use visual mode to sort out what text I want to manipulate. When I demo other concepts during a presentation, my students see a familiar tool for highlighting text and hitting delete in this text-only editor that is new to them.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/2/getting-started-vim-visual-mode
-
-作者:[Susan Lauber][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/susanlauber
-[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190207 10 Methods To Create A File In Linux.md b/sources/tech/20190207 10 Methods To Create A File In Linux.md
deleted file mode 100644
index b74bbacf13..0000000000
--- a/sources/tech/20190207 10 Methods To Create A File In Linux.md
+++ /dev/null
@@ -1,325 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (10 Methods To Create A File In Linux)
-[#]: via: (https://www.2daygeek.com/linux-command-to-create-a-file/)
-[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
-
-10 Methods To Create A File In Linux
-======
-
-As we already know, everything in Linux is a file, and that includes devices as well.
-
-A Linux admin performs the file creation activity many times a day (it may be 20 times or 50 times or more than that, depending on the environment).
-
-Navigate to the following URL if you would like to **[create a file of a specific size in Linux][1]**.
-
-How efficiently we create a file is very important. Why am I saying efficient? There is a lot of benefit if you know the efficient way to perform an activity.
-
-It will save you a lot of time. You can spend that valuable time on other important or major tasks instead of doing everything in a hurry.
-
-Here I’m including multiple ways to create a file in Linux. I advise you to choose the few that are easy and efficient for you.
-
-You don’t need to install any of the following commands, because all of them come as part of the Linux core utilities, except for the nano command.
-
-It can be done using the following 10 methods.
-
 * **`Redirect Symbol (>):`** The standard redirect symbol allows us to create a 0KB empty file in Linux.
 * **`touch:`** The touch command can create a 0KB empty file if one does not exist.
 * **`echo:`** The echo command is used to display a line of text that is passed as an argument.
 * **`printf:`** The printf command is used to display the given text on the terminal window.
 * **`cat:`** It concatenates files and prints them on the standard output.
 * **`vi/vim:`** Vim is a text editor that is upwards compatible with Vi. It can be used to edit all kinds of plain text.
 * **`nano:`** nano is a small and friendly editor. It copies the look and feel of Pico, but is free software.
 * **`head:`** head is used to print the first part of files.
 * **`tail:`** tail is used to print the last part of files.
 * **`truncate:`** truncate is used to shrink or extend the size of a file to the specified size.
-
-
-
-### How To Create A File In Linux Using Redirect Symbol (>)?
-
-The standard redirect symbol allows us to create a 0KB empty file in Linux. It is normally used to redirect the output of a command to a new file; when you use the redirect symbol without a command, it creates a file.
-
-It won’t allow you to input any text while creating the file, but it’s very simple and will be useful for lazy admins. To do so, simply enter the redirect symbol followed by the filename you want.
-
-```
-$ > daygeek.txt
-```
-
-Use the ls command to check the created file.
-
-```
-$ ls -lh daygeek.txt
--rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:00 daygeek.txt
-```
-
-### How To Create A File In Linux Using touch Command?
-
-The touch command is used to update the access and modification times of each FILE to the current time.
-
-It creates a new file if one does not exist. The touch command doesn’t allow us to enter any text while creating a file; by default, it creates a 0KB empty file.
-
-```
-$ touch daygeek1.txt
-```
-
-Use the ls command to check the created file.
-
-```
-$ ls -lh daygeek1.txt
--rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:02 daygeek1.txt
-```
-
-### How To Create A File In Linux Using echo Command?
-
-echo is a built-in command found in most operating systems. It is frequently used in scripts, batch files, and as part of individual commands to insert text.
-
-This is a nice command that allows users to input text while creating a file. It also allows us to append text to the file later.
-
-```
-$ echo "2daygeek.com is a best Linux blog to learn Linux" > daygeek2.txt
-```
-
-Use the ls command to check the created file.
-
-```
-$ ls -lh daygeek2.txt
--rw-rw-r-- 1 daygeek daygeek 49 Feb 4 02:04 daygeek2.txt
-```
-
-To view the content from the file, use the cat command.
-
-```
-$ cat daygeek2.txt
-2daygeek.com is a best Linux blog to learn Linux
-```
-
-If you would like to append content to the same file, use the double redirect symbol (>>).
-
-```
-$ echo "It's FIVE years old blog" >> daygeek2.txt
-```
-
-You can view the appended content from the file using cat command.
-
-```
-$ cat daygeek2.txt
-2daygeek.com is a best Linux blog to learn Linux
-It's FIVE years old blog
-```
-
-### How To Create A File In Linux Using printf Command?
-
-The printf command works in much the same way as the echo command.
-
-The printf command in Linux is used to display the given string on the terminal window. printf can have format specifiers, escape sequences or ordinary characters.
-
-```
-$ printf "2daygeek.com is a best Linux blog to learn Linux\n" > daygeek3.txt
-```
-
-Use the ls command to check the created file.
-
-```
-$ ls -lh daygeek3.txt
--rw-rw-r-- 1 daygeek daygeek 48 Feb 4 02:12 daygeek3.txt
-```
-
-To view the content from the file, use the cat command.
-
-```
-$ cat daygeek3.txt
-2daygeek.com is a best Linux blog to learn Linux
-```
-
-If you would like to append content to the same file, use the double redirect symbol (>>).
-
-```
-$ printf "It's FIVE years old blog\n" >> daygeek3.txt
-```
-
-You can view the appended content from the file using cat command.
-
-```
-$ cat daygeek3.txt
-2daygeek.com is a best Linux blog to learn Linux
-It's FIVE years old blog
-```
-
-### How To Create A File In Linux Using cat Command?
-
-cat stands for concatenate. It is very frequently used in Linux to read data from a file.
-
-cat is one of the most frequently used commands on Unix-like operating systems. It offers three functions related to text files: displaying the content of a file, combining multiple files into a single output, and creating new files.
-
-```
-$ cat > daygeek4.txt
-2daygeek.com is a best Linux blog to learn Linux
-It's FIVE years old blog
-```
-
-Press Ctrl+D when you finish typing to save the file, then use the ls command to check the created file.
-
-```
-$ ls -lh daygeek4.txt
--rw-rw-r-- 1 daygeek daygeek 74 Feb 4 02:18 daygeek4.txt
-```
-
-To view the content from the file, use the cat command.
-
-```
-$ cat daygeek4.txt
-2daygeek.com is a best Linux blog to learn Linux
-It's FIVE years old blog
-```
-
-If you would like to append content to the same file, use the double redirect symbol (>>).
-
-```
-$ cat >> daygeek4.txt
-This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
-```
-
-You can view the appended content from the file using cat command.
-
-```
-$ cat daygeek4.txt
-2daygeek.com is a best Linux blog to learn Linux
-It's FIVE years old blog
-This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
-```
-
-### How To Create A File In Linux Using vi/vim Command?
-
-Vim is a text editor that is upwards compatible with Vi. It can be used to edit all kinds of plain text. It is especially useful for editing programs.
-
-There are a lot of features available in vim for editing a file. Open the file, press i to enter insert mode, type your text, then press Esc followed by :wq to save and exit.
-
-```
-$ vi daygeek5.txt
-
-2daygeek.com is a best Linux blog to learn Linux
-It's FIVE years old blog
-```
-
-Use the ls command to check the created file.
-
-```
-$ ls -lh daygeek5.txt
--rw-rw-r-- 1 daygeek daygeek 75 Feb 4 02:23 daygeek5.txt
-```
-
-To view the content from the file, use the cat command.
-
-```
-$ cat daygeek5.txt
-2daygeek.com is a best Linux blog to learn Linux
-It's FIVE years old blog
-```
-
-### How To Create A File In Linux Using nano Command?
-
-Nano is another editor, an enhanced free Pico clone. It is small and friendly, copying the look and feel of Pico while being free software, and it implements several features that Pico lacks, such as opening multiple files, scrolling per line, undo/redo, syntax coloring, line numbering, and soft-wrapping overlong lines. Type your text, then press Ctrl+O to save and Ctrl+X to exit.
-
-```
-$ nano daygeek6.txt
-
-2daygeek.com is a best Linux blog to learn Linux
-It's FIVE years old blog
-This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
-```
-
-Use the ls command to check the created file.
-
-```
-$ ls -lh daygeek6.txt
--rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:26 daygeek6.txt
-```
-
-To view the content from the file, use the cat command.
-
-```
-$ cat daygeek6.txt
-2daygeek.com is a best Linux blog to learn Linux
-It's FIVE years old blog
-This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
-```
-
-### How To Create A File In Linux Using head Command?
-
-The head command is used to output the first part of files. By default it prints the first 10 lines of each FILE to standard output. With more than one FILE, it precedes each with a header giving the file name.
-
-```
-$ head -c 0K /dev/zero > daygeek7.txt
-```
-
-Use the ls command to check the created file.
-
-```
-$ ls -lh daygeek7.txt
--rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:30 daygeek7.txt
-```
-
-### How To Create A File In Linux Using tail Command?
-
-The tail command is used to output the last part of files. By default it prints the last 10 lines of each FILE to standard output. With more than one FILE, it precedes each with a header giving the file name.
-
-```
-$ tail -c 0K /dev/zero > daygeek8.txt
-```
-
-Use the ls command to check the created file.
-
-```
-$ ls -lh daygeek8.txt
--rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:31 daygeek8.txt
-```
-
-### How To Create A File In Linux Using truncate Command?
-
-truncate command is used to shrink or extend the size of a file to the specified size.
-
-```
-$ truncate -s 0K daygeek9.txt
-```
-
-Use the ls command to check the created file.
-
-```
-$ ls -lh daygeek9.txt
--rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:37 daygeek9.txt
-```
-
-I performed all 10 commands from this article to test them. Here they are together in a single output.
-
-```
-$ ls -lh daygeek*
--rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:02 daygeek1.txt
--rw-rw-r-- 1 daygeek daygeek 74 Feb 4 02:07 daygeek2.txt
--rw-rw-r-- 1 daygeek daygeek 74 Feb 4 02:15 daygeek3.txt
--rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:20 daygeek4.txt
--rw-rw-r-- 1 daygeek daygeek 75 Feb 4 02:23 daygeek5.txt
--rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:26 daygeek6.txt
--rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:32 daygeek7.txt
--rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:32 daygeek8.txt
--rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:38 daygeek9.txt
--rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:00 daygeek.txt
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/linux-command-to-create-a-file/
-
-作者:[Vinoth Kumar][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/vinoth/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/create-a-file-in-specific-certain-size-linux/
diff --git a/sources/tech/20190207 How to determine how much memory is installed, used on Linux systems.md b/sources/tech/20190207 How to determine how much memory is installed, used on Linux systems.md
deleted file mode 100644
index c6098fa12d..0000000000
--- a/sources/tech/20190207 How to determine how much memory is installed, used on Linux systems.md
+++ /dev/null
@@ -1,227 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to determine how much memory is installed, used on Linux systems)
-[#]: via: (https://www.networkworld.com/article/3336174/linux/how-much-memory-is-installed-and-being-used-on-your-linux-systems.html)
-[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
-
-How to determine how much memory is installed, used on Linux systems
-======
-
-
-There are numerous ways to get information on the memory installed on Linux systems and view how much of that memory is being used. Some commands provide an overwhelming amount of detail, while others provide succinct, though not necessarily easy-to-digest, answers. In this post, we'll look at some of the more useful tools for checking on memory and its usage.
-
-Before we get into the details, however, let's review a few basics. Physical memory and virtual memory are not the same. The latter includes disk space that is configured to be used as swap. Swap may include partitions set aside for this usage, or files created to add to the available swap space when creating a new partition is not practical. Some Linux commands provide information on both.
-
-Swap expands memory by providing disk space that can be used to house inactive pages in memory that are moved to disk when physical memory fills up.
-
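-If you want to see which swap areas are currently active, `swapon --show` prints a quick summary. The device and sizes below are illustrative:
-
-```
-$ swapon --show
-NAME      TYPE      SIZE USED PRIO
-/dev/sda2 partition   2G   0B   -2
-```
-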
-One file that plays a role in memory management is **/proc/kcore**. This file looks like a normal (though extremely large) file, but it does not occupy disk space at all. Instead, it is a virtual file like all of the files in /proc.
-
-```
-$ ls -l /proc/kcore
--r--------. 1 root root 140737477881856 Jan 28 12:59 /proc/kcore
-```
-
-Interestingly, the two systems queried below do _not_ have the same amount of memory installed, yet the size of /proc/kcore is the same on both. The first of these two systems has 4 GB of memory installed; the second has 6 GB.
-
-```
-system1$ ls -l /proc/kcore
--r--------. 1 root root 140737477881856 Jan 28 12:59 /proc/kcore
-system2$ ls -l /proc/kcore
--r-------- 1 root root 140737477881856 Feb 5 13:00 /proc/kcore
-```
-
-Explanations that claim the size of this file represents the amount of available virtual memory (maybe plus 4K) don't hold much weight. This number would suggest that the virtual memory on these systems is 128 terabytes! That number seems instead to represent how much memory a 64-bit system might be capable of addressing — not how much is available on the system. Calculations of what 128 terabytes, and that number plus 4K, would look like are fairly easy to make on the command line:
-
-```
-$ expr 1024 \* 1024 \* 1024 \* 1024 \* 128
-140737488355328
-$ expr 1024 \* 1024 \* 1024 \* 1024 \* 128 + 4096
-140737488359424
-```
-
-Another and more human-friendly command for examining memory is the **free** command. It gives you an easy-to-understand report on memory.
-
-```
-$ free
- total used free shared buff/cache available
-Mem: 6102476 812244 4090752 13112 1199480 4984140
-Swap: 2097148 0 2097148
-```
-
-With the **-g** option, free reports the values in gigabytes.
-
-```
-$ free -g
- total used free shared buff/cache available
-Mem: 5 0 3 0 1 4
-Swap: 1 0 1
-```
-
-With the **-t** option, free shows the same values as it does with no options (don't confuse -t with terabytes!) but adds a total line at the bottom of its output.
-
-```
-$ free -t
- total used free shared buff/cache available
-Mem: 6102476 812408 4090612 13112 1199456 4983984
-Swap: 2097148 0 2097148
-Total: 8199624 812408 6187760
-```
-
-And, of course, you can choose to use both options.
-
-```
-$ free -tg
- total used free shared buff/cache available
-Mem: 5 0 3 0 1 4
-Swap: 1 0 1
-Total: 7 0 5
-```
-
-You might be disappointed in this report if you're trying to answer the question "How much RAM is installed on this system?" This is the same system shown in the example above that was described as having 6GB of RAM. That doesn't mean this report is wrong, but it's the system's view of the memory it has at its disposal; the kernel reserves some memory for itself at boot, so the total reported is slightly less than what is physically installed.
-
-The free command also provides an option to update the display every X seconds (10 in the example below).
-
-```
-$ free -s 10
- total used free shared buff/cache available
-Mem: 6102476 812280 4090704 13112 1199492 4984108
-Swap: 2097148 0 2097148
-
- total used free shared buff/cache available
-Mem: 6102476 812260 4090712 13112 1199504 4984120
-Swap: 2097148 0 2097148
-```
-
-With **-l** , the free command provides high and low memory usage.
-
-```
-$ free -l
- total used free shared buff/cache available
-Mem: 6102476 812376 4090588 13112 1199512 4984000
-Low: 6102476 2011888 4090588
-High: 0 0 0
-Swap: 2097148 0 2097148
-```
-
-Another option for looking at memory is the **/proc/meminfo** file. Like /proc/kcore, this is a virtual file and one that gives a useful report showing how much memory is installed, free and available. Clearly, free and available do not represent the same thing. MemFree seems to represent unused RAM. MemAvailable is an estimate of how much memory is available for starting new applications.
-
-```
-$ head -3 /proc/meminfo
-MemTotal: 6102476 kB
-MemFree: 4090596 kB
-MemAvailable: 4984040 kB
-```
-
-If you only want to see total memory, you can use one of these commands:
-
-```
-$ awk '/MemTotal/ {print $2}' /proc/meminfo
-6102476
-$ grep MemTotal /proc/meminfo
-MemTotal: 6102476 kB
-```
-
-The **DirectMap** entries break information on memory into categories.
-
-```
-$ grep DirectMap /proc/meminfo
-DirectMap4k: 213568 kB
-DirectMap2M: 6076416 kB
-```
-
-DirectMap4k represents the amount of memory being mapped to standard 4k pages, while DirectMap2M shows the amount of memory being mapped to 2MB pages.
-
-The **getconf** command is one that will provide quite a bit more information than most of us want to contemplate.
-
-```
-$ getconf -a | more
-LINK_MAX 65000
-_POSIX_LINK_MAX 65000
-MAX_CANON 255
-_POSIX_MAX_CANON 255
-MAX_INPUT 255
-_POSIX_MAX_INPUT 255
-NAME_MAX 255
-_POSIX_NAME_MAX 255
-PATH_MAX 4096
-_POSIX_PATH_MAX 4096
-PIPE_BUF 4096
-_POSIX_PIPE_BUF 4096
-SOCK_MAXBUF
-_POSIX_ASYNC_IO
-_POSIX_CHOWN_RESTRICTED 1
-_POSIX_NO_TRUNC 1
-_POSIX_PRIO_IO
-_POSIX_SYNC_IO
-_POSIX_VDISABLE 0
-ARG_MAX 2097152
-ATEXIT_MAX 2147483647
-CHAR_BIT 8
-CHAR_MAX 127
---More--
-```
-
-Pare that output down to something specific with a command like the one shown below, and you'll get the same kind of information provided by some of the commands above.
-
-```
-$ getconf -a | grep PAGES | awk 'BEGIN {total = 1} {if (NR == 1 || NR == 3) total *=$NF} END {print total / 1024" kB"}'
-6102476 kB
-```
-
-That command calculates memory by multiplying the values in the first and last lines of output like this:
-
-```
-PAGESIZE 4096 <==
-_AVPHYS_PAGES 1022511
-_PHYS_PAGES 1525619 <==
-```
-
-Calculating that independently, we can see how that value is derived.
-
-```
-$ expr 4096 \* 1525619 / 1024
-6102476
-```
-
-Clearly that's one of those commands that deserves to be turned into an alias!
-
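-An alias is tricky to quote here, since the awk program contains single quotes and a $NF that must not be expanded when the alias is defined, so a small shell function is the easier route. A minimal sketch:
-
-```
-# print total memory, computed as PAGESIZE * _PHYS_PAGES
-memtotal() {
-    getconf -a | grep PAGES | awk 'BEGIN {total = 1} {if (NR == 1 || NR == 3) total *= $NF} END {print total / 1024 " kB"}'
-}
-```
-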
-Another command with very digestible output is **top**. In the first five lines of top's output, you'll see some numbers that show how memory is being used.
-
-```
-$ top
-top - 15:36:38 up 8 days, 2:37, 2 users, load average: 0.00, 0.00, 0.00
-Tasks: 266 total, 1 running, 265 sleeping, 0 stopped, 0 zombie
-%Cpu(s): 0.2 us, 0.4 sy, 0.0 ni, 99.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
-MiB Mem : 3244.8 total, 377.9 free, 1826.2 used, 1040.7 buff/cache
-MiB Swap: 3536.0 total, 3535.7 free, 0.3 used. 1126.1 avail Mem
-```
-
-And finally a command that will answer the question "So, how much RAM is installed on this system?" in a succinct fashion:
-
-```
-$ sudo dmidecode -t 17 | grep "Size.*MB" | awk '{s+=$2} END {print s / 1024 "GB"}'
-6GB
-```
-
-Depending on how much detail you want to see, Linux systems provide a lot of options for seeing how much memory is installed on your systems and how much is used and available.
-
-Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3336174/linux/how-much-memory-is-installed-and-being-used-on-your-linux-systems.html
-
-作者:[Sandra Henry-Stocker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[b]: https://github.com/lujun9972
-[1]: https://www.facebook.com/NetworkWorld/
-[2]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190207 Review of Debian System Administrator-s Handbook.md b/sources/tech/20190207 Review of Debian System Administrator-s Handbook.md
new file mode 100644
index 0000000000..7b51459c6b
--- /dev/null
+++ b/sources/tech/20190207 Review of Debian System Administrator-s Handbook.md
@@ -0,0 +1,133 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Review of Debian System Administrator’s Handbook)
+[#]: via: (https://itsfoss.com/debian-administrators-handbook/)
+[#]: author: (Shirish https://itsfoss.com/author/shirish/)
+
+Review of Debian System Administrator’s Handbook
+======
+
+_**Debian System Administrator’s Handbook is a free-to-download book that covers all the essential parts of Debian that a sysadmin might need.**_
+
+This has been on my to-do review list for quite some time. The book was started by two French Debian developers, Raphael Hertzog and Roland Mas, to increase awareness about the Debian project in France. The book was a huge hit among francophone Linux users. The English translation followed soon after that.
+
+### Debian Administrator’s Handbook
+
+![][1]
+
+[Debian Administrator’s Handbook][2] is targeted at everyone from a newbie who may be looking to understand what the [Debian project][3] is all about to somebody who might be running Debian on a production server.
+
+The latest version of the book covers Debian 8, while the current stable version is Debian 9. But that doesn’t mean the book is outdated and of no use to Debian 9 users. Most of the book is valid for all Debian and Linux users.
+
+Let me give you a quick summary of what this book covers.
+
+#### Section 1 – Debian Project
+
+The first section sets the tone of the book, giving somebody who might be looking into Debian a solid foundation in what the project actually means. Some of it will probably be updated to match the current scenario.
+
+#### Section 2 – Using fictional case studies for different needs
+
+The second section deals with the various case scenarios where Debian could be used, the idea being to show how Debian can serve various hierarchical or functional setups. One aspect I felt it should have stressed is the cultural mindshift and openness, which at least should have been mentioned.
+
+#### Section 3 & 4- Setups and Installation
+
+The third section goes into looking at existing setups. I do think it should have stressed documenting existing setups and migrating partial services and users before making a full-fledged transition. While all of the above seem like minor points, I have seen many of them come back to bite me during a transition.
+
+Section Four covers the various ways you could install Debian, how the installation process flows, and things to keep in mind before installing a Debian system. Unfortunately, UEFI was not around at that point, so it is not talked about.
+
+#### Section 5 & 6 – Packaging System and Updates
+
+Section Five starts with how a binary package is structured and then goes on to explain how a source package is structured as well. It does mention several gotchas or tricky ways in which a sysadmin can be caught.
+
+Section Six is perhaps where most sysadmins spend most of their time, apart from troubleshooting, which is another chapter altogether. While it starts with many of the most often used sysadmin commands, the interesting point I liked was on page 156, which is about better solver algorithms.
+
+#### Section 7 – Solving Problems and finding Relevant Solutions
+
+Section Seven, on the other hand, speaks of the various problem scenarios and the various ways to approach them when you find yourself with a problem. In Debian and most GNU/Linux distributions, the keyword is ‘patience’. If you are patient, then many problems in Debian are resolved, or can be resolved, after a good night’s sleep.
+
+#### Section 8 – Basic Configuration, Network, Accounts, Printing
+
+Section Eight introduces you to the basics of networking and having single or multiple user accounts on the workstation. It goes a bit into user and group configuration and practices, then gives a brief introduction to the bash shell and a brief overview of the [CUPS][4] printing daemon. There is much to explore here.
+
+#### Section 9 – Unix Service
+
+Section 9 starts with an introduction to specific Unix services. While it starts with [systemd][5] – controversial, hated and reviled in many quarters – it also covers System V, which is still used by many a sysadmin.
+
+#### Section 10, 11 & 12 – Networking and Administration
+
+Section 10 makes you dive into network infrastructure, where it covers the basics of Virtual Private Networks (OpenVPN), OpenSSH, PKI credentials and some basics of information security. It also gets into the basics of DNS, DHCP and IPv6, and ends with some tools that can help in troubleshooting network issues.
+
+Section 11 starts with the basic configuration and workflow of a mail server running postfix. It goes a bit into depth, as there is much to play with. It then moves on to the popular Apache web server, the FTP file server, NFS, and CIFS with Windows shares via Samba. Again, much to explore therein.
+
+Section 12 starts with advanced administration topics such as RAID and LVM and when one is better than the other. It then gets into virtualization and Xen, and gives a brief overview of lxc. Again, there is much more to explore than what is shared herein.
+
+![Author Raphael Hertzog at a Debian booth circa 2013 | Image Credit][6]
+
+#### Section 13 – Workstation
+
+Section 13 covers schemas for the X server, display managers, window managers, menu management, and the different desktops, i.e. GNOME, KDE, XFCE and others. It does mention lxde among the others. The one omission I felt, which will probably be updated in a new release, is [Wayland][7] and [Xwayland][8]; this is rectified in the conclusion. Again, there is much to explore in this section as well.
+
+#### Section 14 – Security
+
+Section 14 is somewhat comprehensive on what constitutes security and offers bits of threat analysis, but it stops short, as it shares in the introduction of the chapter itself that it’s a vast topic.
+
+#### Section 15 – Creating a Debian package
+
+Section 15 explains the tools and processes to ‘ _debianize_ ‘ an application so it becomes part of the Debian archive and available for distribution on the 10-odd hardware architectures that Debian supports.
+
+### Pros and Cons
+
+Where Raphael and Roland have excelled is at breaking the visual monotony of the book by using a different style and structure wherever possible from the rest of the reading material. This compels the reader to refresh her eyes while at the same time focusing on the important matter at hand. The different visual style also indicates that the passage is somewhat more important from the authors’ point of view.
+
+One of the drawbacks, if I may call it that, is the absolute absence of humor in the book.
+
+### Final Thoughts
+
+I have been [using Debian][9] for a decade, so lots of it was a refresher for me. Some of it is outdated if I look at it from a buster perspective, but it is invaluable as a historical artifact.
+
+If you are looking to familiarize yourself with Debian, or looking to run Debian 8 or 9 as a production server for your business, I wouldn’t be able to recommend a better book than this.
+
+### Download Debian Administrator’s Handbook
+
+The Debian Handbook has been available in every Debian release after 2012. The [liberation][10] of the Debian Handbook was done in 2012 using [ulule][11].
+
+You can download an electronic version of the Debian Administrator’s Handbook in PDF, ePub or Mobi format from the link below:
+
+[Download Debian Administrator’s Handbook][12]
+
+You can also buy the paperback edition of the book if you want to support the amazing work of the authors.
+
+[Buy the paperback edition][13]
+
+Lastly, if you want to motivate Raphael, you can reward him by donating to his PayPal [account][14].
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/debian-administrators-handbook/
+
+作者:[Shirish][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/shirish/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/Debian-Administrators-Handbook-review.png?resize=800%2C450&ssl=1
+[2]: https://debian-handbook.info/
+[3]: https://www.debian.org/
+[4]: https://www.cups.org
+[5]: https://itsfoss.com/systemd-features/
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/stand-debian-Raphael.jpg?resize=800%2C600&ssl=1
+[7]: https://wayland.freedesktop.org/
+[8]: https://en.wikipedia.org/wiki/X.Org_Server#XWayland
+[9]: https://itsfoss.com/reasons-why-i-love-debian/
+[10]: https://debian-handbook.info/liberation/
+[11]: https://www.ulule.com/debian-handbook/
+[12]: https://debian-handbook.info/get/now/
+[13]: https://debian-handbook.info/get/
+[14]: https://raphaelhertzog.com/
diff --git a/sources/tech/20190208 7 steps for hunting down Python code bugs.md b/sources/tech/20190208 7 steps for hunting down Python code bugs.md
deleted file mode 100644
index 77b2c802a0..0000000000
--- a/sources/tech/20190208 7 steps for hunting down Python code bugs.md
+++ /dev/null
@@ -1,114 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (LazyWolfLin)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (7 steps for hunting down Python code bugs)
-[#]: via: (https://opensource.com/article/19/2/steps-hunting-code-python-bugs)
-[#]: author: (Maria Mckinley https://opensource.com/users/parody)
-
-7 steps for hunting down Python code bugs
-======
-Learn some tricks to minimize the time you spend tracking down the reasons your code fails.
-
-
-It is 3 pm on a Friday afternoon. Why? Because it is always 3 pm on a Friday when things go down. You get a notification that a customer has found a bug in your software. After you get over your initial disbelief, you contact DevOps to find out what is happening with the logs for your app, because you remember receiving a notification that they were being moved.
-
-Turns out they are somewhere you can't get to, but they are in the process of being moved to a web application—so you will have this nifty application for searching and reading them, but of course, it is not finished yet. It should be up in a couple of days. I know, totally unrealistic situation, right? Unfortunately not; it seems logs or log messages often come up missing at just the wrong time. Before we track down the bug, a public service announcement: Regularly check your logs to make sure they are where you think they are and are logging what you think they should log. It's amazing how these things just change when you aren't looking.
-
-OK, so you found the logs or tried the call, and indeed, the customer has found a bug. Maybe you even think you know where the bug is.
-
-You immediately open the file you think might be the problem and start poking around.
-
-### 1. Don't touch your code yet
-
-Go ahead and look at it, maybe even come up with a hypothesis. But before you start mucking about in the code, take that call that creates the bug and turn it into a test. This will be an integration test because although you may have suspicions, you do not yet know exactly where the problem is.
-
-Make sure this test fails. This is important because sometimes the test you make doesn't mimic the broken call; this is especially true if you are using a web or other framework that can obfuscate the tests. Many things may be stored in variables, and it is unfortunately not always obvious, just by looking at the test, what call you are making in the test. I'm not going to say that I have created a test that passed when I was trying to imitate a broken call, but, well, I have, and I don't think that is particularly unusual. Learn from my mistakes.
-
-### 2. Write a failing test
-
-Now that you have a failing test or maybe a test with an error, it is time to troubleshoot. But before you do that, let's do a review of the stack, as this makes troubleshooting easier.
-
-The stack consists of all of the tasks you have started but not finished. So, if you are baking a cake and adding the flour to the batter, then your stack would be:
-
- * Make cake
- * Make batter
- * Add flour
-
-
-
-You have started making your cake, you have started making the batter, and you are adding the flour. Greasing the pan is not on the list since you already finished that, and making the frosting is not on the list because you have not started that.
-
-If you are fuzzy on the stack, I highly recommend playing around on [Python Tutor][1], where you can watch the stack as you execute lines of code.
-
-Now, if something goes wrong with your Python program, the interpreter helpfully prints out the stack for you. This means that whatever the program was doing at the moment it became apparent that something went wrong is on the bottom.
-
-### 3. Always check the bottom of the stack first
-
-Not only is the bottom of the stack where you can see which error occurred, but often the last line of the stack is where you can find the issue. If the bottom doesn't help, and your code has not been linted in a while, it is amazing how helpful running a linter can be. I recommend pylint or flake8. More often than not, it points right to where there is an error that I have been overlooking.
-
-If the error is something that seems obscure, your next move might just be to Google it. You will have better luck if you don't include information that is relevant only to your code, like the name of variables, files, etc. If you are using Python 3 (which you should be), it's helpful to include the 3 in the search; otherwise, Python 2 solutions tend to dominate the top.
-
-Once upon a time, developers had to troubleshoot without the benefit of a search engine. This was a dark time. Take advantage of all the tools available to you.
-
-Unfortunately, sometimes the problem occurred earlier and only became apparent during the line executed on the bottom of the stack. Think about how forgetting to add the baking powder becomes obvious when the cake doesn't rise.
-
-It is time to look up the stack. Chances are quite good that the problem is in your code, and not in the Python core or even in third-party packages, so scan the stack looking for lines in your code first. Plus, it is usually much easier to put a breakpoint in your own code. Stick the breakpoint in your code a little further up the stack and look around to see if things look like they should.
-
-"But Maria," I hear you say, "this is all helpful if I have a stack trace, but I just have a failing test. Where do I start?"
-
-Pdb, the Python Debugger.
-
-Find a place in your code where you know this call should hit. You should be able to find at least one place. Stick a pdb break in there.
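-
-A minimal sketch of what that looks like (the function name here is invented; in Python 3.7 and later, the built-in `breakpoint()` does the same thing):
-
-```
-def process_order(order):
-    import pdb; pdb.set_trace()  # execution pauses here when a test reaches this line
-    ...
-```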
-
-#### A digression
-
-Why not a print statement? I used to depend on print statements. They still come in handy sometimes. But once I started working with complicated code bases, and especially ones making network calls, print just became too slow. I ended up with print statements all over the place, I lost track of where they were and why, and it just got complicated. But there is a more important reason to mostly use pdb. Let's say you put a print statement in and discover that something is wrong—and must have gone wrong earlier. But looking at the function where you put the print statement, you have no idea how you got there. Looking at code is a great way to see where you are going, but it is terrible for learning where you've been. And yes, I have done a grep of my code base looking for where a function is called, but this can get tedious and doesn't narrow it down much with a popular function. Pdb can be very helpful.
-
-You follow my advice, and put in a pdb break and run your test. And it whooshes on by and fails again, with no break at all. Leave your breakpoint in, and run a test already in your test suite that does something very similar to the broken test. If you have a decent test suite, you should be able to find a test that is hitting the same code you think your failed test should hit. Run that test, and when it gets to your breakpoint, do a `w` and look at the stack. If you have no idea by looking at the stack how or where the other call may have gone haywire, then go about halfway up the stack, find some code that belongs to you, and put a breakpoint in that file, one line above the one in the stack trace. Try again with the new test. Keep going back and forth, moving up the stack to figure out where your call went off the rails. If you get all the way up to the top of the trace without hitting a breakpoint, then congratulations, you have found the issue: the call never reaches your code at all (in my case, the app's name was spelled wrong). No experience here, nope, none at all.
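-
-Once you are sitting at the pdb prompt, a handful of commands do most of the work:
-
-```
-(Pdb) w    # print the stack, with the current frame at the bottom
-(Pdb) u    # move up the stack, toward the caller
-(Pdb) d    # move back down the stack
-(Pdb) l    # list the source code around the current line
-(Pdb) c    # continue running until the next breakpoint
-```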
-
-### 4. Change things
-
-If you still feel lost, try making a new test where you vary something slightly. Can you get the new test to work? What is different? What is the same? Try changing something else. Once you have your test, and maybe additional tests in place, it is safe to start changing things in the code to see if you can narrow down the problem. Remember to start troubleshooting with a fresh commit so you can easily back out changes that do not help. (This is a reference to version control. If you aren't using version control, it will change your life. Well, maybe it will just make coding easier. See "[A Visual Guide to Version Control][2]" for a nice introduction.)
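-
-For example, with a clean working tree, experiments are cheap to discard (a minimal git sketch; adapt to your own workflow):
-
-```
-$ git stash          # shelve your experimental changes for later
-$ git checkout -- .  # or throw away all unstaged edits entirely
-```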
-
-### 5. Take a break
-
-In all seriousness, when it stops feeling like a fun challenge or game and starts becoming really frustrating, your best course of action is to walk away from the problem. Take a break. I highly recommend going for a walk and trying to think about something else.
-
-### 6. Write everything down
-
-When you come back, if you aren't suddenly inspired to try something, write down any information you have about the problem. This should include:
-
- * Exactly the call that is causing the problem
- * Exactly what happened, including any error messages or related log messages
- * Exactly what you were expecting to happen
- * What you have done so far to find the problem and any clues that you have discovered while troubleshooting
-
-
-
-Sometimes this is a lot of information, but trust me, it is really annoying trying to pry information out of someone piecemeal. Try to be concise, but complete.
-
-### 7. Ask for help
-
-I often find that just writing down all the information triggers a thought about something I have not tried yet. Sometimes, of course, I realize what the problem is immediately after hitting the submit button. At any rate, if you still have not thought of anything after writing everything down, try sending an email to someone. First, try colleagues or other people involved in your project, then move on to project email lists. Don't be afraid to ask for help. Most people are kind and helpful, and I have found that to be especially true in the Python community.
-
-Maria McKinley will present [Hunting the Bugs][3] at [PyCascades 2019][4], February 23-24 in Seattle.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/2/steps-hunting-code-python-bugs
-
-作者:[Maria McKinley][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/parody
-[b]: https://github.com/lujun9972
-[1]: http://www.pythontutor.com/
-[2]: https://betterexplained.com/articles/a-visual-guide-to-version-control/
-[3]: https://2019.pycascades.com/talks/hunting-the-bugs
-[4]: https://2019.pycascades.com/
diff --git a/sources/tech/20190208 How To Install And Use PuTTY On Linux.md b/sources/tech/20190208 How To Install And Use PuTTY On Linux.md
deleted file mode 100644
index 844d55f040..0000000000
--- a/sources/tech/20190208 How To Install And Use PuTTY On Linux.md
+++ /dev/null
@@ -1,153 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How To Install And Use PuTTY On Linux)
-[#]: via: (https://www.ostechnix.com/how-to-install-and-use-putty-on-linux/)
-[#]: author: (SK https://www.ostechnix.com/author/sk/)
-
-How To Install And Use PuTTY On Linux
-======
-
-
-
-**PuTTY** is a free and open source GUI client that supports a wide range of protocols, including SSH, Telnet, Rlogin, and serial, for Windows and Unix-like operating systems. Generally, Windows admins use PuTTY as an SSH and Telnet client to access remote Linux servers from their local Windows systems. However, PuTTY is not limited to Windows; it is popular among Linux users as well. This guide explains how to install PuTTY on Linux and how to access and manage remote Linux servers using PuTTY.
-
-### Install PuTTY on Linux
-
-PuTTY is available in the official repositories of most Linux distributions. For instance, you can install PuTTY on Arch Linux and its variants using the following command:
-
-```
-$ sudo pacman -S putty
-```
-
-On Debian, Ubuntu, Linux Mint:
-
-```
-$ sudo apt install putty
-```
-
-### How to use PuTTY to access remote Linux systems
-
-Once PuTTY is installed, launch it from the menu or from your application launcher. Alternatively, you can launch it from the Terminal by running the following command:
-
-```
-$ putty
-```
-
-This is how the default PuTTY interface looks:
-
-
-
-As you can see, most of the options are self-explanatory. In the left pane of the PuTTY interface, you can edit various configuration options, such as:
-
- 1. PuTTY session logging,
- 2. Terminal emulation options, including the effects of keys,
- 3. Terminal bell sounds,
- 4. Advanced terminal features,
- 5. The size of the PuTTY window,
- 6. Scrollback in the PuTTY window (the default is 2000 lines),
- 7. The appearance of the PuTTY window and cursor,
- 8. The window border,
- 9. Fonts for text in the PuTTY window,
- 10. Saved login details,
- 11. Proxy details,
- 12. Options to control various protocols, such as SSH, Telnet, Rlogin, serial, etc.,
- 13. And more.
-
-
-
-All options are grouped into clearly named categories for ease of navigation.
-
-### Access a remote Linux server using PuTTY
-
-Click on the **Session** tab in the left pane. Enter the hostname (or IP address) of the remote system you want to connect to. Next, choose the connection type, for example, Telnet, Rlogin, or SSH. The default port number will be selected automatically depending on the connection type you choose; for example, port 22 for SSH, port 23 for Telnet, and so on. If you have changed the default port number, don’t forget to enter it in the **Port** field. I am going to access my remote system via SSH, hence I choose the SSH connection type. After entering the hostname or IP address of the system, click **Open**.
-
-
-
-If this is the first time you have connected to this remote system, PuTTY will display a security alert dialog box asking whether you trust the host you are connecting to. Click **Accept** to add the remote system’s host key to PuTTY’s cache:
-
-![][2]
-
-Next, enter your remote system’s username and password. Congratulations! You’ve successfully connected to your remote system via SSH using PuTTY.
-
-
-
-**Access remote systems configured with key-based authentication**
-
-Some Linux administrators might have configured their remote servers with key-based authentication. For example, when accessing AWS instances from PuTTY, you need to specify the key file’s location. PuTTY supports public-key authentication and uses its own key format ( **.ppk** files).
-
-Enter the hostname or IP address in the Session section. Next, in the **Category** pane, expand **Connection**, expand **SSH**, and then choose **Auth**. Browse to the location of the **.ppk** key file and click **Open**.
-
-![][3]
-
-Click Accept to add the host key if this is the first time you are connecting to the remote system. Finally, enter the remote system’s passphrase (if the key was protected with a passphrase when it was generated) to connect.
-
-**Save PuTTY sessions**
-
-Sometimes you may want to connect to a remote system repeatedly. If so, you can save the session and load it whenever you want, without having to type the hostname (or IP address) and port number every time.
-
-Enter the hostname (or IP address), provide a session name, and click **Save**. If you have a key file, make sure you have already specified its location before hitting the **Save** button.
-
-![][4]
-
-Now, choose the session name under the **Saved sessions** tab, click **Load**, and then click **Open** to launch it.
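-
-If you prefer the terminal, PuTTY can also load a saved session or open a connection directly from the command line (the session name below is just an example):
-
-```
-$ putty -load "ubuntu-server"
-$ putty -ssh sk@192.168.225.22 -P 22
-```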
-
-**Transferring files to remote systems using the PuTTY Secure Copy Client (pscp)**
-
-Usually, Linux users and admins use the **scp** command-line tool to transfer files from a local Linux system to remote Linux servers. PuTTY has a dedicated client named the **PuTTY Secure Copy Client** ( **PSCP** for short) to do this job. If you’re using Windows on your local system, you may need this tool to transfer files from the local system to remote systems. PSCP can be used on both Linux and Windows systems.
-
-The following command copies **file.txt** from my Arch Linux system to the remote Ubuntu system.
-
-```
-pscp -i test.ppk file.txt sk@192.168.225.22:/home/sk/
-```
-
-Here,
-
- * **-i test.ppk** : The key file used to access the remote system,
- * **file.txt** : The file to be copied to the remote system,
- * **sk@192.168.225.22** : The username and IP address of the remote system,
- * **/home/sk/** : The destination path.
-
-
-
-To copy a directory, use the **-r** (recursive) option like below:
-
-```
- pscp -i test.ppk -r dir/ sk@192.168.225.22:/home/sk/
-```
-
-To transfer files from Windows to remote Linux server using pscp, run the following command from command prompt:
-
-```
-pscp -i test.ppk c:\documents\file.txt sk@192.168.225.22:/home/sk/
-```
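-
-Copying in the other direction works the same way. For example, to download a file from the remote system into the current local directory:
-
-```
-pscp -i test.ppk sk@192.168.225.22:/home/sk/file.txt .
-```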
-
-You now know what PuTTY is, and how to install it and use it to access remote systems. You have also learned how to transfer files to remote systems from the local system using the pscp program.
-
-And that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-install-and-use-putty-on-linux/
-
-作者:[SK][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[b]: https://github.com/lujun9972
-[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[2]: http://www.ostechnix.com/wp-content/uploads/2019/02/putty-2.png
-[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/putty-4.png
-[4]: http://www.ostechnix.com/wp-content/uploads/2019/02/putty-5.png
diff --git a/sources/tech/20190211 How To Remove-Delete The Empty Lines In A File In Linux.md b/sources/tech/20190211 How To Remove-Delete The Empty Lines In A File In Linux.md
deleted file mode 100644
index a7b2c06a16..0000000000
--- a/sources/tech/20190211 How To Remove-Delete The Empty Lines In A File In Linux.md
+++ /dev/null
@@ -1,192 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How To Remove/Delete The Empty Lines In A File In Linux)
-[#]: via: (https://www.2daygeek.com/remove-delete-empty-lines-in-a-file-in-linux/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-How To Remove/Delete The Empty Lines In A File In Linux
-======
-
-Sometimes you may want to remove or delete the empty lines in a file in Linux.
-
-If so, you can use one of the methods below to achieve it.
-
-It can be done in many ways, but I have listed the simplest methods in this article.
-
-You may be aware that the grep, awk, and sed commands are specialized for textual data manipulation.
-
-If you would like to read more about these kinds of topics, see our articles on **[creating a file of a specific size in Linux][1]** in multiple ways, **[creating a file in Linux][2]** in multiple ways, and **[removing a matching string from a file in Linux][3]**.
-
-These fall into the advanced-commands category because they are used in many shell scripts to get things done.
-
-It can be done using the following commands:
-
- * **`sed Command:`** Stream editor for filtering and transforming text.
- * **`grep Command:`** Prints lines that match patterns.
- * **`cat Command:`** Concatenates files and prints them on the standard output.
- * **`tr Command:`** Translates or deletes characters.
- * **`awk Command:`** The awk utility executes programs written in the awk programming language, which is specialized for textual data manipulation.
- * **`perl Command:`** Perl is a programming language specially designed for text editing.
-
-
-
-To test this, I created a file called `2daygeek.txt` with some text and empty lines. Its contents are below.
-
-```
-$ cat 2daygeek.txt
-2daygeek.com is a best Linux blog to learn Linux.
-
-It's FIVE years old blog.
-
-This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
-
-He got two GIRL babes.
-
-Her names are Tanisha & Renusha.
-```
-
-Now everything is ready, and I’m going to test this in multiple ways.
-
-### How To Remove/Delete The Empty Lines In A File In Linux Using the sed Command?
-
-Sed is a stream editor. A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline).
-
-```
-$ sed '/^$/d' 2daygeek.txt
-2daygeek.com is a best Linux blog to learn Linux.
-It's FIVE years old blog.
-This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
-He got two GIRL babes.
-Her names are Tanisha & Renusha.
-```
-
-Details follow:
-
- * **`sed:`** The command itself.
- * **`//:`** Delimits the search pattern.
- * **`^:`** Matches the start of a line.
- * **`$:`** Matches the end of a line, so `^$` matches empty lines.
- * **`d:`** Deletes the matching lines.
- * **`2daygeek.txt:`** Source file name.
-
-
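-Note that sed only prints the filtered result here; the source file itself is unchanged. To save the result, redirect the output to a new file, or let sed edit the file in place. The `.bak` suffix below is just an example; it tells sed to keep a backup of the original:
-
-```
-$ sed -i.bak '/^$/d' 2daygeek.txt
-```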
-
-### How To Remove/Delete The Empty Lines In A File In Linux Using the grep Command?
-
-grep searches for PATTERNS in each FILE. PATTERNS is one or more patterns separated by newline characters, and grep prints each line that matches a pattern.
-
-```
-$ grep . 2daygeek.txt
-or
-$ grep -Ev "^$" 2daygeek.txt
-or
-$ grep -v -e '^$' 2daygeek.txt
-2daygeek.com is a best Linux blog to learn Linux.
-It's FIVE years old blog.
-This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
-He got two GIRL babes.
-Her names are Tanisha & Renusha.
-```
-
-Details follow:
-
- * **`grep:`** The command itself.
- * **`.:`** Matches any single character, so only non-empty lines are selected.
- * **`^:`** Matches the start of a line.
- * **`$:`** Matches the end of a line (so `^$` matches empty lines).
- * **`-E:`** Use extended regular expressions for pattern matching.
- * **`-e:`** Specify a pattern to match.
- * **`-v:`** Select the non-matching lines.
- * **`2daygeek.txt:`** Source file name.
-
-
-
-### How To Remove/Delete The Empty Lines In A File In Linux Using the awk Command?
-
-The awk utility shall execute programs written in the awk programming language, which is specialized for textual data manipulation. An awk program is a sequence of patterns and corresponding actions.
-
-```
-$ awk NF 2daygeek.txt
-or
-$ awk '!/^$/' 2daygeek.txt
-or
-$ awk '/./' 2daygeek.txt
-2daygeek.com is a best Linux blog to learn Linux.
-It's FIVE years old blog.
-This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
-He got two GIRL babes.
-Her names are Tanisha & Renusha.
-```
-
-Details follow:
-
- * **`awk:`** The command itself.
- * **`NF:`** The number of fields on a line; it is zero for empty lines, so `awk NF` prints only non-empty lines.
- * **`//:`** Delimits the search pattern.
- * **`^:`** Matches the start of a line.
- * **`$:`** Matches the end of a line.
- * **`.:`** Matches any single character.
- * **`!:`** Negates the pattern, so lines matching `^$` are skipped.
- * **`2daygeek.txt:`** Source file name.
-
-
-
-### How To Delete The Empty Lines In A File In Linux Using a Combination of the cat And tr Commands?
-
-cat stands for concatenate. It is very frequently used in Linux to read data from a file.
-
-cat is one of the most frequently used commands on Unix-like operating systems. It offers three functions related to text files: displaying the content of a file, combining multiple files into a single output, and creating new files.
-
-tr translates, squeezes, and/or deletes characters from standard input, writing to standard output.
-
-```
-$ cat 2daygeek.txt | tr -s '\n'
-2daygeek.com is a best Linux blog to learn Linux.
-It's FIVE years old blog.
-This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
-He got two GIRL babes.
-Her names are Tanisha & Renusha.
-```
-
-Details follow:
-
- * **`cat:`** The command that reads the file.
- * **`tr:`** The command that squeezes the newlines.
- * **`|:`** The pipe symbol; it passes the output of the first command as input to the next command.
- * **`-s:`** Squeezes each sequence of a repeated character listed in the last specified SET into a single occurrence.
- * **`'\n':`** The newline character; squeezing repeated newlines removes the empty lines.
- * **`2daygeek.txt:`** Source file name.
-
-
-
-### How To Remove/Delete The Empty Lines In A File In Linux Using the perl Command?
-
-Perl stands for “Practical Extraction and Reporting Language”. Perl is a programming language specially designed for text editing. It is now widely used for a variety of purposes, including Linux system administration, network programming, and web development. In the one-liner below, **-n** loops over the input lines, **-e** supplies the program to run, and `/\S/` matches any line that contains a non-whitespace character, so only non-empty lines are printed.
-
-```
-$ perl -ne 'print if /\S/' 2daygeek.txt
-2daygeek.com is a best Linux blog to learn Linux.
-It's FIVE years old blog.
-This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
-He got two GIRL babes.
-Her names are Tanisha & Renusha.
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/remove-delete-empty-lines-in-a-file-in-linux/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/create-a-file-in-specific-certain-size-linux/
-[2]: https://www.2daygeek.com/linux-command-to-create-a-file/
-[3]: https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/
diff --git a/sources/tech/20190212 Top 10 Best Linux Media Server Software.md b/sources/tech/20190212 Top 10 Best Linux Media Server Software.md
new file mode 100644
index 0000000000..8fcea6343a
--- /dev/null
+++ b/sources/tech/20190212 Top 10 Best Linux Media Server Software.md
@@ -0,0 +1,229 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top 10 Best Linux Media Server Software)
+[#]: via: (https://itsfoss.com/best-linux-media-server)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Top 10 Best Linux Media Server Software
+======
+
+Did someone tell you that Linux is just for programmers? That is so wrong! You have got a lot of great tools for [digital artists][1], [writers][2] and musicians.
+
+We have covered such tools in the past. Today it’s going to be slightly different. Instead of creating new digital content, let’s talk about consuming it.
+
+You have probably heard of media servers. Basically, this software (and sometimes dedicated gadgets) lets you view your local or cloud media (music, videos, etc.) in an intuitive interface. You can even use it to stream the content to other devices on your network. Sort of a personal Netflix.
+
+In this article, we will talk about the best media software available for Linux that you can use as a media player or as media server software, as per your requirements.
+
+Some of these applications can also be used with Google’s Chromecast and Amazon’s Firestick.
+
+### Best Media Server Software for Linux
+
+![Best Media Server Software for Linux][3]
+
+The Linux media server software mentioned here is in no particular order of ranking.
+
+I have tried to provide installation instructions for Ubuntu and Debian-based distributions. It’s not possible to list installation steps for every Linux distribution for all the media servers mentioned here, so please take no offence.
+
+A couple of entries in this list are not open source. If that’s the case, I have highlighted it appropriately.
+
+### 1\. Kodi
+
+![Kodi Media Server][4]
+
+Kodi is one of the most popular media server software and players. Recently, Kodi 18.0 dropped in with a bunch of improvements, including support for Digital Rights Management (DRM) decryption, game emulators, ROMs, voice control, and more.
+
+It is completely free and open source software, with an active community for discussions and support. The user interface for Kodi is beautiful. I didn’t have the chance to use it in its early days, but I was amazed to see such a good UI for a Linux application.
+
+It has great playback support, so you can add any supported third-party media service for content or manually add ripped video files to watch.
+
+#### How to install Kodi
+
+Type in the following commands in the terminal to install the latest version of Kodi via its [official PPA][5].
+
+```
+sudo apt-get install software-properties-common
+sudo add-apt-repository ppa:team-xbmc/ppa
+sudo apt-get update
+sudo apt-get install kodi
+```
+
+To know more about installing a development build or upgrading Kodi, refer to the [official installation guide][6].
+
+### 2\. Plex
+
+![Plex Media Server][7]
+
+Plex is yet another impressive media player that can also be used as media server software. It is a great alternative to Kodi for users who mostly utilize it to create an offline network of their media collection to sync and watch across multiple devices.
+
+Unlike Kodi, **Plex is not entirely open source**. It does offer a free account in order to use it. In addition, it offers premium pricing plans to unlock more features and gain greater control over your media, while also getting detailed insight into how Plex is being used and by whom.
+
+If you are an audiophile, you will love the integration of Plex with the [TIDAL][8] music streaming service. You can also set up Live TV by adding it to your tuner.
+
+#### How to install Plex
+
+You can simply download the .deb file available on their official webpage and install it directly (or using [GDebi][9]).
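+
+For example, assuming the downloaded package is in your current directory (the exact filename varies by version), a command-line install with GDebi looks like this:
+
+```
+sudo apt install gdebi-core
+sudo gdebi plexmediaserver_*.deb
+```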
+
+### 3\. Jellyfin
+
+![Emby media server][10]
+
+Yet another open source media server software with a bunch of features. [Jellyfin][11] is actually a fork of the Emby media server. It may be one of the best free options out there, but multi-platform support still isn’t quite there yet.
+
+You can run it in a browser or utilize Chromecast; however, you will have to wait if you want the Android app or support for more devices.
+
+#### How to install Jellyfin
+
+Jellyfin provides [detailed documentation][12] on how to install it from the binary packages/images available for Linux, Docker, and more.
+
+You will also find it easy to install from the repository via the command line on Debian-based distributions. Check out their [installation guide][13] for more information.
+
+### 4\. LibreELEC
+
+![libreELEC][14]
+
+LibreELEC is an interesting media server software which is based on Kodi v18.0. They have recently released a new version (9.0.0) with a complete overhaul of the core OS support, hardware compatibility and user experience.
+
+Of course, being based on Kodi, it also has DRM support. In addition, you can utilize its generic Linux builds or the special builds tailored for Raspberry Pi boards, WeTek devices, and more.
+
+#### How to install LibreELEC
+
+You can download the installer from their [official site][15]. For detailed instructions on how to use it, please refer to the [installation guide][16].
+
+### 5\. OpenFLIXR Media Server
+
+![OpenFLIXR Media Server][17]
+
+Want something that complements the Plex media server and is also compatible with VirtualBox or VMware? You got it!
+
+OpenFLIXR is an automated media server software that integrates with Plex to provide all its features, along with the ability to auto-download TV shows and movies from torrents. It even fetches subtitles automatically, giving you a seamless experience when coupled with the Plex media software.
+
+You can also automate your home theater with it installed. In case you do not want to run it on a physical machine, it supports VMware, VirtualBox, and Hyper-V as well. The best part: it is an open source solution based on Ubuntu Server.
+
+#### How to install OpenFLIXR
+
+The easiest way is to install VirtualBox. After you do that, just download the image from the [official website][18] and import it.
+
+### 6\. MediaPortal
+
+![MediaPortal][19]
+
+MediaPortal is just another simple open source media server software with a decent user interface. It all depends on your personal preference, even though I would recommend Kodi over it.
+
+You can play DVDs, stream videos on your local network, and listen to music as well. It does not offer a fancy set of features, but it covers the ones you will mostly need.
+
+It gives you the option to choose from two different versions (one that is stable, and a second that tries to incorporate new features but could be unstable).
+
+#### How to install MediaPortal
+
+Depending on what you want to set up (a TV server only or a complete server setup), follow the [official setup guide][20] to install it properly.
+
+### 7\. Gerbera
+
+![Gerbera Media Center][21]
+
+A simple media server implementation for streaming over your local network. It supports transcoding, which converts media into a format your device supports.
+
+If you have been following media server options for a very long time, you might recognize this as the rebranded (and improved) version of MediaTomb. Even though it is not a popular choice among Linux users, it is still usable when all else fails, or for someone who prefers a straightforward, basic media server.
+
+#### How to install Gerbera
+
+Type in the following commands in the terminal to install it on any Ubuntu-based distro:
+
+```
+sudo apt install gerbera
+```
+
+For other Linux distributions, refer to the [documentation][22].
+
+### 8\. OSMC (Open Source Media Center)
+
+![OSMC Open Source Media Center][23]
+
+It is an elegant-looking media server software originally based on the Kodi media center. I was quite impressed with the user interface. It is simple and robust, and a free and open source solution. In a nutshell, it has all the essential features you would expect in media server software.
+
+You can also opt to purchase OSMC’s flagship device. It will play just about anything up to 4K standards with HD audio. In addition, it supports Raspberry Pi builds and the first-gen Apple TV.
+
+#### How to install OSMC
+
+If your device is compatible, just select your operating system, download the device installer from the official [download page][24], and create a bootable image to install it.
+
+### 9\. Universal Media Server
+
+![][25]
+
+Yet another simple addition to this list. Universal Media Server does not offer any fancy features, but it helps you transcode and stream video and audio without needing much configuration.
+
+It supports Xbox 360, PS3, and just about any other [DLNA][26]-capable device.
+
+#### How to install Universal Media Center
+
+You can find all the packages listed on [FossHub][27], but you should follow the [official forum][28] for instructions on installing the package you downloaded from the website.
+
+### 10\. Red5 Media Server
+
+![Red5 Media Server][29]Image Credit: [Red5 Server][30]
+
+A free and open source media server tailored for enterprise usage. You can use it for live-streaming solutions, whether for entertainment or video conferencing.
+
+They also offer paid licensing options for mobile platforms and high-scalability deployments.
+
+#### How to install Red5
+
+Even though it is not the quickest installation method, follow the [installation guide on GitHub][31] to get started with the server without needing to tinker around.
+
+### Wrapping Up
+
+Every media server software listed here has its own advantages; pick the one that suits your requirements and give it a try.
+
+Did we miss any of your favorite media server software? Let us know about it in the comments below!
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/best-linux-media-server
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/best-linux-graphic-design-software/
+[2]: https://itsfoss.com/open-source-tools-writers/
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/best-media-server-linux.png?resize=800%2C450&ssl=1
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kodi-18-media-server.jpg?fit=800%2C450&ssl=1
+[5]: https://itsfoss.com/ppa-guide/
+[6]: https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/plex.jpg?fit=800%2C368&ssl=1
+[8]: https://tidal.com/
+[9]: https://itsfoss.com/gdebi-default-ubuntu-software-center/
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/emby-server.jpg?fit=800%2C373&ssl=1
+[11]: https://jellyfin.github.io/
+[12]: https://jellyfin.readthedocs.io/en/latest/
+[13]: https://jellyfin.readthedocs.io/en/latest/administrator-docs/installing/
+[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/libreelec.jpg?resize=800%2C600&ssl=1
+[15]: https://libreelec.tv/downloads_new/
+[16]: https://libreelec.wiki/libreelec_usb-sd_creator
+[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/openflixr-media-server.jpg?fit=800%2C449&ssl=1
+[18]: http://www.openflixr.com/#Download
+[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/mediaportal.jpg?ssl=1
+[20]: https://www.team-mediaportal.com/wiki/display/MediaPortal1/Quick+Setup
+[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/gerbera-server-softwarei.jpg?fit=800%2C583&ssl=1
+[22]: http://docs.gerbera.io/en/latest/install.html
+[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/osmc-server.jpg?fit=800%2C450&ssl=1
+[24]: https://osmc.tv/download/
+[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/universal-media-server.jpg?ssl=1
+[26]: https://en.wikipedia.org/wiki/Digital_Living_Network_Alliance
+[27]: https://www.fosshub.com/Universal-Media-Server.html?dwl=UMS-7.8.0.tgz
+[28]: https://www.universalmediaserver.com/forum/viewtopic.php?t=10275
+[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/red5.jpg?resize=800%2C364&ssl=1
+[30]: https://www.red5server.com/
+[31]: https://github.com/Red5/red5-server/wiki/Installation-on-Linux
+[32]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/best-media-server-linux.png?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190213 How to build a WiFi picture frame with a Raspberry Pi.md b/sources/tech/20190213 How to build a WiFi picture frame with a Raspberry Pi.md
new file mode 100644
index 0000000000..615f7620ed
--- /dev/null
+++ b/sources/tech/20190213 How to build a WiFi picture frame with a Raspberry Pi.md
@@ -0,0 +1,135 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to build a WiFi picture frame with a Raspberry Pi)
+[#]: via: (https://opensource.com/article/19/2/wifi-picture-frame-raspberry-pi)
+[#]: author: (Manuel Dewald https://opensource.com/users/ntlx)
+
+How to build a WiFi picture frame with a Raspberry Pi
+======
+DIY a digital photo frame that streams photos from the cloud.
+
+
+
+Digital picture frames are really nice because they let you enjoy your photos without having to print them out. Plus, adding and removing digital files is a lot easier than opening a traditional frame and swapping the picture inside when you want to display a new photo. Even so, it's still a bit of overhead to remove your SD card, USB stick, or other storage from a digital picture frame, plug it into your computer, and copy new pictures onto it.
+
+An easier option is a digital picture frame that gets its pictures over WiFi, for example from a cloud service. Here's how to make one.
+
+### Gather your materials
+
+ * Old [TFT][1] LCD screen
+ * HDMI-to-DVI cable (as the TFT screen supports DVI)
+ * Raspberry Pi 3
+ * Micro SD card
+ * Raspberry Pi power supply
+ * Keyboard
+ * Mouse (optional)
+
+
+
+Connect the Raspberry Pi to the display using the cable and attach the power supply.
+
+### Install Raspbian
+
+Download and flash Raspbian to the Micro SD card by following these [directions][2]. Plug the Micro SD card into the Raspberry Pi, boot it up, and configure your WiFi. My first action after a new Raspbian installation is usually running **sudo raspi-config**. There I change the hostname (e.g., to **picframe** ) in Network Options and enable SSH to work remotely on the Raspberry Pi in Interfacing Options. Then connect to the Raspberry Pi over SSH (for example, with **ssh pi@picframe** if you used that hostname).
+
+### Build and install the cloud client
+
+I use [Nextcloud][3] to synchronize my pictures, but you could use NFS, [Dropbox][4], or whatever else fits your needs to upload pictures to the frame.
+
+If you use Nextcloud, get a client for Raspbian by following these [instructions][5]. This is handy for placing new pictures on your picture frame and will give you the client application you may be familiar with on a desktop PC. When connecting the client application to your Nextcloud server, make sure to select only the folder where you'll store the images you want to be displayed on the picture frame.
+
+### Set up the slideshow
+
+The easiest way I've found to set up the slideshow is with a [lightweight slideshow project][6] built for exactly this purpose. There are some alternatives, like configuring a screensaver, but this application appears to be the simplest to set up.
+
+On your Raspberry Pi, download the binaries from the latest release, unpack them, and move them to an executable folder:
+
+```
+wget https://github.com/NautiluX/slide/releases/download/v0.9.0/slide_pi_stretch_0.9.0.tar.gz
+tar xf slide_pi_stretch_0.9.0.tar.gz
+sudo mv slide_0.9.0/slide /usr/local/bin/
+```
+
+Install the dependencies:
+
+```
+sudo apt install libexif12 qt5-default
+```
+
+Run the slideshow by executing the command below (don't forget to modify the path to your images). If you access your Raspberry Pi via SSH, set the **DISPLAY** variable to start the slideshow on the display attached to the Raspberry Pi.
+
+```
+DISPLAY=:0.0 slide -p /home/pi/nextcloud/picframe
+```
+
+### Autostart the slideshow
+
+To autostart the slideshow on Raspbian Stretch, create the following folder and add an **autostart** file to it:
+
+```
+mkdir -p /home/pi/.config/lxsession/LXDE/
+vi /home/pi/.config/lxsession/LXDE/autostart
+```
+
+Insert the following commands to autostart your slideshow. The **slide** command can be adjusted to your needs:
+
+```
+@xset s noblank
+@xset s off
+@xset -dpms
+@slide -t 60 -o 200 -p /home/pi/nextcloud/picframe
+```
+
+Disable screen blanking, which the Raspberry Pi normally does after 10 minutes, by editing the following file:
+
+```
+sudo vi /etc/lightdm/lightdm.conf
+```
+
+and adding these two lines to the end:
+
+```
+[SeatDefaults]
+xserver-command=X -s 0 -dpms
+```
+
+### Configure a power-on schedule
+
+You can schedule your picture frame to turn on and off at specific times by using two simple cronjobs. For example, say you want it to turn on automatically at 7am and turn off at 11pm. Run **crontab -e** and insert the following two lines:
+
+```
+0 23 * * * /opt/vc/bin/tvservice -o
+
+0 7 * * * /opt/vc/bin/tvservice -p && sudo systemctl restart display-manager
+```
+
+Note that this won't turn the Raspberry Pi's power on and off; it will just turn off HDMI, which turns the screen off. The first line powers off HDMI at 11pm. The second line brings the display back up and restarts the display manager at 7am.
+
+### Add a final touch
+
+By following these simple steps, you can create your own WiFi picture frame. If you want to give it a nicer look, build a wooden frame for the display.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/wifi-picture-frame-raspberry-pi
+
+作者:[Manuel Dewald][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ntlx
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Thin-film-transistor_liquid-crystal_display
+[2]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
+[3]: https://nextcloud.com/
+[4]: http://dropbox.com/
+[5]: https://github.com/nextcloud/client_theming#building-on-debian
+[6]: https://github.com/NautiluX/slide/releases/tag/v0.9.0
diff --git a/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md b/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md
new file mode 100644
index 0000000000..3b9af595d6
--- /dev/null
+++ b/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Earliest Linux Distros: Before Mainstream Distros Became So Popular)
+[#]: via: (https://itsfoss.com/earliest-linux-distros/)
+[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
+
+The Earliest Linux Distros: Before Mainstream Distros Became So Popular
+======
+
+In this throwback history article, we look back at how some of the earliest Linux distributions evolved and came into being as we know them today.
+
+![][1]
+
+Here we explore how the idea of popular distros such as Red Hat, Debian, Slackware, SUSE, Ubuntu, and many others came into being after the first Linux kernel became available.
+
+Linux was initially released in 1991 as just a kernel; the distros we know today were made possible with the help of numerous collaborators throughout the world, who created shells, libraries, compilers, and related packages to make it a complete operating system.
+
+### 1\. The first known “distro” by HJ Lu
+
+The way we know Linux distributions today goes back to 1992, when the first known distro-like tools to get access to Linux were released by HJ Lu. The release consisted of two 5.25” floppy diskettes:
+
+![Linux 0.12 Boot and Root Disks | Photo Credit][2]
+
+ * **LINUX 0.12 BOOT DISK** : The “boot” disk, used to boot the system first.
+ * **LINUX 0.12 ROOT DISK** : The “root” disk, which provided a command prompt for access to the Linux file system after booting.
+
+
+
+To install 0.12 on a hard drive, one had to use a hex editor to edit its master boot record (MBR), which was quite a complex process, especially during that era.
+
+Feeling too nostalgic?
+
+You can [install cool-retro-term application][3] that gives you a Linux terminal in the vintage looks of the 90’s computers.
+
+### 2\. MCC Interim Linux
+
+![MCC Linux 0.99.14, 1993 | Image Credit][4]
+
+Initially released in the same year as “LINUX 0.12” by Owen Le Blanc of the Manchester Computing Centre in England, MCC Interim Linux was the first Linux distribution for novice users, with a menu-driven installer and end-user/programming tools. Also distributed as a collection of diskettes, it could be installed on a system to provide a basic text-based environment.
+
+MCC Interim Linux was much more user-friendly than 0.12, and the installation process on a hard drive was much easier and similar to modern ways. It did not require a hex editor to edit the MBR.
+
+Though it was first released in February 1992, it was also available for download through FTP since November that year.
+
+### 3\. TAMU Linux
+
+![TAMU Linux | Image Credit][5]
+
+TAMU Linux was developed by Aggies at Texas A&M, together with the Texas A&M Unix & Linux Users Group, in May 1992 and was called TAMU 1.0A. It was the first Linux distribution to offer the X Window System instead of just a text-based operating system.
+
+### 4\. Softlanding Linux System (SLS)
+
+![SLS Linux 1.05, 1994 | Image Credit][6]
+
+“Gentle Touchdowns for DOS Bailouts” was their slogan! SLS was released by Peter McDonald in May 1992. SLS was quite widely used and popular during its time and greatly promoted the idea of Linux. But due to a decision by the developers to change the executable format in the distro, users stopped using it.
+
+Many of the popular distros the present community is most familiar with evolved from SLS. Two of them are:
+
+ * **Slackware** : Created by Patrick Volkerding in 1993, Slackware is based on SLS and was one of the very first Linux distributions.
+ * **Debian** : An initiative by Ian Murdock, Debian was also released in 1993 after moving on from the SLS model. The very popular Ubuntu distro we know today is based on Debian.
+
+
+
+### 5\. Yggdrasil
+
+![LGX Yggdrasil Fall 1993 | Image Credit][7]
+
+Released in December 1992, Yggdrasil was the first distro to give birth to the idea of Live Linux CDs. It was developed by Yggdrasil Computing, Inc., founded by Adam J. Richter in Berkeley, California. It could automatically configure itself to the system hardware, “Plug-and-Play” style, which is a standard and well-known feature today. The later versions of Yggdrasil included a hack for running any proprietary MS-DOS CD-ROM driver within Linux.
+
+![Yggdrasil’s Plug-and-Play Promo | Image Credit][8]
+
+Their motto was “Free Software For The Rest of Us”.
+
+In the late 90s, one very popular distro was [Mandriva][9], first released in 1998 by unifying the French _Mandrake Linux_ distribution with the Brazilian _Conectiva Linux_ distribution. It had an 18-month release lifetime for updates to Linux and system software, while desktop-based updates were released every year. It also had server versions with five years of support. Now we have [Open Mandriva][10].
+
+If you have more nostalgic distros from the earliest days of Linux to share, please do so in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/earliest-linux-distros/
+
+作者:[Avimanyu Bandyopadhyay][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/avimanyu/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/earliest-linux-distros.png?resize=800%2C450&ssl=1
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-0.12-Floppies.jpg?ssl=1
+[3]: https://itsfoss.com/cool-retro-term/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/MCC-Interim-Linux-0.99.14-1993.jpg?fit=800%2C600&ssl=1
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/TAMU-Linux.jpg?ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/SLS-1.05-1994.jpg?ssl=1
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/LGX_Yggdrasil_CD_Fall_1993.jpg?fit=781%2C800&ssl=1
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Yggdrasil-Linux-Summer-1994.jpg?ssl=1
+[9]: https://en.wikipedia.org/wiki/Mandriva_Linux
+[10]: https://www.openmandriva.org/
diff --git a/sources/tech/20190215 Make websites more readable with a shell script.md b/sources/tech/20190215 Make websites more readable with a shell script.md
new file mode 100644
index 0000000000..06b748cfb5
--- /dev/null
+++ b/sources/tech/20190215 Make websites more readable with a shell script.md
@@ -0,0 +1,258 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Make websites more readable with a shell script)
+[#]: via: (https://opensource.com/article/19/2/make-websites-more-readable-shell-script)
+[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
+
+Make websites more readable with a shell script
+======
+Calculate the contrast ratio between your website's text and background to make sure your site is easy to read.
+
+
+
+If you want people to find your website useful, they need to be able to read it. The colors you choose for your text can affect the readability of your site. Unfortunately, a popular trend in web design is to use low-contrast colors when printing text, such as gray text on a white background. Maybe that looks really cool to the web designer, but it is really hard for many of us to read.
+
+The W3C provides Web Content Accessibility Guidelines, which includes guidance to help web designers pick text and background colors that can be easily distinguished from each other. This is called the "contrast ratio." The W3C definition of the contrast ratio requires several calculations: given two colors, you first compute the relative luminance of each, then calculate the contrast ratio. The ratio will fall in the range 1 to 21 (typically written 1:1 to 21:1). The higher the contrast ratio, the more the text will stand out against the background. For example, black text on a white background is highly visible and has a contrast ratio of 21:1. And white text on a white background is unreadable at a contrast ratio of 1:1.
+
+The [W3C says body text][1] should have a contrast ratio of at least 4.5:1 with headings at least 3:1. But that seems to be the bare minimum. The W3C also recommends at least 7:1 for body text and at least 4.5:1 for headings.
+
+Calculating the contrast ratio can be a chore, so it's best to automate it. I've done that with this handy Bash script. In general, the script does these things:
+
+ 1. Gets the text color and background color
+ 2. Computes the relative luminance of each
+ 3. Calculates the contrast ratio
+
+
+
+### Get the colors
+
+You may know that every color on your monitor can be represented by red, green, and blue (R, G, and B). To calculate the relative luminance of a color, my script will need to know the red, green, and blue components of the color. Ideally, my script would read this information as separate R, G, and B values. Web designers might know the specific RGB code for their favorite colors, but most humans don't know RGB values for the different colors. Instead, most people reference colors by names like "red" or "gold" or "maroon."
+
+Fortunately, the GNOME [Zenity][2] tool has a color-picker app that lets you use different methods to select a color, then returns the RGB values in a predictable format of "rgb( **R** , **G** , **B** )". Using Zenity makes it easy to get a color value:
+
+```
+color=$( zenity --title 'Set text color' --color-selection --color='black' )
+```
+
+In case the user (accidentally) clicks the Cancel button, the script assumes a color:
+
+```
+if [ $? -ne 0 ] ; then
+ echo '** color canceled .. assume black'
+ color='rgb(0,0,0)'
+fi
+```
+
+My script does something similar to set the background color value as **$background**.
+
+### Compute the relative luminance
+
+Once you have the foreground color in **$color** and the background color in **$background** , the next step is to compute the relative luminance for each. On its website, the [W3C provides an algorithm][3] to compute the relative luminance of a color.
+
+> For the sRGB colorspace, the relative luminance of a color is defined as
+> **L = 0.2126 * R + 0.7152 * G + 0.0722 * B** where R, G and B are defined as:
+>
+> if RsRGB <= 0.03928 then R = RsRGB/12.92
+> else R = ((RsRGB+0.055)/1.055) ^ 2.4
+>
+> if GsRGB <= 0.03928 then G = GsRGB/12.92
+> else G = ((GsRGB+0.055)/1.055) ^ 2.4
+>
+> if BsRGB <= 0.03928 then B = BsRGB/12.92
+> else B = ((BsRGB+0.055)/1.055) ^ 2.4
+>
+> and RsRGB, GsRGB, and BsRGB are defined as:
+>
+> RsRGB = R8bit/255
+>
+> GsRGB = G8bit/255
+>
+> BsRGB = B8bit/255
+
+Since Zenity returns color values in the format "rgb( **R** , **G** , **B** )," the script can easily pull apart the R, G, and B values to compute the relative luminance. AWK makes this a simple task, using the comma as the field separator ( **-F,** ) and using AWK's **substr()** string function to pick just the text we want from the "rgb( **R** , **G** , **B** )" color value:
+
+```
+R=$( echo $color | awk -F, '{print substr($1,5)}' )
+G=$( echo $color | awk -F, '{print $2}' )
+B=$( echo $color | awk -F, '{n=length($3); print substr($3,1,n-1)}' )
+```
+
+**(For more on extracting and displaying data with AWK, [get our AWK cheat sheet][4].)**
+
+Calculating the final relative luminance is best done using the BC calculator. BC supports the simple if-then-else needed in the calculation, which makes this part simple. But since BC cannot directly calculate exponentiation using a non-integer exponent, we need to do some extra math using the natural logarithm instead, via the identity x^y = e(y * l(x)), where **e()** and **l()** are BC's exponential and natural logarithm functions:
+
+```
+echo "scale=4
+rsrgb=$R/255
+gsrgb=$G/255
+bsrgb=$B/255
+if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) )
+if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) )
+if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) )
+0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l
+```
+
+This passes several instructions to BC, including the if-then-else statements that are part of the relative luminance formula. BC then prints the final value.
+
+### Calculate the contrast ratio
+
+With the relative luminance of the text color and the background color, now the script can calculate the contrast ratio. The [W3C determines the contrast ratio][5] with this formula:
+
+> (L1 + 0.05) / (L2 + 0.05), where
+> L1 is the relative luminance of the lighter of the colors, and
+> L2 is the relative luminance of the darker of the colors
+
+Given two relative luminance values **$r1** and **$r2** , it's easy to calculate the contrast ratio using the BC calculator:
+
+```
+echo "scale=2
+if ( $r1 > $r2 ) { l1=$r1; l2=$r2 } else { l1=$r2; l2=$r1 }
+(l1 + 0.05) / (l2 + 0.05)" | bc
+```
+
+This uses an if-then-else statement to determine which value ( **$r1** or **$r2** ) is the lighter or darker color. BC performs the resulting calculation and prints the result, which the script can store in a variable.
+
+### The final script
+
+With the above, we can pull everything together into a final script. I use Zenity to display the final result in a text box:
+
+```
+#!/bin/sh
+# script to calculate contrast ratio of colors
+
+# read color and background color:
+# zenity returns values like 'rgb(255,140,0)' and 'rgb(255,255,255)'
+
+color=$( zenity --title 'Set text color' --color-selection --color='black' )
+if [ $? -ne 0 ] ; then
+ echo '** color canceled .. assume black'
+ color='rgb(0,0,0)'
+fi
+
+background=$( zenity --title 'Set background color' --color-selection --color='white' )
+if [ $? -ne 0 ] ; then
+ echo '** background canceled .. assume white'
+ background='rgb(255,255,255)'
+fi
+
+# compute relative luminance:
+
+function luminance()
+{
+ R=$( echo $1 | awk -F, '{print substr($1,5)}' )
+ G=$( echo $1 | awk -F, '{print $2}' )
+ B=$( echo $1 | awk -F, '{n=length($3); print substr($3,1,n-1)}' )
+
+ echo "scale=4
+rsrgb=$R/255
+gsrgb=$G/255
+bsrgb=$B/255
+if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) )
+if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) )
+if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) )
+0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l
+}
+
+lum1=$( luminance $color )
+lum2=$( luminance $background )
+
+# compute contrast
+
+function contrast()
+{
+ echo "scale=2
+if ( $1 > $2 ) { l1=$1; l2=$2 } else { l1=$2; l2=$1 }
+(l1 + 0.05) / (l2 + 0.05)" | bc
+}
+
+rel=$( contrast $lum1 $lum2 )
+
+# print results
+
+( cat<
+```
+
+Alternatively, add the following to the beginning of each of your BATS test scripts:
+
+```
+#!/usr/bin/env ./test/libs/bats/bin/bats
+load 'libs/bats-support/load'
+load 'libs/bats-assert/load'
+```
+
+and run **chmod +x** on them. This will a) make them executable with the BATS installed in **./test/libs/bats** and b) include these helper libraries. BATS test scripts are typically stored in the **test** directory and named for the script being tested, but with the **.bats** extension. For example, a BATS script that tests **bin/build** should be called **test/build.bats**.
+
+You can also run an entire set of BATS test files by passing a shell wildcard pattern to BATS, e.g., **./test/libs/bats/bin/bats test/*.bats**.
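+
+For example, with the layout described above, running a single test file and running the whole suite look like this:
+
+```
+# run a single test file
+./test/libs/bats/bin/bats test/build.bats
+
+# run every test file in the test directory
+./test/libs/bats/bin/bats test/*.bats
+```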
+
+### Organizing libraries and scripts for BATS coverage
+
+Bash scripts and libraries must be organized in a way that efficiently exposes their inner workings to BATS. In general, library functions and shell scripts that run many commands when they are called or executed are not amenable to efficient BATS testing.
+
+For example, [build.sh][4] is a typical script that many people write. It is essentially a big pile of code. Some might even put this pile of code in a function in a library. But it's impossible to run a big pile of code in a BATS test and cover all possible types of failures it can encounter in separate test cases. The only way to test this pile of code with sufficient coverage is to break it into many small, reusable, and, most importantly, independently testable functions.
+
+It's straightforward to add more functions to a library. An added benefit is that some of these functions can become surprisingly useful in their own right. Once you have broken your library function into lots of smaller functions, you can **source** the library in your BATS test and run the functions as you would any other command to test them.
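+
+As a minimal sketch of this pattern, suppose a library **lib/helpers.bash** defines a small **to_upper** function; the file and function names here are illustrative, not part of the example repository:
+
+```
+#!/usr/bin/env ./test/libs/bats/bin/bats
+load 'libs/bats-support/load'
+load 'libs/bats-assert/load'
+
+# source the library under test; the path is relative to this test file
+source "${BATS_TEST_DIRNAME}/../lib/helpers.bash"
+
+@test ".to_upper upcases its argument" {
+  run to_upper "hello"
+  assert_success
+  assert_output "HELLO"
+}
+```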
+
+Bash scripts must also be broken down into multiple functions, which the main part of the script should call when the script is executed. In addition, there is a very useful trick to make it much easier to test Bash scripts with BATS: Take all the code that is executed in the main part of the script and move it into a function, called something like **run_main**. Then, add the following to the end of the script:
+
+```
+if [[ "${BASH_SOURCE[0]}" == "${0}" ]]
+then
+ run_main
+fi
+```
+
+This bit of extra code does something special. It makes the script behave differently when it is executed as a script than when it is brought into the environment with **source**. This trick enables the script to be tested the same way a library is tested, by sourcing it and testing the individual functions. For example, here is [build.sh refactored for better BATS testability][5].
+
+### Writing and running tests
+
+As mentioned above, BATS is a TAP-compliant testing framework with a syntax and output that will be familiar to those who have used other TAP-compliant testing suites, such as JUnit, RSpec, or Jest. Its tests are organized into individual test scripts. Test scripts are organized into one or more descriptive **@test** blocks that describe the unit of the application being tested. Each **@test** block will run a series of commands that prepares the test environment, runs the command to be tested, and makes assertions about the exit status and output of the tested command. Many assertion functions are imported with the **bats** , **bats-assert** , and **bats-support** libraries, which are loaded into the environment at the beginning of the BATS test script. Here is a typical BATS test block:
+
+```
+@test "requires CI_COMMIT_REF_SLUG environment variable" {
+ unset CI_COMMIT_REF_SLUG
+ assert_empty "${CI_COMMIT_REF_SLUG}"
+ run some_command
+ assert_failure
+ assert_output --partial "CI_COMMIT_REF_SLUG"
+}
+```
+
+If a BATS script includes **setup** and/or **teardown** functions, they are automatically executed by BATS before and after each test block runs. This makes it possible to create environment variables, test files, and do other things needed by one or all tests, then tear them down after each test runs. [**Build.bats**][6] is a full BATS test of our newly formatted **build.sh** script. (The **mock_docker** command in this test will be explained below, in the section on mocking/stubbing.)
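+
+As a hedged sketch (the variable and file names below are illustrative), **setup** and **teardown** often look something like this:
+
+```
+setup() {
+  export ENVIRONMENT='test'
+  # create a scratch file under the TMP directory BATS provides
+  export TEST_FILE="${BATS_TMPDIR}/test_input.txt"
+  echo 'fixture data' > "${TEST_FILE}"
+}
+
+teardown() {
+  rm -f "${TEST_FILE}"
+}
+```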
+
+When the test script runs, BATS uses **exec** to run each **@test** block as a separate subprocess. This makes it possible to export environment variables and even functions in one **@test** without affecting other **@test** blocks or polluting your current shell session. The output of a test run is in a standard format that can be understood by humans and parsed or manipulated programmatically by TAP consumers. Here is an example of the output for the **CI_COMMIT_REF_SLUG** test block when it fails:
+
+```
+ ✗ requires CI_COMMIT_REF_SLUG environment variable
+ (from function `assert_output' in file test/libs/bats-assert/src/assert.bash, line 231,
+ in test file test/ci_deploy.bats, line 26)
+ `assert_output --partial "CI_COMMIT_REF_SLUG"' failed
+
+ -- output does not contain substring --
+ substring (1 lines):
+ CI_COMMIT_REF_SLUG
+ output (3 lines):
+ ./bin/deploy.sh: join_string_by: command not found
+ oc error
+ Could not login
+ --
+
+ ** Did not delete , as test failed **
+
+1 test, 1 failure
+```
+
+Here is the output of a successful test:
+
+```
+✓ requires CI_COMMIT_REF_SLUG environment variable
+```
+
+### Helpers
+
+Like any shell script or library, BATS test scripts can include helper libraries to share common code across tests or enhance their capabilities. These helper libraries, such as **bats-assert** and **bats-support** , can even be tested with BATS.
+
+Libraries can be placed in the same test directory as the BATS scripts or in the **test/libs** directory if the number of files in the test directory gets unwieldy. BATS provides the **load** function that takes a path to a Bash file relative to the script being tested (e.g., **test** , in our case) and sources that file. Files must end with the **.bash** extension, but the path passed to the **load** function can't include the extension. **build.bats** loads the **bats-assert** and **bats-support** libraries, a small **[helpers.bash][7]** library, and a **docker_mock.bash** library (described below) with the following code placed at the beginning of the test script below the interpreter magic line:
+
+```
+load 'libs/bats-support/load'
+load 'libs/bats-assert/load'
+load 'helpers'
+load 'docker_mock'
+```
+
+### Stubbing test input and mocking external calls
+
+The majority of Bash scripts and libraries execute functions and/or executables when they run. Often they are programmed to behave in specific ways based on the exit status or output ( **stdout** , **stderr** ) of these functions or executables. To properly test these scripts, it is often necessary to make fake versions of these commands that are designed to behave in a specific way during a specific test, a process called "stubbing." It may also be necessary to spy on the program being tested to ensure it calls a specific command, or that it calls a specific command with specific arguments, a process called "mocking." For more on this, check out this great [discussion of mocking and stubbing][8] in Ruby RSpec, which applies to any testing system.
+
+The Bash shell provides tricks that can be used in your BATS test scripts to do mocking and stubbing. All require the use of the Bash **export** command with the **-f** flag to export a function that overrides the original function or executable. This must be done before the tested program is executed. Here is a simple example that overrides the **cat** executable:
+
+```
+function cat() { echo "THIS WOULD CAT ${*}"; }
+export -f cat
+```
+
+This method can override a function in the same manner. If a test needs to override a function within the script or library being tested, it is important to source the tested script or library before the function is stubbed or mocked. Otherwise, the stub/mock will be replaced with the actual function when the script is sourced. Also, make sure to stub/mock before you run the command you're testing. Here is an example from **build.bats** that mocks the **raise** function described in **build.sh** to ensure a specific error message is raised by the login function:
+
+```
+@test ".login raises on oc error" {
+ source ${profile_script}
+ function raise() { echo "${1} raised"; }
+ export -f raise
+ run login
+ assert_failure
+ assert_output -p "Could not login raised"
+}
+```
+
+Normally, it is not necessary to unset a stub/mock function after the test, since **export** only affects the current subprocess during the **exec** of the current **@test** block. However, it is possible to mock/stub commands (e.g. **cat** , **sed** , etc.) that the BATS **assert** * functions use internally. These mock/stub functions must be **unset** before these assert commands are run, or they will not work properly. Here is an example from **build.bats** that mocks **sed** , runs the **build_deployable** function, and unsets **sed** before running any assertions:
+
+```
+@test ".build_deployable prints information, runs docker build on a modified Dockerfile.production and publish_image when its not a dry_run" {
+ local expected_dockerfile='Dockerfile.production'
+ local application='application'
+ local environment='environment'
+ local expected_original_base_image="${application}"
+ local expected_candidate_image="${application}-candidate:${environment}"
+ local expected_deployable_image="${application}:${environment}"
+ source ${profile_script}
+ mock_docker build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t "${expected_deployable_image}" -
+ function publish_image() { echo "publish_image ${*}"; }
+ export -f publish_image
+ function sed() {
+ echo "sed ${*}" >&2;
+ echo "FROM application-candidate:environment";
+ }
+ export -f sed
+ run build_deployable "${application}" "${environment}"
+ assert_success
+ unset sed
+ assert_output --regexp "sed.*${expected_dockerfile}"
+ assert_output -p "Building ${expected_original_base_image} deployable ${expected_deployable_image} FROM ${expected_candidate_image}"
+ assert_output -p "FROM ${expected_candidate_image} piped"
+ assert_output -p "build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t ${expected_deployable_image} -"
+ assert_output -p "publish_image ${expected_deployable_image}"
+}
+```
+
+Sometimes the same command, e.g. **foo** , will be invoked multiple times with different arguments in the same function being tested. These situations require the creation of a set of functions:
+
+ * **mock_foo** : takes the expected arguments as input and persists them to a TMP file
+ * **foo** : the mocked version of the command, which processes each call against the persisted list of expected arguments. This must be exported with **export -f**.
+ * **cleanup_foo** : removes the TMP file, for use in **teardown** functions. It can check that the **@test** block succeeded before removing the file.
+
+
+
+Since this functionality is often reused in different tests, it makes sense to create a helper library that can be loaded like other libraries.
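+
+Here is a minimal sketch of such a trio for a hypothetical command **foo** ; the state file name and the exact matching behavior are assumptions, not the actual helper library code:
+
+```
+# record an expected call, one line of arguments per call
+mock_foo() {
+  echo "${*}" >> "${BATS_TMPDIR}/mock_foo.args"
+}
+
+# the mock itself: succeed only when called with an expected argument list
+foo() {
+  if grep -qxF "${*}" "${BATS_TMPDIR}/mock_foo.args"; then
+    echo "foo ${*}"
+  else
+    echo "unexpected call: foo ${*}" >&2
+    return 1
+  fi
+}
+export -f foo
+
+# remove the persisted expectations, typically from a teardown function
+cleanup_foo() {
+  rm -f "${BATS_TMPDIR}/mock_foo.args"
+}
+```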
+
+A good example is **[docker_mock.bash][9]**. It is loaded into **build.bats** and used in any test block that tests a function that calls the Docker executable. A typical test block using **docker_mock** looks like:
+
+```
+@test ".publish_image fails if docker push fails" {
+ setup_publish
+ local expected_image="image"
+ local expected_publishable_image="${CI_REGISTRY_IMAGE}/${expected_image}"
+ source ${profile_script}
+ mock_docker tag "${expected_image}" "${expected_publishable_image}"
+ mock_docker push "${expected_publishable_image}" and_fail
+ run publish_image "${expected_image}"
+ assert_failure
+ assert_output -p "tagging ${expected_image} as ${expected_publishable_image}"
+ assert_output -p "tag ${expected_image} ${expected_publishable_image}"
+ assert_output -p "pushing image to gitlab registry"
+ assert_output -p "push ${expected_publishable_image}"
+}
+```
+
+This test sets up an expectation that Docker will be called twice with different arguments, with the second call set to fail. It then runs the tested command and checks the exit status and the expected calls to Docker.
+
+One aspect of BATS introduced by **mock_docker.bash** is the **${BATS_TMPDIR}** environment variable, which BATS sets at the beginning to allow tests and helpers to create and destroy TMP files in a standard location. The **mock_docker.bash** library will not delete its persisted mocks file if a test fails, but it will print where it is located so it can be viewed and deleted. You may need to periodically clean old mock files out of this directory.
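+
+If old mock files do pile up, a **find** one-liner can clear them out; the file name pattern below is an assumption about how the helper names its state files:
+
+```
+# delete mock state files older than one day; adjust the pattern as needed
+find "${BATS_TMPDIR:-/tmp}" -name 'mock_docker.*' -mtime +1 -delete
+```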
+
+One note of caution regarding mocking/stubbing: The **build.bats** test consciously violates a dictum of testing that states: [Don't mock what you don't own!][10] This dictum demands that calls to commands that the test's developer didn't write, like **docker** , **cat** , **sed** , etc., should be wrapped in their own libraries, which should be mocked in tests of scripts that use them. The wrapper libraries should then be tested without mocking the external commands.
+
+This is good advice and ignoring it comes with a cost. If the Docker CLI API changes, the test scripts will not detect this change, resulting in a false positive that won't manifest until the tested **build.sh** script runs in a production setting with the new version of Docker. Test developers must decide how stringently they want to adhere to this standard, but they should understand the tradeoffs involved with their decision.
+
+### Conclusion
+
+Introducing a testing regime to any software development project creates a tradeoff between a) the increase in time and organization required to develop and maintain code and tests and b) the increased confidence developers have in the integrity of the application over its lifetime. Testing regimes may not be appropriate for all scripts and libraries.
+
+In general, scripts and libraries that meet one or more of the following should be tested with BATS:
+
+ * They are worthy of being stored in source control
+ * They are used in critical processes and relied upon to run consistently for a long period of time
+ * They need to be modified periodically to add/remove/modify their function
+ * They are used by others
+
+
+
+Once the decision is made to apply a testing discipline to one or more Bash scripts or libraries, BATS provides the comprehensive testing features that are available in other software development environments.
+
+Acknowledgment: I am indebted to [Darrin Mann][11] for introducing me to BATS testing.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/testing-bash-bats
+
+作者:[Darin London][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dmlond
+[b]: https://github.com/lujun9972
+[1]: https://github.com/sstephenson/bats
+[2]: http://testanything.org/
+[3]: https://git-scm.com/book/en/v2/Git-Tools-Submodules
+[4]: https://github.com/dmlond/how_to_bats/blob/preBats/build.sh
+[5]: https://github.com/dmlond/how_to_bats/blob/master/bin/build.sh
+[6]: https://github.com/dmlond/how_to_bats/blob/master/test/build.bats
+[7]: https://github.com/dmlond/how_to_bats/blob/master/test/helpers.bash
+[8]: https://www.codewithjason.com/rspec-mocks-stubs-plain-english/
+[9]: https://github.com/dmlond/how_to_bats/blob/master/test/docker_mock.bash
+[10]: https://github.com/testdouble/contributing-tests/wiki/Don't-mock-what-you-don't-own
+[11]: https://github.com/dmann
diff --git a/sources/tech/20190222 Q4OS Linux Revives Your Old Laptop with Windows- Looks.md b/sources/tech/20190222 Q4OS Linux Revives Your Old Laptop with Windows- Looks.md
new file mode 100644
index 0000000000..93549ac45b
--- /dev/null
+++ b/sources/tech/20190222 Q4OS Linux Revives Your Old Laptop with Windows- Looks.md
@@ -0,0 +1,192 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Q4OS Linux Revives Your Old Laptop with Windows’ Looks)
+[#]: via: (https://itsfoss.com/q4os-linux-review)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+Q4OS Linux Revives Your Old Laptop with Windows’ Looks
+======
+
+There are quite a few Linux distros available that seek to make new users feel at home by [imitating the look and feel of Windows][1]. Today, we’ll look at a distro that attempts to do this with limited success. We’ll be looking at [Q4OS][2].
+
+### Q4OS Linux focuses on performance on low-end hardware
+
+![Q4OS Linux desktop after first boot][3]Q4OS after first boot
+
+> Q4OS is a fast and powerful operating system based on the latest technologies while offering highly productive desktop environment. We focus on security, reliability, long-term stability and conservative integration of verified new features. System is distinguished by speed and very low hardware requirements, runs great on brand new machines as well as legacy computers. It is also very applicable for virtualization and cloud computing.
+>
+> Q4OS Website
+
+Q4OS currently has two different release branches: 2.# Scorpion and 3.# Centaurus. Scorpion is the Long-Term-Support (LTS) release and will be supported for five years. That support should last until 2022. The most recent version of Scorpion is 2.6, which is based on [Debian][4] 9 Stretch. Centaurus is considered the testing branch and is based on Debian Buster. Centaurus will become the LTS when Debian Buster becomes stable.
+
+Q4OS is one of the few Linux distros that still support both 32-bit and 64-bit. It has also been ported to ARM devices, specifically the Raspberry Pi and the Pinebook.
+
+The one major thing that separates Q4OS from the majority of Linux distros is its use of the Trinity Desktop Environment as the default desktop environment.
+
+#### The not-so-famous Trinity Desktop Environment
+
+![][5]Trinity Desktop Environment
+
+I’m sure that most people are unfamiliar with the [Trinity Desktop Environment (TDE)][6]. I didn’t know about it until I discovered Q4OS a couple of years ago. TDE is a fork of [KDE][7], specifically KDE 3.5. TDE was created by Timothy Pearson, and the first release took place in April 2010.
+
+From what I read, it sounds like TDE was created for the same reason as [MATE][8]. Early versions of KDE 4 were prone to crashing, and users were unhappy with the direction the new release was taking, so it was decided to fork the previous release. That is where the similarities end. MATE has taken on a life of its own and grown to become an equal among desktop environments. Development of TDE seems to have slowed; there were two years between the last two point releases.
+
+Quick side note: TDE uses its own fork of Qt 3, named TQt.
+
+#### System Requirements
+
+According to the [Q4OS download page][9], the system requirements differ based on the desktop environment you install.
+
+**TDE Version**
+
+ * At least 300MHz CPU
+ * 128 MB of RAM
+ * 3 GB Storage
+
+
+
+**KDE Version**
+
+ * At least 1GHz CPU
+ * 1 GB of RAM
+ * 5 GB Storage
+
+
+
+You can see from the system requirements that Q4OS is a [lightweight Linux distribution suitable for older computers][10].
+
+#### Included apps by default
+
+The following applications are included in the full install of Q4OS:
+
+ * Google Chrome
+ * Thunderbird
+ * LibreOffice
+ * VLC player
+ * Konqueror browser
+ * Dolphin file manager
+ * AisleRiot Solitaire
+ * Konsole
+ * Software Center
+
+
+ * KMines
+ * Okular
+ * KBounce
+ * DigiKam
+ * Kooka
+ * KolourPaint
+ * KSnapshot
+ * Gwenview
+ * Ark
+
+
+ * KMail
+ * SMPlayer
+ * KRec
+ * Brasero
+ * Amarok player
+ * qpdfview
+ * KOrganizer
+ * KMag
+ * KNotes
+
+
+
+Of course, you can install additional applications through the software center. Since Q4OS is based on Debian, you can also [install applications from deb packages][11].
+
+#### Q4OS can be installed from within Windows
+
+I was able to successfully install Q4OS on my Dell Latitude D630 without any issues. This laptop has an Intel Centrino Duo Core processor running at 2.00 GHz, an NVIDIA Quadro NVS 135M graphics chip, and 4 GB of RAM.
+
+You have a couple of options to choose from when installing Q4OS. You can either install Q4OS from a CD (Live or install) or you can install it from inside Windows. The Windows installer asks for the drive location you want to install to, how much space you want Q4OS to take up, and what login information you want to use.
+
+![][12]Q4OS Windows installer
+
+Compared to most distros, the Live ISOs are small. The KDE version weighs less than 1GB and the TDE version is just a little north of 500 MB.
+
+### Experiencing Q4OS: Feels like older Windows versions
+
+Please note that while there is a KDE installation ISO, I used the TDE installation ISO. The KDE Live CD is a recent addition, so TDE is more in line with the project’s long-term goals.
+
+When you boot into Q4OS for the first time, it feels like you jumped through a time portal and are staring at Windows 2000. The initial app offerings are very slim: you have access to a file manager, a web browser, and not much else. There isn’t even a screenshot tool installed.
+
+![][13]Konqueror file manager
+
+When you try to use the TDE browser (Konqueror), a dialog box pops up recommending using the Desktop Profiler to [install Google Chrome][14] or some other recent web browser.
+
+The Desktop Profiler allows you to choose between a bare-bones, basic or full desktop and which desktop environment you wish to use as default. You can also use the Desktop Profiler to install other desktop environments, such as MATE, Xfce, LXQT, LXDE, Cinnamon and GNOME.
+
+![Q4OS Welcome Screen][15]Q4OS Welcome Screen
+
+Q4OS comes with its own application center. However, the offerings are limited to less than 20 options, including Synaptic, Google Chrome, Chromium, Firefox, LibreOffice, Update Manager, VLC, Multimedia codecs, Thunderbird, LookSwitcher, NVIDIA drivers, Network Manager, Skype, GParted, Wine, Blueman, X2Go server, X2Go Client, and Virtualbox additions.
+
+![][16]Q4OS Software Centre
+
+If you want to install anything else, you need to either use the command line or the [synaptic package manager][17]. Synaptic is a very good package manager and has been very serviceable for many years, but it isn’t quite newbie friendly.
+
+If you install an application from the Software Centre, you are treated to an installer that looks a lot like a Windows installer. I can only imagine that this is for people converting to Linux from Windows.
+
+![][18]Firefox installer
+
+As I mentioned earlier, when you boot into Q4OS’ desktop for the first time, it looks like something out of the 1990s. Thankfully, you can install a utility named LookSwitcher to pick a different theme. Initially, you are only shown half a dozen themes. There are other themes that are considered works-in-progress. You can also enhance the default theme by picking a more vibrant background and making the bottom panel transparent.
+
+![][19]Q4OS using the Debonair theme
+
+### Final Thoughts on Q4OS
+
+I may have mentioned a few times in this review that Q4OS looks like a dated version of Windows. It is obviously a very conscious decision because great care was taken to make even the control panel and file manager look Windows-esque. The problem is that it reminds me more of [ReactOS][20] than something modern. The Q4OS website says that it is made using the latest technology. The look of the system disagrees and will probably put some new users off.
+
+The fact that the install ISOs are smaller than most means that they are very quick to download. Unfortunately, it also means that if you want to be productive, you’ll have to spend quite a bit of time downloading software, either manually or automatically. You’ll also need an active internet connection. There is a reason why most ISOs are several gigabytes.
+
+I made sure to test the Windows installer. I installed a test copy of Windows 10 and ran the Q4OS installer. The process took a few minutes because the installer, which is less than 10 MB, had to download an ISO. When the process was done, I rebooted. I selected Q4OS from the menu, but it looked like I was booting into Windows 10 (I got the big blue circle). I thought that the install had failed, but I eventually got to Q4OS.
+
+One of the few things that I liked about Q4OS was how easy it was to install the NVIDIA drivers. After I logged in for the first time, a little pop-up told me that there were NVIDIA drivers available and asked me if I wanted to install them.
+
+Using Q4OS was definitely an interesting experience, especially using TDE for the first time and the Windows look and feel. However, the lack of apps in the Software Centre and some of the design choices stop me from recommending this distro.
+
+**Do you like Q4OS?**
+
+Have you ever used Q4OS? What is your favorite Debian-based distro? Please let us know in the comments below.
+
+If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][21].
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/q4os-linux-review
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/windows-like-linux-distributions/
+[2]: https://q4os.org/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os1.jpg?resize=800%2C500&ssl=1
+[4]: https://www.debian.org/
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os4.jpg?resize=800%2C412&ssl=1
+[6]: https://www.trinitydesktop.org/
+[7]: https://en.wikipedia.org/wiki/KDE
+[8]: https://en.wikipedia.org/wiki/MATE_(software
+[9]: https://q4os.org/downloads1.html
+[10]: https://itsfoss.com/lightweight-linux-beginners/
+[11]: https://itsfoss.com/list-installed-packages-ubuntu/
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os-windows-installer.jpg?resize=800%2C610&ssl=1
+[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os2.jpg?resize=800%2C606&ssl=1
+[14]: https://itsfoss.com/install-chrome-ubuntu/
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os10.png?ssl=1
+[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os3.jpg?resize=800%2C507&ssl=1
+[17]: https://www.nongnu.org/synaptic/
+[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os5.jpg?resize=800%2C616&ssl=1
+[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os8Debonaire.jpg?resize=800%2C500&ssl=1
+[20]: https://www.reactos.org/
+[21]: http://reddit.com/r/linuxusersgroup
+[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os1.jpg?fit=800%2C500&ssl=1
diff --git a/sources/tech/20190225 How To Identify That The Linux Server Is Integrated With Active Directory (AD).md b/sources/tech/20190225 How To Identify That The Linux Server Is Integrated With Active Directory (AD).md
new file mode 100644
index 0000000000..55d30a7910
--- /dev/null
+++ b/sources/tech/20190225 How To Identify That The Linux Server Is Integrated With Active Directory (AD).md
@@ -0,0 +1,177 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Identify That The Linux Server Is Integrated With Active Directory (AD)?)
+[#]: via: (https://www.2daygeek.com/how-to-identify-that-the-linux-server-is-integrated-with-active-directory-ad/)
+[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
+
+How To Identify That The Linux Server Is Integrated With Active Directory (AD)?
+======
+
+Single Sign-On (SSO) authentication is implemented in most organizations because users need to access multiple applications.
+
+It allows a user to log in with a single ID and password to all of the applications available in the organization.
+
+It uses a centralized authentication system for all the applications.
+
+A while ago, we wrote an article about **[how to integrate a Linux system with AD][1]**.
+
+Today we are going to show you how to check whether a Linux system is integrated with AD, using several methods.
+
+It can be done in four ways, and we will explain them one by one.
+
+ * **`ps Command:`** It reports a snapshot of the current processes.
+ * **`id Command:`** It prints user identity.
+ * **`/etc/nsswitch.conf file:`** The Name Service Switch configuration file.
+ * **`/etc/pam.d/system-auth file:`** The common configuration file for PAM-ified services.
+
+
+
+### How To Identify That The Linux Server Is Integrated With AD Using PS Command?
+
+The ps command displays information about a selection of the active processes.
+
+To integrate the Linux server with AD, we need to use either the `winbind`, `sssd`, or `ldap` service.
+
+So, use the ps command to filter for these services.
+
+If you find any of these services running on the system, then you can conclude that the system is currently integrated with AD using the `winbind`, `sssd`, or `ldap` service.
+
+You might get output similar to the following if the system is integrated with AD using the `SSSD` service.
+
+```
+# ps -ef | grep -i "winbind\|sssd"
+
+root 29912 1 0 2017 ? 00:19:09 /usr/sbin/sssd -f -D
+root 29913 29912 0 2017 ? 04:36:59 /usr/libexec/sssd/sssd_be --domain 2daygeek.com --uid 0 --gid 0 --debug-to-files
+root 29914 29912 0 2017 ? 00:29:28 /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --debug-to-files
+root 29915 29912 0 2017 ? 00:09:19 /usr/libexec/sssd/sssd_pam --uid 0 --gid 0 --debug-to-files
+root 31584 26666 0 13:41 pts/3 00:00:00 grep sssd
+```
+
+You might get output similar to the following if the system is integrated with AD using the `winbind` service.
+
+```
+# ps -ef | grep -i "winbind\|sssd"
+
+root 676 21055 0 2017 ? 00:00:22 winbindd
+root 958 21055 0 2017 ? 00:00:35 winbindd
+root 21055 1 0 2017 ? 00:59:07 winbindd
+root 21061 21055 0 2017 ? 11:48:49 winbindd
+root 21062 21055 0 2017 ? 00:01:28 winbindd
+root 21959 4570 0 13:50 pts/2 00:00:00 grep -i winbind\|sssd
+root 27780 21055 0 2017 ? 00:00:21 winbindd
+```
+
+### How To Identify That The Linux Server Is Integrated With AD Using id Command?
+
+It prints information for a given user name, or for the current user. It displays the UID, GID, user name, primary group name, and secondary group names, etc.
+
+If the Linux system is integrated with AD, then you might get output like the following. The GID clearly shows that the user comes from the AD “domain users” group.
+
+```
+# id daygeek
+
+uid=1918901106(daygeek) gid=1918900513(domain users) groups=1918900513(domain users)
+```
+
+### How To Identify That The Linux Server Is Integrated With AD Using nsswitch.conf file?
+
+The Name Service Switch (NSS) configuration file, `/etc/nsswitch.conf`, is used by the GNU C Library and certain other applications to determine the sources from which to obtain name-service information in a range of categories, and in what order. Each category of information is identified by a database name.
+
+You might get output similar to the following if the system is integrated with AD using the `SSSD` service.
+
+```
+# cat /etc/nsswitch.conf | grep -i "sss\|winbind\|ldap"
+
+passwd: files sss
+shadow: files sss
+group: files sss
+services: files sss
+netgroup: files sss
+automount: files sss
+```
+
+You might get output similar to the following if the system is integrated with AD using the `winbind` service.
+
+```
+# cat /etc/nsswitch.conf | grep -i "sss\|winbind\|ldap"
+
+passwd: files [SUCCESS=return] winbind
+shadow: files [SUCCESS=return] winbind
+group: files [SUCCESS=return] winbind
+```
+
+You might get output similar to the following if the system is integrated with AD using the `ldap` service.
+
+```
+# cat /etc/nsswitch.conf | grep -i "sss\|winbind\|ldap"
+
+passwd: files ldap
+shadow: files ldap
+group: files ldap
+```
+
+### How To Identify That The Linux Server Is Integrated With AD Using system-auth file?
+
+This is the common configuration file for PAM-ified services.
+
+PAM stands for Pluggable Authentication Modules, which provide dynamic authentication support for applications and services in Linux.
+
+The system-auth configuration file provides a common interface for all applications and service daemons calling into the PAM library.
+
+The system-auth configuration file is included from nearly all individual service configuration files with the help of the include directive.
+
+You might get output similar to the following if the system is integrated with AD using the `SSSD` service.
+
+```
+# cat /etc/pam.d/system-auth | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
+or
+# cat /etc/pam.d/system-auth-ac | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
+
+auth sufficient pam_sss.so use_first_pass
+account [default=bad success=ok user_unknown=ignore] pam_sss.so
+password sufficient pam_sss.so use_authtok
+session optional pam_sss.so
+```
+
+You might get output similar to the following if the system is integrated with AD using the `winbind` service.
+
+```
+# cat /etc/pam.d/system-auth | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
+or
+# cat /etc/pam.d/system-auth-ac | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
+
+auth sufficient pam_winbind.so cached_login use_first_pass
+account [default=bad success=ok user_unknown=ignore] pam_winbind.so cached_login
+password sufficient pam_winbind.so cached_login use_authtok
+```
+
+You might get output similar to the following if the system is integrated with AD using the `ldap` service.
+
+```
+# cat /etc/pam.d/system-auth | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
+or
+# cat /etc/pam.d/system-auth-ac | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so"
+
+auth sufficient pam_ldap.so cached_login use_first_pass
+account [default=bad success=ok user_unknown=ignore] pam_ldap.so cached_login
+password sufficient pam_ldap.so cached_login use_authtok
+```
+
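+To put the four checks together, here is a small helper script that reports which indicators are present. It is a rough sketch based only on the checks described above, not an official tool; note that the `sss` name matches the NSS entries and the `pam_sss.so` module used by the SSSD service:
+
+```
+#!/bin/bash
+# rough check of the AD integration indicators described above
+
+# process check (sssd or winbind daemons)
+if ps -ef | grep -iq "[w]inbind\|[s]ssd"; then
+    echo "process check: winbind or sssd is running"
+fi
+
+# NSS and PAM checks (sss covers the SSSD service)
+for svc in sss winbind ldap; do
+    if grep -qE "^(passwd|shadow|group):.*$svc" /etc/nsswitch.conf; then
+        echo "nsswitch.conf: $svc is configured"
+    fi
+    if grep -qs "pam_${svc}.so" /etc/pam.d/system-auth /etc/pam.d/system-auth-ac; then
+        echo "PAM: pam_${svc}.so is configured"
+    fi
+done
+```
+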
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-identify-that-the-linux-server-is-integrated-with-active-directory-ad/
+
+作者:[Vinoth Kumar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/vinoth/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/join-integrate-rhel-centos-linux-system-to-windows-active-directory-ad-domain/
diff --git a/sources/tech/20190225 How to Install VirtualBox on Ubuntu -Beginner-s Tutorial.md b/sources/tech/20190225 How to Install VirtualBox on Ubuntu -Beginner-s Tutorial.md
new file mode 100644
index 0000000000..4ba0580ece
--- /dev/null
+++ b/sources/tech/20190225 How to Install VirtualBox on Ubuntu -Beginner-s Tutorial.md
@@ -0,0 +1,156 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Install VirtualBox on Ubuntu [Beginner’s Tutorial])
+[#]: via: (https://itsfoss.com/install-virtualbox-ubuntu)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+How to Install VirtualBox on Ubuntu [Beginner’s Tutorial]
+======
+
+**This beginner’s tutorial explains various ways to install VirtualBox on Ubuntu and other Debian-based Linux distributions.**
+
+Oracle’s free and open source offering [VirtualBox][1] is an excellent virtualization tool, especially for desktop operating systems. I prefer using it over [VMware Workstation in Linux][2], another virtualization tool.
+
+You can use virtualization software like VirtualBox for installing and using another operating system within a virtual machine.
+
+For example, you can [install Linux on VirtualBox inside Windows][3]. Similarly, you can also [install Windows inside Linux using VirtualBox][4].
+
+You can also use VirtualBox for installing another Linux distribution in your current Linux system. Actually, this is what I use it for. If I hear about a nice Linux distribution, instead of installing it on a real system, I test it on a virtual machine. It’s more convenient when you just want to try out a distribution before making a decision about installing it on your actual machine.
+
+![Linux installed inside Linux using VirtualBox][5]Ubuntu 18.10 installed inside Ubuntu 18.04
+
+In this beginner’s tutorial, I’ll show you various ways of installing Oracle VirtualBox on Ubuntu and other Debian-based distributions.
+
+### Installing VirtualBox on Ubuntu and Debian based Linux distributions
+
+The installation methods mentioned here should also work for other Debian and Ubuntu-based Linux distributions such as Linux Mint, elementary OS etc.
+
+#### Method 1: Install VirtualBox from Ubuntu Repository
+
+**Pros** : Easy installation
+
+**Cons** : Installs older version
+
+The easiest way to install VirtualBox on Ubuntu would be to search for it in the Software Center and install it from there.
+
+![VirtualBox in Ubuntu Software Center][6]VirtualBox is available in Ubuntu Software Center
+
+You can also install it from the command line using the command:
+
+```
+sudo apt install virtualbox
+```
+
+However, if you [check the package version before installing it][7], you’ll see that the VirtualBox provided by Ubuntu’s repository is quite old.
+
+For example, the current VirtualBox version at the time of writing this tutorial is 6.0 but the one in Software Center is 5.2. This means you won’t get the newer features introduced in the [latest version of VirtualBox][8].
+
+#### Method 2: Install VirtualBox using Deb file from Oracle’s website
+
+**Pros** : Easily install the latest version
+
+**Cons** : Can’t upgrade to newer version
+
+If you want to use the latest version of VirtualBox on Ubuntu, the easiest way would be to [use the deb file][9].
+
+Oracle provides ready-to-use binary files for VirtualBox releases. If you look at its download page, you’ll see the option to download the deb installer files for Ubuntu and other distributions.
+
+![VirtualBox Linux Download][10]
+
+You just have to download this deb file and double-click on it to install it. It’s as simple as that.
+
+However, the problem with this method is that you won’t get automatically updated to the newer VirtualBox releases. The only way is to remove the existing version, download the newer version and install it again. That’s not very convenient, is it?
+
+#### Method 3: Install VirtualBox using Oracle’s repository
+
+**Pros** : Automatically updates with system updates
+
+**Cons** : Slightly complicated installation
+
+Now, this is the command line method and it may seem complicated to you, but it has advantages over the previous two methods. You’ll get the latest version of VirtualBox and it will be automatically updated to future releases. That’s what you would want, I presume.
+
+To install VirtualBox using command line, you add the Oracle VirtualBox’s repository in your list of repositories. You add its GPG key so that your system trusts this repository. Now when you install VirtualBox, it will be installed from Oracle’s repository instead of Ubuntu’s repository. If there is a new version released, VirtualBox install will be updated along with the system updates. Let’s see how to do that.
+
+First, add the key for the repository. You can download and add the key using this single command.
+
+```
+wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
+```
+
+```
+Important for Mint users
+
+The next step will work for Ubuntu only. If you are using Linux Mint or some other distribution based on Ubuntu, replace $(lsb_release -cs) in the command with the Ubuntu version your current version is based on. For example, Linux Mint 19 series users should use bionic and Mint 18 series users should use xenial. Something like this
+
+sudo add-apt-repository "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian bionic contrib"
+```
+
+Now add the Oracle VirtualBox repository in the list of repositories using this command:
+
+```
+sudo add-apt-repository "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib"
+```
+
+If you have read my article on [checking Ubuntu version][11], you probably know that ‘lsb_release -cs’ will print the codename of your Ubuntu system.
+
+**Note** : If you see [add-apt-repository command not found][12] error, you’ll have to install software-properties-common package.
+
+Now that you have the correct repository added, refresh the list of available packages through these repositories and install VirtualBox.
+
+```
+sudo apt update && sudo apt install virtualbox-6.0
+```
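+
+Once the installation finishes, you can verify it from the terminal; **VBoxManage** is the command line tool that ships with VirtualBox:
+
+```
+VBoxManage --version
+```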
+
+**Tip** : A good idea would be to type sudo apt install **virtualbox-** and hit tab to see the various VirtualBox versions available for installation and then select one of them by typing it completely.
+
+![Install VirtualBox via terminal][13]
+
+### How to remove VirtualBox from Ubuntu
+
+Now that you have learned to install VirtualBox, I would also mention the steps to remove it.
+
+If you installed it from the Software Center, the easiest way to remove the application is from the Software Center itself. You just have to find it in the [list of installed applications][14] and click the Remove button.
+
+Another way is to use the command line.
+
+```
+sudo apt remove virtualbox virtualbox-*
+```
+
+Note that this will not remove the virtual machines and the files associated with the operating systems you installed using VirtualBox. That’s not entirely a bad thing, because you may want to keep them safe to use later or on some other system.
+
+**In the end…**
+
+I hope you were able to pick one of the methods to install VirtualBox. I’ll also write about using it effectively in another article. For the moment, if you have any tips, suggestions, or questions, feel free to leave a comment below.
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-virtualbox-ubuntu
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://www.virtualbox.org
+[2]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
+[3]: https://itsfoss.com/install-linux-in-virtualbox/
+[4]: https://itsfoss.com/install-windows-10-virtualbox-linux/
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/linux-inside-linux-virtualbox.png?resize=800%2C450&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/virtualbox-ubuntu-software-center.jpg?ssl=1
+[7]: https://itsfoss.com/know-program-version-before-install-ubuntu/
+[8]: https://itsfoss.com/oracle-virtualbox-release/
+[9]: https://itsfoss.com/install-deb-files-ubuntu/
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/virtualbox-download.jpg?resize=800%2C433&ssl=1
+[11]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
+[12]: https://itsfoss.com/add-apt-repository-command-not-found/
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/install-virtualbox-ubuntu-terminal.png?resize=800%2C165&ssl=1
+[14]: https://itsfoss.com/list-installed-packages-ubuntu/
diff --git a/sources/tech/20190225 Netboot a Fedora Live CD.md b/sources/tech/20190225 Netboot a Fedora Live CD.md
new file mode 100644
index 0000000000..2767719b8c
--- /dev/null
+++ b/sources/tech/20190225 Netboot a Fedora Live CD.md
@@ -0,0 +1,187 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Netboot a Fedora Live CD)
+[#]: via: (https://fedoramagazine.org/netboot-a-fedora-live-cd/)
+[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
+
+Netboot a Fedora Live CD
+======
+
+
+
+[Live CDs][1] are useful for many tasks such as:
+
+ * installing the operating system to a hard drive
+ * repairing a boot loader or performing other rescue-mode operations
+ * providing a consistent and minimal environment for web browsing
+ * …and [much more][2].
+
+
+
+As an alternative to using DVDs and USB drives to store your Live CD images, you can upload them to an [iSCSI][3] server where they will be less likely to get lost or damaged. This guide shows you how to load your Live CD images onto an iSCSI server and access them with the [iPXE][4] boot loader.
+
+### Download a Live CD Image
+
+```
+$ MY_RLSE=27
+$ MY_LIVE=$(wget -q -O - https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/$MY_RLSE/Workstation/x86_64/iso | perl -ne '/(Fedora[^ ]*?-Live-[^ ]*?\.iso)(?{print $^N})/;')
+$ MY_NAME=fc$MY_RLSE
+$ wget -O $MY_NAME.iso https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/$MY_RLSE/Workstation/x86_64/iso/$MY_LIVE
+```
+
+The above commands download the Fedora-Workstation-Live-x86_64-27-1.6.iso Fedora Live image and save it as fc27.iso. Change the value of MY_RLSE to download other archived versions. Or you can browse the Fedora download site to get the latest Fedora live image. Versions prior to 21 used different naming conventions, and must be [downloaded manually here][5]. If you download a Live CD image manually, set the MY_NAME variable to the basename of the file without the extension. That way the commands in the following sections will reference the correct file.
+
+### Convert the Live CD Image
+
+Use the livecd-iso-to-disk tool to convert the ISO file to a disk image and add the netroot parameter to the embedded kernel command line:
+
+```
+$ sudo dnf install -y livecd-tools
+$ MY_SIZE=$(du -ms $MY_NAME.iso | cut -f 1)
+$ dd if=/dev/zero of=$MY_NAME.img bs=1MiB count=0 seek=$(($MY_SIZE+512))
+$ MY_SRVR=server-01.example.edu
+$ MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR})
+$ MY_LOOP=$(sudo losetup --show --nooverlap --find $MY_NAME.img)
+$ sudo livecd-iso-to-disk --format --extra-kernel-args netroot=iscsi:$MY_SRVR:::1:iqn.$MY_RVRS:$MY_NAME $MY_NAME.iso $MY_LOOP
+$ sudo losetup -d $MY_LOOP
+```
+
+### Upload the Live Image to your Server
+
+Create a directory on your iSCSI server to store your live images and then upload your modified image to it.
+
+**For releases 21 and greater:**
+
+```
+$ MY_FLDR=/images
+$ scp $MY_NAME.img $MY_SRVR:$MY_FLDR/
+```
+
+**For releases prior to 21:**
+
+```
+$ MY_FLDR=/images
+$ MY_LOOP=$(sudo losetup --show --nooverlap --find --partscan $MY_NAME.img)
+$ sudo tune2fs -O ^has_journal ${MY_LOOP}p1
+$ sudo e2fsck ${MY_LOOP}p1
+$ sudo dd status=none if=${MY_LOOP}p1 | ssh $MY_SRVR "dd of=$MY_FLDR/$MY_NAME.img"
+$ sudo losetup -d $MY_LOOP
+```
+
+### Define the iSCSI Target
+
+Run the following commands on your iSCSI server:
+
+```
+$ sudo -i
+# MY_NAME=fc27
+# MY_FLDR=/images
+# MY_SRVR=`hostname`
+# MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR})
+# cat << END > /etc/tgt/conf.d/$MY_NAME.conf
+<target iqn.$MY_RVRS:$MY_NAME>
+    backing-store $MY_FLDR/$MY_NAME.img
+    readonly 1
+    allow-in-use yes
+</target>
+END
+# tgt-admin --update ALL
+```
+
+### Create a Bootable USB Drive
+
+The [iPXE][4] boot loader has a [sanboot][6] command you can use to connect to and start the live images hosted on your iSCSI server. It can be compiled in many different [formats][7]. The format that works best depends on the type of hardware you’re running. As an example, the following instructions show how to [chain load][8] iPXE from [syslinux][9] on a USB drive.
+
+First, download iPXE and build it in its lkrn format. This should be done as a normal user on a workstation:
+
+```
+$ sudo dnf install -y git
+$ git clone http://git.ipxe.org/ipxe.git $HOME/ipxe
+$ sudo dnf groupinstall -y "C Development Tools and Libraries"
+$ cd $HOME/ipxe/src
+$ make clean
+$ make bin/ipxe.lkrn
+$ cp bin/ipxe.lkrn /tmp
+```
+
+Next, prepare a USB drive with a MSDOS partition table and a FAT32 file system. The below commands assume that you have already connected the USB drive to be formatted. **Be careful that you do not format the wrong drive!**
+
+```
+$ sudo -i
+# dnf install -y parted util-linux dosfstools
+# echo; find /dev/disk/by-id ! -regex '.*-part.*' -name 'usb-*' -exec readlink -f {} \; | xargs -i bash -c "parted -s {} unit MiB print | perl -0 -ne '/^Model: ([^(]*).*\n.*?([0-9]*MiB)/i && print \"Found: {} = \$2 \$1\n\"'"; echo; read -e -i "$(find /dev/disk/by-id ! -regex '.*-part.*' -name 'usb-*' -exec readlink -f {} \; -quit)" -p "Drive to format: " MY_USB
+# umount $MY_USB?
+# wipefs -a $MY_USB
+# parted -s $MY_USB mklabel msdos mkpart primary fat32 1MiB 100% set 1 boot on
+# mkfs -t vfat -F 32 ${MY_USB}1
+```
+
+Finally, install syslinux on the USB drive and configure it to chain load iPXE:
+
+```
+# dnf install -y syslinux-nonlinux
+# syslinux -i ${MY_USB}1
+# dd if=/usr/share/syslinux/mbr.bin of=${MY_USB}
+# MY_MNT=$(mktemp -d)
+# mount ${MY_USB}1 $MY_MNT
+# MY_NAME=fc27
+# MY_SRVR=server-01.example.edu
+# MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR})
+# cat << END > $MY_MNT/syslinux.cfg
+ui menu.c32
+default $MY_NAME
+timeout 100
+menu title SYSLINUX
+label $MY_NAME
+ menu label ${MY_NAME^^}
+ kernel ipxe.lkrn
+ append dhcp && sanboot iscsi:$MY_SRVR:::1:iqn.$MY_RVRS:$MY_NAME
+END
+# cp /usr/share/syslinux/menu.c32 $MY_MNT
+# cp /usr/share/syslinux/libutil.c32 $MY_MNT
+# cp /tmp/ipxe.lkrn $MY_MNT
+# umount ${MY_USB}1
+```
+
+You should be able to use this same USB drive to netboot additional iSCSI targets simply by editing the syslinux.cfg file and adding additional menu entries.
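+
+For instance, a second entry for a hypothetical **fc28** image hosted on the same server could look like this; only the label and the target name change:
+
+```
+label fc28
+  menu label FC28
+  kernel ipxe.lkrn
+  append dhcp && sanboot iscsi:server-01.example.edu:::1:iqn.edu.example.server-01:fc28
+```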
+
+This is just one method of loading iPXE. You could install syslinux directly on your workstation. Another option is to compile iPXE as an EFI executable and place it directly in your [ESP][10]. Yet another is to compile iPXE as a PXE loader and place it on your TFTP server to be referenced by DHCP. The best option depends on your environment.
+
+### Final Notes
+
+ * You may want to add the --filename \EFI\BOOT\grubx64.efi parameter to the sanboot command if you compile iPXE in its EFI format.
+ * It is possible to create custom live images. Refer to [Creating and using live CD][11] for more information.
+ * It is possible to add the --overlay-size-mb and --home-size-mb parameters to the livecd-iso-to-disk command to create live images with persistent storage. However, if you have multiple concurrent users, you’ll need to set up your iSCSI server to manage separate per-user writeable overlays. This is similar to what was shown in the “[How to Build a Netboot Server, Part 4][12]” article.
+ * The live images support a persistenthome option on their kernel command line (e.g. persistenthome=LABEL=HOME). Used together with CHAP-authenticated iSCSI targets, the persistenthome option provides an interesting alternative to NFS for centralized home directories.
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/netboot-a-fedora-live-cd/
+
+作者:[Gregory Bartholomew][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/glb/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Live_CD
+[2]: https://en.wikipedia.org/wiki/Live_CD#Uses
+[3]: https://en.wikipedia.org/wiki/ISCSI
+[4]: https://ipxe.org/
+[5]: https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/
+[6]: http://ipxe.org/cmd/sanboot/
+[7]: https://ipxe.org/appnote/buildtargets#boot_type
+[8]: https://en.wikipedia.org/wiki/Chain_loading
+[9]: https://www.syslinux.org/wiki/index.php?title=SYSLINUX
+[10]: https://en.wikipedia.org/wiki/EFI_system_partition
+[11]: https://docs.fedoraproject.org/en-US/quick-docs/creating-and-using-a-live-installation-image/#proc_creating-and-using-live-cd
+[12]: https://fedoramagazine.org/how-to-build-a-netboot-server-part-4/
diff --git a/sources/tech/20190227 How to Display Weather Information in Ubuntu 18.04.md b/sources/tech/20190227 How to Display Weather Information in Ubuntu 18.04.md
new file mode 100644
index 0000000000..da0c0df203
--- /dev/null
+++ b/sources/tech/20190227 How to Display Weather Information in Ubuntu 18.04.md
@@ -0,0 +1,290 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Display Weather Information in Ubuntu 18.04)
+[#]: via: (https://itsfoss.com/display-weather-ubuntu)
+[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
+
+How to Display Weather Information in Ubuntu 18.04
+======
+
+You’ve got a fresh Ubuntu install and you’re [customizing Ubuntu][1] to your liking. You want the best experience and the best apps for your needs.
+
+The only thing missing is a weather app. Luckily for you, we’ve got you covered. Just make sure you have the Universe repository enabled.
+
+![Tools to Display Weather Information in Ubuntu Linux][2]
+
+### 8 Ways to Display Weather Information in Ubuntu 18.04
+
+Back in the Unity days, there were a few popular options like My Weather Indicator to display weather on your system. Those options have either been discontinued or are no longer available in Ubuntu 18.04 and higher versions.
+
+Fortunately, there are many other options to choose from. Some are minimalist and plain simple to use, some offer detailed information (or even present you with news headlines) and some are made for terminal gurus. Whatever your needs may be, the right app is waiting for you.
+
+**Note:** The presented apps are in no particular order of ranking.
+
+**Top Panel Apps**
+
+These applications usually sit on the top panel of your screen. Good for quick look at the temperature.
+
+#### 1\. OpenWeather Shell Extension
+
+![Open Weather Gnome Shell Extesnsion][3]
+
+**Key features:**
+
+ * Simple to install and customize
+ * Uses OpenWeatherMap (by default)
+ * Many Units and Layout options
+ * Can save multiple locations (that can easily be changed)
+
+
+
+This is a great extension presenting you information in a simple manner. There are multiple ways to install this. It is the weather app that I find myself using the most, because it’s just a simple, no-hassle integrated weather display for the top panel.
+
+**How to Install:**
+
+I recommend reading this [detailed tutorial about using GNOME extensions][4]. The easiest way to install this extension is to open up a terminal and run:
+
+```
+sudo apt install gnome-shell-extension-weather
+```
+
+Then all you have to do is restart the GNOME Shell by pressing:
+
+```
+Alt+F2
+```
+
+Enter **r** and press **Enter**.
+
+Now open up **Tweaks** (gnome tweak tool) and enable **Openweather** in the **Extensions** tab.
+
+#### 2\. gnome-weather
+
+![Gnome Weather App UI][5]
+![Gnome Weather App Top Panel][6]
+
+**Key features:**
+
+ * Pleasant Design
+ * Integrated into Calendar (Top Panel)
+ * Simple Install
+ * Flatpak install available
+
+
+
+This app is great for new users. The installation is only one command and the app is easy to use. Although it doesn’t have as many features as other apps, it is still great if you don’t want to bother with multiple settings and a complex install procedure.
+
+**How to Install:**
+
+All you have to do is run:
+
+```
+sudo apt install gnome-weather
+```
+
+Now search for **Weather** and the app should pop up. After logging out (and logging back in), the Calendar extension will be displayed.
+
+If you prefer, you can get a [flatpak][7] version.
+
+#### 3\. Meteo
+
+![Meteo Weather App UI][8]
+![Meteo Weather System Tray][9]
+
+**Key features:**
+
+ * Great UI
+ * Integrated into System Tray (Top Panel)
+ * Simple Install
+ * Great features (Maps)
+
+
+
+Meteo is a snap app on the heavier side. Most of that weight comes from the great Maps features, with maps presenting temperatures, clouds, precipitations, pressure and wind speed. It’s a distinct feature that I haven’t encountered in any other weather app.
+
+**Note** : After changing location, you might have to quit and restart the app for the changes to be applied in the system tray.
+
+**How to Install:**
+
+Open up the **Ubuntu Software Center** and search for **Meteo**. Install and launch.
+
+**Desktop Apps**
+
+These are basically desktop widgets. They look good and provide more information at a glance.
+
+#### 4\. Temps
+
+![Temps Weather App UI][10]
+
+**Key features:**
+
+ * Beautiful Design
+ * Useful Hotkeys
+ * Hourly Temperature Graph
+
+
+
+Temps is an Electron app with a beautiful UI (though not exactly “light”). The most unique features are the temperature graphs. The hotkeys might feel unintuitive at first, but they prove to be useful in the long run. The app will minimize when you click somewhere else. Just press Ctrl+Shift+W to bring it back.
+
+This app is **Open-Source** , and the developer can’t afford the cost of a faster API key, so you might want to create your own API key at [OpenWeatherMap][11].
+
+**How to Install:**
+
+Go to the website and download the version you need (probably 64-bit). Extract the archive. Open the extracted directory and double-click on **Temps**. Press Ctrl+Shift+W if the window minimizes.
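+
+If you’d rather do the extract-and-run step from a terminal, it looks roughly like this (a sketch; the archive name and format below are illustrative and depend on the release you downloaded):
+
+```
+mkdir -p ~/temps
+tar -xzf temps-linux-x64.tar.gz -C ~/temps   # adjust the file name to your download
+cd ~/temps
+./Temps
+```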
+
+#### 5\. Cumulus
+
+![Cumulus Weather App UI][12]
+
+**Key features:**
+
+ * Color Selector for background and text
+
+ * Re-sizable window
+
+ * Tray Icon (temperature only)
+
+ * Allows multiple instances with different locations etc.
+
+
+
+
+Cumulus is a highly customizable weather app, with a backend supporting Yahoo! Weather and OpenWeatherMap. The UI is great and the installer is simple to use. This app has amazing features. It’s one of the few weather apps that allow for multiple instances. You should definitely try it if you are looking for an experience tailored to your preferences.
+
+**How to Install:**
+
+Go to the website and download the (online) installer. Open up a terminal and **cd** (change directory) to the directory where you downloaded the file.
+
+Then run:
+
+```
+chmod +x Cumulus-online-installer-x64
+./Cumulus-online-installer-x64
+```
+
+Search for **Cumulus** and enjoy the app!
+
+**Terminal Apps**
+
+Are you a terminal dweller? You can check the weather right in your terminal.
+
+#### 6\. WeGo
+
+![WeGo Weather App Terminal][13]
+
+**Key features:**
+
+ * Supports different APIs
+ * Pretty detailed
+ * Customizable config
+ * Multi-language support
+ * 1 to 7 day forecast
+
+
+
+WeGo is a Go app for displaying weather info in the terminal. Its installation can be a little tricky, but it’s easy to set up. You’ll need to register an API key [here][14] (if using **forecast.io** , which is the default). Once you set it up, it’s fairly practical for someone who mostly works in the terminal.
+
+**How to Install:**
+
+I recommend checking out the project’s GitHub page for complete information on installation, setup, and features.
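+
+For reference, installation is typically a single **go get** away. This sketch assumes a working Go toolchain with $GOPATH/bin on your PATH; the repository path is the upstream project:
+
+```
+go get -u github.com/schachmat/wego
+wego    # the first run creates a ~/.wegorc config file; add your API key there
+```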
+
+#### 7\. Wttr.in
+
+![Wttr.in Weather App Terminal][15]
+
+**Key features:**
+
+ * Simple install
+ * Easy to use
+ * Lightweight
+ * 3 day forecast
+ * Moon phase
+
+
+
+If you really live in the terminal, this is the weather app for you. It is as lightweight as it gets. You can specify a location (by default, the app tries to detect your current location) and a few other parameters (e.g., units).
+
+**How to Install:**
+
+Open up a terminal and install Curl:
+
+```
+sudo apt install curl
+```
+
+Then:
+
+```
+curl wttr.in
+```
+
+That’s it. You can specify location and parameters like so (here, **m** requests metric units):
+
+```
+curl wttr.in/london?m
+```
+
+To check out other options type:
+
+```
+curl wttr.in/:help
+```
+
+If you find some settings you enjoy and use frequently, you might want to add an **alias**. To do so, open **~/.bashrc** with your favorite editor (that’s **vim** , terminal wizard), go to the end, and paste in:
+
+```
+alias wttr='curl wttr.in/CITY_NAME?YOUR_PARAMS'
+```
+
+For example:
+
+```
+alias wttr='curl wttr.in/london?m'
+```
+
+Save and close **~/.bashrc** and run the command below to source the new file.
+
+```
+source ~/.bashrc
+```
+
+Now, typing **wttr** in the terminal and pressing Enter should execute your custom command.
+
+**Wrapping Up**
+
+These are a handful of the weather apps available for Ubuntu. We hope our list helped you discover an app fitting your needs, be that something with pleasant aesthetics or just a quick tool.
+
+What is your favorite weather app? Tell us about what you enjoy and why in the comments section.
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/display-weather-ubuntu
+
+作者:[Sergiu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/sergiu/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/gnome-tricks-ubuntu/
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/display-weather-ubuntu.png?resize=800%2C450&ssl=1
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/open_weather_gnome_shell-1-1.jpg?fit=800%2C383&ssl=1
+[4]: https://itsfoss.com/gnome-shell-extensions/
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/gnome_weather_ui.jpg?fit=800%2C599&ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/gnome_weather_top_panel.png?fit=800%2C587&ssl=1
+[7]: https://flatpak.org/
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/meteo_ui.jpg?fit=800%2C547&ssl=1
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/meteo_system_tray.png?fit=800%2C653&ssl=1
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/temps_ui.png?fit=800%2C623&ssl=1
+[11]: https://openweathermap.org/
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/cumulus_ui.png?fit=800%2C651&ssl=1
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/wego_terminal.jpg?fit=800%2C531&ssl=1
+[14]: https://developer.forecast.io/register
+[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/wttr_in_terminal.jpg?fit=800%2C526&ssl=1
diff --git a/sources/tech/20190228 3 open source behavior-driven development tools.md b/sources/tech/20190228 3 open source behavior-driven development tools.md
new file mode 100644
index 0000000000..9c004a14c2
--- /dev/null
+++ b/sources/tech/20190228 3 open source behavior-driven development tools.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (3 open source behavior-driven development tools)
+[#]: via: (https://opensource.com/article/19/2/behavior-driven-development-tools)
+[#]: author: (Christine Ketterlin Fisher https://opensource.com/users/cketterlin)
+
+3 open source behavior-driven development tools
+======
+Having the right motivation is as important as choosing the right tool when implementing BDD.
+
+
+[Behavior-driven development][1] (BDD) seems very easy. Tests are written in an easily readable format that allows for feedback from product owners, business sponsors, and developers. Those tests are living documentation for your team, so you don't need requirements. The tools are easy to use and allow you to automate your test suite. Reports are generated with each test run to document every step and show you where tests are failing.
+
+Quick recap: Easily readable! Living documentation! Automation! Reports! What could go wrong, and why isn't everybody doing this?
+
+### Getting started with BDD
+
+So, you're ready to jump in and can't wait to pick the right open source tool for your team. You want it to be easy to use, automate all your tests, and provide easily understandable reports for each test run. Great, let's get started!
+
+Except, not so fast … First, what is your motivation for trying to implement BDD on your team? If the answer is simply to automate tests, go ahead and choose any of the tools listed below because chances are you're going to see minimal success in the long run.
+
+### My first effort
+
+I manage a team of business analysts (BA) and quality assurance (QA) engineers, but my background is on the business analysis side. About a year ago, I attended a talk where a developer described the benefits of BDD. He said that he and his team had given it a try during their last project. That should have been the first red flag, but I didn't realize it at the time. You cannot simply choose to "give BDD a try." It takes planning, preparation, and forethought about what you want your team to accomplish.
+
+However, you can try various parts of BDD without a large investment, and I eventually realized he and his team had written feature files and automated those tests using Cucumber. I also learned it was an experiment done solely by the team's developers, not the BA or QA staff, which defeats the purpose of understanding the end user's behavior.
+
+During the talk we were encouraged to try BDD, so my test analyst and I went to our boss and said we were willing to give it a shot. And then, we didn't know what to do. We had no guidance, no plan in place, and a leadership team who just wanted to automate testing. I don't think I need to tell you how this story ended. Actually, there wasn't even an end, just a slow fizzle after a few initial attempts at writing behavioral scenarios.
+
+### A fresh start
+
+Fast-forward a year, and I'm at a different company with a team of my own and BDD on the brain. I knew there was value there, but I also knew it went deeper than what I had initially been sold. I spent a lot of time thinking about how BDD could make a positive impact, not only on my team, but on our entire development team. Then I read [Discovery: Explore Behaviour Using Examples][2] by Gaspar Nagy and Seb Rose, and one of the first things I learned was that automation of tests is a benefit of BDD, but it should not be the main goal. No wonder we failed!
+
+This book changed how I viewed BDD and helped me start to fill in the pieces I had been missing. We are now on the (hopefully correct!) path to implementing BDD on our team. It involves active involvement from our product owners, business analysts, and manual and automated testers and buy-in and support from our executive leadership. We have a plan in place for our approach and our measures of success.
+
+We are still writing requirements (don't ever let anyone tell you that these scenarios can completely replace requirements!), but we are doing so with a more critical eye and evaluating where requirements and test scenarios overlap and how we can streamline the two.
+
+I have told the team we cannot even try to automate these tests for at least two quarters, at which point we'll evaluate and determine whether we're ready to move forward or not. Our current priorities are defining our team's standard language, practicing writing given/when/then scenarios, learning the Gherkin syntax, determining where to store these tests, and investigating how to integrate these tests into our pipeline.
+
+### 3 BDD tools to choose
+
+At its core, BDD is a way to help the entire team understand the end user's actions and behaviors, which will lead to more clear requirements, tests, and ultimately higher-quality applications. Before you pick your tool, do your pre-work. Think about your motivation, and understand that while the different parts and pieces of BDD are fairly simple, integrating them into your team is more challenging and needs careful thought and planning. Also, think about where your people fit in.
+
+Every organization has different roles, and BDD should not belong solely to developers nor test automation engineers. If you don't involve the business side, you're never going to gain the full benefit of this methodology. Once you have a strategy defined and are ready to move forward with automating your BDD scenarios, there are several open source tools for you to choose from.
+
+#### Cucumber
+
+[Cucumber][3] is probably the most recognized tool available that supports BDD. It is widely seen as a straightforward tool to learn and is easy to get started with. Cucumber relies on test scenarios that are written in plain text and follow the given/when/then format. Each scenario is an individual test. Scenarios are grouped into features, which is comparable to a test suite. Scenarios must be written in the Gherkin syntax for Cucumber to understand and execute the scenario's steps. The human-readable steps in the scenarios are tied to the step definitions in your code through the Cucumber framework.
+
+To successfully write and automate the scenarios, you need the right mix of business knowledge and technical ability. Identify the skill sets on your team to determine who will write and maintain the scenarios and who will automate them; most likely these should be managed by different roles. Because these tests are executed from the step definitions, reporting is very robust and can show you at which exact step your test failed. Cucumber works well with a variety of browser and API automation tools.
+
+#### JBehave
+
+[JBehave][4] is very similar to Cucumber. Scenarios are still written in the given/when/then format and are easily understandable by the entire team. JBehave supports Gherkin but also has its own JBehave syntax that can be used. Gherkin is more universal, but either option will work as long as you are consistent in your choice. JBehave has more configuration options than Cucumber, and its reports, although very detailed, need more configuration to get feedback from each step. JBehave is a powerful tool, but because it can be more customized, it is not quite as easy to get started with. Teams need to ask themselves exactly what features they need and whether or not learning the tool's various configurations is worth the time investment.
+
+#### Gauge
+
+Where Cucumber and JBehave are specifically designed to work with BDD, [Gauge][5] is not. If automation is your main goal (and not the entire BDD process), it is worth a look. Gauge tests are written in Markdown, which makes them easily readable. However, without a more standard format, such as the given/when/then BDD scenarios, tests can vary widely and, depending on the author, some tests will be much more digestible for business owners than others. Gauge works with multiple languages, so the automation team can leverage what they already use. Gauge also offers reporting with screenshots to show where the tests failed.
+
+### What are your needs?
+
+Implementing BDD allows the team to test the users' behaviors. This can be done without automating any tests at all, but when done correctly, can result in a powerful, reusable test suite. As a team, you will need to identify exactly what your automation needs are and whether or not you are truly going to use BDD or if you would rather focus on automating tests that are written in plain text. Either way, open source tools are available for you to use and to help support your testing evolution.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/behavior-driven-development-tools
+
+作者:[Christine Ketterlin Fisher][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/cketterlin
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Behavior-driven_development
+[2]: https://www.amazon.com/gp/product/1983591254/ref=dbs_a_def_rwt_bibl_vppi_i0
+[3]: https://cucumber.io/
+[4]: https://jbehave.org/
+[5]: https://www.gauge.org/
diff --git a/sources/tech/20190228 MiyoLinux- A Lightweight Distro with an Old-School Approach.md b/sources/tech/20190228 MiyoLinux- A Lightweight Distro with an Old-School Approach.md
new file mode 100644
index 0000000000..3217e304cd
--- /dev/null
+++ b/sources/tech/20190228 MiyoLinux- A Lightweight Distro with an Old-School Approach.md
@@ -0,0 +1,161 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (MiyoLinux: A Lightweight Distro with an Old-School Approach)
+[#]: via: (https://www.linux.com/blog/learn/2019/2/miyolinux-lightweight-distro-old-school-approach)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+MiyoLinux: A Lightweight Distro with an Old-School Approach
+======
+
+
+I must confess, although I often wax poetic about the old ways of the Linux desktop, I much prefer my distributions to help make my daily workflow as efficient as possible. Because of that, my taste in Linux desktop distributions veers very far toward the modern side of things. I want a distribution that integrates apps seamlessly, gives me notifications, looks great, and makes it easy to work with certain services that I use.
+
+However, every so often it’s nice to dip my toes back into those old-school waters and remind myself why I fell in love with Linux in the first place. That’s precisely what [MiyoLinux][1] did for me recently. This lightweight distribution is based on [Devuan][2] and makes use of the [i3 Tiling Window Manager][3].
+
+Why is it important that MiyoLinux is based on Devuan? Because that means it doesn’t use systemd. There are many within the Linux community who’d be happy to make the switch to an old-school Linux distribution that opts out of systemd. If that’s you, MiyoLinux might just charm you into submission.
+
+But don’t think MiyoLinux is going to be as easy to get up and running as, say, Ubuntu Linux, Elementary OS, or Linux Mint. Although it’s not nearly as challenging as Arch or Gentoo, MiyoLinux does approach installation and basic usage a bit differently. Let’s take a look at how this particular distro handles things.
+
+### Installation
+
+The installation GUI of MiyoLinux is pretty basic. The first thing you’ll notice is that you are presented with a good amount of notes, regarding the usage of the MiyoLinux desktop. If you happen to be testing MiyoLinux via VirtualBox, you’ll wind up having to deal with the frustration of not being able to resize the window (Figure 1), as the Guest Additions cannot be installed. This also means mouse integration cannot be enabled during the installation, so you’ll have to tab through the windows and use your keyboard cursor keys and Enter key to make selections.
+
+![MiyoLinux][5]
+
+Figure 1: The first step in the MiyoLinux installation.
+
+[Used with permission][6]
+
+Once you click the Install MiyoLinux button, you’ll be prompted to continue using either “su” or “sudo”. Click the use sudo button to continue with the installation.
+
+The next screen of importance is the Installation Options window (Figure 2), where you can select various options for MiyoLinux (such as encryption, file system labels, disable automatic login, etc.).
+
+![Configuration][8]
+
+Figure 2: Configuration Installation options for MiyoLinux.
+
+[Used with permission][6]
+
+The MiyoLinux installation does not include an automatic partition tool. Instead, you’ll be prompted to run either cfdisk or GParted (Figure 3). If you don’t know your way around cfdisk, select GParted and make use of the GUI tool.
+
+![partitioning ][10]
+
+Figure 3: Select your partitioning tool for MiyoLinux.
+
+[Used with permission][6]
+
+With your disk partitioned (Figure 4), you’ll be required to take care of the following steps:
+
+ * Configure the GRUB bootloader.
+
+ * Select the filesystem for the bootloader.
+
+ * Configure time zone and locales.
+
+ * Configure keyboard, keyboard language, and keyboard layout.
+
+ * Okay the installation.
+
+
+
+
+Once you’ve okayed the installation, all packages will be installed and you will then be prompted to install the bootloader. Following that, you’ll be prompted to configure the following:
+
+ * Hostname.
+
+ * User (Figure 5).
+
+ * Root password.
+
+
+
+
+With the above completed, reboot and log into your new MiyoLinux installation.
+
+![hostname][12]
+
+Figure 5: Configuring hostname and username.
+
+[Creative Commons Zero][13]
+
+### Usage
+
+Once you’ve logged into the MiyoLinux desktop, you’ll find things get a bit less than user-friendly. This is by design. You won’t find any sort of mouse menu available anywhere on the desktop. Instead, you use keyboard shortcuts to open the different types of menus. The Alt+m key combination will open the PMenu, which is what one would consider a fairly standard desktop mouse menu (Figure 6).
+
+The Alt+d key combination will open the dmenu, a search tool at the top of the desktop, where you can scroll through (using the cursor keys) or search for an app you want to launch (Figure 7).
+
+![dmenu][15]
+
+Figure 7: The dmenu in action.
+
+[Used with permission][6]
+
+### Installing Apps
+
+Open the PMenu and click System > Synaptic Package Manager. From within that tool, you can search for any app you want to install. However, if you find Synaptic doesn’t want to start from the PMenu, open the dmenu, search for terminal, and (once the terminal opens) issue the command sudo synaptic. That will open the package manager, where you can start installing any applications you want (Figure 8).
+
+![Synaptic][17]
+
+Figure 8: The Synaptic Package Manager on MiyoLinux.
+
+[Used with permission][6]
+
+Of course, you can always install applications from the command line. MiyoLinux depends upon the Apt package manager, so installing applications is as easy as:
+
+```
+sudo apt-get install libreoffice -y
+```
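+
+If Apt complains about missing packages, the package lists may simply be stale; refreshing them first is standard practice on any Apt-based system:
+
+```
+sudo apt-get update
+```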
+
+Once installed, you can start the new package from either the PMenu or dmenu tools.
+
+### MiyoLinux Accessories
+
+If you find you need a bit more from the MiyoLinux desktop, type the keyboard combination Alt+Ctrl+a to open the MiyoLinux Accessories tool (Figure 9). From this tool you can configure a number of options for the desktop.
+
+![Accessories][19]
+
+Figure 9: Configure i3, Conky, Compton, your touchpad, and more with the Accessories tool.
+
+[Used with permission][6]
+
+All other necessary keyboard shortcuts are listed on the default desktop wallpaper. Make sure to commit those shortcuts to memory, as you won’t get very far in the i3 desktop without them.
+
+### A Nice Nod to Old-School Linux
+
+If you’re itching to throw it back to a time when Linux offered you a bit of challenge to your daily grind, MiyoLinux might be just the operating system for you. It’s a lightweight operating system that makes good use of a minimal set of tools. Anyone who likes their distributions to be less modern and more streamlined will love this take on the Linux desktop. However, if you prefer your desktop with the standard bells and whistles, found on modern distributions, you’ll probably find MiyoLinux nothing more than a fun distraction from the standard fare.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/2/miyolinux-lightweight-distro-old-school-approach
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://sourceforge.net/p/miyolinux/wiki/Home/
+[2]: https://devuan.org/
+[3]: https://i3wm.org/
+[4]: /files/images/miyo1jpg
+[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_1.jpg?itok=5PxRDYRE (MiyoLinux)
+[6]: /licenses/category/used-permission
+[7]: /files/images/miyo2jpg
+[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_2.jpg?itok=svlVr7VI (Configuration)
+[9]: /files/images/miyo3jpg
+[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_3.jpg?itok=lpNzZBPz (partitioning)
+[11]: /files/images/miyo5jpg
+[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_5.jpg?itok=lijIsgZ2 (hostname)
+[13]: /licenses/category/creative-commons-zero
+[14]: /files/images/miyo7jpg
+[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_7.jpg?itok=I8Ow3PX6 (dmenu)
+[16]: /files/images/miyo8jpg
+[17]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_8.jpg?itok=oa502KfM (Synaptic)
+[18]: /files/images/miyo9jpg
+[19]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_9.jpg?itok=gUM4mxEv (Accessories)
diff --git a/sources/tech/20190301 Guide to Install VMware Tools on Linux.md b/sources/tech/20190301 Guide to Install VMware Tools on Linux.md
new file mode 100644
index 0000000000..e6a43bcde1
--- /dev/null
+++ b/sources/tech/20190301 Guide to Install VMware Tools on Linux.md
@@ -0,0 +1,143 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Guide to Install VMware Tools on Linux)
+[#]: via: (https://itsfoss.com/install-vmware-tools-linux)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Guide to Install VMware Tools on Linux
+======
+
+**VMware Tools enhances your VM experience by allowing you to share clipboard and folder among other things. Learn how to install VMware tools on Ubuntu and other Linux distributions.**
+
+In an earlier tutorial, you learned to [install VMware Workstation on Ubuntu][1]. You can further enhance the functionality of your virtual machines by installing VMware Tools.
+
+If you have already installed a guest OS on VMware, you must have noticed the requirement for [VMware tools][2] – even if you were not completely aware of what it is needed for.
+
+In this article, we will highlight the importance of VMware tools, the features it offers, and the method to install VMware tools on Ubuntu or any other Linux distribution.
+
+### VMware Tools: Overview & Features
+
+![Installing VMware Tools on Ubuntu][3]Installing VMware Tools on Ubuntu
+
+For obvious reasons, the virtual machine (your guest OS) will not behave exactly like the host. There will be certain limitations in terms of its performance and operation. And that is why a set of utilities (VMware Tools) was introduced.
+
+VMware tools help in managing the guest OS in an efficient manner while also improving its performance.
+
+#### What exactly is VMware Tools responsible for?
+
+![How to Install VMware tools on Linux][4]
+
+You now have a vague idea of what it does – but let us talk about the details:
+
+ * Synchronize the time between the guest OS and the host to make things easier.
+ * Unlocks the ability to pass messages from host OS to guest OS. For example, you copy a text on the host to your clipboard and you can easily paste it to your guest OS.
+ * Enables sound in guest OS.
+ * Improves video resolution.
+ * Improves the cursor movement.
+ * Fixes incorrect network speed data.
+ * Eliminates inadequate color depth.
+
+
+
+These are the major changes that happen when you install VMware Tools on a guest OS. But what exactly does it contain in order to unlock or enhance these functionalities? Let’s see.
+
+#### VMware tools: Core Feature Details
+
+![Sharing clipboard between guest and host OS with VMware Tools][5]Sharing clipboard between guest and host OS with VMware Tools
+
+If you do not want to know what it takes to enable these functionalities, you can skip this part. But for the curious readers, let us briefly discuss it:
+
+**VMware device drivers:** It really depends on the OS. Most of the major operating systems include device drivers by default, so you do not have to install them separately. This generally includes the memory control driver, mouse driver, audio driver, NIC driver, VGA driver, and so on.
+
+**VMware user process:** This is where things get really interesting. With this, you get the ability to copy-paste and drag-drop between the host and the guest OS. You can basically copy and paste the text from the host to the virtual machine or vice versa.
+
+You get to drag and drop files as well. In addition, it enables the pointer release/lock when you do not have an SVGA driver installed.
+
+**VMware tools lifecycle management** : Well, we will take a look at how to install VMware tools below – but this feature helps you easily install/upgrade VMware tools in the virtual machine.
+
+**Shared Folders** : In addition to these, VMware tools also allow you to have shared folders between the guest OS and the host.
+
+![Sharing folder between guest and host OS using VMware Tools in Linux][6]Sharing folder between guest and host OS using VMware Tools in Linux
+
+Of course, what it does and facilitates also depends on the host OS. For example, on Windows you get a Unity mode on VMware to run programs on the virtual machine and operate them from the host OS.
+
+### How to install VMware Tools on Ubuntu & other Linux distributions
+
+**Note:** For Linux guest operating systems, you should already have the “Open VM Tools” suite installed, eliminating the need to install VMware Tools separately most of the time.
+
+Most of the time, when you install a guest OS, you will get a prompt as a software update or a popup telling you to install VMware tools if the operating system supports [Easy Install][7].
+
+Windows and Ubuntu do support Easy Install. So, even if you are using Windows as your host OS or trying to install VMware Tools on Ubuntu, you should first get an option to install the VMware Tools easily as a popup message. Here’s how it should look:
+
+![Pop-up to install VMware Tools][8]Pop-up to install VMware Tools
+
+This is the easiest way to get it done. So, make sure you have an active network connection when you set up the virtual machine.
+
+If you do not get any of these popups – or options to easily install VMware Tools – you have to install it manually. Here’s how to do that:
+
+1\. Launch VMware Workstation Player.
+
+2\. From the menu, navigate through **Virtual Machine -> Install VMware tools**. If you already have it installed and want to repair the installation, the same option will appear as “ **Re-install VMware tools** ”.
+
+3\. Once you click on that, you will see a virtual CD/DVD mounted in the guest OS.
+
+4\. Open it and copy the **tar.gz** file to any location of your choice, then extract it; here we choose the **Desktop**.
+
+![][9]
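+
+If you prefer doing step 4 from a terminal, it looks roughly like this (a sketch; the mount point varies with your username and the file name with your VMware Tools version):
+
+```
+cp "/media/$USER/VMware Tools/VMwareTools-10.3.2-9925305.tar.gz" ~/Desktop/
+cd ~/Desktop
+tar -xzf VMwareTools-10.3.2-9925305.tar.gz   # yields a vmware-tools-distrib/ directory; the exact path may differ from a GUI extraction
+```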
+
+5\. After extraction, launch the terminal and navigate to the folder inside by typing in the following command:
+
+```
+cd Desktop/VMwareTools-10.3.2-9925305/vmware-tools-distrib
+```
+
+You need to check the folder name and path in your case – depending on the version and where you extracted it, they might vary.
+
+![][10]
+
+Replace **Desktop** with your storage location (such as cd Downloads); the rest should remain the same if you are installing version **10.3.2**.
+
+6\. Now, simply type in the following command to start the installation:
+
+```
+sudo ./vmware-install.pl -d
+```
+
+![][11]
+
+You will be asked for your password for permission to install; type it in and you should be good to go.
+
+That’s it. You are done. This set of steps should be applicable to almost any Ubuntu-based guest operating system, whether you want to install VMware Tools on Ubuntu Server or any other Ubuntu flavor.
+
+**Wrapping Up**
+
+Installing VMware tools on Ubuntu Linux is pretty easy. In addition to the easy method, we have also explained the manual method to do it. If you still need help, or have a suggestion regarding the installation, let us know in the comments down below.
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-vmware-tools-linux
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
+[2]: https://kb.vmware.com/s/article/340
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-downloading.jpg?fit=800%2C531&ssl=1
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/install-vmware-tools-linux.png?resize=800%2C450&ssl=1
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-features.gif?resize=800%2C500&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-shared-folder.jpg?fit=800%2C660&ssl=1
+[7]: https://docs.vmware.com/en/VMware-Workstation-Player-for-Linux/15.0/com.vmware.player.linux.using.doc/GUID-3F6B9D0E-6CFC-4627-B80B-9A68A5960F60.html
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools.jpg?fit=800%2C481&ssl=1
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-extraction.jpg?fit=800%2C564&ssl=1
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-folder.jpg?fit=800%2C487&ssl=1
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-installation-ubuntu.jpg?fit=800%2C492&ssl=1
diff --git a/sources/tech/20190304 What you need to know about Ansible modules.md b/sources/tech/20190304 What you need to know about Ansible modules.md
new file mode 100644
index 0000000000..8330d4bd59
--- /dev/null
+++ b/sources/tech/20190304 What you need to know about Ansible modules.md
@@ -0,0 +1,311 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What you need to know about Ansible modules)
+[#]: via: (https://opensource.com/article/19/3/developing-ansible-modules)
+[#]: author: (Jairo da Silva Junior https://opensource.com/users/jairojunior)
+
+What you need to know about Ansible modules
+======
+Learn how and when to develop custom modules for Ansible.
+
+
+Ansible works by connecting to nodes and sending small programs called modules to be executed remotely. This makes it a push architecture, where configuration is pushed from Ansible to servers without agents, as opposed to the pull model, common in agent-based configuration management systems, where configuration is pulled.
+
+These modules are mapped to resources and their respective states, which are represented in YAML files. They enable you to manage virtually everything that has an API, CLI, or configuration file you can interact with, including network devices like load balancers, switches, firewalls, container orchestrators, containers themselves, and even virtual machine instances in a hypervisor or in a public (e.g., AWS, GCE, Azure) and/or private (e.g., OpenStack, CloudStack) cloud, as well as storage and security appliances and system configuration.
+
+With Ansible's batteries-included model, hundreds of modules are included and any task in a playbook has a module behind it.
+
+The contract for building modules is simple: JSON on stdout. The configurations declared in YAML files are delivered over the network via SSH/WinRM—or any other connection plugin—as small scripts to be executed on the target server(s). Modules can be written in any language capable of returning JSON, although most Ansible modules (except for Windows PowerShell) are written in Python using the Ansible API (this eases the development of new modules).
+
+Modules are one way of expanding Ansible capabilities. Other alternatives, like dynamic inventories and plugins, can also increase Ansible's power. It's important to know about them so you know when to use one instead of the other.
+
+Plugins are divided into several categories with distinct goals, like Action, Cache, Callback, Connection, Filters, Lookup, and Vars. The most popular plugins are:
+
+ * **Connection plugins:** These implement a way to communicate with servers in your inventory (e.g., SSH, WinRM, Telnet); in other words, how automation code is transported over the network to be executed.
+ * **Filters plugins:** These allow you to manipulate data inside your playbook. This is a Jinja2 feature that is harnessed by Ansible to solve infrastructure-as-code problems.
+ * **Lookup plugins:** These fetch data from an external source (e.g., env, file, Hiera, database, HashiCorp Vault).
+
+
+
+Ansible's official docs are a good resource on [developing plugins][1].
+
+### When should you develop a module?
+
+Although many modules are delivered with Ansible, there is a chance that your problem is not yet covered or it's something too specific—for example, a solution that might make sense only in your organization. Fortunately, the official docs provide excellent guidelines on [developing modules][2].
+
+**IMPORTANT:** Before you start working on something new, always check for open pull requests, ask developers at #ansible-devel (IRC/Freenode), or search the [development list][3] and/or existing [working groups][4] to see if a module exists or is in development.
+
+Signs that you need a new module instead of using an existing one include:
+
+ * Conventional configuration management methods (e.g., templates, file, get_url, lineinfile) do not solve your problem properly.
+ * You have to use a complex combination of commands, shells, filters, text processing with magic regexes, and API calls using curl to achieve your goals.
+ * Your playbooks are complex, imperative, non-idempotent, and even non-deterministic.
+
+
+
+In the ideal scenario, the tool or service already has an API or CLI for management, and it returns some sort of structured data (JSON, XML, YAML).
+
+### Identifying good and bad playbooks
+
+> "Make love, but don't make a shell script in YAML."
+
+So, what makes a bad playbook?
+
+```
+- name: Read a remote resource
+  command: "curl -v http://xpto/resource/abc"
+  register: resource
+  changed_when: False
+
+- name: Create a resource in case it does not exist
+  command: "curl -X POST http://xpto/resource/abc -d '{ config:{ client: xyz, url: http://beta, pattern: *.* } }'"
+  when: "'404' in resource.stdout"
+
+# Leave it here in case I need to remove it hehehe
+#- name: Remove resource
+#  command: "curl -X DELETE http://xpto/resource/abc"
+#  when: resource.stdout == 1
+```
+
+Aside from being very fragile—what if the resource state includes a 404 somewhere?—and demanding extra code to be idempotent, this playbook can't update the resource when its state changes.
+
+Playbooks written this way disrespect many infrastructure-as-code principles. They're not readable by human beings, are hard to reuse and parameterize, and don't follow the declarative model encouraged by most configuration management tools. They also fail to be idempotent and to converge to the declared state.
+
+Bad playbooks can jeopardize your automation adoption. Instead of harnessing configuration management tools to increase your speed, they have the same problems as an imperative automation approach based on scripts and command execution. This creates a scenario where you're using Ansible just as a means to deliver your old scripts, copying what you already have into YAML files.
+
+Here's how to rewrite this example to follow infrastructure-as-code principles.
+
+```
+- name: XPTO
+ xpto:
+ name: abc
+ state: present
+ config:
+ client: xyz
+ url: http://beta
+ pattern: "*.*"
+```
+
+The benefits of this approach, based on custom modules, include:
+
+ * It's declarative—resources are properly represented in YAML.
+ * It's idempotent.
+ * It converges from the declared state to the current state.
+ * It's readable by human beings.
+ * It's easily parameterized or reused.
+
+
+
+### Implementing a custom module
+
+Let's use [WildFly][5], an open source Java application server, as an example to introduce a custom module for our not-so-good playbook:
+
+```
+- name: Read datasource
+  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:read-resource()'"
+  register: datasource
+
+- name: Create datasource
+  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:add(driver-name=h2, user-name=sa, password=sa, min-pool-size=20, max-pool-size=40, connection-url=\"jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE\")'"
+  when: "'outcome => failed' in datasource.stdout"
+```
+
+Problems:
+
+ * It's not declarative.
+ * JBoss-CLI returns plaintext in a JSON-like syntax; therefore, this approach is very fragile, since we need a type of parser for this notation. Even a seemingly simple parser can be too complex to treat many [exceptions][6].
+ * JBoss-CLI is just an interface to send requests to the management API (port 9990).
+ * Sending an HTTP request is more efficient than opening a new JBoss-CLI session, connecting, and sending a command.
+ * It does not converge to the desired state; it only creates the resource when it doesn't exist.
+
+
+
+A custom module for this would look like:
+
+```
+- name: Configure datasource
+ jboss_resource:
+ name: "/subsystem=datasources/data-source=DemoDS"
+ state: present
+ attributes:
+ driver-name: h2
+ connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
+ jndi-name: "java:jboss/datasources/DemoDS"
+ user-name: sa
+ password: sa
+ min-pool-size: 20
+ max-pool-size: 40
+```
+
+This playbook is declarative, idempotent, more readable, and converges to the desired state regardless of the current state.
+
+### Why learn to build custom modules?
+
+Good reasons to learn how to build custom modules include:
+
+ * Improving existing modules
+ * You have bad playbooks and want to improve them, or …
+ * You don't, but want to avoid having bad playbooks.
+ * Knowing how to build a module considerably improves your ability to debug problems in playbooks, thereby increasing your productivity.
+
+
+
+> "…abstractions save us time working, but they don't save us time learning." —Joel Spolsky, [The Law of Leaky Abstractions][7]
+
+#### Custom Ansible modules 101
+
+ * JSON (JavaScript Object Notation) on stdout: that's the contract!
+ * They can be written in any language, but …
+ * Python is usually the best option (or the second best)
+ * Most modules delivered with Ansible ( **lib/ansible/modules** ) are written in Python and should support compatible versions.
+
+
+
+#### The Ansible way
+
+ * First step:
+
+```
+git clone https://github.com/ansible/ansible.git
+```
+
+ * Navigate in **lib/ansible/modules/** and read the existing modules code.
+
+ * Your tools are: Git, Python, virtualenv, pdb (Python debugger)
+
+ * For comprehensive instructions, consult the [official docs][8].
+
+
+
+
+#### An alternative: drop it in the library directory
+
+```
+library/ # if any custom modules, put them here (optional)
+module_utils/ # if any custom module_utils to support modules, put them here (optional)
+filter_plugins/ # if any custom filter plugins, put them here (optional)
+
+site.yml # master playbook
+webservers.yml # playbook for webserver tier
+dbservers.yml # playbook for dbserver tier
+
+roles/
+ common/ # this hierarchy represents a "role"
+ library/ # roles can also include custom modules
+ module_utils/ # roles can also include custom module_utils
+ lookup_plugins/ # or other types of plugins, like lookup in this case
+```
+
+ * It's easier to start.
+ * Doesn't require anything besides Ansible and your favorite IDE/text editor.
+ * This is your best option if it's something that will be used internally.
+
+
+
+**TIP:** You can use this directory layout to overwrite existing modules if, for example, you need to patch a module.
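+
+A quick way to exercise a module dropped into **library/** is an ad hoc call against the implicit localhost (a sketch; **xpto** is the hypothetical module from the examples above):
+
+```
+ANSIBLE_LIBRARY=./library ansible localhost -m xpto -a "name=abc state=present"
+```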
+
+#### First steps
+
+You could do it on your own—even in another language—or you could use the AnsibleModule class, as it is easier to put JSON on stdout ( **exit_json()** , **fail_json()** ) in the way Ansible expects ( **msg** , **meta** , **has_changed** , **result** ), and it's also easier to process the input ( **params[]** ) and log its execution ( **log()** , **debug()** ).
+
+```
+# AnsibleModule provides the JSON-on-stdout contract for you
+from ansible.module_utils.basic import AnsibleModule
+
+
+def main():
+
+    arguments = dict(name=dict(required=True, type='str'),
+                     state=dict(choices=['present', 'absent'], default='present'),
+                     config=dict(required=False, type='dict'))
+
+    module = AnsibleModule(argument_spec=arguments, supports_check_mode=True)
+    try:
+        if module.check_mode:
+            # Do not do anything, only verify the current state and report it
+            module.exit_json(changed=has_changed, meta=result, msg='Did something or not...')
+
+        if module.params['state'] == 'present':
+            # Verify the presence of the resource:
+            # is the desired state (module.params) equal to the current state?
+            module.exit_json(changed=has_changed, meta=result)
+
+        if module.params['state'] == 'absent':
+            # Remove the resource in case it exists
+            module.exit_json(changed=has_changed, meta=result)
+
+    except Error as err:
+        module.fail_json(msg=str(err))
+```
+
+**NOTES:** The **check_mode** ("dry run") allows a playbook to verify whether changes are required without actually performing them. Also, the **module_utils** directory can be used for shared code among different modules.
+
+For the full Wildfly example, check [this pull request][9].
+
+### Running tests
+
+#### The Ansible way
+
+The Ansible codebase is heavily tested, and every commit triggers a build in its continuous integration (CI) server, [Shippable][10], which includes linting, unit tests, and integration tests.
+
+For integration tests, it uses containers and Ansible itself to perform the setup and verify phase. Here is a test case (written in Ansible) for our custom module's sample code:
+
+```
+- name: Configure datasource
+ jboss_resource:
+ name: "/subsystem=datasources/data-source=DemoDS"
+ state: present
+ attributes:
+ connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
+ ...
+ register: result
+
+- name: assert output message that datasource was created
+ assert:
+ that:
+ - "result.changed == true"
+ - "'Added /subsystem=datasources/data-source=DemoDS' in result.msg"
+```
+
+#### An alternative: bundling a module with your role
+
+Here is a [full example][11] inside a simple role:
+
+*Molecule* + *Vagrant* + *pytest* : `molecule init` (inside **roles/** )
+
+It offers greater flexibility to choose:
+
+ * Simplified setup
+ * How to spin up your infrastructure: e.g., Vagrant, Docker, OpenStack, EC2
+ * How to verify your infrastructure tests: Testinfra and Goss
+
+
+
+But your tests would have to be written using pytest with Testinfra or Goss, instead of plain Ansible. If you'd like to learn more about testing Ansible roles, see my article about [using Molecule][12].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/developing-ansible-modules
+
+作者:[Jairo da Silva Junior][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jairojunior
+[b]: https://github.com/lujun9972
+[1]: https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#developing-plugins
+[2]: https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html
+[3]: https://groups.google.com/forum/#!forum/ansible-devel
+[4]: https://github.com/ansible/community/
+[5]: http://www.wildfly.org/
+[6]: https://tools.ietf.org/html/rfc7159
+[7]: https://en.wikipedia.org/wiki/Leaky_abstraction#The_Law_of_Leaky_Abstractions
+[8]: https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#developing-modules-general
+[9]: https://github.com/ansible/ansible/pull/43682/files
+[10]: https://app.shippable.com/github/ansible/ansible/dashboard
+[11]: https://github.com/jairojunior/ansible-role-jboss/tree/with_modules
+[12]: https://opensource.com/article/18/12/testing-ansible-roles-molecule
diff --git a/sources/tech/20190305 How rootless Buildah works- Building containers in unprivileged environments.md b/sources/tech/20190305 How rootless Buildah works- Building containers in unprivileged environments.md
new file mode 100644
index 0000000000..cf046ec1b3
--- /dev/null
+++ b/sources/tech/20190305 How rootless Buildah works- Building containers in unprivileged environments.md
@@ -0,0 +1,133 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How rootless Buildah works: Building containers in unprivileged environments)
+[#]: via: (https://opensource.com/article/19/3/tips-tricks-rootless-buildah)
+[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan)
+
+How rootless Buildah works: Building containers in unprivileged environments
+======
+Buildah is a tool and library for building Open Container Initiative (OCI) container images.
+
+
+In previous articles, including [How does rootless Podman work?][1], I talked about [Podman][2], a tool that enables users to manage pods, containers, and container images.
+
+[Buildah][3] is a tool and library for building Open Container Initiative ([OCI][4]) container images that is complementary to Podman. (Both projects are maintained by the [containers][5] organization, of which I'm a member.) In this article, I will talk about rootless Buildah, including the differences between it and Podman.
+
+Our goal with Buildah was to build a low-level tool that could be used either directly or vendored into other tools to build container images.
+
+### Why Buildah?
+
+Here is how I describe a container image: It is basically a rootfs directory that contains the code needed to run your container. This directory is called a rootfs because it usually looks like **/ (root)** on a Linux machine, meaning you are likely to find directories in a rootfs like **/etc** , **/usr** , **/bin** , etc.
+
+The second part of a container image is a JSON file that describes the contents of the rootfs. It contains fields like the command to run the container, the entrypoint, the environment variables required to run the container, the working directory of the container, etc. Basically, this JSON file allows the developer of the container image to describe how the container image is expected to be used. The fields in this JSON file have been standardized in the [OCI Image Format specification][6].
+
+The rootfs and the JSON file then get tar'd together to create an image bundle that is stored in a container registry. To create a layered image, you install more software into the rootfs and modify the JSON file. Then you tar up the differences of the new and the old rootfs and store that in another image tarball. The second JSON file refers back to the first JSON file via a checksum.
+
+Many years ago, Docker introduced Dockerfile, a simplified scripting language for building container images. Dockerfile was great and really took off, but it has many shortcomings that users have complained about. For example:
+
+ * Dockerfile encourages the inclusion of tools used to build containers inside the container image. Container images do not need to include yum/dnf/apt, but most contain one of them and all their dependencies.
+
+ * Each line causes a layer to be created. Because of this, secrets can mistakenly get added to container images. If you create a secret in one line of the Dockerfile and delete it in the next, the secret is still in the image.
+
+
+
+
+One of my biggest complaints about the "container revolution" is that six years since it started, the only way to build a container image was still with Dockerfiles. Lots of tools other than **docker build** have appeared besides Buildah, but most still deal only with Dockerfile. So users continue hacking around the problems with Dockerfile.
+
+Note that [umoci][7] is an alternative to **docker build** that allows you to build container images without Dockerfile.
+
+Our goal with Buildah was to build a simple tool that could just create a rootfs directory on disk and allow other tools to populate the directory, then create the JSON file. Finally, Buildah would create the OCI image and push it to a container registry where it could be used by any container engine, like [Docker][8], Podman, [CRI-O][9], or another Buildah.
+
+Buildah also supports Dockerfile, since we know the bulk of people building containers have created Dockerfiles.
+
+### Using Buildah directly
+
+Lots of people use Buildah directly. A cool feature of Buildah is that you can script up the container build directly in Bash.
+
+The example below creates a Bash script called **myapp.sh** , which uses Buildah to pull down the Fedora image, and then uses **dnf** and **make** on a machine to install software into the container image rootfs, **$mnt**. It then adds some fields to the JSON file using **buildah config** and commits the container to a container image **myapp**. Finally, it pushes the container image to a container registry, **quay.io**. (It could push it to any container registry.) Now this OCI image can be used by any container engine or Kubernetes.
+
+```
+cat myapp.sh
+#!/bin/sh
+ctr=$(buildah from fedora)
+mnt=$(buildah mount $ctr)
+dnf -y install --installroot $mnt httpd
+make install DESTDIR=$mnt myapp
+rm -rf $mnt/var/cache $mnt/var/log/*
+buildah config --cmd /usr/bin/myapp --env foo=bar --workingdir=/root $ctr
+buildah commit $ctr myapp
+buildah push myapp quay.io/username/myapp
+```
+
+To create really small images, you could replace **fedora** in the script above with **scratch** , and Buildah will build a container image that only has the requirements for the **httpd** package inside the container image. No need for Python or DNF.
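+
+In that case, only the first line of the script changes; **dnf --installroot** still runs from the host, so the rootfs never needs a package manager inside it:
+
+```
+ctr=$(buildah from scratch)
+```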
+
+### Podman's relationship to Buildah
+
+With Buildah, we have a low-level tool for building container images. Buildah also provides a library for other tools to build container images. Podman was designed to replace the Docker command line interface (CLI). One of the Docker CLI commands is **docker build**. We needed to have **podman build** to support building container images with Dockerfiles. Podman vendored in the Buildah library to allow it to do **podman build**. Any time you do a **podman build** , you are executing Buildah code to build your container images. If you are only going to use Dockerfiles to build container images, we recommend you only use Podman; there's no need for Buildah at all.
+
+### Other tools using the Buildah library
+
+Podman is not the only tool to take advantage of the Buildah library. [OpenShift 4 Source-to-Image][10] (S2I) will also use Buildah to build container images. OpenShift S2I allows developers using OpenShift to use Git commands to modify source code; when they push the changes for their source code to the Git repository, OpenShift kicks off a job to compile the source changes and create a container image. It also uses Buildah under the covers to build this image.
+
+[Ansible-Bender][11] is a new project to build container images via an Ansible playbook. For those familiar with Ansible, Ansible-Bender makes it easy to describe the contents of the container image and then uses Buildah to package up the container image and send it to a container registry.
+
+We would love to see other tools and languages for describing and building a container image and would welcome others use Buildah to do the conversion.
+
+### Problems with rootless
+
+Buildah works fine in rootless mode. It uses user namespaces the same way Podman does. If you execute
+
+```
+$ buildah bud --tag myapp -f Dockerfile .
+$ buildah push myapp quay.io/username/myapp
+```
+
+in your home directory, everything works great.
+
+However, if you execute the script described above, it will fail!
+
+The problem is that, when running the **buildah mount** command in rootless mode, the **buildah** command must put itself inside the user namespace and create a new mount namespace. Rootless users are not allowed to mount filesystems when not running in a user namespace.
+
+When the Buildah executable exits, the user namespace and mount namespace disappear, so the mount point no longer exists. This means the commands after **buildah mount** that attempt to write to **$mnt** will fail since **$mnt** is no longer mounted.
+
+How can we make the script work in rootless mode?
+
+#### Buildah unshare
+
+Buildah has a special command, **buildah unshare** , that allows you to enter the user namespace. If you execute it with no commands, it will launch a shell in the user namespace, and your shell will seem like it is running as root and all the contents of the home directory will seem like they are owned by root. If you look at the owner or files in **/usr** , it will list them as owned by **nfsnobody** (or nobody). This is because your user ID (UID) is now root inside the user namespace and real root (UID=0) is not mapped into the user namespace. The kernel represents all files owned by UIDs not mapped into the user namespace as the NFSNOBODY user. When you exit the shell, you will exit the user namespace, you will be back to your normal UID, and the home directory will be owned by your UID again.
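+
+A quick way to see this effect (a sketch; the unprivileged UID shown is illustrative):
+
+```
+$ id -u
+1000
+$ buildah unshare id -u
+0
+```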
+
+If you want to execute the **myapp.sh** command defined above, you can execute **buildah unshare myapp.sh** and the script will now run correctly.
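+
+For example, a minimal session might look like this (prompts and output are illustrative):
+
+```
+$ buildah unshare
+# id -u   # inside the user namespace, your UID maps to root
+0
+# exit
+$ buildah unshare ./myapp.sh
+```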
+
+#### Conclusion
+
+Building and running containers in unprivileged environments is now possible and quite usable. There is little reason for developers to develop containers as root.
+
+If you want to use a traditional container engine and Dockerfiles for builds, then you should probably just use Podman. But if you want to experiment with building container images in new ways without using Dockerfiles, then you should really take a look at Buildah.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/tips-tricks-rootless-buildah
+
+作者:[Daniel J Walsh][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/rhatdan
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/article/19/2/how-does-rootless-podman-work
+[2]: https://podman.io/
+[3]: https://github.com/containers/buildah
+[4]: https://www.opencontainers.org/
+[5]: https://github.com/containers
+[6]: https://github.com/opencontainers/image-spec
+[7]: https://github.com/openSUSE/umoci
+[8]: https://github.com/docker
+[9]: https://cri-o.io/
+[10]: https://github.com/openshift/source-to-image
+[11]: https://github.com/TomasTomecek/ansible-bender
diff --git a/sources/tech/20190305 Running the ‘Real Debian- on Raspberry Pi 3- -For DIY Enthusiasts.md b/sources/tech/20190305 Running the ‘Real Debian- on Raspberry Pi 3- -For DIY Enthusiasts.md
new file mode 100644
index 0000000000..785a6eeb5a
--- /dev/null
+++ b/sources/tech/20190305 Running the ‘Real Debian- on Raspberry Pi 3- -For DIY Enthusiasts.md
@@ -0,0 +1,134 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Running the ‘Real Debian’ on Raspberry Pi 3+ [For DIY Enthusiasts])
+[#]: via: (https://itsfoss.com/debian-raspberry-pi)
+[#]: author: (Shirish https://itsfoss.com/author/shirish/)
+
+Running the ‘Real Debian’ on Raspberry Pi 3+ [For DIY Enthusiasts]
+======
+
+If you have ever used a Raspberry Pi device, you probably already know that the recommended Linux distribution for it is [Raspbian][1].
+
+Raspbian is a heavily customized form of Debian built to run on low-powered ARM processors. It’s not bad. In fact, it’s an excellent OS for Raspberry Pi devices, but it’s not the real Debian.
+
+[Debian purists like me][2] would prefer to run the actual Debian over the Raspberry Pi’s customized Debian version. I trust Debian more than any other distribution to provide me with a vast collection of properly vetted free software packages. Moreover, a project like this helps other ARM devices as well.
+
+Above all, running the official Debian on a Raspberry Pi is something of a challenge, and I like such challenges.
+
+![Real Debian on Raspberry Pi][3]
+
+I am not the only one who thinks like this. Many other Debian users share the same feeling, which is why there is an ongoing project to create a [Debian image for Raspberry Pi][4].
+
+About two and a half months back, a Debian Developer (DD) named [Gunnar Wolf][5] took over that unofficial Raspberry Pi image generation project.
+
+I’ll quickly show you how you can install this Raspberry Pi Debian Buster preview image on your Raspberry Pi 3 (or higher) devices.
+
+### Getting Debian on Raspberry Pi [For Experts]
+
+```
+Warning
+
+Be aware that this Debian image is very raw and unsupported at the moment. Though it’s very new, I believe experienced Raspberry Pi and Debian users should be able to use it.
+```
+
+Now, as far as [Debian][6] is concerned, here are the Debian image and instructions that you can use to put the stock Debian image on your Raspberry Pi 3 Model B+.
+
+#### Step 1: Download the Debian Raspberry Pi Buster image
+
+You can download the preview image using the wget command:
+
+```
+wget https://people.debian.org/~gwolf/raspberrypi3/20190206/20190206-raspberry-pi-3-buster-PREVIEW.img.xz
+```
+
+#### Step 2: Verify checksum (optional)
+
+It’s optional, but you should [verify the checksum][7]. You can do that by downloading the SHA256 hash file and then comparing it with the hash of the downloaded Raspberry Pi Debian image.
+
+On my end, I moved both the .sha256 file and the img.xz file into the same directory to make checking easier, although it’s not necessary.
+
+```
+wget https://people.debian.org/~gwolf/raspberrypi3/20190206/20190206-raspberry-pi-3-buster-PREVIEW.img.xz.sha256
+
+sha256sum -c 20190206-raspberry-pi-3-buster-PREVIEW.img.xz.sha256
+```
+
+#### Step 3: Write the image to your SD card
+
+Once you have verified the image, take a look at it. It is around 400MB in the compressed xz format. You can extract it to get an image of around 1.5GB in size.
+
+Insert your SD card. **Before you carry on to the next command please change the sdX to a suitable name that corresponds to your SD card.**
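+
+If you are not sure which device name the SD card received, **lsblk** can help you identify it by size before you write anything (a quick check; output varies by system):
+
+```
+lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
+```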
+
+The command below extracts the img.xz archive and writes it to the SD card. The **status=progress** flag prints a running byte count so you can see how much has been written.
+
+```
+xzcat 20190206-raspberry-pi-3-buster-PREVIEW.img.xz | dd of=/dev/sdX bs=64k oflag=dsync status=progress
+```
+
+Once you have successfully flashed your SD card, you should be able to test whether the installation went OK by sshing into your Raspberry Pi. The default root password is **raspberry**.
+
+```
+ssh root@rpi3
+```
+
+If you are curious to know how the Raspberry Pi image was built, you can look at the [build scripts][8].
+
+You can find more info on the project homepage.
+
+[DEBIAN RASPBERRY PI IMAGE][15]
+
+### How to contribute to the Raspberry Pi Buster effort
+
+There is a mailing list called [debian-arm][9] where people can contribute their efforts and ask questions. As you can see on the list, new firmware was released a [few days back][10], which might make booting directly a reality instead of the workaround shared above.
+
+If you want, you can make a new image using the raspi3-image-spec shared above, or wait for Gunnar to make a new image, which might take time.
+
+Most of the maintainers also hang out in #vmdb2 on OFTC. You can use your IRC client or the [Riot client][11], register your nick with NickServ, and connect with Gunnar Wolf, Roman Perier, and/or Lars Wirzenius, the author of [vmdb2][12]. I might do a follow-up on vmdb2, as it’s a nice little tool in itself.
+
+### The Road Ahead
+
+If there are enough interested contributors, the lowest-hanging fruit, for instance, would be to make sure that the ARM64 port [wiki page][13] is as current as possible. The potential benefits are enormous.
+
+A huge number of projects could benefit, whether you want a [Pi farm][14], a media server, a SIP phone, or whatever else you want to play or work with.
+
+Another low-hanging fruit might be synchronization between devices, say, an ARM cluster sending reports to a Debian desktop by way of notifications, to a mobile device, or both.
+
+While I have talked about the Raspberry Pi, there are loads of single-board computers already on the market and a lot more coming, based on MIPS as well as RISC-V, so there is going to be plenty of competition in the days ahead.
+
+Also, RISC-V is open-sourcing a lot of its IP, so non-free firmware and binary blobs would not be needed. Even MIPS is rumored to become more open, which may challenge ARM if the MIPS and RISC-V camps can get their logistics and pricing right, but that is a story for another day.
+
+There are many more vendors; I am just sharing the ones whose work I am most interested to see.
+
+I hope the above sheds some light on why it makes sense to have Debian on the Raspberry Pi.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/debian-raspberry-pi
+
+作者:[Shirish][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/shirish/
+[b]: https://github.com/lujun9972
+[1]: https://www.raspberrypi.org/downloads/raspbian/
+[2]: https://itsfoss.com/reasons-why-i-love-debian/
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/debian-raspberry-pi.png?resize=800%2C450&ssl=1
+[4]: https://wiki.debian.org/RaspberryPi3
+[5]: https://gwolf.org/node/4139
+[6]: https://www.debian.org/
+[7]: https://itsfoss.com/checksum-tools-guide-linux/
+[8]: https://github.com/Debian/raspi3-image-spec
+[9]: https://lists.debian.org/debian-arm/2019/02/threads.html
+[10]: https://alioth-lists.debian.net/pipermail/pkg-raspi-maintainers/Week-of-Mon-20190225/000310.html
+[11]: https://itsfoss.com/riot-desktop/
+[12]: https://liw.fi/vmdb2/
+[13]: https://wiki.debian.org/Arm64Port
+[14]: https://raspi.farm/
+[15]: https://wiki.debian.org/RaspberryPi3
diff --git a/sources/tech/20190306 Getting started with the Geany text editor.md b/sources/tech/20190306 Getting started with the Geany text editor.md
new file mode 100644
index 0000000000..7da5f95686
--- /dev/null
+++ b/sources/tech/20190306 Getting started with the Geany text editor.md
@@ -0,0 +1,141 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with the Geany text editor)
+[#]: via: (https://opensource.com/article/19/3/getting-started-geany-text-editor)
+[#]: author: (James Mawson https://opensource.com/users/dxmjames)
+
+Getting started with the Geany text editor
+======
+Geany is a light and swift text editor with IDE features.
+
+
+
+I have to admit, it took me a rather embarrassingly long time to really get into Linux as a daily driver. One thing I recall from these years in the wilderness was how strange it was to watch open source types get so worked up about text editors.
+
+It wasn't just that opinions differed. Disagreements were intense. And you'd see them again and again.
+
+I mean, I suppose it makes some sense. Doing dev or admin work means you're spending a lot of time with a text editor. And when it gets in the way or won't do quite what you want? In that exact moment, that's the most frustrating thing in the world.
+
+And I know what it means to really hate a text editor. I learned this many years ago in the computer labs at university trying to figure out Emacs. I was quite shocked that a piece of software could have so many sadomasochistic overtones. People were doing that to each other deliberately!
+
+So perhaps it's a rite of passage that now I have one I very much like. It's called [Geany][1], it's GPL-licensed, and it's [in the repositories][2] of most popular distributions.
+
+Here's why it works for me.
+
+### I'm into simplicity
+
+The main thing I want from a text editor is just to edit text. I don't think there should be any kind of learning curve in the way. I should be able to open it and use it.
+
+For that reason, I've generally used whatever is included with an operating system. On Windows 10, I used Notepad far longer than I should have. When I finally replaced it, it was with Notepad++. In the Linux terminal, I like Nano.
+
+I was perfectly aware I was missing out on a lot of useful functionality. But it was never enough of a pain point to make a change. And it's not that I've never tried anything more elaborate. I did some of my first real programming on Visual Basic and Borland Delphi.
+
+These development environments gave you a graphical interface to design your windows visually, various windows where you could configure properties and settings, a text interface to write your functions, and various odds and ends for debugging. This was a great way to build desktop applications, so long as you used it the way it was intended.
+
+But if you wanted to do something the authors didn't anticipate, all these extra moving parts suddenly got in the way. As software became more and more about the web and the internet, this situation started happening all the time.
+
+In the past, I used HTML editing suites like Macromedia Dreamweaver (as it was back then) and FirstPage for static websites. Again, I found the features could get in the way as much as they helped. These applications had their own ideas about how to organize your project, and if you had a different view, it was an awful bother.
+
+More recently, after a long break from programming, I started learning the people's language: [Python][3]. I bought a book of introductory tutorials, which said to install [IDLE][4], so I did. I think I got about five minutes into it before ditching it to run the interpreter from the command line. It had way too many moving parts to deal with. Especially for HelloWorld.py.
+
+But I always went back to Notepad++ and Nano whenever I could get away with it.
+
+So what changed? Well, a few months ago I [ditched Windows 10][5] completely (hooray!). Sticking with what I knew, I used Nano as my main text editor for a few weeks.
+
+I learned that Nano is great when you're already on the command line and you need to launch a Navy SEAL mission. You know what I mean. A lightning-fast raid. Get in, complete the objective, and get out.
+
+It's less ideal for long campaigns—or even moderately short ones. Even just adding a new page to a static website turns out to involve many repetitive keystrokes. As much as anything else, I really missed being able to navigate and select text with the mouse.
+
+### Introducing Geany
+
+The Geany project began in 2005 and is still actively developed.
+
+It has minimal dependencies: just the [GTK Toolkit][6] and the libraries that GTK depends on. If you have any kind of desktop environment installed, you almost certainly have GTK on your machine.
+
+I'm using it on Xfce, but thanks to these minimal dependencies, Geany is portable across desktop environments.
+
+Geany is fast and light. Installing Geany from the package manager took mere moments, and it uses only 3.1MB of space on my machine.
+
+So far, I've used it for HTML, CSS, and Python and to edit configuration files. It also recognizes C, Java, JavaScript, Perl, and [more][7].
+
+### No-compromise simplicity
+
+Geany has a lot of great features that make life easier. Just listing them would miss the best bit, which is this: Geany makes sense right out of the box. As soon as it's installed, you can start editing files straightaway, and it just works.
+
+For all the IDE functionality, none of it gets in the way. The default settings are set intelligently, and the menus are laid out nicely enough that it's no hassle to change them.
+
+It doesn't try to organize your project for you, and it doesn't have strong opinions about how you should do anything.
+
+### Handles whitespace beautifully
+
+By default, every time you press Enter, Geany preserves the indentation on the new line. In addition to saving a few tedious keystrokes, it avoids the inconsistent use of tabs and spaces, which can sometimes sneak in when your mind's elsewhere and make your code hard to follow for anyone with a different text editor.
+
+But what if you're editing a file that's already suffered this treatment? For example, I needed to edit an HTML file that was indented with a mix of tabs and spaces, making it a nightmare to figure out how the tags were nested.
+
+With Geany, it took just seconds to hunt through the menus to change the tab length from four spaces to eight. Even better was the option to convert those tabs to spaces. Problem solved!
+
+### Clever shortcuts and automation
+
+How often do you write the correct code on the wrong line? I do it all the time.
+
+Geany makes it easy to move lines of code up and down using Alt+PgUp and Alt+PgDn. This is a little nicer than just a regular cut and paste—instead of needing four or five key presses, you only need one.
+
+When coding HTML, Geany automatically closes tags for you. As well as saving time, this avoids a lot of annoying bugs. When you forget to close a tag, you can spend ages scouring the document looking for something far more complex.
+
+It gets even better in Python, where indentation is crucial. Whenever you end a line with a colon, Geany automatically indents it for you.
+
+One nice little side effect is that when you forget to include the colon—something I do with embarrassing regularity—you realize it immediately when you don't get the automatic indentation you expected.
+
+The default indentation is a single tab, while I prefer two spaces. Because Geany's menus are very well laid out, it took me only a few seconds to figure out how to change it.
+
+You, of course, get syntax highlighting too. In addition, it tracks your [variable scope][8] and offers useful autocompletion.
+
+### Large plugin library
+
+Geany has a [big library of plugins][9], but so far I haven't needed to try any. Even so, I still feel like I benefit from them. How? Well, it means that my editor isn't crammed with functionality I don't use.
+
+I reckon this attitude of adding extra functionality into a big library of plugins is a great ethos—no matter your specific needs, you get to have all the stuff you want and none of what you don't.
+
+### Remote file editing
+
+One thing that's really nice about terminal text editors is that it's no problem to use them in a remote shell.
+
+Geany handles this beautifully, as well. You can open remote files anywhere you have SSH access as easily as you can open files on your own machine.
+
+One frustration I had at first was I only seemed to be able to authenticate with a username and password, which was annoying, because certificates are so much nicer. It turned out that this was just me being a noob by keeping certificates in my home directory rather than in ~/.ssh.
+
+When editing Python scripts remotely, autocompletion doesn't work when you use packages installed on the server and not on your local machine. This isn't really that big a deal for me, but it's there.
+
+### In summary
+
+Text editors are such a personal preference that the right one will be different for different people.
+
+Geany is excellent if you already know what you want to write and want to just get on with it while enjoying plenty of useful shortcuts to speed up the menial parts.
+
+Geany is a great way to have your cake and eat it too.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/getting-started-geany-text-editor
+
+作者:[James Mawson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dxmjames
+[b]: https://github.com/lujun9972
+[1]: https://www.geany.org/
+[2]: https://www.geany.org/Download/ThirdPartyPackages
+[3]: https://opensource.com/resources/python
+[4]: https://en.wikipedia.org/wiki/IDLE
+[5]: https://blog.dxmtechsupport.com.au/linux-on-the-desktop-are-we-nearly-there-yet/
+[6]: https://www.gtk.org/
+[7]: https://www.geany.org/Main/AllFiletypes
+[8]: https://cscircles.cemc.uwaterloo.ca/11b-how-functions-work/
+[9]: https://plugins.geany.org/
diff --git a/sources/tech/20190311 Building the virtualization stack of the future with rust-vmm.md b/sources/tech/20190311 Building the virtualization stack of the future with rust-vmm.md
new file mode 100644
index 0000000000..b1e7fbf046
--- /dev/null
+++ b/sources/tech/20190311 Building the virtualization stack of the future with rust-vmm.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building the virtualization stack of the future with rust-vmm)
+[#]: via: (https://opensource.com/article/19/3/rust-virtual-machine)
+[#]: author: (Andreea Florescu )
+
+Building the virtualization stack of the future with rust-vmm
+======
+rust-vmm facilitates sharing core virtualization components between Rust Virtual Machine Monitors.
+
+
+More than a year ago we started developing [Firecracker][1], a virtual machine monitor (VMM) that runs on top of KVM (the kernel-based virtual machine). We wanted to create a lightweight VMM that starts virtual machines (VMs) in a fraction of a second, with a low memory footprint, to enable high-density cloud environments.
+
+We started out developing Firecracker by forking the Chrome OS VMM ([CrosVM][2]), but we diverged shortly after because we targeted different customer use cases. CrosVM provides Linux application isolation in ChromeOS, while Firecracker is used for running multi-tenant workloads at scale. Even though we now walk different paths, we still have common virtualization components, such as wrappers over KVM input/output controls (ioctls), a minimal kernel loader, and use of the [Virtio][3] device models.
+
+With this in mind, we started thinking about the best approach for sharing the common code. Having a shared codebase raises the security and quality bar for both projects. Currently, fixing security bugs requires duplicated work in terms of porting the changes from one project to the other and going through different review processes for merging the changes. After open sourcing Firecracker, we've received requests for adding features including GPU support and booting [bzImage][4] files. Some of the requests didn't align with Firecracker's goals, but were otherwise valid use cases that just haven't found the right place for an implementation.
+
+### The rust-vmm project
+
+The [rust-vmm][5] project came to life in December 2018 when Amazon, Google, Intel, and Red Hat employees started talking about the best way of sharing virtualization packages. More contributors have joined this initiative along the way. We are still at the beginning of this journey, with only one component published to [Crates.io][6] (Rust's package registry) and several others (such as Virtio devices, Linux kernel loaders, and KVM ioctls wrappers) being developed. With two VMMs written in Rust under active development and growing interest in building other specialized VMMs, rust-vmm was born as the host for sharing core virtualization components.
+
+The goal of rust-vmm is to enable the community to create custom VMMs that import just the required building blocks for their use case. We decided to organize rust-vmm as a multi-repository project, where each repository corresponds to an independent virtualization component. Each individual building block is published on Crates.io.
+
+### Creating custom VMMs with rust-vmm
+
+The components discussed below are currently under development.
+
+
+
+Each of these components is hosted in a GitHub repository corresponding to one package, which in Rust is called a crate. The functionality of one crate can be further split into modules, for example virtio-devices. Let's have a look at these components and some of their potential use cases.
+
+ * **KVM interface:** Creating our VMM on top of KVM requires an interface that can invoke KVM functionality from Rust. The kvm-bindings crate represents the Rust Foreign Function Interface (FFI) to the KVM kernel headers. Because the headers only include structures and defines, we also have wrappers over the KVM ioctls (kvm-ioctls) that we use for opening **/dev/kvm** , creating a VM, creating vCPUs, and so on.
+
+ * **Virtio devices and rate limiting:** Virtio has a frontend-backend architecture. Currently in rust-vmm, the frontend is implemented in the virtio-devices crate, and the backend lies in the vhost package. Vhost has support for both user-land and kernel-land drivers, but users can also plug virtio-devices to their custom backend. The virtio-bindings are the bindings for Virtio devices generated using the Virtio Linux headers. All devices in the virtio-devices crate are exported independently as modules using conditional compilation. Some devices, such as block, net, and vsock support rate limiting in terms of I/O per second and bandwidth. This can be achieved by using the functionality provided in the rate-limiter crate.
+
+ * The kernel-loader is responsible for loading the contents of an [ELF][7] kernel image in guest memory.
+
+
+
+
+For example, let's say we want to build a custom VMM that allows users to create and configure a single VM running on top of KVM. As part of the configuration, users will be able to specify the kernel image file, the root file system, the number of vCPUs, and the memory size. Creating and configuring the resources of the VM can be implemented using the kvm-ioctls crate. The kernel image can be loaded in guest memory with kernel-loader, and specifying a root filesystem can be achieved with the virtio-devices block module. The last thing needed for our VMM is writing VMM Glue, the code that takes care of integrating rust-vmm components with the VMM user interface, which allows users to create and manage VMs.
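+
+As a rough sketch of how such a project might begin, you would create a new crate and declare the building blocks as dependencies. Note that, as mentioned above, only one component has been published to Crates.io so far, so treat the dependency below as an illustrative placeholder:
+
+```
+$ cargo new my-vmm && cd my-vmm
+$ echo 'kvm-ioctls = "*"' >> Cargo.toml   # lands under [dependencies]
+$ cargo build
+```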
+
+### How you can help
+
+This is the beginning of an exciting journey, and we are looking forward to getting more people interested in VMMs, Rust, and the place where you can find both: [rust-vmm][5].
+
+We currently have [sync meetings][8] every two weeks to discuss the future of the rust-vmm organization. The meetings are open to anyone willing to participate. If you have any questions, please open an issue in the [community repository][9] or send an email to the rust-vmm [mailing list][10] (you can also [subscribe][11]). We also have a [Slack channel][12] and encourage you to join, if you are interested.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/rust-virtual-machine
+
+作者:[Andreea Florescu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://github.com/firecracker-microvm/firecracker
+[2]: https://chromium.googlesource.com/chromiumos/platform/crosvm/
+[3]: https://www.linux-kvm.org/page/Virtio
+[4]: https://en.wikipedia.org/wiki/Vmlinux#bzImage
+[5]: https://github.com/rust-vmm
+[6]: https://crates.io/
+[7]: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
+[8]: http://lists.opendev.org/pipermail/rust-vmm/2019-January/000103.html
+[9]: https://github.com/rust-vmm/community
+[10]: mailto:rust-vmm@lists.opendev.org
+[11]: http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm
+[12]: https://join.slack.com/t/rust-vmm/shared_invite/enQtNTI3NDM2NjA5MzMzLTJiZjUxOGEwMTJkZDVkYTcxYjhjMWU3YzVhOGQ0M2Y5NmU5MzExMjg5NGE3NjlmNzNhZDlhMmY4ZjVhYTQ4ZmQ
diff --git a/sources/tech/20190312 BackBox Linux for Penetration Testing.md b/sources/tech/20190312 BackBox Linux for Penetration Testing.md
new file mode 100644
index 0000000000..b79a4a5cee
--- /dev/null
+++ b/sources/tech/20190312 BackBox Linux for Penetration Testing.md
@@ -0,0 +1,200 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (BackBox Linux for Penetration Testing)
+[#]: via: (https://www.linux.com/blog/learn/2019/3/backbox-linux-penetration-testing)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+BackBox Linux for Penetration Testing
+======
+
+
+Any given task can succeed or fail depending upon the tools at hand. For security engineers in particular, building just the right toolkit can make life exponentially easier. Luckily, with open source, you have a wide range of applications and environments at your disposal, ranging from simple commands to complicated and integrated tools.
+
+The problem with the piecemeal approach, however, is that you might wind up missing out on something that can make or break a job… or you waste a lot of time hunting down the right tools for the job. To that end, it’s always good to consider an operating system geared specifically for penetration testing (aka pentesting).
+
+Within the world of open source, the most popular pentesting distribution is [Kali Linux][1]. It is, however, not the only tool in the shop. In fact, there’s another flavor of Linux, aimed specifically at pentesting, called [BackBox][2]. BackBox is based on Ubuntu Linux, which also means you have easy access to a host of other outstanding applications besides those that are included, out of the box.
+
+### What Makes BackBox Special?
+
+BackBox includes a suite of ethical hacking tools, geared specifically toward pentesting. These testing tools include the likes of:
+
+ * Web application analysis
+
+ * Exploitation testing
+
+ * Network analysis
+
+ * Stress testing
+
+ * Privilege escalation
+
+ * Vulnerability assessment
+
+ * Computer forensic analysis and exploitation
+
+ * And much more
+
+
+
+
+Out of the box, one of the most significant differences between Kali Linux and BackBox is the number of installed tools. Whereas Kali Linux ships with hundreds of tools pre-installed, BackBox significantly limits that number to around 70. Nonetheless, BackBox includes many of the tools necessary to get the job done, such as:
+
+ * Ettercap
+
+ * Msfconsole
+
+ * Wireshark
+
+ * ZAP
+
+ * Zenmap
+
+ * BeEF Browser Exploitation
+
+ * Sqlmap
+
+ * Driftnet
+
+ * Tcpdump
+
+ * Cryptcat
+
+ * Weevely
+
+ * Siege
+
+ * Autopsy
+
+
+
+
+BackBox is in active development; the latest version (5.3) was released on February 18, 2019. But how is BackBox as a usable tool? Let’s install it and find out.
+
+### Installation
+
+If you’ve installed one Linux distribution, you’ve installed them all … with only slight variation. BackBox is pretty much the same as any other installation. [Download the ISO][3], burn the ISO onto a USB drive, boot from the USB drive, and click the Install icon.
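+
+If you prefer the command line for creating the bootable USB drive, the usual **dd** approach works here, too (a sketch; the ISO filename is illustrative, and /dev/sdX must be replaced with your actual USB device):
+
+```
+sudo dd if=backbox-5.3-amd64.iso of=/dev/sdX bs=4M status=progress && sync
+```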
+
+The installer (Figure 1) will be instantly familiar to anyone who has installed an Ubuntu or Debian derivative. Just because BackBox is a distribution geared specifically toward security administrators doesn’t mean the operating system is a challenge to get up and running. In fact, BackBox is a point-and-click affair that anyone, regardless of skill level, can install.
+
+![installation][5]
+
+Figure 1: The installation of BackBox will be immediately familiar to anyone.
+
+[Used with permission][6]
+
+The trickiest section of the installation is the Installation Type. As you can see (Figure 2), even this step is quite simple.
+
+![BackBox][8]
+
+Figure 2: Selecting the type of installation for BackBox.
+
+[Used with permission][6]
+
+Once you’ve installed BackBox, reboot the system, remove the USB drive, and wait for it to land on the login screen. Log into the desktop and you’re ready to go (Figure 3).
+
+![desktop][10]
+
+Figure 3: The BackBox Linux desktop, running as a VirtualBox virtual machine.
+
+[Used with permission][6]
+
+### Using BackBox
+
+Thanks to the [Xfce desktop environment][11], BackBox is easy enough for a Linux newbie to navigate. Click on the menu button in the top left corner to reveal the menu (Figure 4).
+
+![desktop menu][13]
+
+Figure 4: The BackBox desktop menu in action.
+
+[Used with permission][6]
+
+From the desktop menu, click on any one of the favorites (in the left pane) or click on a category to reveal the related tools (Figure 5).
+
+![Auditing][15]
+
+Figure 5: The Auditing category in the BackBox menu.
+
+[Used with permission][6]
+
+The menu entries you’ll most likely be interested in are:
+
+ * Anonymous - allows you to start an anonymous networking session.
+
+ * Auditing - the majority of the pentesting tools are found in here.
+
+ * Services - allows you to start/stop services such as Apache, Bluetooth, Logkeys, Networking, Polipo, SSH, and Tor.
+
+
+
+
+Before you run any of the testing tools, I recommend first making sure BackBox is updated and upgraded. This can be done via a GUI or the command line. If you opt to go the GUI route, click on the desktop menu, click System, and click Software Updater. When the updater completes its check, it will prompt you if any updates are available, or if (after an upgrade) a reboot is necessary (Figure 6).
+
+![reboot][17]
+
+Figure 6: Time to reboot after an upgrade.
+
+[Used with permission][6]
+
+Should you opt to go the manual route, open a terminal window and issue the following two commands:
+
+```
+sudo apt-get update
+
+sudo apt-get upgrade -y
+```
+
+Many of the BackBox pentesting tools require a solid understanding of how each tool works, so before you attempt to use any given tool, make sure you know how to use it. Some tools (such as Metasploit) are made a bit easier to work with, thanks to BackBox. To run Metasploit, click on the desktop menu button and click msfconsole from the favorites (left pane). When the tool opens for the first time, you’ll be asked to configure a few options. Simply accept each default by pressing Enter when prompted. Once you see the Metasploit prompt, you can run commands like:
+
+```
+db_nmap 192.168.1.0/24
+```
+
+The above command will list all discovered ports on a 192.168.1.x network (Figure 7).
+
+![Metasploit][19]
+
+Figure 7: Open port discovery made simple with Metasploit on BackBox.
+
+[Used with permission][6]
+
+Even often-challenging tools like Metasploit are made far easier than they are with other distributions (partially because you don’t have to bother with installing the tools). That alone is worth the price of entry for BackBox (which is, of course, free).
+
+### The Conclusion
+
+Although BackBox usage may not be as widespread as Kali Linux, it still deserves your attention. For anyone looking to do pentesting on their various environments, BackBox makes the task far easier than so many other operating systems. Give this Linux distribution a go and see if it doesn’t aid you in your journey to security nirvana.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/3/backbox-linux-penetration-testing
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://www.kali.org/
+[2]: https://linux.backbox.org/
+[3]: https://www.backbox.org/download/
+[4]: /files/images/backbox1jpg
+[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_1.jpg?itok=pn4fQVp7 (installation)
+[6]: /licenses/category/used-permission
+[7]: /files/images/backbox2jpg
+[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_2.jpg?itok=tf-1zo8Z (BackBox)
+[9]: /files/images/backbox3jpg
+[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_3.jpg?itok=GLowoAUb (desktop)
+[11]: https://www.xfce.org/
+[12]: /files/images/backbox4jpg
+[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_4.jpg?itok=VmQXtuZL (desktop menu)
+[14]: /files/images/backbox5jpg
+[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_5.jpg?itok=UnfM_OxG (Auditing)
+[16]: /files/images/backbox6jpg
+[17]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/backbox_6.jpg?itok=2t1BiKPn (reboot)
+[18]: /files/images/backbox7jpg
+[19]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_7.jpg?itok=Vw_GEub3 (Metasploit)
diff --git a/sources/tech/20190312 Star LabTop Mk III Open Source Edition- An Interesting Laptop.md b/sources/tech/20190312 Star LabTop Mk III Open Source Edition- An Interesting Laptop.md
new file mode 100644
index 0000000000..2e4b8f098a
--- /dev/null
+++ b/sources/tech/20190312 Star LabTop Mk III Open Source Edition- An Interesting Laptop.md
@@ -0,0 +1,93 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Star LabTop Mk III Open Source Edition: An Interesting Laptop)
+[#]: via: (https://itsfoss.com/star-labtop-open-source-edition)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Star LabTop Mk III Open Source Edition: An Interesting Laptop
+======
+
+[Star Labs Systems][1] has been producing laptops tailored for Linux for some time. While you can purchase the other variants available on their website, they have recently launched a [Kickstarter campaign][2] for their upcoming ‘Open Source Edition’ laptop, which incorporates more features as requested by users and reviewers.
+
+It may not be the best laptop you’ve ever come across for around **1000 Euros** , but it certainly is interesting for some specific features.
+
+In this article, we will talk about what makes it an interesting deal and whether or not it’s worth investing in.
+
+![star labtop mk III][3]
+
+### Key Highlight: Open-source Coreboot Firmware
+
+Normally, you will find proprietary firmware (BIOS) on computers, from American Megatrends Inc., for example.
+
+But here, Star Labs has tailored the [coreboot firmware][4] (also known as LinuxBIOS), an open source alternative to proprietary solutions, for this laptop.
+
+It is not just open source but also a lighter firmware, giving you better control over your laptop. Paired with [TianoCore EDK II][5], it ensures maximum compatibility with most major operating systems.
+
+### Other Features of Star LabTop Mk III
+
+![star labtop mk III][6]
+
+In addition to the open source firmware, the laptop features an **8th-gen Intel Core i7 processor** ( **i7-8550U** ) coupled with **16 GB of LPDDR4 RAM** clocked at **2400 MHz**.
+
+The integrated **Intel UHD Graphics 620** GPU should be enough for professional tasks, except video editing and gaming. The display is a **13.3-inch Full HD IPS** panel.
+
+The storage options include a **480 GB or 960 GB PCIe SSD** , which is impressive as well. In addition to all this, it comes with **USB Type-C** support.
+
+Interestingly, the **BIOS, embedded controller, and SSD** will receive automatic [firmware updates][7] via the [LVFS][8] (the Mk III standard edition already has this feature).
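+
+For context, on distributions that ship the **fwupd** daemon, checking for and applying LVFS updates generally looks like this (a sketch; the devices listed depend on your hardware):
+
+```
+fwupdmgr refresh
+fwupdmgr get-updates
+fwupdmgr update
+```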
+
+You should also check out a review video of the [Star LabTop Mk III][9] to get an idea of what the open source edition could look like:
+
+If you are curious about the detailed tech specs, you should check out the [Kickstarter page][2].
+
+
+
+### Our Opinion
+
+![star labtop mk III][10]
+
+The inclusion of the coreboot firmware, and the fact that it was tailored for various Linux distributions from the start, is why it is being termed the “ **Open Source Edition** ”.
+
+The price for the ultimate bundle on Kickstarter is **1087 Euros**.
+
+Can you get better laptop deals at this price? **Yes** , definitely. But it really comes down to your preference, your passion for open source, and what you require.
+
+However, if you want a performance-driven laptop specifically tailored for Linux, yes, this is an option worth considering, with something new to offer (and with your requests potentially shaping future builds).
+
+Of course, you cannot consider this for video editing and gaming, for obvious reasons. So, they should consider adding a dedicated GPU to make it a complete package for computing, gaming, video editing, and much more. Maybe even a bigger screen, say 15.6-inch?
+
+### Wrapping Up
+
+For what it is worth, if you are a Linux and open source enthusiast and want a performance-driven laptop, this could be an option to go with, and you can back it on Kickstarter right now.
+
+What do you think about it? Will you be interested in a laptop like this? If not, why?
+
+Let us know your thoughts in the comments below.
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/star-labtop-open-source-edition
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://starlabs.systems
+[2]: https://www.kickstarter.com/projects/starlabs/star-labtop-mk-iii-open-source-edition
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/star-labtop-mkiii-2.jpg?resize=800%2C450&ssl=1
+[4]: https://en.wikipedia.org/wiki/Coreboot
+[5]: https://github.com/tianocore/tianocore.github.io/wiki/EDK-II
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/star-labtop-mkiii-1.jpg?ssl=1
+[7]: https://itsfoss.com/update-firmware-ubuntu/
+[8]: https://fwupd.org/
+[9]: https://starlabs.systems/pages/star-labtop
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/star-labtop-mkiii.jpg?resize=800%2C435&ssl=1
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/star-labtop-mkiii-2.jpg?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190313 Game Review- Steel Rats is an Enjoyable Bike-Combat Game.md b/sources/tech/20190313 Game Review- Steel Rats is an Enjoyable Bike-Combat Game.md
new file mode 100644
index 0000000000..5af0ae30d3
--- /dev/null
+++ b/sources/tech/20190313 Game Review- Steel Rats is an Enjoyable Bike-Combat Game.md
@@ -0,0 +1,95 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Game Review: Steel Rats is an Enjoyable Bike-Combat Game)
+[#]: via: (https://itsfoss.com/steel-rats)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Game Review: Steel Rats is an Enjoyable Bike-Combat Game
+======
+
+Steel Rats is a quite impressive 2.5D motorbike combat game with exciting stunts. It was already available for Windows on [Steam][1]; recently, it has been made available for Linux and Mac as well.
+
+In case you didn’t know, you can easily [install Steam on Ubuntu][2] or other distributions and [enable Steam Play feature to run some Windows games on Linux][3].
+
+So, in this article, we shall take a look at what the game is all about and if it is a good purchase for you.
+
+This game is neither free nor open source. We have covered it here because the game developers made an effort to port it to Linux.
+
+### Story Overview
+
+![steel rats][4]
+
+You belong to a biker gang, the “ **Steel Rats** ”, who stepped up to protect their city from an alien robot invasion. The alien robots aren’t just tiny toys that you can easily defeat; they come with deadly weapons and abilities.
+
+The game is set in an alternative version of 1940s USA, with a retro theme in place. You have to use your bike as the ultimate weapon against waves of alien robots, with boss fights as well.
+
+You will encounter 4 different characters with unique abilities, which you can switch between after progressing through a couple of rounds.
+
+You will start playing as “ **Toshi** ” and unlock other characters as you progress. **Toshi** is a genius and will be using a drone as his gadget to fight the alien robots. **James** – is the leader with the hammer attack as his special ability. **Lisa** would be the one utilizing fire to burn the junk robots. And, **Randall** will have his harpoon ready to destroy aerial robots with ease.
+
+### Gameplay
+
+![][5]
+
+Honestly, I am not a fan of 2.5D (or 2D) games. But games like [Unravel][6] are the exception, though it is still not available for Linux. Such a shame, EA.
+
+In this case, I did end up enjoying “ **Steel Rats** ” as one of the few 2D games I play.
+
+There is really no rocket science to this game: you just have to get good with the controls. Whether you use a controller or a keyboard, it is definitely challenging to get comfortable with them at first.
+
+You do not need to plan ahead to conserve your health or nitro boost, because you will always have them when needed, and there are checkpoints from which to resume your progress.
+
+You just need to keep the right pace and nail the perfect jump while hitting every enemy to get the best score on the leaderboards. Once you do that, the game ends up being an easy and fun experience.
+
+If you’re curious about the gameplay, we recommend watching this video:
+
+
+```
+use wasm_bindgen::prelude::*;
+
+#[wasm_bindgen]
+// This is pretty plain Rust code. If you've written Rust before, this
+// should look extremely familiar. If not, why wait?! Check this out:
+//
+pub fn excited_greeting(original: &str) -> String {
+    format!("HELLO, {}", original.to_uppercase())
+}
+```
+
+Second, we'll have to make two changes to our **Cargo.toml** configuration file:
+
+ * Add **wasm_bindgen** as a dependency.
+ * Configure the type of library binary to be a **cdylib** or dynamic system library. In this case, our system is **wasm** , and setting this option is how we produce **.wasm** binary files.
+
+
+```
+[package]
+name = "my-wasm-library"
+version = "0.1.0"
+authors = ["$YOUR_INFO"]
+edition = "2018"
+
+[lib]
+crate-type = ["cdylib", "rlib"]
+
+[dependencies]
+wasm-bindgen = "0.2.33"
+```
+
+Now let's build! If we just use **cargo build** , we'll get a **.wasm** binary, but in order to make it easy to call our Rust code from JavaScript, we'd like to have some JavaScript code that converts rich JavaScript types like strings and objects to pointers and passes these pointers to the Wasm module on our behalf. Doing this manually is tedious and prone to bugs.
+
+Luckily, in addition to being a library, **wasm-bindgen** also has the ability to create this "glue" JavaScript for us. This means in our code we can interact with our Wasm module using normal JavaScript types, and the generated code from **wasm-bindgen** will do the dirty work of converting these rich types into the pointer types that Wasm actually understands.
+
+We can use the awesome **wasm-pack** to build our Wasm binary, invoke the **wasm-bindgen** CLI tool, and package all of our JavaScript (and any optional generated TypeScript types) into one nice and neat package. Let's do that now!
+
+First we'll need to install **wasm-pack** :
+
+```
+$ cargo install wasm-pack
+```
+
+By default, **wasm-bindgen** produces ES6 modules. We'll use our code from a simple script tag, so we just want it to produce a plain old JavaScript object that gives us access to our Wasm functions. To do this, we'll pass it the **\--target no-modules** option.
+
+```
+$ wasm-pack build --target no-modules
+```
+
+We now have a **pkg** directory in our project. If we look at the contents, we'll see the following:
+
+ * **package.json** : useful if we want to package this up as an NPM module
+ * **my_wasm_library_bg.wasm** : our actual Wasm code
+ * **my_wasm_library.js** : the JavaScript "glue" code
+ * Some TypeScript definition files
+
+
+
+Now we can create an **index.html** file that will make use of our JavaScript and Wasm:
+
+```
+<html>
+<head>
+<meta content="text/html;charset=utf-8" http-equiv="Content-Type" />
+</head>
+<body>
+
+<script src='./pkg/my_wasm_library.js'></script>
+
+<script>
+window.addEventListener('load', async () => {
+  // Load the wasm file
+  await wasm_bindgen('./pkg/my_wasm_library_bg.wasm');
+  // Once it's loaded the `wasm_bindgen` object is populated
+  // with the functions defined in our Rust code
+  const greeting = wasm_bindgen.excited_greeting("Ryan");
+  console.log(greeting);
+});
+</script>
+</body>
+</html>
+```
+
+You may be tempted to open the HTML file in your browser, but unfortunately, this is not possible. For security reasons, Wasm files have to be served from the same domain as the HTML file. You'll need an HTTP server. If you have a favorite static HTTP server that can serve files from your filesystem, feel free to use that. I like to use [**basic-http-server**][14], which you can install and run like so:
+
+```
+$ cargo install basic-http-server
+$ basic-http-server
+```
+
+Now open the **index.html** file through the web server's address and check your JavaScript console. You should see a very exciting greeting there!
+
+If you have any questions, please [let me know][15]. Next time, we'll take a look at how we can use various browser and JavaScript APIs from within our Rust code.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/calling-rust-javascript
+
+作者:[Ryan Levick][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ryanlevick
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/javascript_vim.jpg?itok=mqkAeakO (JavaScript in Vim)
+[2]: https://opensource.com/article/19/2/why-use-rust-webassembly
+[3]: https://rustup.rs/
+[4]: https://doc.rust-lang.org/cargo/
+[5]: https://github.com/rustwasm/wasm-bindgen
+[6]: https://github.com/koute/stdweb
+[7]: https://github.com/koute/stdweb/issues/318
+[8]: https://www.rust-lang.org/governance/wgs/wasm
+[9]: http://december.com/html/4/element/html.html
+[10]: http://december.com/html/4/element/head.html
+[11]: http://december.com/html/4/element/meta.html
+[12]: http://december.com/html/4/element/body.html
+[13]: http://december.com/html/4/element/script.html
+[14]: https://github.com/brson/basic-http-server
+[15]: https://twitter.com/ryan_levick
diff --git a/sources/tech/20190318 Install MEAN.JS Stack In Ubuntu 18.04 LTS.md b/sources/tech/20190318 Install MEAN.JS Stack In Ubuntu 18.04 LTS.md
new file mode 100644
index 0000000000..925326e0d7
--- /dev/null
+++ b/sources/tech/20190318 Install MEAN.JS Stack In Ubuntu 18.04 LTS.md
@@ -0,0 +1,266 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Install MEAN.JS Stack In Ubuntu 18.04 LTS)
+[#]: via: (https://www.ostechnix.com/install-mean-js-stack-ubuntu/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+Install MEAN.JS Stack In Ubuntu 18.04 LTS
+======
+
+
+**MEAN.JS** is an open source, full-stack JavaScript solution for building fast and robust web applications. The **MEAN.JS** stack consists of **MongoDB** (a NoSQL database), **ExpressJS** (a Node.js server-side web application framework), **AngularJS** (a client-side web application framework), and **Node.js** (a JavaScript runtime, popular as a web server platform). In this tutorial, we will discuss how to install the MEAN.JS stack in Ubuntu. This guide was tested on an Ubuntu 18.04 LTS server; however, it should work on other Ubuntu versions and Ubuntu variants.
+
+### Install MongoDB
+
+**MongoDB** is a free, cross-platform, open source, NoSQL, document-oriented database. To install MongoDB on your Ubuntu system, refer to the following guide:
+
+ * [**Install MongoDB Community Edition In Linux**][2]
+
+
+
+### Install Node.js
+
+**NodeJS** is an open source, cross-platform, and lightweight JavaScript run-time environment that can be used to build scalable network applications.
+
+To install NodeJS on your system, refer to the following guide:
+
+ * [**How To Install NodeJS On Linux**][3]
+
+
+
+After installing MongoDB and Node.js, we need to install the other components required for the MEAN.JS stack, namely **Yarn** , **Grunt** , and **Gulp**.
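+
+Before continuing, you can quickly confirm that Node.js and npm are available on your PATH (version numbers will vary):
+
+```
+$ node --version
+$ npm --version
+```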
+
+### Install Yarn package manager
+
+Yarn is a package manager used by MEAN.JS stack to manage front-end packages.
+
+To install Yarn, run the following command:
+
+```
+$ npm install -g yarn
+```
+
+### Install Grunt Task Runner
+
+Grunt Task Runner is used to automate the development process.
+
+To install Grunt, run:
+
+```
+$ npm install -g grunt-cli
+```
+
+To verify if Yarn and Grunt have been installed, run:
+
+```
+$ npm list -g --depth=0
+/home/sk/.nvm/versions/node/v11.11.0/lib
+├── grunt-cli@<version>
+├── npm@<version>
+└── yarn@<version>
+```
+
+### Install Gulp Task Runner (Optional)
+
+This is optional. You can use Gulp instead of Grunt. To install Gulp Task Runner, run the following command:
+
+```
+$ npm install -g gulp
+```
+
+We have installed all required prerequisites. Now, let us deploy MEAN.JS stack.
+
+### Download and Install MEAN.JS Stack
+
+Install Git if it is not installed already:
+
+```
+$ sudo apt-get install git
+```
+
+Next, git clone the MEAN.JS repository with command:
+
+```
+$ git clone https://github.com/meanjs/mean.git meanjs
+```
+
+**Sample output:**
+
+```
+Cloning into 'meanjs'...
+remote: Counting objects: 8596, done.
+remote: Compressing objects: 100% (12/12), done.
+remote: Total 8596 (delta 3), reused 0 (delta 0), pack-reused 8584 Receiving objects: 100% (8596/8596), 2.62 MiB | 140.00 KiB/s, done.
+Resolving deltas: 100% (4322/4322), done.
+Checking connectivity... done.
+```
+
+The above command will clone the latest version of the MEAN.JS repository to **meanjs** folder in your current working directory.
+
+Go to the meanjs folder:
+
+```
+$ cd meanjs/
+```
+
+Run the following command to install the Node.js dependencies required for testing and running our application:
+
+```
+$ npm install
+```
+
+This will take some time. Please be patient.
+
+* * *
+
+**Troubleshooting:**
+
+When I run the above command in Ubuntu 18.04 LTS, I get the following error:
+
+```
+Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-67_binding.node
+Cannot download "https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-67_binding.node":
+
+HTTP error 404 Not Found
+
+[....]
+```
+
+If you ever get these types of common errors with the “node-sass and gulp-sass” modules, do the following:
+
+First uninstall the project and global gulp-sass modules using the following commands:
+
+```
+$ npm uninstall gulp-sass
+$ npm uninstall -g gulp-sass
+```
+
+Next uninstall the global node-sass module:
+
+```
+$ npm uninstall -g node-sass
+```
+
+Install the node-sass module globally first. Then install the gulp-sass module at the local project level.
+
+```
+$ npm install -g node-sass
+$ npm install gulp-sass
+```
+
+Now run npm install again from the project folder:
+
+```
+$ npm install
+```
+
+Now all dependencies will start to install without any issues.
+
+* * *
+
+Once all dependencies are installed, run the following command to install all the front-end modules needed for the application:
+
+```
+$ yarn --allow-root --config.interactive=false install
+```
+
+Or,
+
+```
+$ yarn --allow-root install
+```
+
+You will see the following message at the end if the installation is successful.
+
+```
+[...]
+> meanjs@0.6.0 snyk-protect /home/sk/meanjs
+> snyk protect
+
+Successfully applied Snyk patches
+
+Done in 99.47s.
+```
+
+### Test MEAN.JS
+
+The MEAN.JS stack has been installed. We can now start a sample application using the command:
+
+```
+$ npm start
+```
+
+After a few seconds, you will see a message like below. This means MEAN.JS stack is working!
+
+```
+[...]
+MEAN.JS - Development Environment
+
+Environment: development
+Server: http://0.0.0.0:3000
+Database: mongodb://localhost/mean-dev
+App version: 0.6.0
+MEAN.JS version: 0.6.0
+```
+
+![][4]
+
+To verify, open up the browser and navigate to **http://localhost:3000** (or your server’s IP address on port 3000, as shown in the output above). You should see a screen something like below.
+
+![][5]
+
+Mean stack test page
+
+Congratulations! MEAN.JS stack is ready to start building web applications.
+
+For further details, I recommend referring to the official **[MEAN.JS stack documentation][6]**.
+
+* * *
+
+Want to setup MEAN.JS stack in CentOS, RHEL, Scientific Linux? Check the following link for more details.
+
+ * **[Install MEAN.JS Stack in CentOS 7][7]**
+
+
+
+* * *
+
+And, that’s all for now, folks. I hope this tutorial helps you set up the MEAN.JS stack.
+
+If you find this tutorial useful, please share it on your social, professional networks and support OSTechNix.
+
+More good stuffs to come. Stay tuned!
+
+Cheers!
+
+**Resources:**
+
+ * **[MEAN.JS website][8]**
+ * [**MEAN.JS GitHub Repository**][9]
+
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/install-mean-js-stack-ubuntu/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]: https://www.ostechnix.com/install-mongodb-linux/
+[3]: https://www.ostechnix.com/install-node-js-linux/
+[4]: http://www.ostechnix.com/wp-content/uploads/2016/03/meanjs.png
+[5]: http://www.ostechnix.com/wp-content/uploads/2016/03/mean-stack-test-page.png
+[6]: http://meanjs.org/docs.html
+[7]: http://www.ostechnix.com/install-mean-js-stack-centos-7/
+[8]: http://meanjs.org/
+[9]: https://github.com/meanjs/mean
diff --git a/sources/tech/20190318 Let-s try dwm - dynamic window manager.md b/sources/tech/20190318 Let-s try dwm - dynamic window manager.md
new file mode 100644
index 0000000000..48f44a33cb
--- /dev/null
+++ b/sources/tech/20190318 Let-s try dwm - dynamic window manager.md
@@ -0,0 +1,150 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Let’s try dwm — dynamic window manager)
+[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/)
+[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
+
+Let’s try dwm — dynamic window manager
+======
+
+![][1]
+
+If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try _dwm_ — dynamic window manager. Written in under 2000 standard lines of code, dwm is an extremely fast yet powerful and highly customizable window manager.
+
+You can dynamically choose between tiling, monocle and floating layouts, organize your windows into multiple workspaces using tags, and navigate through them quickly using keyboard shortcuts. This article helps you get started using dwm.
+
+## **Installation**
+
+To install dwm on Fedora, run:
+
+```
+$ sudo dnf install dwm dwm-user
+```
+
+The _dwm_ package installs the window manager itself, and the _dwm-user_ package significantly simplifies configuration, as explained later in this article.
+
+Additionally, to be able to lock the screen when needed, we’ll also install _slock_ — a simple X display locker.
+
+```
+$ sudo dnf install slock
+```
+
+However, you can use a different one based on your personal preference.
+
+## **Quick start**
+
+To start dwm, choose the _dwm-user_ option on the login screen.
+
+![][2]
+
+After you log in, you’ll see a very simple desktop. In fact, the only thing there will be a bar at the top, listing nine tags that represent workspaces and a _[]=_ symbol that represents the layout of your windows.
+
+### Launching applications
+
+Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing _Alt+p_ and typing the name of the app followed by _Enter_. There’s also a shortcut _Alt+Shift+Enter_ for opening a terminal.
+
+Now that some apps are running, have a look at the layouts.
+
+### Layouts
+
+There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout.
+
+The tiling layout, represented by _[]=_ on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing _Alt+t._
+
+![][3]
+
+The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed.
+
+To swap windows between the two areas, hover your mouse over one in the stack area and press _Alt+Enter_ to swap it with the one in the master area.
+
+![][4]
+
+The monocle layout, represented by _[N]_ on the top bar, makes your primary window take the whole screen. You can switch to it by pressing _Alt+m_.
+
+Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is _Alt+f_ and the symbol on the top bar is _><>_.
+
+### Workspaces and tags
+
+Each window is assigned to a tag (1-9) listed at the top bar. To view a specific tag, either click on its number using your mouse or press _Alt+1..9._ You can even view multiple tags at once by clicking on their number using the secondary mouse button.
+
+Windows can be moved between different tags by highlighting them using your mouse, and pressing _Alt+Shift+1..9._
+
+## **Configuration**
+
+To make dwm as minimalistic as possible, it doesn’t use typical configuration files. Instead, you modify a C header file representing the configuration, and recompile it. But don’t worry: in Fedora it’s as simple as editing one file in your home directory, and everything else happens in the background thanks to the _dwm-user_ package provided by the maintainer in Fedora.
+
+First, you need to copy the file into your home directory using a command similar to the following:
+
+```
+$ mkdir ~/.dwm
+$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
+```
+
+You can get the exact path by running _man dwm-start._
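+
+If you'd rather locate the directory yourself, a quick sketch (the version string shown is only an example):
+
+```
+$ rpm -q dwm                # prints the installed VERSION-RELEASE, e.g. dwm-6.1-9.fc30.x86_64
+$ ls /usr/src/ | grep dwm   # find the matching dwm-VERSION-RELEASE source directory
+```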
+
+Second, just edit the _~/.dwm/config.h_ file. As an example, let’s configure a new shortcut to lock the screen by pressing _Alt+Shift+L_.
+
+Considering we’ve installed the _slock_ package mentioned earlier in this post, we need to add the following two lines into the file to make it work:
+
+Under the _/* commands */_ comment, add:
+
+```
+static const char *slockcmd[] = { "slock", NULL };
+```
+
+And add the following line to the _static Key keys[]_ array:
+
+```
+{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
+```
+
+In the end, it should look as follows (the added lines are the _slockcmd_ definition and the _XK_l_ key binding):
+
+```
+...
+ /* commands */
+ static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
+ static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
+ static const char *termcmd[] = { "st", NULL };
+ static const char *slockcmd[] = { "slock", NULL };
+
+ static Key keys[] = {
+ /* modifier key function argument */
+ { MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
+ { MODKEY, XK_p, spawn, {.v = dmenucmd } },
+ { MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
+ ...
+```
+
+Save the file.
+
+Finally, just log out by pressing _Alt+Shift+q_ and log in again. The scripts provided by the _dwm-user_ package will recognize that you have changed the _config.h_ file in your home directory and recompile dwm on login. And because dwm is so tiny, it’s fast enough that you won’t even notice it.
+
+You can try locking your screen now by pressing _Alt+Shift+L_ , and then logging back in by typing your password and pressing _Enter_.
+
+## **Conclusion**
+
+If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you’ve been looking for. However, it probably isn’t for beginners. There might be a lot of additional configuration you’ll need to do in order to make it just as you like it.
+
+To learn more about dwm, see the project’s homepage at <https://dwm.suckless.org/>.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/
+
+作者:[Adam Šamalík][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/asamalik/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png
+[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png
diff --git a/sources/tech/20190318 Solus 4 ‘Fortitude- Released with Significant Improvements.md b/sources/tech/20190318 Solus 4 ‘Fortitude- Released with Significant Improvements.md
new file mode 100644
index 0000000000..c7a8d4bc55
--- /dev/null
+++ b/sources/tech/20190318 Solus 4 ‘Fortitude- Released with Significant Improvements.md
@@ -0,0 +1,108 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Solus 4 ‘Fortitude’ Released with Significant Improvements)
+[#]: via: (https://itsfoss.com/solus-4-release)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Solus 4 ‘Fortitude’ Released with Significant Improvements
+======
+
+Finally, after a year of work, the much anticipated Solus 4 is here. It’s a significant release not just because this is a major upgrade, but also because this is the first major release after [Ikey Doherty (the founder of Solus) left the project][1] a few months ago.
+
+Now that everything’s under control with the new _management_ , **Solus 4 Fortitude** , with an updated Budgie desktop and other significant improvements, has been officially released.
+
+### What’s New in Solus 4
+
+![Solus 4 Fortitude][2]
+
+#### Core Improvements
+
+Solus 4 comes loaded with **[Linux Kernel 4.20.16][3]** , which enables better hardware support (like touchpad support, improved support for Intel Coffee Lake and Ice Lake CPUs, and support for AMD Picasso & Raven2 APUs).
+
+This release also ships with the latest [FFmpeg 4.1.1][4]. They have also enabled support in [VLC][6] for [dav1d][5], an open source AV1 decoder. Together, these upgrades should significantly improve the multimedia experience.
+
+It also includes some minor fixes to the Software Center for issues with finding applications or viewing their descriptions.
+
+In addition, WPS Office has been removed from the listing.
+
+#### UI Improvements
+
+![Budgie 10.5][7]
+
+The Budgie desktop update includes some minor changes and also comes baked in with the [Plata (Noir) GTK Theme.][8]
+
+You will no longer see the same application listed multiple times in the menu; they’ve fixed this. They have also introduced a “ **Caffeine** ” mode as an applet, which prevents the system from suspending, locking the screen, or changing the brightness while you are working. You can also schedule how long Caffeine mode stays active.
+
+![Caffeine Mode][9]
+
+The new Budgie desktop experience also adds quick actions to the app icons on the task bar, dubbed “ **Icon Tasklist** ”. It makes it easy to manage a browser’s active tabs or to minimize a window and move it to a new workspace (as shown in the image below).
+
+![Icon Tasklist][10]
+
+As the [change log][11] mentions, the above popover design lets you do more:
+
+ * _Close all instances of the selected application_
+ * _Easily access per-window controls for marking it always on top, maximizing / unmaximizing, minimizing, and moving it to various workspaces._
+ * _Quickly favorite / unfavorite apps_
+ * _Quickly launch a new instance of the selected application_
+ * _Scroll up or down on an IconTasklist button when a single window is open to activate and bring it into focus, or minimize it, based on the scroll direction._
+ * _Toggle to minimize and unminimize various application windows_
+
+
+
+The notification area now groups notifications from specific applications instead of piling them all up, which is a good improvement.
+
+In addition to these, the sound widget got some cool improvements, and you can personalize the look and feel of your desktop more efficiently.
+
+For all the nitty-gritty details, refer to the official [release notes][11].
+
+### Download Solus 4
+
+You can get the latest version of Solus from its download page below. It is available in the default Budgie, GNOME and MATE desktop flavors.
+
+[Get Solus 4][12]
+
+### Wrapping Up
+
+Solus 4 is definitely an impressive upgrade: rather than introducing unnecessary fancy features, it adds only useful, subtle changes.
+
+What do you think about the latest Solus 4 Fortitude? Have you tried it yet?
+
+Let us know your thoughts in the comments below.
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/solus-4-release
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/ikey-leaves-solus/
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/solus-4-featured.jpg?fit=800%2C450&ssl=1
+[3]: https://itsfoss.com/kernel-4-20-release/
+[4]: https://www.ffmpeg.org/
+[5]: https://code.videolan.org/videolan/dav1d
+[6]: https://www.videolan.org/index.html
+[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/Budgie-desktop.jpg?resize=800%2C450&ssl=1
+[8]: https://gitlab.com/tista500/plata-theme
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/caffeine-mode.jpg?ssl=1
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/IconTasklistPopover.jpg?ssl=1
+[11]: https://getsol.us/2019/03/17/solus-4-released/
+[12]: https://getsol.us/download/
diff --git a/sources/tech/20190319 Five Commands To Use Calculator In Linux Command Line.md b/sources/tech/20190319 Five Commands To Use Calculator In Linux Command Line.md
new file mode 100644
index 0000000000..c419d15268
--- /dev/null
+++ b/sources/tech/20190319 Five Commands To Use Calculator In Linux Command Line.md
@@ -0,0 +1,342 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Five Commands To Use Calculator In Linux Command Line?)
+[#]: via: (https://www.2daygeek.com/linux-command-line-calculator-bc-calc-qalc-gcalccmd/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Five Commands To Use Calculator In Linux Command Line?
+======
+
+As a Linux administrator, you may use a command-line calculator many times a day.
+
+I have used one especially when creating LVM volumes, to work out sizes from PE (physical extent) values.
+
+There are many commands available for this purpose, and I’m going to list the most-used ones in this article.
+
+These command-line calculators allow us to perform all kinds of calculations: scientific, financial, or simple arithmetic.
+
+Also, we can use these commands in shell scripts for complex math.
+
+In this article, I’m listing the top five command line calculator commands.
+
+Those command-line calculator commands are listed below.
+
+ * **`bc:`** An arbitrary precision calculator language
+ * **`calc:`** An arbitrary precision calculator
+ * **`expr:`** Evaluates expressions
+ * **`gcalccmd:`** gnome-calculator – a desktop calculator
+ * **`qalc:`** The command-line interface to the Qalculate calculator
+ * **`Linux Shell:`** Built-in shell arithmetic plus tools such as echo and awk
+
+
+
+### How To Perform Calculation In Linux Using bc Command?
+
+bc stands for Basic Calculator. bc is a language that supports arbitrary-precision numbers with interactive execution of statements. Its syntax has some similarities to the C programming language.
+
+A standard math library is available via a command-line option. If requested, the math library is defined before processing any files. bc starts by processing code from all the files listed on the command line, in the order listed.
+
+After all files have been processed, bc reads from the standard input. All code is executed as it is read.
+
+The bc command is installed by default on most Linux systems. If not, use the following procedure to install it.
+
+For **`Fedora`** system, use **[DNF Command][1]** to install bc.
+
+```
+$ sudo dnf install bc
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install bc.
+
+```
+$ sudo apt install bc
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install bc.
+
+```
+$ sudo pacman -S bc
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install bc.
+
+```
+$ sudo yum install bc
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install bc.
+
+```
+$ sudo zypper install bc
+```
+
+### How To Use The bc Command To Perform Calculation In Linux?
+
+We can use the bc command to perform all kinds of calculations right from the terminal.
+
+```
+$ bc
+bc 1.07.1
+Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
+This is free software with ABSOLUTELY NO WARRANTY.
+For details type `warranty'.
+
+1+2
+3
+
+10-5
+5
+
+2*5
+10
+
+10/2
+5
+
+(2+4)*5-5
+25
+
+quit
+```
+
+Use the `-l` flag to load the standard math library.
+
+```
+$ bc -l
+bc 1.07.1
+Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
+This is free software with ABSOLUTELY NO WARRANTY.
+For details type `warranty'.
+
+3/5
+.60000000000000000000
+
+quit
+```
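+
+Because bc reads from standard input, it also works non-interactively, which is handy in shell scripts. For example, pipe an expression into it (use **scale** to control the number of decimal places):
+
+```
+$ echo "(2+4)*5-5" | bc
+25
+$ echo "scale=4; 3/5" | bc
+.6000
+```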
+
+### How To Perform Calculation In Linux Using calc Command?
+
+calc is an arbitrary-precision calculator. It’s a simple calculator that allows us to perform all kinds of calculations on the Linux command line.
+
+For **`Fedora`** system, use **[DNF Command][1]** to install calc.
+
+```
+$ sudo dnf install calc
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install calc.
+
+```
+$ sudo apt install calc
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install calc.
+
+```
+$ sudo pacman -S calc
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install calc.
+
+```
+$ sudo yum install calc
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install calc.
+
+```
+$ sudo zypper install calc
+```
+
+### How To Use The calc Command To Perform Calculation In Linux?
+
+We can use the calc command to perform all kinds of calculations right from the terminal.
+
+Interactive mode:
+
+```
+$ calc
+C-style arbitrary precision calculator (version 2.12.7.1)
+Calc is open software. For license details type: help copyright
+[Type "exit" to exit, or "help" for help.]
+
+; 5+1
+ 6
+; 5-1
+ 4
+; 5*2
+ 10
+; 10/2
+ 5
+; quit
+```
+
+Non-interactive mode:
+
+```
+$ calc 3/5
+ 0.6
+```
+
+### How To Perform Calculation In Linux Using expr Command?
+
+The expr command prints the value of an EXPRESSION to standard output. It’s part of GNU coreutils, so there is no need to install it.
+
+### How To Use The expr Command To Perform Calculation In Linux?
+
+Use the following format for basic calculations.
+
+For addition:
+
+```
+$ expr 5 + 1
+6
+```
+
+For subtraction:
+
+```
+$ expr 5 - 1
+4
+```
+
+For division:
+
+```
+$ expr 10 / 2
+5
+```
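+
+For multiplication, note that the asterisk must be escaped (or quoted) so the shell doesn't expand it as a wildcard:
+
+```
+$ expr 5 \* 2
+10
+```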
+
+### How To Perform Calculation In Linux Using gcalccmd Command?
+
+gnome-calculator is the official calculator of the GNOME desktop environment, and gcalccmd is its console version. It is installed by default with the GNOME desktop.
+
+### How To Use The gcalccmd Command To Perform Calculation In Linux?
+
+Here are a few examples:
+
+```
+$ gcalccmd
+
+> 5+1
+6
+
+> 5-1
+4
+
+> 5*2
+10
+
+> 10/2
+5
+
+> sqrt(16)
+4
+
+> 3/5
+0.6
+
+> quit
+```
+
+### How To Perform Calculation In Linux Using qalc Command?
+
+Qalculate is a multi-purpose cross-platform desktop calculator. It is simple to use but provides power and versatility normally reserved for complicated math packages, as well as useful tools for everyday needs (such as currency conversion and percent calculation).
+
+Features include a large library of customizable functions, unit calculations and conversion, symbolic calculations (including integrals and equations), arbitrary precision, uncertainty propagation, interval arithmetic, plotting, and a user-friendly interface (GTK+ and CLI).
+
+For **`Fedora`** system, use **[DNF Command][1]** to install qalc.
+
+```
+$ sudo dnf install libqalculate
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install qalc.
+
+```
+$ sudo apt install libqalculate
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install qalc.
+
+```
+$ sudo pacman -S libqalculate
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install qalc.
+
+```
+$ sudo yum install libqalculate
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install qalc.
+
+```
+$ sudo zypper install libqalculate
+```
+
+### How To Use The qalc Command To Perform Calculation In Linux?
+
+Here are a few examples:
+
+```
+$ qalc
+> 5+1
+
+ 5 + 1 = 6
+
+> ans*2
+
+ ans * 2 = 12
+
+> ans-2
+
+ ans - 2 = 10
+
+> 1 USD to INR
+It has been 36 day(s) since the exchange rates last were updated.
+Do you wish to update the exchange rates now? y
+
+ error: Failed to download exchange rates from coinbase.com: Resolving timed out after 15000 milliseconds.
+ 1 * dollar = approx. INR 69.638581
+
+> 10 USD to INR
+
+ 10 * dollar = approx. INR 696.38581
+
+> quit
+```
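+
+qalc can also evaluate an expression passed directly as an argument, which is useful in scripts. A quick sketch (the `-t` flag, which prints only the terse result, may depend on your libqalculate version):
+
+```
+$ qalc -t "5+5"
+10
+```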
+
+### How To Perform Calculation In Linux Using Linux Shell Command?
+
+We can use shell commands such as echo and awk to perform calculations.
+
+For addition, using the echo command with shell arithmetic expansion:
+
+```
+$ echo $((5+5))
+10
+```
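+
+Shell arithmetic with `$(( ))` is integer-only. For floating-point results, awk works well; a BEGIN block lets it act as a calculator without reading any input:
+
+```
+$ awk 'BEGIN { printf "%.4f\n", 10 / 3 }'
+3.3333
+```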
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-command-line-calculator-bc-calc-qalc-gcalccmd/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[4]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[5]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[6]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
diff --git a/sources/tech/20190319 How To Set Up a Firewall with GUFW on Linux.md b/sources/tech/20190319 How To Set Up a Firewall with GUFW on Linux.md
new file mode 100644
index 0000000000..26b9850109
--- /dev/null
+++ b/sources/tech/20190319 How To Set Up a Firewall with GUFW on Linux.md
@@ -0,0 +1,365 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Set Up a Firewall with GUFW on Linux)
+[#]: via: (https://itsfoss.com/set-up-firewall-gufw)
+[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
+
+How To Set Up a Firewall with GUFW on Linux
+======
+
+**UFW (Uncomplicated Firewall)** is a simple-to-use firewall utility with plenty of options for most users. It is an interface for **iptables** , which is the classic (and harder to get comfortable with) way to set up rules for your network.
+
+**Do you really need a firewall for desktop?**
+
+![][1]
+
+A **[firewall][2]** is a way to regulate the incoming and outgoing traffic on your network. A well-configured firewall is crucial for the security of servers.
+
+But what about normal desktop users? Do you need a firewall on your Linux system? Most likely you are connected to the internet via a router linked to your internet service provider (ISP). Some routers already have a built-in firewall. On top of that, your actual system is hidden behind NAT. In other words, you probably have a security layer when you are on your home network. Still, a firewall on your own machine adds an extra layer of protection, especially when you connect to untrusted networks away from home.
+
+With that in mind, let’s see how you can easily install and configure a firewall on Ubuntu or any other Linux distribution.
+
+### Setting Up A Firewall With GUFW
+
+**[GUFW][3]** is a graphical utility for managing [Uncomplicated Firewall][4] ( **UFW** ). In this guide, I’ll go over configuring a firewall using **GUFW** that suits your needs, going over the different modes and rules.
+
+But first, let’s see how to install GUFW.
+
+#### Installing GUFW on Ubuntu and other Linux
+
+GUFW is available in all major Linux distributions. I advise using your distribution’s package manager for installing GUFW.
+
+If you are using Ubuntu, make sure you have the Universe repository enabled. To do that, open up a terminal (default hotkey: CTRL+ALT+T) and enter:
+
+```
+sudo add-apt-repository universe
+sudo apt update -y
+```
+
+Now you can install GUFW with this command:
+
+```
+sudo apt install gufw -y
+```
+
+That’s it! If you prefer not to touch the terminal, you can install it from the Software Center as well.
+
+Open Software Center and search for **gufw** and click on the search result.
+
+![Search for gufw in software center][5]
+
+Go ahead and click **Install**.
+
+![Install GUFW from the Software Center][6]
+
+To open **gufw** , go to your menu and search for it.
+
+![Start GUFW][7]
+
+This will open the firewall application and you’ll be greeted by a “ **Getting Started** ” section.
+
+![GUFW Interface and Welcome Screen][8]
+
+#### Turn on the firewall
+
+The first thing to notice about this menu is the **Status** toggle. Pressing this button will turn on/off the firewall ( **default:** off), applying your preferences (policies and rules).
+
+![Turn on the firewall][9]
+
+If turned on, the shield icon turns from grey to colored. The colors, as noted later in this article, reflect your policies. This will also make the firewall **automatically start** on system startup.
+
+**Note:** _**Home** will be turned **off** by default. The other profiles (see next section) will be turned **on.**_
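+
+GUFW is a front end, so the **Status** toggle corresponds to enabling or disabling UFW itself; you can achieve the same from a terminal:
+
+```
+sudo ufw enable
+sudo ufw disable
+```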
+
+#### Understanding GUFW and its profiles
+
+As you can see in the menu, you can select different **profiles**. Each profile comes with different **default policies**. What this means is that they offer different behaviors for incoming and outgoing traffic.
+
+The **default profiles** are:
+
+ * Home
+ * Public
+ * Office
+
+
+
+You can select another profile by clicking on the current one ( **default: Home** ).
+
+![][10]
+
+Selecting one of them will modify the default behavior. Further down, you can change Incoming and Outgoing traffic preferences.
+
+By default, both in **Home** and in **Office** , these policies are **Deny Incoming** and **Allow Outgoing**. This lets you use outgoing services such as HTTP/HTTPS without letting anything get in (e.g., SSH).
+
+For **Public** , they are **Reject Incoming** and **Allow Outgoing**. **Reject** , similar to **deny** , doesn’t let services in, but also sends feedback to the user/service that tried accessing your machine (instead of simply dropping/hanging the connection).
+
+Note
+
+If you are an average desktop user, you can stick with the default profiles. You’ll have to manually change the profiles if you change the network.
+
+So if you are travelling, set the firewall to the Public profile; from then on, the firewall will start in Public mode on each reboot.
+
+#### Configuring firewall rules and policies [for advanced users]
+
+All profiles use the same rules, only the policies the rules build upon will differ. Changing the behavior of a policy ( **Incoming/Outgoing** ) will apply the changes to the selected profile.
+
+Note that the policies can only be changed while the firewall is active (Status: ON).
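+
+Since GUFW and UFW share the same state, you can always confirm the active default policies from a terminal:
+
+```
+sudo ufw status verbose
+```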
+
+Profiles can easily be added, deleted and renamed from the **Preferences** menu.
+
+##### Preferences
+
+In the top bar, click on **Edit**. Select **Preferences**.
+
+![Open Preferences Menu in GUFW][11]
+
+This will open up the **Preferences** menu.
+
+![][12]
+
+Let’s go over the options you have here!
+
+**Logging** means exactly what you would think: how much information the firewall writes to the log files.
+
+The options under **Gufw** are quite self-explanatory.
+
+In the section under **Profiles** is where we can add, delete and rename profiles. Double-clicking on a profile will allow you to **rename** it. Pressing **Enter** will complete this process and pressing **Esc** will cancel the rename.
+
+![][13]
+
+To **add** a new profile, click on the **+** under the list of profiles. This will add a new profile. However, it won’t notify you about it. You’ll also have to scroll down the list to see the profile you created (using the mouse wheel or the scroll bar on the right side of the list).
+
+**Note:** _The newly added profile will **Deny Incoming** and **Allow Outgoing** traffic._
+
+![][14]
+
+Clicking a profile highlights it. Pressing the **–** button will **delete** the highlighted profile.
+
+![][15]
+
+**Note:** _You can’t rename/remove the currently selected profile_.
+
+You can now click on **Close**. Next, I’ll go into setting up different **rules**.
+
+##### Rules
+
+Back to the main menu, somewhere in the middle of the screen you can select different tabs ( **Home, Rules, Report, Logs)**. We already covered the **Home** tab (that’s the quick guide you see when you start the app).
+
+![][16]
+
+Go ahead and select **Rules**.
+
+![][17]
+
+This will be the bulk of your firewall configuration: networking rules. You need to understand the concepts UFW is based on: **allowing, denying, rejecting** and **limiting** traffic.
+
+**Note:** _In UFW, the rules apply from top to bottom (the top rules take effect first and on top of them are added the following ones)._
+
+**Allow, Deny, Reject, Limit:** These are the available policies for the rules you’ll add to your firewall.
+
+Let’s see exactly what each of them means:
+
+ * **Allow:** allows any entry traffic to a port
+ * **Deny:** denies any entry traffic to a port
+ * **Reject:** denies any entry traffic to a port and informs the requester about the rejection
+ * **Limit:** denies entry traffic if an IP address has attempted to initiate 6 or more connections in the last 30 seconds
+
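+Under the hood, these map directly to UFW policies. A hedged sketch of the command-line equivalents, using port 22 (SSH) purely as an example:
+
+```
+sudo ufw allow 22/tcp    # Allow: accept incoming traffic on port 22
+sudo ufw deny 22/tcp     # Deny: silently drop it
+sudo ufw reject 22/tcp   # Reject: drop it and inform the requester
+sudo ufw limit 22/tcp    # Limit: rate-limit repeated connection attempts
+```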
+
+
+##### Adding Rules
+
+There are three ways to add rules in GUFW. I’ll present all three methods in the following section.
+
+**Note:** _After you add rules, changing their order is a very tricky process, and it’s easier to just delete them and re-add them in the right order._
+
+But first, click on the **+** at the bottom of the **Rules** tab.
+
+![][18]
+
+This should open a pop-up menu ( **Add a Firewall Rule** ).
+
+![][19]
+
+At the top of this menu, you can see the three ways you can add rules. I’ll guide you through each method: **Preconfigured, Simple, and Advanced**.
+
+**Preconfigured Rules**
+
+This is the most beginner-friendly way to add rules.
+
+The first step is choosing a policy for the rule (from the ones detailed above).
+
+![][20]
+
+The next step is to choose the direction the rule will affect ( **Incoming, Outgoing, Both** ).
+
+![][21]
+
+There are plenty of **Category** and **Subcategory** choices, which narrow down the **Applications** you can select.
+
+Choosing an **Application** will set up a set of ports based on what is needed for that particular application. This is especially useful for apps that might operate on multiple ports, or if you don’t want to bother with manually creating rules for handwritten port numbers.
+
+If you wish to further customize the rule, you can click on the **orange arrow icon**. This will copy the current settings (Application with its ports, etc.) and take you to the **Advanced** rule menu. I’ll cover that later in this article.
+
+For this example, I picked an **Office Database** app: **MySQL**. I’ll deny all incoming traffic to the ports used by this app.
+To create the rule, click on **Add**.
+
+![][22]
+
+You can now **Close** the pop-up (if you don’t want to add any other rules). You can see that the rule has been successfully added.
+
+![][23]
+
+The ports have been added by GUFW, and the rules have been automatically numbered. You may wonder why there are two new rules instead of just one; the answer is that UFW automatically adds both a standard **IPv4** rule and an **IPv6** rule.
+
+**Simple Rules**
+
+Although setting up preconfigured rules is nice, there is another easy way to add a rule. Click on the **+** icon again and go to the **Simple** tab.
+
+![][24]
+
+The options here are straightforward. Enter a name for your rule and select the policy and the direction. I’ll add a rule for rejecting incoming SSH attempts.
+
+![][25]
+
+The **Protocols** you can choose are **TCP, UDP** or **Both**.
+
+You must now enter the **Port** for which you want to manage the traffic. You can enter a **port number** (e.g. 22 for ssh), a **port range** with inclusive ends separated by a **:** ( **colon** ) (e.g. 81:89) or a **service name** (e.g. ssh). I’ll use **ssh** and select **both TCP and UDP** for this example. As before, click on **Add** to complete the creation of your rule. You can click the **red arrow icon** to copy the settings to the **Advanced** rule creation menu.
+
+![][26]
+
+If you select **Close** , you can see that the new rule (along with the corresponding IPv6 rule) has been added.
+
+![][27]
+
+**Advanced Rules**
+
+I’ll now go into how to set up more advanced rules, to handle traffic from specific IP addresses and subnets and targeting different interfaces.
+
+Let’s open up the **Rules** menu again. Select the **Advanced** tab.
+
+![][28]
+
+By now, you should already be familiar with the basic options: **Name, Policy, Direction, Protocol, Port**. These are the same as before.
+
+![][29]
+
+**Note:** _You can choose both a receiving port and a requesting port._
+
+What changes is that you now have additional options to further specialize your rules.
+
+I mentioned before that rules are automatically numbered by GUFW. With **Advanced** rules you specify the position of your rule by entering a number in the **Insert** option.
+
+**Note:** _Inputting **position 0** will add your rule after all existing rules._
+
+**Interface** lets you select any network interface available on your machine. By doing so, the rule will only affect traffic to and from that specific interface.
+
+**Log** changes exactly that: what will and what won’t be logged.
+
+You can also choose IPs for the requesting and for the receiving port/service ( **From** , **To** ).
+
+All you have to do is specify an **IP address** (e.g. 192.168.0.102) or an entire **subnet** (e.g. 192.168.0.0/24 for IPv4 addresses ranging from 192.168.0.0 to 192.168.0.255).
+
+In my example, I’ll set up a rule to allow all incoming TCP SSH requests from systems on my subnet to a specific network interface of the machine I’m currently running. I’ll add the rule after all my standard IP rules, so that it takes effect on top of the other rules I have set up.
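+
+For reference, a rule like this expressed directly in UFW would look something like the following sketch (the interface name _enp3s0_ is an assumption; substitute your own):
+
+```
+sudo ufw allow in on enp3s0 from 192.168.0.0/24 to any port 22 proto tcp
+```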
+
+![][30]
+
+**Close** the menu.
+
+![][31]
+
+The rule has been successfully added after the other standard IP rules.
+
+##### Edit Rules
+
+Clicking a rule in the rules list will highlight it. Now, if you click on the **little cog icon** at the bottom, you can **edit** the highlighted rule.
+
+![][32]
+
+This will open up a menu looking something like the **Advanced** menu I explained in the last section.
+
+![][33]
+
+**Note:** _Editing any options of a rule will move it to the end of your list._
+
+You can now either click **Apply** to modify your rule and move it to the end of the list, or hit **Cancel**.
+
+##### Delete Rules
+
+After selecting (highlighting) a rule, you can also click on the **–** icon to **delete** it.
+
+![][34]
+
+##### Reports
+
+Select the **Report** tab. Here you can see services that are currently running (along with information about them, such as Protocol, Port, Address and Application name). From here, you can **pause the listening report (pause icon)** or **create a rule from a highlighted service in the listening report (+ icon)**.
+
+![][35]
+
+##### Logs
+
+Select the **Logs** tab. This is where you check for any errors or suspicious rules. I’ve tried creating some invalid rules to show you what these might look like when you don’t know why you can’t add a certain rule. In the bottom section, there are two icons: clicking the **first icon copies the logs** to your clipboard, and clicking the **second icon clears the log**.
+
+![][36]
+
+### Wrapping Up
+
+Having a firewall that is properly configured can greatly contribute to your Ubuntu experience, making your machine safer to use and allowing you to have full control over incoming and outgoing traffic.
+
+I have covered the different uses and modes of **GUFW** , going into how to set up different rules and configure a firewall to your needs. I hope that this guide has been helpful to you.
+
+If you are a beginner, this should prove to be a comprehensive guide; even if you are more versed in the Linux world and maybe getting your feet wet into servers and networking, I hope you learned something new.
+
+Let us know in the comments if this article helped you and why you decided a firewall would improve your system!
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/set-up-firewall-gufw
+
+作者:[Sergiu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/sergiu/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firewall-linux.png?resize=800%2C450&ssl=1
+[2]: https://en.wikipedia.org/wiki/Firewall_(computing)
+[3]: http://gufw.org/
+[4]: https://en.wikipedia.org/wiki/Uncomplicated_Firewall
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu_software_gufw-1.jpg?ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu_software_install_gufw.jpg?ssl=1
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/show_applications_gufw.jpg?ssl=1
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw.jpg?ssl=1
+[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_toggle_status.jpg?ssl=1
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_select_profile-1.jpg?ssl=1
+[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_open_preferences.jpg?ssl=1
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preferences.png?fit=800%2C585&ssl=1
+[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_rename_profile.png?fit=800%2C551&ssl=1
+[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_profile.png?ssl=1
+[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_delete_profile.png?ssl=1
+[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_home_tab.png?ssl=1
+[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_rules_tab.png?ssl=1
+[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_rule.png?ssl=1
+[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_rules_menu.png?ssl=1
+[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_rule_policy.png?ssl=1
+[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_rule_direction.png?ssl=1
+[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_add_rule.png?ssl=1
+[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_rule_added.png?ssl=1
+[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_simple_rules_menu.png?ssl=1
+[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_simple_rule_name_policy_direction.png?ssl=1
+[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_simple_rule.png?ssl=1
+[27]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_simple_rule_added.png?ssl=1
+[28]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_advanced_rules_menu.png?ssl=1
+[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_advanced_rule_basic_options.png?ssl=1
+[30]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_advanced_rule.png?ssl=1
+[31]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_advanced_rule_added.png?ssl=1
+[32]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_edit_highlighted_rule.png?ssl=1
+[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_edit_rule_menu.png?ssl=1
+[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_delete_rule.png?ssl=1
+[35]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_report_tab.png?ssl=1
+[36]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_log_tab-1.png?ssl=1
diff --git a/sources/tech/20190319 How to set up a homelab from hardware to firewall.md b/sources/tech/20190319 How to set up a homelab from hardware to firewall.md
new file mode 100644
index 0000000000..d8bb34395b
--- /dev/null
+++ b/sources/tech/20190319 How to set up a homelab from hardware to firewall.md
@@ -0,0 +1,107 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to set up a homelab from hardware to firewall)
+[#]: via: (https://opensource.com/article/19/3/home-lab)
+[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
+
+How to set up a homelab from hardware to firewall
+======
+
+Take a look at hardware and software options for building your own homelab.
+
+![][1]
+
+Do you want to create a homelab? Maybe you want to experiment with different technologies, create development environments, or have your own private cloud. There are many reasons to have a homelab, and this guide aims to make it easier to get started.
+
+There are three categories to consider when planning a home lab: hardware, software, and maintenance. We'll look at the first two categories here and save maintaining your computer lab for a future article.
+
+### Hardware
+
+When thinking about your hardware needs, first consider how you plan to use your lab as well as your budget, noise, space, and power usage.
+
+If buying new hardware is too expensive, search local universities, ads, and websites like eBay or Craigslist for recycled servers. They are usually inexpensive, and server-grade hardware is built to last many years. You'll need three types of hardware: a virtualization server, storage, and a router/firewall.
+
+#### Virtualization servers
+
+A virtualization server allows you to run several virtual machines that share the physical box's resources while maximizing and isolating resources. If you break one virtual machine, you won't have to rebuild the entire server, just the virtual one. If you want to do a test or try something without the risk of breaking your entire system, just spin up a new virtual machine and you're ready to go.
+
+The two most important factors to consider in a virtualization server are the number and speed of its CPU cores and its memory. If there are not enough resources to share among all the virtual machines, they'll be overallocated and try to steal each other's CPU cycles and memory.
+
+So, consider a CPU platform with multiple cores. You want to ensure the CPU supports virtualization instructions (VT-x for Intel and AMD-V for AMD). Examples of good consumer-grade processors that can handle virtualization are Intel i5 or i7 and AMD Ryzen. If you are considering server-grade hardware, the Xeon class for Intel and EPYC for AMD are good options. Memory can be expensive, especially the latest DDR4 SDRAM. When estimating memory requirements, factor in at least 2GB for the host operating system's memory consumption.
+
+If your electricity bill or noise is a concern, solutions like Intel's NUC devices provide a small form factor, low power usage, and reduced noise, but at the expense of expandability.
+
+#### Network-attached storage (NAS)
+
+If you want a machine loaded with hard drives to store all your personal data, movies, pictures, etc. and provide storage for the virtualization server, network-attached storage (NAS) is what you want.
+
+In most cases, you won't need a powerful CPU; in fact, many commercial NAS solutions use low-powered ARM CPUs. A motherboard that supports multiple SATA disks is a must. If your motherboard doesn't have enough ports, use a host bus adapter (HBA) SAS controller to add extras.
+
+Network performance is critical for a NAS, so select a gigabit network interface (or better).
+
+Memory requirements will differ based on your filesystem. ZFS is one of the most popular filesystems for NAS, and you'll need more memory to use features such as caching or deduplication. Error-correcting code (ECC) memory is your best bet to protect data from corruption (but make sure your motherboard supports it before you buy). Last, but not least, don't forget an uninterruptible power supply (UPS), because losing power can cause data corruption.
+
+#### Firewall and router
+
+Have you ever realized that a cheap router/firewall is usually the main thing protecting your home network from the outside world? These routers rarely receive timely security updates, if they receive any at all. Scared now? Well, [you should be][2]!
+
+You usually don't need a powerful CPU or a great deal of memory to build your own router/firewall, unless you are handling a huge throughput or want to do CPU-intensive tasks, like a VPN server or traffic filtering. In such cases, you'll need a multicore CPU with AES-NI support.
+
+You'll want at least two 1-gigabit or better Ethernet network interface cards (NICs). A managed switch, while not required, is recommended for connecting your DIY router and creating VLANs to further isolate and secure your network.
+
+![Home computer lab PfSense][4]
+
+### Software
+
+After you've selected your virtualization server, NAS, and firewall/router, the next step is exploring the different operating systems and software to maximize their benefits. While you could use a regular Linux distribution like CentOS, Debian, or Ubuntu, they usually take more time to configure and administer than the following options.
+
+#### Virtualization software
+
+**[KVM][5]** (Kernel-based Virtual Machine) lets you turn Linux into a hypervisor so you can run multiple virtual machines in the same box. The best thing is that KVM is part of Linux, and it is the go-to option for many enterprises and home users. If you are comfortable, you can install **[libvirt][6]** and **[virt-manager][7]** to manage your virtualization platform.
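+
+As a rough sketch of what getting started with KVM on Fedora might look like (the package group, and the ISO path, are illustrative assumptions):
+
+```
+$ sudo dnf install @virtualization          # libvirt, virt-manager, and related tools
+$ sudo systemctl enable --now libvirtd      # start the libvirt daemon
+$ sudo virt-install --name testvm --memory 2048 --vcpus 2 \
+    --disk size=20 --cdrom /path/to/install.iso
+```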
+
+**[Proxmox VE][8]** is a robust, enterprise-grade solution and a full open source virtualization and container platform. It is based on Debian and uses KVM as its hypervisor and LXC for containers. Proxmox offers a powerful web interface, an API, and can scale out to many clustered nodes, which is helpful because you'll never know when you'll run out of capacity in your lab.
+
+**[oVirt][9] (RHV)** is another enterprise-grade solution that uses KVM as the hypervisor. Just because it's enterprise doesn't mean you can't use it at home. oVirt offers a powerful web interface and an API and can handle hundreds of nodes (if you are running that many servers, I don't want to be your neighbor!). The potential problem with oVirt for a home lab is that it requires a minimum set of nodes: You'll need external storage, such as a NAS, and at least two additional virtualization nodes (you can run it on just one, but you'll run into problems maintaining your environment).
+
+#### NAS software
+
+**[FreeNAS][10]** is the most popular open source NAS distribution, and it's based on the rock-solid FreeBSD operating system. One of its most robust features is its use of the ZFS filesystem, which provides data-integrity checking, snapshots, replication, and multiple levels of redundancy (mirroring, striped mirrors, and striping). On top of that, everything is managed from the powerful and easy-to-use web interface. Before installing FreeNAS, check its hardware support, as it is not as wide as that of Linux-based distributions.
+
+Another popular alternative is the Linux-based **[OpenMediaVault][11]**. One of its main features is its modularity, with plugins that extend and add features. Among its included features are a web-based administration interface; protocols like CIFS, SFTP, NFS, iSCSI; and volume management, including software RAID, quotas, access control lists (ACLs), and share management. Because it is Linux-based, it has extensive hardware support.
+
+#### Firewall/router software
+
+**[pfSense][12]** is an open source, enterprise-grade FreeBSD-based router and firewall distribution. It can be installed directly on a server or even inside a virtual machine (to manage your virtual or physical networks and save space). It has many features and can be expanded using packages. It is managed entirely using the web interface, although it also has command-line access. It has all the features you would expect from a router and firewall, like DHCP and DNS, as well as more advanced features, such as intrusion detection (IDS) and intrusion prevention (IPS) systems. You can create multiple networks listening on different interfaces or using VLANs, and you can create a secure VPN server with a few clicks. pfSense uses pf, a stateful packet filter that was developed for the OpenBSD operating system using a syntax similar to IPFilter. Many companies and organizations use pfSense.
+
+* * *
+
+With all this information in mind, it's time for you to get your hands dirty and start building your lab. In a future article, I will get into the third category of running a home lab: using automation to deploy and maintain it.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/home-lab
+
+作者:[Michael Zamot (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mzamot
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb
+[2]: https://opensource.com/article/18/5/how-insecure-your-router
+[4]: https://opensource.com/sites/default/files/uploads/pfsense2.png (Home computer lab PfSense)
+[5]: https://www.linux-kvm.org/page/Main_Page
+[6]: https://libvirt.org/
+[7]: https://virt-manager.org/
+[8]: https://www.proxmox.com/en/proxmox-ve
+[9]: https://ovirt.org/
+[10]: https://freenas.org/
+[11]: https://www.openmediavault.org/
+[12]: https://www.pfsense.org/
diff --git a/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md b/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md
new file mode 100644
index 0000000000..5f940e9b0b
--- /dev/null
+++ b/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md
@@ -0,0 +1,97 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Choosing an open messenger client: Alternatives to WhatsApp)
+[#]: via: (https://opensource.com/article/19/3/open-messenger-client)
+[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
+
+Choosing an open messenger client: Alternatives to WhatsApp
+======
+
+Keep in touch with far-flung family, friends, and colleagues without sacrificing your privacy.
+
+![Team communication, chat][1]
+
+Like many families, mine is inconveniently spread around, and I have many colleagues in North and South America. So, over the years, I've relied more and more on WhatsApp to stay in touch with people. The claimed end-to-end encryption appeals to me, as I prefer to maintain some shreds of privacy, and moreover to avoid forcing those with whom I communicate to use an insecure mechanism. But all this [WhatsApp/Facebook/Instagram "convergence"][2] has led our family to decide to vote with our feet. We no longer use WhatsApp for anything except communicating with others who refuse to use anything else, and we're working on them.
+
+So what do we use instead? Before I spill the beans, I'd like to explain what other options we looked at and how we chose.
+
+### Options we considered and how we evaluated them
+
+There is an absolutely [crazy number of messaging apps out there][3], and we spent a good deal of time thinking about what we needed for a replacement. We started by reading Dan Arel's article on [five social media alternatives to protect privacy][4].
+
+Then we came up with our list of core needs:
+
+ * Our entire family uses Android phones.
+ * One of us has a Windows desktop; the rest use Linux.
+ * Our main interest is something we can use to chat, both individually and as a group, on our phones, but it would be nice to have a desktop client available.
+ * It would also be nice to have voice and video calling as well.
+ * Our privacy is important. Ideally, the code should be open source to facilitate security reviews. If the operation is not pure peer-to-peer, then the organization operating the server components should not operate a business based on the commercialization of our personal information.
+
+
+
+At that point, we narrowed the long list down to [Viber][5], [Line][6], [Signal][7], [Threema][8], [Wire][9], and [Riot.im][10]. While I lean strongly to open source, we wanted to include some closed source and paid solutions to make sure we weren't missing something important. Here's how those six alternatives measured up.
+
+### Line
+
+[Line][11] is a popular messaging application, and it's part of a larger Line "ecosystem"—online gaming, Taxi (an Uber-like service in Japan), Wow (a food delivery service), Today (a news hub), shopping, and others. For us, Line checks a few too many boxes with all those add-on features. Also, I could not determine its current security quality, and it's not open source. The business model seems to be to build a community and figure out how to make money through that community.
+
+### Riot.im
+
+[Riot.im][12] operates on top of the Matrix protocol and therefore lets the user choose a Matrix provider. It also appears to check all of our "needs" boxes, although in operation it looks more like Slack, with a room-oriented and interoperable/federated design. It offers desktop clients, and it's open source. Since the Matrix protocol can be hosted anywhere, any business model would be particular to the Matrix provider.
+
+### Signal
+
+[Signal][13] offers a similar user experience to WhatsApp. It checks all of our "needs" boxes, with solid security validated by external audit. It is open source, and it is developed and operated by a not-for-profit foundation, in principle similar to the Mozilla Foundation. Interestingly, Signal's communications protocol appears to be used by other messaging apps, [including WhatsApp][14].
+
+### Threema
+
+[Threema][15] is extremely privacy-focused. It checks some of our "needs" boxes, with decent external audit results of its security. It doesn't offer a desktop client, and it [isn't fully open source][16] though some of its core components are. Threema's business model appears to be to offer paid secure communications.
+
+### Viber
+
+[Viber][17] is a very popular messaging application. It checks most of our "needs" boxes; however, it doesn't seem to have solid proof of its security—it seems to use a proprietary encryption mechanism, and as far as I could determine, its current security mechanisms are not externally audited. It's not open source. The owner, Rakuten, seems to be planning for a paid subscription as a business model.
+
+### Wire
+
+[Wire][18] was started and is built by some ex-Skype people. It appears to check all of our "needs" boxes, although I am not completely comfortable with its security profile since it stores client data that apparently is not encrypted on its servers. It offers desktop clients and is open source. The developer and operator, Wire Swiss, appears to have a [pay-for-service track][9] as its future business model.
+
+### The final verdict
+
+In the end, we picked Signal. We liked its open-by-design approach, its serious and ongoing [privacy and security stance][7], and the availability of a Signal app for our GNOME (and Windows) desktops. It performs very well on our Android handsets and our desktops. Moreover, it wasn't a big surprise to our small user community; it feels much more like WhatsApp than, for example, Riot.im, which we also tried extensively. Having said that, if we were trying to replace Slack, we'd probably move to Riot.im.
+
+_Have a favorite messenger? Tell us about it in the comments below._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/open-messenger-client
+
+作者:[Chris Hermansen (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clhermansen
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
+[2]: https://www.cnbc.com/2018/03/28/facebook-new-privacy-settings-dont-address-instagram-whatsapp.html
+[3]: https://en.wikipedia.org/wiki/Comparison_of_instant_messaging_clients
+[4]: https://opensource.com/article/19/1/open-source-social-media-alternatives
+[5]: https://en.wikipedia.org/wiki/Viber
+[6]: https://en.wikipedia.org/wiki/Line_(software)
+[7]: https://en.wikipedia.org/wiki/Signal_(software)
+[8]: https://en.wikipedia.org/wiki/Threema
+[9]: https://en.wikipedia.org/wiki/Wire_(software)
+[10]: https://en.wikipedia.org/wiki/Riot.im
+[11]: https://line.me/en/
+[12]: https://about.riot.im/
+[13]: https://signal.org/
+[14]: https://en.wikipedia.org/wiki/Signal_Protocol
+[15]: https://threema.ch/en
+[16]: https://threema.ch/en/faq/source_code
+[17]: https://www.viber.com/
+[18]: https://wire.com/en/
diff --git a/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md b/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md
new file mode 100644
index 0000000000..c4200355e4
--- /dev/null
+++ b/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md
@@ -0,0 +1,157 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with Jaeger to build an Istio service mesh)
+[#]: via: (https://opensource.com/article/19/3/getting-started-jaeger)
+[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
+
+Getting started with Jaeger to build an Istio service mesh
+======
+
+Improve monitoring and tracing of cloud-native apps on a distributed networking system.
+
+![Mesh networking connected dots][1]
+
+[Service mesh][2] provides a dedicated network for service-to-service communication in a transparent way. [Istio][3] aims to help developers and operators address service mesh features such as dynamic service discovery, mutual transport layer security (TLS), circuit breakers, rate limiting, and tracing. [Jaeger][4] with Istio augments monitoring and tracing of cloud-native apps on a distributed networking system. This article explains how to get started with Jaeger to build an Istio service mesh on the Kubernetes platform.
+
+### Spinning up a Kubernetes cluster
+
+[Minikube][5] allows you to run a single-node Kubernetes cluster based on a virtual machine such as [KVM][6], [VirtualBox][7], or [HyperKit][8] on your local machine. [Install Minikube][9] and use the following shell script to run it:
+
+```
+#!/bin/bash
+
+export MINIKUBE_PROFILE_NAME=istio-jaeger
+minikube profile $MINIKUBE_PROFILE_NAME
+minikube config set cpus 3
+minikube config set memory 8192
+
+# Replace hyperkit below with the appropriate VM driver for your machine
+minikube config set vm-driver hyperkit
+
+minikube start
+```
+
+In the above script, replace the **vm-driver** value (**hyperkit** in this example) with the appropriate virtual machine driver for your operating system (OS).
+
+### Deploying Istio service mesh with Jaeger
+
+Download the Istio installation file for your OS from the [Istio release page][10]. In the Istio package directory, you will find the Kubernetes installation YAML files in **install/** and the sample applications in **samples/**. Use the following commands:
+
+```
+$ curl -L | sh -
+$ cd istio-1.0.5
+$ export PATH=$PWD/bin:$PATH
+```
+
+The easiest way to deploy Istio with Jaeger on your Kubernetes cluster is to use [Custom Resource Definitions][11]. Install Istio with mutual TLS authentication between sidecars with these commands:
+
+```
+$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
+$ kubectl apply -f install/kubernetes/istio-demo-auth.yaml
+```
+
+Check that all the Istio pods on your Kubernetes cluster are deployed and running by executing the following command and reviewing the output:
+
+```
+$ kubectl get pods -n istio-system
+NAME READY STATUS RESTARTS AGE
+grafana-59b8896965-p2vgs 1/1 Running 0 3h
+istio-citadel-856f994c58-tk8kq 1/1 Running 0 3h
+istio-cleanup-secrets-mq54t 0/1 Completed 0 3h
+istio-egressgateway-5649fcf57-n5ql5 1/1 Running 0 3h
+istio-galley-7665f65c9c-wx8k7 1/1 Running 0 3h
+istio-grafana-post-install-nh5rw 0/1 Completed 0 3h
+istio-ingressgateway-6755b9bbf6-4lf8m 1/1 Running 0 3h
+istio-pilot-698959c67b-d2zgm 2/2 Running 0 3h
+istio-policy-6fcb6d655f-lfkm5 2/2 Running 0 3h
+istio-security-post-install-st5xc 0/1 Completed 0 3h
+istio-sidecar-injector-768c79f7bf-9rjgm 1/1 Running 0 3h
+istio-telemetry-664d896cf5-wwcfw 2/2 Running 0 3h
+istio-tracing-6b994895fd-h6s9h 1/1 Running 0 3h
+prometheus-76b7745b64-hzm27 1/1 Running 0 3h
+servicegraph-5c4485945b-mk22d 1/1 Running 1 3h
+```
+
+### Building sample microservice apps
+
+You can use the [Bookinfo][12] app to learn about Istio's features. Bookinfo consists of four microservice apps: _productpage_ , _details_ , _reviews_ , and _ratings_ deployed independently on Minikube. Each microservice will be deployed with an Envoy sidecar via Istio by using the following commands:
+
+```
+// Enable sidecar injection automatically
+$ kubectl label namespace default istio-injection=enabled
+$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
+
+// Export the ingress IP, ports, and gateway URL
+$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
+
+$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
+$ export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
+$ export INGRESS_HOST=$(minikube ip)
+
+$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
+```
+
+### Accessing the Jaeger dashboard
+
+To view tracing information for each HTTP request, create some traffic by running the following command at the command line:
+
+```
+$ while true; do
+    curl -s http://${GATEWAY_URL}/productpage > /dev/null
+    echo -n .;
+    sleep 0.2
+done
+```
+
+You can access the Jaeger dashboard through a web browser with [http://localhost:16686][13] if you set up port forwarding as follows:
+
+```
+kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686:16686 &
+```
+
+You can explore all traces by clicking "Find Traces" after selecting the _productpage_ service. Your dashboard will look similar to this:
+
+![Find traces in Jaeger][14]
+
+You can also view more details about each trace to dig into performance issues or elapsed time by clicking on a certain trace.
+
+![Viewing details about a trace][15]
+
+### Conclusion
+
+A distributed tracing platform allows you to understand what happened from service to service for individual ingress/egress traffic. Istio sends individual trace information automatically to Jaeger, the distributed tracing platform, even if your modern applications aren't aware of Jaeger at all. In the end, this capability helps developers and operators troubleshoot more easily and quickly at scale.
+
+* * *
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/getting-started-jaeger
+
+作者:[Daniel Oh (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/daniel-oh
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3 (Mesh networking connected dots)
+[2]: https://blog.buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/
+[3]: https://istio.io/docs/concepts/what-is-istio/
+[4]: https://www.jaegertracing.io/docs/1.9/
+[5]: https://opensource.com/article/18/10/getting-started-minikube
+[6]: https://www.linux-kvm.org/page/Main_Page
+[7]: https://www.virtualbox.org/wiki/Downloads
+[8]: https://github.com/moby/hyperkit
+[9]: https://kubernetes.io/docs/tasks/tools/install-minikube/
+[10]: https://github.com/istio/istio/releases
+[11]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions
+[12]: https://github.com/istio/istio/tree/master/samples/bookinfo
+[13]: http://localhost:16686/
+[14]: https://opensource.com/sites/default/files/uploads/traces_productpages.png (Find traces in Jaeger)
+[15]: https://opensource.com/sites/default/files/uploads/traces_performance.png (Viewing details about a trace)
diff --git a/sources/tech/20190320 Move your dotfiles to version control.md b/sources/tech/20190320 Move your dotfiles to version control.md
new file mode 100644
index 0000000000..7d070760c7
--- /dev/null
+++ b/sources/tech/20190320 Move your dotfiles to version control.md
@@ -0,0 +1,130 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Move your dotfiles to version control)
+[#]: via: (https://opensource.com/article/19/3/move-your-dotfiles-version-control)
+[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
+
+Move your dotfiles to version control
+======
+Back up or sync your custom configurations across your systems by sharing dotfiles on GitLab or GitHub.
+
+
+
+There is something truly exciting about customizing your operating system through the collection of hidden files we call dotfiles. In [What a Shell Dotfile Can Do For You][1], H. "Waldo" Grunenwald goes into excellent detail about the why and how of setting up your dotfiles. Let's dig into the why and how of sharing them.
+
+### What's a dotfile?
+
+"Dotfiles" is a common term for all the configuration files we have floating around our machines. These files usually start with a **.** at the beginning of the filename, like **.gitconfig** , and operating systems often hide them by default. For example, when I use **ls -a** on MacOS, it shows all the lovely dotfiles that would otherwise not be in the output.
+
+```
+dotfiles on master
+➜ ls
+README.md Rakefile bin misc profiles zsh-custom
+
+dotfiles on master
+➜ ls -a
+. .gitignore .oh-my-zsh README.md zsh-custom
+.. .gitmodules .tmux Rakefile
+.gemrc .global_ignore .vimrc bin
+.git .gvimrc .zlogin misc
+.gitconfig .maid .zshrc profiles
+```
+
+If I take a look at one, **.gitconfig** , which I use for Git configuration, I see a ton of customization. I have account information, terminal color preferences, and tons of aliases that make my command-line interface feel like mine. Here's a snippet from the **[alias]** block:
+
+```
+# Show the diff between the latest commit and the current state
+d = !"git diff-index --quiet HEAD -- || clear; git --no-pager diff --patch-with-stat"
+
+# `git di $number` shows the diff between the state `$number` revisions ago and the current state
+di = !"d() { git diff --patch-with-stat HEAD~$1; }; git diff-index --quiet HEAD -- || clear; d"
+
+# Pull in remote changes for the current repository and all its submodules
+p = !"git pull; git submodule foreach git pull origin master"
+
+# Checkout a pull request from origin (of a github repository)
+pr = !"pr() { git fetch origin pull/$1/head:pr-$1; git checkout pr-$1; }; pr"
+```
+
+Since my **.gitconfig** has over 200 lines of customization, I have no interest in rewriting it on every new computer or system I use, and neither does anyone else. This is one reason sharing dotfiles has become more and more popular, especially with the rise of the social coding site GitHub. The canonical article advocating for sharing dotfiles is Zach Holman's [Dotfiles Are Meant to Be Forked][2] from 2010. The premise is true to this day: I want to share them, with myself, with those new to dotfiles, and with those who have taught me so much by sharing their customizations.
+
+### Sharing dotfiles
+
+Many of us have multiple systems or know hard drives are fickle enough that we want to back up our carefully curated customizations. How do we keep these wonderful files in sync across environments?
+
+My favorite answer is distributed version control, preferably a service that will handle the heavy lifting for me. I regularly use GitHub and continue to enjoy GitLab as I get more experienced with it. Either one is a perfect place to share your information. To set yourself up:
+
+ 1. Sign into your preferred Git-based service.
+ 2. Create a repository called "dotfiles." (Make it public! Sharing is caring.)
+ 3. Clone it to your local environment.*
+ 4. Copy your dotfiles into the folder.
+ 5. Symbolically link (symlink) them back to their target folder (most often **$HOME** ).
+ 6. Push them to the remote repository.
+
+
+
+* You may need to set up your Git configuration commands to clone the repository. Both GitHub and GitLab will prompt you with the commands to run.
+
+
+
+Step 5 above is the crux of this effort and can be a bit tricky. Whether you use a script or do it by hand, the workflow is to symlink from your dotfiles folder to the dotfiles destination so that any updates to your dotfiles are easily pushed to the remote repository. To do this for my **.gitconfig** file, I would enter:
+
+```
+$ cd dotfiles/
+# Use an absolute target path so the link resolves correctly from $HOME
+$ ln -nfs "$PWD/.gitconfig" "$HOME/.gitconfig"
+```
+
+The flags added to the symlinking command offer a few additional benefits:
+
+ * **-s** creates a symbolic link instead of a hard link
+ * **-f** removes any existing file at the destination so the link can be created (handy when re-running the command or using it in loops)
+ * **-n** treats a destination that is a symlink to a directory as a plain file instead of following it (same as **-h** in other versions of **ln** )
+
+
+
+You can review the IEEE and Open Group [specification of **ln**][3] and the version on [MacOS 10.14.3][4] if you want to dig deeper into the available parameters. I had to look up these flags since I pulled them from someone else's dotfiles.
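+
+If you prefer the scripted route, a minimal Python sketch like this one recreates the **ln -nfs** behavior for every top-level dotfile in the repository. It assumes the repo lives at **~/dotfiles** and skips Git's own files; plain directories that already exist at the destination are left for manual handling.
+
+```
+#!/usr/bin/env python3
+# Minimal sketch: recreate `ln -nfs` for each top-level dotfile in ~/dotfiles.
+# The repository path and the skip list are assumptions; adjust to your layout.
+from pathlib import Path
+
+repo = Path.home() / "dotfiles"
+skip = {".git", ".gitignore", ".gitmodules"}
+
+for src in repo.iterdir():
+    if not src.name.startswith(".") or src.name in skip:
+        continue
+    dest = Path.home() / src.name
+    if dest.is_symlink() or dest.is_file():
+        dest.unlink()                  # like -f: clear the way first
+    dest.symlink_to(src.resolve())     # like -s: absolute symbolic link
+    print(f"{dest} -> {src}")
+```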
+
+You can also make updating simpler with a little additional code, like the [Rakefile][5] I forked from [Brad Parbs][6]. Alternatively, you can keep it incredibly simple, as Jeff Geerling does [in his dotfiles][7]. He symlinks files using [this Ansible playbook][8]. Keeping everything in sync at this point is easy: you can set up a cron job or occasionally run **git push** from your dotfiles folder.
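+
+A sketch of such a sync script, assuming the repository lives at **~/dotfiles** and a remote is already configured:
+
+```
+#!/usr/bin/env python3
+# Minimal sketch: commit and push dotfile changes; suitable for a cron job.
+# The repository path and commit message are assumptions.
+import subprocess
+from pathlib import Path
+
+repo = str(Path.home() / "dotfiles")
+
+def git(*args):
+    return subprocess.run(["git", "-C", repo, *args], check=True)
+
+changes = subprocess.run(
+    ["git", "-C", repo, "status", "--porcelain"],
+    capture_output=True, text=True, check=True,
+)
+if changes.stdout.strip():   # only commit when something actually changed
+    git("add", "--all")
+    git("commit", "-m", "Sync dotfiles")
+    git("push")
+```
+
+You could run it hourly with a crontab entry such as `0 * * * * python3 ~/bin/sync-dotfiles.py` (the path is, again, only an example).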
+
+### Quick aside: What not to share
+
+Before we move on, it is worth noting what you should not add to a shared dotfile repository—even if it starts with a dot. Anything that is a security risk, like files in your **.ssh/** folder, is not a good choice to share using this method. Be sure to double-check your configuration files before publishing them online and triple-check that no API tokens are in your files.
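+
+If you want a quick sanity check before pushing, a small scan for token-like strings can help. The following Python sketch is illustrative only; the patterns are examples, not an exhaustive audit:
+
+```
+#!/usr/bin/env python3
+# Minimal sketch: flag token-like strings before publishing a dotfiles repo.
+# The patterns below are illustrative examples, not an exhaustive audit.
+import re
+from pathlib import Path
+
+PATTERNS = {
+    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
+    "generic secret": re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),
+    "private key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
+}
+
+repo = Path.home() / "dotfiles"   # assumption: adjust to your repo location
+for path in repo.rglob("*"):
+    if ".git" in path.parts or not path.is_file():
+        continue
+    text = path.read_text(errors="ignore")
+    for label, pattern in PATTERNS.items():
+        if pattern.search(text):
+            print(f"{path}: possible {label} -- review before publishing!")
+```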
+
+### Where should I start?
+
+If Git is new to you, my [article about the terminology][9] and [a cheat sheet][10] of my most frequently used commands should help you get going.
+
+There are other incredible resources to help you get started with dotfiles. Years ago, I came across [dotfiles.github.io][11] and continue to go back to it for a broader look at what people are doing. There is a lot of tribal knowledge hidden in other people's dotfiles. Take the time to scroll through some and don't be shy about adding them to your own.
+
+I hope this will get you started on the joy of having consistent dotfiles across your computers.
+
+What's your favorite dotfile trick? Add a comment or tweet me [@mbbroberg][12].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/move-your-dotfiles-version-control
+
+作者:[Matthew Broberg][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mbbroberg
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/article/18/9/shell-dotfile
+[2]: https://zachholman.com/2010/08/dotfiles-are-meant-to-be-forked/
+[3]: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ln.html
+[4]: https://www.unix.com/man-page/FreeBSD/1/ln/
+[5]: https://github.com/mbbroberg/dotfiles/blob/master/Rakefile
+[6]: https://github.com/bradp/dotfiles
+[7]: https://github.com/geerlingguy/dotfiles
+[8]: https://github.com/geerlingguy/mac-dev-playbook
+[9]: https://opensource.com/article/19/2/git-terminology
+[10]: https://opensource.com/downloads/cheat-sheet-git
+[11]: http://dotfiles.github.io/
+[12]: https://twitter.com/mbbroberg?lang=en
diff --git a/sources/tech/20190320 Nuvola- Desktop Music Player for Streaming Services.md b/sources/tech/20190320 Nuvola- Desktop Music Player for Streaming Services.md
new file mode 100644
index 0000000000..ba0d8d550d
--- /dev/null
+++ b/sources/tech/20190320 Nuvola- Desktop Music Player for Streaming Services.md
@@ -0,0 +1,186 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Nuvola: Desktop Music Player for Streaming Services)
+[#]: via: (https://itsfoss.com/nuvola-music-player)
+[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
+
+Nuvola: Desktop Music Player for Streaming Services
+======
+
+[Nuvola][1] is not like your usual music player. It’s different because it lets you play a number of streaming services inside a desktop music player.
+
+Nuvola provides a runtime called [Nuvola Apps Runtime][2] which runs web apps. This is why Nuvola can support a host of streaming services. Some of the major players it supports are:
+
+ * Spotify
+ * Google Play Music
+ * YouTube, YouTube Music
+ * [Pandora][3]
+ * [SoundCloud][4]
+ * and many many more.
+
+
+
+You can find the full list [here][1] in the Music streaming services section. Apple Music is not supported, if you were wondering.
+
+Why would you use a streaming music service in a different desktop player when you can run it in a web browser? The advantage with Nuvola is that it provides tight integration with many [desktop environments][5].
+
+Ideally it should work with all DEs, but the officially supported ones are GNOME, Unity, and Pantheon (elementary OS).
+
+### Features of Nuvola Music Player
+
+Let’s see some of the main features of the open source project Nuvola:
+
+ * Supports a wide variety of music streaming services
+ * Desktop integration with GNOME, Unity, and Pantheon.
+ * Keyboard shortcuts with the ability to customize them
+ * Support for keyboard’s multimedia keys (paid feature)
+ * Background play with notifications
+ * [GNOME Media Player][6] extension support
+ * App Tray indicator
+ * Dark and Light themes
+ * Enable or disable features
+ * Password Manager for web services
+ * Remote control over internet (paid feature)
+ * Available for a lot of distros ([Flatpak][7] packages)
+
+
+
+A complete list of features is available [here][8].
+
+### How to install Nuvola on Ubuntu & other Linux distributions
+
+Installing Nuvola consists of a few more steps than simply adding a PPA and then installing the software. Since it is based on [Flatpak][7], you have to set up Flatpak first and then you can install Nuvola.
+
+[Enable Flatpak Support][9]
+
+The steps are pretty simple. You can follow the guide [here][10] if you want to install using the GUI; however, I prefer terminal commands since they’re easier and faster.
+
+**Warning: If already installed, remove the older version of Nuvola**
+
+If you have ever installed Nuvola before, you need to uninstall it to avoid issues. Run these commands in the terminal to do so.
+
+```
+sudo apt remove nuvolaplayer*
+```
+
+```
+rm -rf ~/.cache/nuvolaplayer3 ~/.local/share/nuvolaplayer ~/.config/nuvolaplayer3 ~/.local/share/applications/nuvolaplayer3*
+```
+
+Once you have made sure that your system has Flatpak, add the Flathub and Nuvola repositories with these commands:
+
+```
+flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
+flatpak remote-add --if-not-exists nuvola https://dl.tiliado.eu/flatpak/nuvola.flatpakrepo
+```
+
+Installing the main Nuvola service is optional, but I recommend it, since it lets you configure common settings, like shortcuts, across all of the streaming services you use:
+
+```
+flatpak install nuvola eu.tiliado.Nuvola
+```
+
+Nuvola supports 29 streaming services. To get them, you need to add those services individually. You can find all the supported music services on this [page][10].
+
+For the purpose of this tutorial, I’m going to go with [YouTube Music][11].
+
+```
+flatpak install nuvola eu.tiliado.NuvolaAppYoutubeMusic
+```
+
+After this, you should have the app installed and should be able to see the icon if you search for it.
+
+![Nuvola App specific icons][12]
+
+Clicking on the icon will bring up the first-time setup. You’ll have to accept the Privacy Policy and then continue.
+
+![Terms and Conditions page][13]
+
+After accepting the terms and conditions, you will land in the web app of the respective streaming service, YouTube Music in this case.
+
+![YouTube Music web app running on Nuvola Runtime][14]
+
+In case of installation on other distributions, specific guidelines are available on the [Nuvola website][15].
+
+### My experience with Nuvola Music Player
+
+Initially I thought that it wouldn’t be too different from simply running the web app in [Firefox][16], since many desktop environments like KDE support media controls and shortcuts for media playing in Firefox.
+
+However, this isn’t the case with many other desktop environments, and that’s where Nuvola comes in handy. Often, it’s also faster to access than loading the website in the browser.
+
+Once loaded, it behaves pretty much like a normal web app with the benefit of keyboard shortcuts. Speaking of shortcuts, you should check out the list of must know [Ubuntu shortcuts][17].
+
+![Viewing an Artist’s page][18]
+
+Integration with the DE comes in handy when you quickly want to change a song or play/pause your music without leaving your current application. Nuvola gives you access in GNOME notifications as well as provides an app tray icon.
+
+ * ![Notification music controls][19]
+
+ * ![App tray music controls][20]
+
+
+
+
+Keyboard shortcuts work well, globally as well as in-app. You get a notification when the song changes, whether you change it yourself or it automatically switches to the next song.
+
+![][21]
+
+By default, very few keyboard shortcuts are provided. However, you can enable them for almost everything you can do with the app. For example, I set the song-change shortcuts to Ctrl + Arrow keys, as you can see in the screenshot.
+
+![Keyboard Shortcuts][22]
+
+All in all, it works pretty well and it’s fast and responsive. Definitely more so than your usual Snap app.
+
+**Some criticism**
+
+Something that did not please me was the installation size. Since Nuvola requires a browser back-end and GNOME integration, it essentially installs a browser and the necessary GNOME libraries for Flatpak, which results in almost 350MB of dependencies.
+
+After that, you install the individual apps. The apps themselves are not heavy at all. But if you just use one streaming service, having a 300+ MB installation might not be ideal if you’re concerned about disk space.
+
+Nuvola also does not support local music, at least as far as I could find.
+
+**Conclusion**
+
+Hope this article helped you to know more about Nuvola Music Player and its features. If you like such different applications, why not take a look at some of the [lesser known music players for Linux][23]?
+
+As always, if you have any suggestions or questions, I look forward to reading your comments.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/nuvola-music-player
+
+作者:[Atharva Lele][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/atharva/
+[b]: https://github.com/lujun9972
+[1]: https://nuvola.tiliado.eu/
+[2]: https://nuvola.tiliado.eu/#fn:1
+[3]: https://itsfoss.com/install-pandora-linux-client/
+[4]: https://itsfoss.com/install-soundcloud-linux/
+[5]: https://itsfoss.com/best-linux-desktop-environments/
+[6]: https://extensions.gnome.org/extension/55/media-player-indicator/
+[7]: https://flatpak.org/
+[8]: http://tiliado.github.io/nuvolaplayer/documentation/4/explore.html
+[9]: https://itsfoss.com/flatpak-guide/
+[10]: https://nuvola.tiliado.eu/nuvola/ubuntu/bionic/
+[11]: https://nuvola.tiliado.eu/app/youtube_music/ubuntu/bionic/
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_youtube_music_icon.png?resize=800%2C450&ssl=1
+[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_eula.png?resize=800%2C450&ssl=1
+[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_youtube_music.png?resize=800%2C450&ssl=1
+[15]: https://nuvola.tiliado.eu/index/
+[16]: https://itsfoss.com/why-firefox/
+[17]: https://itsfoss.com/ubuntu-shortcuts/
+[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_web_player.png?resize=800%2C449&ssl=1
+[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_music_controls.png?fit=800%2C450&ssl=1
+[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_web_player2.png?fit=800%2C450&ssl=1
+[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_song_change_notification-e1553077619208.png?ssl=1
+[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_shortcuts.png?resize=800%2C450&ssl=1
+[23]: https://itsfoss.com/lesser-known-music-players-linux/
diff --git a/sources/tech/20190321 4 ways to jumpstart productivity at work.md b/sources/tech/20190321 4 ways to jumpstart productivity at work.md
new file mode 100644
index 0000000000..679fa75607
--- /dev/null
+++ b/sources/tech/20190321 4 ways to jumpstart productivity at work.md
@@ -0,0 +1,96 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 ways to jumpstart productivity at work)
+[#]: via: (https://opensource.com/article/19/3/guide-being-more-productive)
+[#]: author: (Sarah Wall https://opensource.com/users/sarahwall)
+
+4 ways to jumpstart productivity at work
+======
+
+This article includes six open source productivity tools.
+
+![][1]
+
+Time poverty—the idea that there's not enough time to do all the work we need to do—is it a perception or a reality?
+
+The truth is you'll never get more than 24 hours out of any day. Working longer hours doesn't help. Your productivity actually decreases the longer you work in a given day. Your perception, or intuitive understanding of your time, is what matters. One key to managing productivity is how you use the time you've got.
+
+You have lots of time that you can use more efficiently, including time lost to ineffective meetings, distractions, and context switching between tasks. By spending your time more wisely, you can get more done and achieve higher overall job performance. You will also have a higher level of job satisfaction and feel lower levels of stress.
+
+### Jumpstart your productivity
+
+#### 1\. Eliminate distractions
+
+When you have too many things vying for your attention, it slows you down and decreases your productivity. Do your best to remove every distraction that pulls you off tasks.
+
+Cellphones, email, and messaging apps are the most common drains on productivity. Set the ringer on your phone to vibrate, set specific times for checking email, and close irrelevant browser tabs. With this approach, your work will be interrupted less throughout the day.
+
+#### 2\. Make your to-do list _verb-oriented_
+
+To-do lists are a great way to help you focus on exactly what you need to accomplish each day. Some people do best with a physical list, like a notebook, and others do better with digital tools. Check out these suggestions for [open source productivity tools][2] to help you manage your workflow. Or check these six open source tools to stay organized:
+
+ * [Joplin, a note-taking app][3]
+ * [Wekan, an open source kanban board][4]
+ * [TaskBoard, a lightweight kanban board][5]
+ * [Go For It, a flexible to-do list application][6]
+ * [Org mode without Emacs][7]
+ * [Freeplane, an open source mind-mapping application][8]
+
+
+
+Your list can be as sophisticated or as simple as you like, but just making a list is not enough. What goes on your list makes all the difference. Every item that goes on your list should be actionable. The trick is to make sure there's a verb. For example, "Smith project" is not actionable enough. "Outline key deliverables on Smith project" gives you a more concrete task to complete.
+
+#### 3\. Stick to the 10-minute rule
+
+Overwhelmed by an unclear or unwieldy task? Break it into 10-minute mini-tasks instead. This can be a great way to take something unmanageable and turn it into something achievable.
+
+The beauty of 10-minute tasks is they can be fit into many parts of your day. When you get into the office in the morning and are feeling fresh, kick off your day with a burst of productivity with a few 10-minute tasks. Losing momentum in the afternoon? A 10-minute job can help you regain speed.
+
+Ten-minute tasks are also a good way to identify tasks that can be delegated to others. The ability to delegate work is often one of the most effective management techniques. By finding a simple task that can be accomplished by another member of your team, you can make short work of a big job.
+
+#### 4\. Take a break
+
+Another drain on productivity is the urge to keep pressing ahead on a task to complete it without taking a break. Suddenly you feel really fatigued or hungry, and you realize you haven't gone to the bathroom in hours! Your concentration is affected, and therefore your productivity decreases.
+
+Set benchmarks for taking breaks and stick to them. For example, commit to once per hour to get up and move around for five minutes. If you're pressed for time, stand up and stretch for two minutes. Changing your body position and focusing on the present moment will help relieve any mental tension that has built up.
+
+Hydrate your mind with a glass of water. When your body is not properly hydrated, it can put increased stress on your brain. As little as a one to three percent decrease in hydration can negatively affect your memory, concentration, and decision-making.
+
+### Don't fall into the time-poverty trap
+
+Time is limited and time poverty is just an idea. How you choose to spend the time you have each day is what's important. When you develop new, healthy habits, you can increase your productivity and direct your time in the ways that give the most value.
+
+* * *
+
+_This article was adapted from "[The Keys to Productivity][9]" on ImageX's blog._
+
+_Sarah Wall will present_ [_Mindless multitasking: a dummy's guide to productivity_][10], _at_ [_DrupalCon_][11] _in Seattle, April 8-12, 2019._
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/guide-being-more-productive
+
+作者:[Sarah Wall][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sarahwall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj
+[2]: https://opensource.com/article/16/11/open-source-productivity-hacks
+[3]: https://opensource.com/article/19/1/productivity-tool-joplin
+[4]: https://opensource.com/article/19/1/productivity-tool-wekan
+[5]: https://opensource.com/article/19/1/productivity-tool-taskboard
+[6]: https://opensource.com/article/19/1/productivity-tool-go-for-it
+[7]: https://opensource.com/article/19/1/productivity-tool-org-mode
+[8]: https://opensource.com/article/19/1/productivity-tool-freeplane
+[9]: https://imagexmedia.com/managing-productivity
+[10]: https://events.drupal.org/seattle2019/sessions/mindless-multitasking-dummy%E2%80%99s-guide-productivity
+[11]: https://events.drupal.org/seattle2019
diff --git a/sources/tech/20190321 How To Setup Linux Media Server Using Jellyfin.md b/sources/tech/20190321 How To Setup Linux Media Server Using Jellyfin.md
new file mode 100644
index 0000000000..9c3de11bc5
--- /dev/null
+++ b/sources/tech/20190321 How To Setup Linux Media Server Using Jellyfin.md
@@ -0,0 +1,268 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Setup Linux Media Server Using Jellyfin)
+[#]: via: (https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+How To Setup Linux Media Server Using Jellyfin
+======
+
+
+We’ve already written about setting up your own streaming media server on Linux using [**Streama**][2]. Today, we are going to set up yet another media server using **Jellyfin**. Jellyfin is a free, cross-platform, open source alternative to proprietary media streaming applications such as **Emby** and **Plex**. The main developer of Jellyfin forked it from Emby after the announcement of Emby’s transition to a proprietary model. Jellyfin doesn’t include any premium features, licenses, or membership plans. It is a completely free and open source project supported by hundreds of community members. Using Jellyfin, we can set up a Linux media server in minutes and access it via LAN/WAN from any device using multiple apps.
+
+### Setup Linux Media Server Using Jellyfin
+
+Jellyfin supports GNU/Linux, Mac OS and Microsoft Windows operating systems. You can install it on your Linux distribution as described below.
+
+##### Install Jellyfin On Linux
+
+As of writing this guide, Jellyfin packages are available for most popular Linux distributions, such as Arch Linux, Debian, CentOS, Fedora and Ubuntu.
+
+On **Arch Linux** and its derivatives like **Antergos** and **Manjaro Linux** , you can install Jellyfin using any AUR helper tool, for example [**Yay**][3].
+
+```
+$ yay -S jellyfin-git
+```
+
+On **CentOS/RHEL** :
+
+Download the latest Jellyfin rpm package from [**here**][4] and install it as shown below.
+
+```
+$ wget https://repo.jellyfin.org/releases/server/centos/jellyfin-10.2.2-1.el7.x86_64.rpm
+
+$ sudo yum localinstall jellyfin-10.2.2-1.el7.x86_64.rpm
+```
+
+On **Fedora** :
+
+Download Jellyfin for Fedora from [**here**][5].
+
+```
+$ wget https://repo.jellyfin.org/releases/server/fedora/jellyfin-10.2.2-1.fc29.x86_64.rpm
+
+$ sudo dnf install jellyfin-10.2.2-1.fc29.x86_64.rpm
+```
+
+On **Debian** :
+
+Install HTTPS transport for APT if it is not installed already:
+
+```
+$ sudo apt install apt-transport-https
+```
+
+Import the Jellyfin GPG signing key:
+
+```
+$ wget -O - https://repo.jellyfin.org/debian/jellyfin_team.gpg.key | sudo apt-key add -
+```
+
+Add Jellyfin repository:
+
+```
+$ sudo touch /etc/apt/sources.list.d/jellyfin.list
+
+$ echo "deb [arch=amd64] https://repo.jellyfin.org/debian $( lsb_release -c -s ) main" | sudo tee /etc/apt/sources.list.d/jellyfin.list
+```
+
+Finally, update the package index and install Jellyfin with these commands:
+
+```
+$ sudo apt update
+
+$ sudo apt install jellyfin
+```
+
+On **Ubuntu 18.04 LTS** :
+
+Install HTTPS transport for APT if it is not installed already:
+
+```
+$ sudo apt install apt-transport-https
+```
+
+Import and add the Jellyfin GPG signing key:
+
+```
+$ wget -O - https://repo.jellyfin.org/debian/jellyfin_team.gpg.key | sudo apt-key add -
+```
+
+Add the Jellyfin repository:
+
+```
+$ sudo touch /etc/apt/sources.list.d/jellyfin.list
+
+$ echo "deb https://repo.jellyfin.org/ubuntu bionic main" | sudo tee /etc/apt/sources.list.d/jellyfin.list
+```
+
+For Ubuntu 16.04, just replace **bionic** with **xenial** in the above URL.
+
+Finally, update the package index and install Jellyfin with these commands:
+
+```
+$ sudo apt update
+
+$ sudo apt install jellyfin
+```
+
+##### Start Jellyfin service
+
+Run the following commands to enable the Jellyfin service on every reboot and start it now:
+
+```
+$ sudo systemctl enable jellyfin
+
+$ sudo systemctl start jellyfin
+```
+
+To check if the service has been started or not, run:
+
+```
+$ sudo systemctl status jellyfin
+```
+
+Sample output:
+
+```
+● jellyfin.service - Jellyfin Media Server
+Loaded: loaded (/lib/systemd/system/jellyfin.service; enabled; vendor preset: enabled)
+Drop-In: /etc/systemd/system/jellyfin.service.d
+└─jellyfin.service.conf
+Active: active (running) since Wed 2019-03-20 12:20:19 UTC; 1s ago
+Main PID: 4556 (jellyfin)
+Tasks: 11 (limit: 2320)
+CGroup: /system.slice/jellyfin.service
+└─4556 /usr/bin/jellyfin --datadir=/var/lib/jellyfin --configdir=/etc/jellyfin --logdir=/var/log/jellyfin --cachedir=/var/cache/jellyfin --r
+
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Photos, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Server.Implementations, Version=10.2.2.0, Culture=neutral, PublicKeyToken=nu
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading MediaBrowser.MediaEncoding, Version=10.2.2.0, Culture=neutral, PublicKeyToken=nul
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Dlna, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading MediaBrowser.LocalMetadata, Version=10.2.2.0, Culture=neutral, PublicKeyToken=nul
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Notifications, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading MediaBrowser.XbmcMetadata, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading jellyfin, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Sqlite version: 3.26.0
+Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Sqlite compiler options: COMPILER=gcc-5.4.0 20160609,DEFAULT_FOREIGN_KEYS,ENABLE_COLUMN_M
+```
+
+If you see output like the above, congratulations! The Jellyfin service has been started.
+
+Next, we should do some initial configuration.
+
+##### Configure Jellyfin
+
+Once Jellyfin is installed, open the browser and navigate to **http://<IP-address>:8096** or **http://<domain-name>:8096** (substituting your media server’s address).
+
+You will see the following welcome screen. Select your preferred language and click Next.
+
+![][6]
+
+Enter your user details. You can add more users later from the Jellyfin Dashboard.
+
+![][7]
+
+The next step is to select the media files you want to stream. To do so, click the “Add Media Library” button:
+
+![][8]
+
+Choose the content type (i.e., audio, video, movies, etc.) and a display name, and click the plus (+) sign next to the Folders icon to choose the location where you keep your media files. You can further choose other library settings, such as the preferred download language and country. Click OK after choosing the preferred options.
+
+![][9]
+
+Similarly, add all of the media files. Once you have chosen everything to stream, click Next.
+
+![][10]
+
+Choose the Metadata language and click Next:
+
+![][11]
+
+Next, you need to configure whether you want to allow remote connections to this media server. Make sure you have allowed the remote connections. Also, enable automatic port mapping and click Next:
+
+![][12]
+
+You’re all set! Click Finish to complete Jellyfin configuration.
+
+![][13]
+
+You will now be redirected to the Jellyfin login page. Click on the username and enter the password we set up earlier.
+
+![][14]
+
+This is how the Jellyfin dashboard looks.
+
+![][15]
+
+As you can see in the screenshot, all of your media files are shown in the dashboard itself under the My Media section. Just click on any media file of your choice and start watching it!
+
+![][16]
+
+You can access this Jellyfin media server from any system on the network using the URL **http://<IP-address>:8096**. You don’t need to install any extra apps; all you need is a modern web browser.
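+
+If you want a quick way to confirm reachability from another machine, a minimal Python sketch like the following works (the address is a placeholder; substitute your media server’s IP):
+
+```
+#!/usr/bin/env python3
+# Minimal sketch: confirm the Jellyfin web interface answers on port 8096.
+# SERVER is a placeholder -- substitute your media server's IP address.
+import urllib.request
+
+SERVER = "192.168.1.100"
+
+with urllib.request.urlopen(f"http://{SERVER}:8096", timeout=5) as resp:
+    print(f"Jellyfin responded with HTTP {resp.status}")
+```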
+
+If you want to change anything or reconfigure, click on the three horizontal bars on the Home screen. Here, you can add users and media files, change playback settings, add TV/DVR, install plugins, change the default port number, and adjust many more settings.
+
+![][17]
+
+For more details, check out [**Jellyfin official documentation**][18] page.
+
+And, that’s all for now. As you can see, setting up a streaming media server on Linux is no big deal. I tested it on my Ubuntu 18.04 LTS VM, and it worked fine out of the box. I was able to watch movies from other systems on my LAN. If you’re looking for an easy, quick, and free solution for hosting a media server, Jellyfin is a good choice.
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[2]: https://www.ostechnix.com/streama-setup-your-own-streaming-media-server-in-minutes/
+[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[4]: https://repo.jellyfin.org/releases/server/centos/
+[5]: https://repo.jellyfin.org/releases/server/fedora/
+[6]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-1.png
+[7]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-2-1.png
+[8]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-3-1.png
+[9]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-4-1.png
+[10]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-5-1.png
+[11]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-6.png
+[12]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-7.png
+[13]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-8-1.png
+[14]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-9.png
+[15]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-10.png
+[16]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-11.png
+[17]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-12.png
+[18]: https://jellyfin.readthedocs.io/en/latest/
+[19]: https://github.com/jellyfin/jellyfin
+[20]: http://feedburner.google.com/fb/a/mailverify?uri=ostechnix (Subscribe to our Email newsletter)
+[21]: https://www.paypal.me/ostechnix (Donate Via PayPal)
+[22]: http://ostechnix.tradepub.com/category/information-technology/1207/
+[23]: https://www.facebook.com/ostechnix/
+[24]: https://twitter.com/ostechnix
+[25]: https://plus.google.com/+SenthilkumarP/
+[26]: https://www.linkedin.com/in/ostechnix
+[27]: http://feeds.feedburner.com/Ostechnix
+[28]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=reddit (Click to share on Reddit)
+[29]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=twitter (Click to share on Twitter)
+[30]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=facebook (Click to share on Facebook)
+[31]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=linkedin (Click to share on LinkedIn)
+[32]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=pocket (Click to share on Pocket)
+[33]: https://api.whatsapp.com/send?text=How%20To%20Setup%20Linux%20Media%20Server%20Using%20Jellyfin%20https%3A%2F%2Fwww.ostechnix.com%2Fhow-to-setup-linux-media-server-using-jellyfin%2F (Click to share on WhatsApp)
+[34]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=telegram (Click to share on Telegram)
+[35]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=email (Click to email this to a friend)
+[36]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/#print (Click to print)
diff --git a/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md b/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md
new file mode 100644
index 0000000000..0e4be0aa01
--- /dev/null
+++ b/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md
@@ -0,0 +1,540 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to use Spark SQL: A hands-on tutorial)
+[#]: via: (https://opensource.com/article/19/3/apache-spark-and-dataframes-tutorial)
+[#]: author: (Dipanjan Sarkar https://opensource.com/users/djsarkar)
+
+How to use Spark SQL: A hands-on tutorial
+======
+
+This tutorial explains how to leverage relational databases at scale using Spark SQL and DataFrames.
+
+![Team checklist and to dos][1]
+
+In the [first part][2] of this series, we looked at advances in leveraging the power of relational databases "at scale" using [Apache Spark SQL and DataFrames][3]. We will now do a simple tutorial based on a real-world dataset to look at how to use Spark SQL. We will be using Spark DataFrames, but the focus will be more on using SQL. In a separate article, I will cover a detailed discussion around Spark DataFrames and common operations.
+
+I love using cloud services for my machine learning, deep learning, and even big data analytics needs, instead of painfully setting up my own Spark cluster. I will be using the Databricks Platform for my Spark needs. Databricks is a company founded by the creators of Apache Spark that aims to help clients with cloud-based big data processing using Spark.
+
+![Apache Spark and Databricks][4]
+
+The simplest (and free of charge) way is to go to the [Try Databricks page][5] and [sign up for a community edition][6] account. You get a cloud-based cluster, which is a single-node cluster with 6GB of memory and unlimited notebooks—not bad for a free version! I recommend using the Databricks Platform if you have serious needs for analyzing big data.
+
+Let's get started with our case study now. Feel free to create a new notebook from your home screen in Databricks or your own Spark cluster.
+
+![Create a notebook][7]
+
+You can also import my notebook containing the entire tutorial, but please make sure to run every cell and play around and explore with it, instead of just reading through it. Unsure of how to use Spark on Databricks? Follow [this short but useful tutorial][8].
+
+This tutorial will familiarize you with essential Spark capabilities to deal with structured data often obtained from databases or flat files. We will explore typical ways of querying and aggregating relational data by leveraging concepts of DataFrames and SQL using Spark. We will work on an interesting dataset from the [KDD Cup 1999][9] and try to query the data using high-level abstractions like the dataframe, which has already been a hit in popular data analysis tools like R and Python. We will also look at how easy it is to build data queries using the SQL language and retrieve insightful information from our data. This also happens at scale without us having to do a lot more, since Spark distributes these data structures efficiently in the backend, which makes our queries scalable and as efficient as possible. We'll start by loading some basic dependencies.
+
+```
+import pandas as pd
+import matplotlib.pyplot as plt
+plt.style.use('fivethirtyeight')
+```
+
+#### Data retrieval
+
+The [KDD Cup 1999][9] dataset was used for the Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, the Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network-intrusion detector, a predictive model capable of distinguishing between _bad connections_ , called intrusions or attacks, and _good, normal connections_. This database contains a standard set of data to be audited, which includes a wide variety of intrusions simulated in a military network environment.
+
+We will be using the reduced dataset **kddcup.data_10_percent.gz** that contains nearly a half-million network interactions. We will download this Gzip file from the web locally and then work on it. If you have a good, stable internet connection, feel free to download and work with the full dataset, **kddcup.data.gz**.
+
+#### Working with data from the web
+
+Dealing with datasets retrieved from the web can be a bit tricky in Databricks. Fortunately, we have some excellent utility packages like **dbutils** that help make our job easier. Let's take a quick look at some essential functions for this module.
+
+```
+dbutils.help()
+```
+
+```
+This module provides various utilities for users to interact with the rest of Databricks.
+
+fs: DbfsUtils -> Manipulates the Databricks filesystem (DBFS) from the console
+meta: MetaUtils -> Methods to hook into the compiler (EXPERIMENTAL)
+notebook: NotebookUtils -> Utilities for the control flow of a notebook (EXPERIMENTAL)
+preview: Preview -> Utilities under preview category
+secrets: SecretUtils -> Provides utilities for leveraging secrets within notebooks
+widgets: WidgetsUtils -> Methods to create and get bound value of input widgets inside notebooks
+```
+
+#### Retrieve and store data in Databricks
+
+We will now leverage the Python **urllib** library to extract the KDD Cup 99 data from its web repository, store it in a temporary location, and move it to the Databricks filesystem, which enables easy access to this data for analysis.
+
+> **Note:** If you skip this step and download the data directly, you may end up getting an **InvalidInputException: Input path does not exist** error.
+
+```
+import urllib
+urllib.urlretrieve("", "/tmp/kddcup_data.gz")
+dbutils.fs.mv("file:/tmp/kddcup_data.gz", "dbfs:/kdd/kddcup_data.gz")
+display(dbutils.fs.ls("dbfs:/kdd"))
+```
+
+![Spark Job kddcup_data.gz][10]
+
+#### Build the KDD dataset
+
+Now that we have our data stored in the Databricks filesystem, let's load up our data from the disk into Spark's traditional abstracted data structure, the [Resilient Distributed Dataset][11] (RDD).
+
+```
+data_file = "dbfs:/kdd/kddcup_data.gz"
+raw_rdd = sc.textFile(data_file).cache()
+raw_rdd.take(5)
+```
+
+![Data in Resilient Distributed Dataset \(RDD\)][12]
+
+You can also verify the type of data structure of our data (RDD) using the following code.
+
+```
+type(raw_rdd)
+```
+
+![output][13]
+
+#### Build a Spark DataFrame on our data
+
+A Spark DataFrame is an interesting data structure representing a distributed collection of data. Typically, the entry point into all SQL functionality in Spark is the **SQLContext** class. To create a basic instance of this class, all we need is a **SparkContext** reference. In Databricks, this global context object is available as **sc** for this purpose.
+
+```
+from pyspark.sql import SQLContext
+sqlContext = SQLContext(sc)
+sqlContext
+```
+
+![output][14]
+
+#### Split the CSV data
+
+Each entry in our RDD is a comma-separated line of data, which we first need to split before we can parse and build our dataframe.
+
+```
+csv_rdd = raw_rdd.map(lambda row: row.split(","))
+print(csv_rdd.take(2))
+print(type(csv_rdd))
+```
+
+![Splitting RDD entries][15]
+
+#### Check the total number of features (columns)
+
+We can use the following code to check the total number of potential columns in our dataset.
+
+```
+len(csv_rdd.take(1)[0])
+
+Out[57]: 42
+```
+
+#### Understand and parse data
+
+The KDD 99 Cup data consists of different attributes captured from connection data. You can obtain the [full list of attributes in the data][16] and further details pertaining to the [description for each attribute/column][17]. We will just be using some specific columns from the dataset, the details of which are specified as follows.
+
+feature num | feature name | description | type
+---|---|---|---
+1 | duration | length (number of seconds) of the connection | continuous
+2 | protocol_type | type of the protocol, e.g., tcp, udp, etc. | discrete
+3 | service | network service on the destination, e.g., http, telnet, etc. | discrete
+4 | src_bytes | number of data bytes from source to destination | continuous
+5 | dst_bytes | number of data bytes from destination to source | continuous
+6 | flag | normal or error status of the connection | discrete
+7 | wrong_fragment | number of "wrong" fragments | continuous
+8 | urgent | number of urgent packets | continuous
+9 | hot | number of "hot" indicators | continuous
+10 | num_failed_logins | number of failed login attempts | continuous
+11 | num_compromised | number of "compromised" conditions | continuous
+12 | su_attempted | 1 if "su root" command attempted; 0 otherwise | discrete
+13 | num_root | number of "root" accesses | continuous
+14 | num_file_creations | number of file creation operations | continuous
+
+We will be extracting the following columns based on their positions in each data point (row) and building a new RDD, as follows.
+
+```
+from pyspark.sql import Row
+
+parsed_rdd = csv_rdd.map(lambda r: Row(
+ duration=int(r[0]),
+ protocol_type=r[1],
+ service=r[2],
+ flag=r[3],
+ src_bytes=int(r[4]),
+ dst_bytes=int(r[5]),
+ wrong_fragment=int(r[7]),
+ urgent=int(r[8]),
+ hot=int(r[9]),
+ num_failed_logins=int(r[10]),
+ num_compromised=int(r[12]),
+ su_attempted=r[14],
+ num_root=int(r[15]),
+ num_file_creations=int(r[16]),
+ label=r[-1]
+ )
+)
+parsed_rdd.take(5)
+```
+
+![Extracting columns][18]
+
+#### Construct the DataFrame
+
+Now that our data is neatly parsed and formatted, let's build our DataFrame!
+```
+df = sqlContext.createDataFrame(parsed_rdd)
+display(df.head(10))
+```
+
+![DataFrame][19]
+
+You can also now check out the schema of our DataFrame using the following code.
+
+```
+df.printSchema()
+```
+
+![Dataframe schema][20]
+
+#### Build a temporary table
+
+We can leverage the **registerTempTable()** function to build a temporary table to run SQL commands on our DataFrame at scale! A point to remember is that the lifetime of this temp table is tied to the session. It creates an in-memory table that is scoped to the cluster in which it was created. The data is stored using Hive's highly optimized, in-memory columnar format.
+
+You can also check out **saveAsTable()** , which creates a permanent, physical table stored in S3 using the Parquet format. This table is accessible to all clusters. The table metadata, including the location of the file(s), is stored within the Hive metastore.
+
+```
+help(df.registerTempTable)
+```
+
+![help\(df.registerTempTable\)][21]
+
+```
+df.registerTempTable("connections")
+```
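+
+If you want the permanent variant mentioned above rather than a session-scoped temp table, a minimal sketch looks like this (the table name is an example, and it assumes your workspace's metastore is writable):
+
+```
+# Sketch: persist the DataFrame as a permanent, Parquet-backed table.
+df.write.format("parquet").saveAsTable("connections_permanent")
+
+# Any cluster attached to the same metastore can then query it:
+display(sqlContext.sql("SELECT COUNT(*) AS total FROM connections_permanent"))
+```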
+
+### Execute SQL at Scale
+
+Let's look at a few examples of how we can run SQL queries on our table based on our DataFrame. We will start with some simple queries and then look at aggregations, filters, sorting, sub-queries, and pivots in this tutorial.
+
+#### Connections based on the protocol type
+
+Let's look at how we can get the total number of connections based on the type of connectivity protocol. First, we will get this information using normal DataFrame DSL syntax to perform aggregations.
+
+```
+display(df.groupBy('protocol_type')
+          .count()
+          .orderBy('count', ascending=False))
+```
+
+![Total number of connections][22]
+
+Can we also use SQL to perform the same aggregation? Yes, we can leverage the table we built earlier for this!
+
+```
+protocols = sqlContext.sql("""
+ SELECT protocol_type, count(*) as freq
+ FROM connections
+ GROUP BY protocol_type
+ ORDER BY 2 DESC
+ """)
+display(protocols)
+```
+
+![protocol type and frequency][23]
+
+You can clearly see that you get the same results and don't need to worry about your background infrastructure or how the code is executed. Just write simple SQL!
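+
+Note that the result of `sqlContext.sql()` is itself a DataFrame, so the two styles mix freely. For instance, a small sketch that further refines the query result with DSL operations:
+
+```
+# Keep only the protocols with more than 1,000 connections
+display(protocols.filter(protocols.freq > 1000))
+```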
+
+#### Connections based on good or bad (attack types) signatures
+
+We will now run a simple aggregation to check the total number of connections based on good (normal) or bad (intrusion attacks) types.
+
+```
+labels = sqlContext.sql("""
+ SELECT label, count(*) as freq
+ FROM connections
+ GROUP BY label
+ ORDER BY 2 DESC
+""")
+display(labels)
+```
+
+![Connection by type][24]
+
+We have a lot of different attack types. We can visualize this in the form of a bar chart. The simplest way is to use the excellent interface options in the Databricks notebook.
+
+![Databricks chart types][25]
+
+This gives us a nice-looking bar chart, which you can customize further by clicking on **Plot Options**.
+
+![Bar chart][26]
+
+Another way is to write the code to do it. You can extract the aggregated data as a Pandas DataFrame and plot it as a regular bar chart.
+
+```
+import pandas as pd
+import matplotlib.pyplot as plt
+
+# toPandas() brings the aggregated (small) result set to the driver as a Pandas DataFrame
+labels_df = pd.DataFrame(labels.toPandas())
+labels_df.set_index("label", drop=True, inplace=True)
+labels_fig = labels_df.plot(kind='barh')
+
+plt.rcParams["figure.figsize"] = (7, 5)
+plt.rcParams.update({'font.size': 10})
+plt.tight_layout()
+display(labels_fig.figure)
+```
+
+![Bar chart][27]
+
+#### Connections based on protocols and attacks
+
+Let's look at which protocols are most vulnerable to attacks by using the following SQL query.
+
+```
+attack_protocol = sqlContext.sql("""
+ SELECT
+ protocol_type,
+ CASE label
+ WHEN 'normal.' THEN 'no attack'
+ ELSE 'attack'
+ END AS state,
+ COUNT(*) as freq
+ FROM connections
+ GROUP BY protocol_type, state
+ ORDER BY 3 DESC
+""")
+display(attack_protocol)
+```
+
+![Protocols most vulnerable to attacks][28]
+
+Well, it looks like ICMP connections, followed by TCP connections, have had the most attacks.
+
+#### Connection stats based on protocols and attacks
+
+Let's take a look at some statistical measures pertaining to these protocols and attacks for our connection requests.
+
+```
+attack_stats = sqlContext.sql("""
+ SELECT
+ protocol_type,
+ CASE label
+ WHEN 'normal.' THEN 'no attack'
+ ELSE 'attack'
+ END AS state,
+ COUNT(*) as total_freq,
+ ROUND(AVG(src_bytes), 2) as mean_src_bytes,
+ ROUND(AVG(dst_bytes), 2) as mean_dst_bytes,
+ ROUND(AVG(duration), 2) as mean_duration,
+ SUM(num_failed_logins) as total_failed_logins,
+ SUM(num_compromised) as total_compromised,
+ SUM(num_file_creations) as total_file_creations,
+ SUM(su_attempted) as total_root_attempts,
+ SUM(num_root) as total_root_acceses
+ FROM connections
+ GROUP BY protocol_type, state
+ ORDER BY 3 DESC
+""")
+display(attack_stats)
+```
+
+![Statistics pertaining to protocols and attacks][29]
+
+Looks like the average amount of data being transmitted in TCP requests is much higher, which is not surprising. Interestingly, attacks have a much higher average payload of data being transmitted from the source to the destination.
+
+#### Filtering connection stats based on the TCP protocol by service and attack type
+
+Let's take a closer look at TCP attacks, given that we have more relevant data and statistics for them. We will now aggregate different types of TCP attacks based on service and attack type and observe different metrics.
+
+```
+tcp_attack_stats = sqlContext.sql("""
+SELECT
+service,
+label as attack_type,
+COUNT(*) as total_freq,
+ROUND(AVG(duration), 2) as mean_duration,
+SUM(num_failed_logins) as total_failed_logins,
+SUM(num_file_creations) as total_file_creations,
+SUM(su_attempted) as total_root_attempts,
+SUM(num_root) as total_root_acceses
+FROM connections
+WHERE protocol_type = 'tcp'
+AND label != 'normal.'
+GROUP BY service, attack_type
+ORDER BY total_freq DESC
+""")
+display(tcp_attack_stats)
+```
+
+![TCP attack data][30]
+
+There are a lot of attack types, and the preceding output shows only a portion of them.
+
+#### Filtering TCP attack stats by duration, file creations, and root accesses
+
+We will now filter some of these attack types by imposing some constraints in our query based on duration, file creations, and root accesses.
+
+```
+tcp_attack_stats = sqlContext.sql("""
+SELECT
+service,
+label as attack_type,
+COUNT(*) as total_freq,
+ROUND(AVG(duration), 2) as mean_duration,
+SUM(num_failed_logins) as total_failed_logins,
+SUM(num_file_creations) as total_file_creations,
+SUM(su_attempted) as total_root_attempts,
+SUM(num_root) as total_root_acceses
+FROM connections
+WHERE (protocol_type = 'tcp'
+AND label != 'normal.')
+GROUP BY service, attack_type
+HAVING (mean_duration >= 50
+AND total_file_creations >= 5
+AND total_root_acceses >= 1)
+ORDER BY total_freq DESC
+""")
+display(tcp_attack_stats)
+```
+
+![Filtered by attack type][31]
+
+It's interesting to see that [multi-hop attacks][32] can get root accesses to the destination hosts!
+
+#### Subqueries to filter TCP attack types based on service
+
+Let's try to get all the TCP attacks based on service and attack type such that the overall mean duration of these attacks is greater than zero ( **> 0** ). For this, we can compute all the aggregation statistics in an inner query, then select the relevant columns and apply the mean duration filter in the outer query, as shown below.
+
+```
+tcp_attack_stats = sqlContext.sql("""
+SELECT
+t.service,
+t.attack_type,
+t.total_freq
+FROM
+(SELECT
+service,
+label as attack_type,
+COUNT(*) as total_freq,
+ROUND(AVG(duration), 2) as mean_duration,
+SUM(num_failed_logins) as total_failed_logins,
+SUM(num_file_creations) as total_file_creations,
+SUM(su_attempted) as total_root_attempts,
+SUM(num_root) as total_root_acceses
+FROM connections
+WHERE protocol_type = 'tcp'
+AND label != 'normal.'
+GROUP BY service, attack_type
+ORDER BY total_freq DESC) as t
+WHERE t.mean_duration > 0
+""")
+display(tcp_attack_stats)
+```
+
+![TCP attacks based on service and attack type][33]
+
+This is nice! Now another interesting way to view this data is to use a pivot table, where one attribute represents rows and another one represents columns. Let's see if we can leverage Spark DataFrames to do this!
+
+#### Build a pivot table from aggregated data
+
+We will build upon the previous DataFrame object where we aggregated attacks based on type and service. For this, we can leverage the power of Spark DataFrames and the DataFrame DSL.
+
+```
+display((tcp_attack_stats.groupby('service')
+         .pivot('attack_type')
+         .agg({'total_freq': 'max'})
+         .na.fill(0)))
+```
+
+![Pivot table][34]
+
+We get a nice, neat pivot table showing all the occurrences based on service and attack type!
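+
+If you already know which attack types you care about, you can pass an explicit value list to **pivot()** , which also saves Spark a pass over the data to discover the distinct column values. A sketch with two illustrative attack labels:
+
+```
+# The value list limits the pivoted columns; the labels here are examples only
+display((tcp_attack_stats.groupby('service')
+         .pivot('attack_type', ['neptune.', 'portsweep.'])
+         .agg({'total_freq': 'max'})
+         .na.fill(0)))
+```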
+
+### Next steps
+
+I would encourage you to go out and play with Spark SQL and DataFrames. You can even [import my notebook][35] and play with it in your own account.
+
+Also feel free to refer to [my GitHub repository][36] for all the code and notebooks used in this article. It covers things we didn't discuss here, including:
+
+ * Joins
+ * Window functions
+ * Detailed operations and transformations of Spark DataFrames
+
+
+
+You can also access my tutorial as a [Jupyter Notebook][37], in case you want to use it offline.
+
+There are plenty of articles and tutorials available online, so I recommend you check them out. One useful resource is Databricks' complete [guide to Spark SQL][38].
+
+Thinking of working with JSON data but unsure of using Spark SQL? Databricks supports it! Check out this excellent guide to [JSON support in Spark SQL][39].
+
+Interested in advanced concepts like window functions and ranks in SQL? Take a look at "[Introducing Window Functions in Spark SQL][40]."
+
+I will write another article covering some of these concepts in an intuitive way, which should be easy for you to understand. Stay tuned!
+
+In case you have any feedback or queries, you can reach out to me on [LinkedIn][41].
+
+* * *
+
+*This article originally appeared on Medium's [Towards Data Science][42] channel and is republished with permission.*
+
+* * *
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/apache-spark-and-dataframes-tutorial
+
+作者:[Dipanjan (DJ) Sarkar (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/djsarkar
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
+[2]: https://opensource.com/article/19/3/sql-scale-apache-spark-sql-and-dataframes
+[3]: https://spark.apache.org/sql/
+[4]: https://opensource.com/sites/default/files/uploads/13_spark-databricks.png (Apache Spark and Databricks)
+[5]: https://databricks.com/try-databricks
+[6]: https://databricks.com/signup#signup/community
+[7]: https://opensource.com/sites/default/files/uploads/14_create-notebook.png (Create a notebook)
+[8]: https://databricks.com/spark/getting-started-with-apache-spark
+[9]: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
+[10]: https://opensource.com/sites/default/files/uploads/15_dbfs-kdd-kddcup_data-gz.png (Spark Job kddcup_data.gz)
+[11]: https://spark.apache.org/docs/latest/rdd-programming-guide.html#resilient-distributed-datasets-rdds
+[12]: https://opensource.com/sites/default/files/uploads/16_rdd-data.png (Data in Resilient Distributed Dataset (RDD))
+[13]: https://opensource.com/sites/default/files/uploads/16a_output.png (output)
+[14]: https://opensource.com/sites/default/files/uploads/16b_output.png (output)
+[15]: https://opensource.com/sites/default/files/uploads/17_split-csv.png (Splitting RDD entries)
+[16]: http://kdd.ics.uci.edu/databases/kddcup99/kddcup.names
+[17]: http://kdd.ics.uci.edu/databases/kddcup99/task.html
+[18]: https://opensource.com/sites/default/files/uploads/18_extract-columns.png (Extracting columns)
+[19]: https://opensource.com/sites/default/files/uploads/19_build-dataframe.png (DataFrame)
+[20]: https://opensource.com/sites/default/files/uploads/20_dataframe-schema.png (Dataframe schema)
+[21]: https://opensource.com/sites/default/files/uploads/21_registertemptable.png (help(df.registerTempTable))
+[22]: https://opensource.com/sites/default/files/uploads/22_number-of-connections.png (Total number of connections)
+[23]: https://opensource.com/sites/default/files/uploads/23_sql.png (protocol type and frequency)
+[24]: https://opensource.com/sites/default/files/uploads/24_intrusion-type.png (Connection by type)
+[25]: https://opensource.com/sites/default/files/uploads/25_chart-interface.png (Databricks chart types)
+[26]: https://opensource.com/sites/default/files/uploads/26_plot-options-chart.png (Bar chart)
+[27]: https://opensource.com/sites/default/files/uploads/27_pandas-barchart.png (Bar chart)
+[28]: https://opensource.com/sites/default/files/uploads/28_most-attacked.png (Protocols most vulnerable to attacks)
+[29]: https://opensource.com/sites/default/files/uploads/29_data-transmissions.png (Statistics pertaining to protocols and attacks)
+[30]: https://opensource.com/sites/default/files/uploads/30_tcp-attack-metrics.png (TCP attack data)
+[31]: https://opensource.com/sites/default/files/uploads/31_attack-type.png (Filtered by attack type)
+[32]: https://attack.mitre.org/techniques/T1188/
+[33]: https://opensource.com/sites/default/files/uploads/32_tcp-attack-types.png (TCP attacks based on service and attack type)
+[34]: https://opensource.com/sites/default/files/uploads/33_pivot-table.png (Pivot table)
+[35]: https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/3137082781873852/3704545280501166/1264763342038607/latest.html
+[36]: https://github.com/dipanjanS/data_science_for_all/tree/master/tds_spark_sql_intro
+[37]: http://nbviewer.jupyter.org/github/dipanjanS/data_science_for_all/blob/master/tds_spark_sql_intro/Working%20with%20SQL%20at%20Scale%20-%20Spark%20SQL%20Tutorial.ipynb
+[38]: https://docs.databricks.com/spark/latest/spark-sql/index.html
+[39]: https://databricks.com/blog/2015/02/02/an-introduction-to-json-support-in-spark-sql.html
+[40]: https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html
+[41]: https://www.linkedin.com/in/dipanzan/
+[42]: https://towardsdatascience.com/sql-at-scale-with-apache-spark-sql-and-dataframes-concepts-architecture-and-examples-c567853a702f
diff --git a/sources/tech/20190321 NVIDIA Jetson Nano is a -99 Raspberry Pi Rival for AI Development.md b/sources/tech/20190321 NVIDIA Jetson Nano is a -99 Raspberry Pi Rival for AI Development.md
new file mode 100644
index 0000000000..52f02edc95
--- /dev/null
+++ b/sources/tech/20190321 NVIDIA Jetson Nano is a -99 Raspberry Pi Rival for AI Development.md
@@ -0,0 +1,98 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development)
+[#]: via: (https://itsfoss.com/nvidia-jetson-nano/)
+[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
+
+NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development
+======
+
+At the [GPU Technology Conference][1], NVIDIA announced the [Jetson Nano Module][2] and the [Jetson Nano Developer Kit][3]. Compared to other Jetson boards, which cost between $299 and $1099, the Jetson Nano comes in at a low $99. This puts it within the reach of many developers, educators, and researchers who could not spend hundreds of dollars on such a product.
+
+![The Jetson Nano Development Kit \(left\) and the Jetson Nano Module \(right\)][4]
+
+### Bringing AI development back from the ‘cloud’
+
+In the last few years, we have seen a lot of [advances in AI research][5]. Traditionally, AI computing was done in the cloud, where plenty of processing power was available.
+
+Recently, there’s been a trend toward shifting this computation away from the cloud and doing it locally. This is called [Edge Computing][6]. At the embedded level, products that could do the complex calculations required for AI and machine learning used to be sparse, but these days we’re seeing a great explosion in this product segment.
+
+Products like the [SparkFun Edge][7] and [OpenMV Board][8] are good examples. The Jetson Nano is NVIDIA’s latest offering in this market. When connected to your system, it supplies the processing power needed for machine learning and AI tasks without relying on the cloud.
+
+This is great for privacy as well as saving on internet bandwidth. It is also more secure since your data always stays on the device itself.
+
+### Jetson Nano focuses on smaller AI projects
+
+![Jetson Nano powered JetBot][9]
+
+While previously released Jetson boards like the [TX2][10] and [AGX Xavier][11] were used in products like drones and cars, the Jetson Nano targets smaller projects, projects where you need processing power that boards like the [Raspberry Pi][12] cannot provide.
+
+Did you know?
+
+NVIDIA’s JetPack SDK provides a ‘complete desktop Linux environment based on Ubuntu 18.04 LTS’. In other words, the Jetson Nano is powered by Ubuntu Linux.
+
+### NVIDIA Jetson Nano Specifications
+
+For $99, you get 472 GFLOPS of processing power thanks to 128 NVIDIA Maxwell architecture CUDA cores, a quad-core ARM A57 processor, 4GB of LPDDR4 RAM, 16GB of on-board storage, and 4K video encode/decode capabilities. The port selection is also pretty decent, with the Nano having Gigabit Ethernet, MIPI camera, display outputs, and a couple of USB ports (1×3.0, 3×2.0). The full range of specifications can be found [here][13].
+
+CPU | Quad-core ARM® Cortex®-A57 MPCore processor
+---|---
+GPU | NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores
+RAM | 4 GB 64-bit LPDDR4
+Storage | 16 GB eMMC 5.1 Flash
+Camera | 12 lanes (3×4 or 4×2) MIPI CSI-2 DPHY 1.1 (1.5 Gbps)
+Connectivity | Gigabit Ethernet
+Display Ports | HDMI 2.0 and DP 1.2
+USB Ports | 1 USB 3.0 and 3 USB 2.0
+Other | 1 x1/2/4 PCIE, 1x SDIO / 2x SPI / 6x I2C / 2x I2S / GPIOs
+Size | 69.6 mm x 45 mm
+
+Along with good hardware, you get support for the majority of popular AI frameworks like TensorFlow, PyTorch, Keras, etc. It also has support for NVIDIA’s [JetPack][14] and [DeepStream][15] SDKs, same as the more expensive TX2 and AGX Boards.
+
+“Jetson Nano makes AI more accessible to everyone — and is supported by the same underlying architecture and software that powers our nation’s supercomputer. Bringing AI to the maker movement opens up a whole new world of innovation, inspiring people to create the next big thing.” said Deepu Talla, VP and GM of Autonomous Machines at NVIDIA.
+
+**What do you think of Jetson Nano?**
+
+The availability of Jetson Nano differs from country to country.
+
+The [Intel Neural Stick][17] is also one such accelerator, competitively priced at $79. It’s good to see competition stirring up at these lower price points from the big manufacturers.
+
+I’m looking forward to getting my hands on the product if possible.
+
+What do you guys think about a product like this? Let us know in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/nvidia-jetson-nano/
+
+作者:[Atharva Lele][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/atharva/
+[b]: https://github.com/lujun9972
+[1]: https://www.nvidia.com/en-us/gtc/
+[2]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/
+[3]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/jetson-nano-family-press-image-hd.jpg?ssl=1
+[5]: https://itsfoss.com/nanotechnology-open-science-ai/
+[6]: https://en.wikipedia.org/wiki/Edge_computing
+[7]: https://www.sparkfun.com/news/2886
+[8]: https://openmv.io/
+[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nvidia_jetson_bot.jpg?ssl=1
+[10]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/
+[11]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/
+[12]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
+[13]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/#specifications
+[14]: https://developer.nvidia.com/embedded/jetpack
+[15]: https://developer.nvidia.com/deepstream-sdk
+[16]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[17]: https://software.intel.com/en-us/movidius-ncs-get-started
diff --git a/sources/tech/20190321 Top 10 New Linux SBCs to Watch in 2019.md b/sources/tech/20190321 Top 10 New Linux SBCs to Watch in 2019.md
new file mode 100644
index 0000000000..f3f1f7c72b
--- /dev/null
+++ b/sources/tech/20190321 Top 10 New Linux SBCs to Watch in 2019.md
@@ -0,0 +1,101 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top 10 New Linux SBCs to Watch in 2019)
+[#]: via: (https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019)
+[#]: author: (Eric Brown https://www.linux.com/users/ericstephenbrown)
+
+Top 10 New Linux SBCs to Watch in 2019
+======
+
+![UP Xtreme][1]
+
+Aaeon's Linux-ready UP Xtreme SBC.
+
+[Used with permission][2]
+
+A recent [Global Market Insights report][3] projects the single board computer market will grow from $600 million in 2018 to $1 billion by 2025. Yet, you don’t need to read a market research report to realize the SBC market is booming. Driven by the trends toward IoT and AI-enabled edge computing, new boards keep rolling off the assembly lines, many of them [tailored for highly specific applications][4].
+
+Much of the action has been in Linux-compatible boards, including the insanely popular Raspberry Pi. The number of different vendors and models has exploded thanks in part to the rise of [community-backed, open-spec SBCs][5].
+
+Here we examine 10 of the most intriguing, Linux-driven SBCs among the many products announced in the last four weeks that bookended the recent [Embedded World show][6] in Nuremberg. (There was also some [interesting Linux software news][7] at the show.) Two of the SBCs—the Intel Whiskey Lake based UP Xtreme and Nvidia Jetson Nano driven Jetson Nano Dev Kit—were announced only this week.
+
+Our mostly open source list also includes a few commercial boards. Processors range from the modest, Cortex-A7 driven STM32MP1 to the high-powered Whiskey Lake and Snapdragon 845. Mid-range models include Google’s i.MX8M powered Coral Dev Board and a similarly AI-enhanced, TI AM5729 based BeagleBone AI. Deep learning acceleration chips—and standard RPi 40-pin or 96Boards expansion connectors—are common themes among most of these boards.
+
+The SBCs are listed in reverse chronological order according to their announcement dates. The links in the product names go to recent LinuxGizmos reports, which link to vendor product pages.
+
+**[UP Xtreme][8]** —The latest in Aaeon’s line of community-backed SBCs taps Intel’s 8th Gen Whiskey Lake-U CPUs, which maintain a modest 15W TDP while boosting performance with up to quad-core, dual threaded configurations. Depending on when it ships, this Linux-ready model will likely be the most powerful community-backed SBC around -- and possibly the most expensive.
+
+The SBC supports up to 16GB DDR4 and 128GB eMMC and offers 4K displays via HDMI, DisplayPort, and eDP. Other features include SATA, 2x GbE, 4x USB 3.0, and 40-pin “HAT” and 100-pin GPIO add-on board connectors. You also get mini-PCIe and dual M.2 slots that support wireless modems and more SATA options. The slots also support Aaeon’s new AI Core X modules, which offer Intel’s latest Movidius Myriad X VPUs for 1TOPS neural processing acceleration.
+
+**[Jetson Nano Dev Kit][9]** —Nvidia just announced a low-end Jetson Nano compute module that’s sort of like a smaller (70 x 45mm) version of the old Jetson TX1. It offers the same 4x Cortex-A57 cores but has an even lower-end 128-core Maxwell GPU. The module has half the RAM and flash (4GB/16GB) of the TX1 and TX2, and no WiFi/Bluetooth radios. Like the hexa-core Jetson TX2, however, it supports 4K video and the GPU offers similar CUDA-X deep learning libraries.
+
+Although Nvidia has backed all its Linux-driven Jetson modules with development kits, the Jetson Nano Dev Kit is its first community-backed, maker-oriented kit. It does not appear to offer open specifications, but it costs only $99 and there’s a forum and other community resources. Many of the specs match or surpass the Raspberry Pi 3B+, including the addition of a 40-pin GPIO. Highlights include an M.2 slot, GbE with Power-over-Ethernet, HDMI 2.0 and eDP links, and 4x USB 3.0 ports.
+
+**[Coral Dev Board][10]** —Google’s very first Linux maker board arrived earlier this month featuring an NXP i.MX8M and Google’s Edge TPU AI chip—a stripped-down version of Google’s TPU designed to run TensorFlow Lite ML models. The $150, Raspberry Pi-like Coral Dev Board was joined by a similarly Edge TPU-enabled Coral USB Accelerator USB stick. These will be followed by an Edge TPU based Coral PCIe Accelerator and a Coral SOM compute module. All these devices are backed with schematics, community resources, and other open-spec resources.
+
+The Coral Dev Board combines the Edge TPU chip with NXP’s quad-core, 1.5GHz Cortex-A53 i.MX8M with a 3D Vivante GPU/VPU and a Cortex-M4 MCU. The SBC is even more like the Raspberry Pi 3B+ than Nvidia’s Dev Kit, mimicking the size and much of the layout and I/O, including the 40-pin GPIO connector. Highlights include 4K-ready GbE, HDMI 2.0a, 4-lane MIPI-DSI and CSI, and USB 3.0 host and Type-C ports.
+
+**[SBC-C43][11]** —Seco’s commercial, industrial temperature SBC-C43 board is the first SBC based on NXP’s high-end, up to hexa-core i.MX8. The 3.5-inch SBC supports the i.MX8 QuadMax with 2x Cortex-A72 cores and 4x Cortex-A53 cores, the QuadPlus with a single Cortex-A72 and 4x -A53, and the Quad with no -A72 cores and 4x -A53. There are also 2x Cortex-M4F real-time cores and 2x Vivante GPU/VPU cores. Yocto Project, Wind River Linux, and Android are available.
+
+The feature-rich SBC-C43 supports up to 8GB DDR4 and 32GB eMMC, both soldered for greater reliability. Highlights include dual GbE, HDMI 2.0a in and out ports, WiFi/Bluetooth, and a variety of industrial interfaces. Dual M.2 slots support SATA, wireless, and more.
+
+**[Nitrogen8M_Mini][12]** —This Boundary Devices cousin to the earlier, i.MX8M based Nitrogen8M is available for $135, with shipments due this Spring. The open-spec Nitrogen8M_Mini is the first SBC to feature NXP’s new i.MX8M Mini SoC. The Mini uses a more advanced 14LPC FinFET process than the i.MX8M, resulting in lower power consumption and higher clock rates for both the 4x Cortex-A53 (1.5GHz to 2GHz) and Cortex-M4 (400MHz) cores. The drawback is that you’re limited to HD video resolution.
+
+Supported with Linux and Android, the Nitrogen8M_Mini ships with 2GB to 4GB LPDDR4 RAM and 8GB to 128GB eMMC. MIPI-DSI and -CSI interfaces support optional touchscreens and cameras, respectively. A GbE port is standard and PoE and WiFi/BT are optional. Other features include 3x USB ports, one or two PCIe slots, and optional -40 to 85°C support. A Nitrogen8M_Mini SOM module with similar specs is also in the works.
+
+**[Pine H64 Model B][13]** —Pine64’s latest hacker board was teased in late January as part of an [ambitious roll-out][14] of open source products, including a laptop, tablet, and phone. The Raspberry Pi semi-clone, which recently went on sale for $39 (2GB) or $49 (3GB), showcases the high-end, but low-cost Allwinner H64. The quad -A53 SoC is notable for its 4K video with HDR support.
+
+The Pine H64 Model B offers up to 128GB eMMC storage, WiFi/BT, and a GbE port. I/O includes 2x USB 2.0 and single USB 3.0 and HDMI 2.0a ports plus SPDIF audio and an RPi-like 40-pin connector. Images include Android 7.0 and an “in progress” Armbian Debian Stretch.
+
+**[AI-ML Board][15]** —Arrow unveiled this i.MX8X based SBC early this month along with a similarly 96Boards CE Extended format, i.MX8M based Thor96 SBC. While there are plenty of i.MX8M boards these days, we’re more intrigued with the lowest-end i.MX8X member of the i.MX8 family. The AI-ML Board is the first SBC we’ve seen to feature the low-power i.MX8X, which offers up to 4x 64-bit, 1.2GHz Cortex-A35 cores, a 4-shader, 4K-ready Vivante GPU/VPU, a Cortex-M4F chip, and a Tensilica HiFi 4 DSP.
+
+The open-spec, Yocto Linux driven AI-ML Board is targeted at low-power, camera-equipped applications such as drones. The board has 2GB LPDDR4, Ethernet, WiFi/BT, and a pair each of MIPI-DSI and USB 3.0 ports. Cameras are controlled via the 96Boards 60-pin, high-power GPIO connector, which is joined by the usual 40-pin low-power link. The launch is expected June 1.
+
+**[BeagleBone AI][16]** —The long-awaited successor to the Cortex-A8 AM3358 based BeagleBone family of boards advances to TI’s dual-core Cortex-A15 AM5729, with similar PowerVR GPU and MCU-like PRU cores. The real story, however, is the AI firepower enabled by the SoC’s dual TI C66x DSPs and four embedded-vision-engine (EVE) neural processing cores. BeagleBoard.org claims that calculations for computer-vision models using EVE run at 8x the performance per watt compared to the similar, but EVE-less, AM5728. The EVE and DSP chips are supported through a TIDL machine learning OpenCL API and pre-installed tools.
+
+Due to go on sale in April for about $100, the Linux-powered BeagleBone AI is based closely on the BeagleBone Black and offers backward header, mechanical, and software compatibility. It doubles the RAM to 1GB and quadruples the eMMC storage to 16GB. You now get GbE and high-speed WiFi, as well as a USB Type-C port.
+
+**[Robotics RB3 Platform (DragonBoard 845c)][17]** —Qualcomm and Thundercomm are initially launching their 96Boards CE form factor, Snapdragon 845-based upgrade to the Snapdragon 820-based [DragonBoard 820c][18] SBC as part of a Qualcomm Robotics RB3 Platform. Yet, 96Boards.org has already posted a [DragonBoard 845c product page][17], and we imagine the board will be available in the coming months without all the robotics bells and whistles. A compute module version is also said to be in the works.
+
+The 10nm, octa-core, “Kryo” based Snapdragon 845 is one of the most powerful Arm SoCs around. It features an advanced Adreno 630 GPU with “eXtended Reality” (XR) VR technology and a Hexagon 685 DSP with a third-gen Neural Processing Engine (NPE) for AI applications. On the RB3 kit, the board’s expansion connectors are pre-stocked with Qualcomm cellular and robotics camera mezzanines. The $449 and up kit also includes standard 4K video and tracking cameras, and there are optional Time-of-Flight (ToF) and stereo SLM camera depth cameras. The SBC runs Linux with ROS (Robot Operating System).
+
+**[Avenger96][19]** —Like Arrow’s AI-ML Board, the Avenger96 is a 96Boards CE Extended SBC aimed at low-power IoT applications. Yet, the SBC features an even more power-efficient (and slower) SoC: ST’s recently announced [STM32MP153][20]. The Avenger96 runs Linux on the high-end STM32MP157 model, which has dual, 650MHz Cortex-A7 cores, a Cortex-M4, and a Vivante 3D GPU.
+
+This sandwich-style board features an Avenger96 module with the STM32MP157 SoC, 1GB of DDR3L, 2MB SPI flash, and a power management IC. It’s unclear if the 8GB eMMC and WiFi-ac/Bluetooth 4.2 module are on the module or carrier board. The Avenger96 SBC is further equipped with GbE, HDMI, micro-USB OTG, and dual USB 2.0 host ports. There’s also a microSD slot and the usual 40- and 60-pin GPIO connectors. The board is expected to go on sale in April.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019
+
+作者:[Eric Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/ericstephenbrown
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/aaeon_upxtreme.jpg?itok=QnwAt3mp (UP Xtreme)
+[2]: /LICENSES/CATEGORY/USED-PERMISSION
+[3]: https://www.globenewswire.com/news-release/2019/02/13/1724445/0/en/Single-Board-Computer-Market-to-surpass-1bn-by-2025-Global-Market-Insights-Inc.html
+[4]: https://www.linux.com/blog/2019/1/linux-hacker-board-trends-2018-and-beyond
+[5]: http://linuxgizmos.com/catalog-of-122-open-spec-linux-hacker-boards/
+[6]: https://www.embedded-world.de/en
+[7]: https://www.linux.com/news/2019/2/embedded-linux-software-highlights-embedded-world
+[8]: http://linuxgizmos.com/latest-up-board-combines-whiskey-lake-with-ai-core-x-modules/
+[9]: http://linuxgizmos.com/trimmed-down-jetson-nano-modules-ships-on-99-linux-dev-kit/
+[10]: http://linuxgizmos.com/google-launches-i-mx8m-dev-board-with-edge-tpu-ai-chip/
+[11]: http://linuxgizmos.com/first-i-mx8-quadmax-sbc-breaks-cover/
+[12]: http://linuxgizmos.com/open-spec-nitrogen8m_mini-sbc-ships-along-with-new-mini-based-som/
+[13]: http://linuxgizmos.com/revised-allwiner-h64-based-pine-h64-sbc-has-rpi-size-and-gpio/
+[14]: https://www.linux.com/blog/2019/2/pine64-launch-open-source-phone-laptop-tablet-and-camera
+[15]: http://linuxgizmos.com/arrows-latest-96boards-sbcs-tap-i-mx8x-and-i-mx8m/
+[16]: http://linuxgizmos.com/beaglebone-ai-sbc-features-dual-a15-soc-with-eve-ai-cores/
+[17]: http://linuxgizmos.com/robotics-kit-runs-linux-on-new-dragonboard-845c-96boards-sbc/
+[18]: http://linuxgizmos.com/debian-driven-dragonboard-expands-to-96boards-extended-spec/
+[19]: http://linuxgizmos.com/sandwich-style-96boards-sbc-runs-linux-on-sts-new-cortex-a7-m4-soc/
+[20]: https://www.linux.com/news/2019/2/st-spins-its-first-linux-powered-cortex-soc
diff --git a/sources/tech/20190322 12 open source tools for natural language processing.md b/sources/tech/20190322 12 open source tools for natural language processing.md
new file mode 100644
index 0000000000..9d2822926f
--- /dev/null
+++ b/sources/tech/20190322 12 open source tools for natural language processing.md
@@ -0,0 +1,113 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (12 open source tools for natural language processing)
+[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
+[#]: author: (Dan Barker https://opensource.com/users/barkerd427)
+
+12 open source tools for natural language processing
+======
+
+Take a look at a dozen options for your next NLP application.
+
+![Chat bubbles][1]
+
+Natural language processing (NLP), the technology that powers all the chatbots, voice assistants, predictive text, and other speech/text applications that permeate our lives, has evolved significantly in the last few years. There are a wide variety of open source NLP tools out there, so I decided to survey the landscape to help you plan your next voice- or text-based application.
+
+For this review, I focused on tools that use languages I'm familiar with, even though I'm not familiar with all the tools. (I didn't find a great selection of tools in the languages I'm not familiar with anyway.) That said, I excluded tools in three languages I am familiar with, for various reasons.
+
+The most obvious language I didn't include might be R, but most of the libraries I found hadn't been updated in over a year. That doesn't always mean they aren't being maintained well, but I think they should be getting updates more often to compete with other tools in the same space. I also chose languages and tools that are most likely to be used in production scenarios (rather than academia and research), and I have mostly used R as a research and discovery tool.
+
+I was also surprised to see that the Scala libraries are fairly stagnant. It has been a couple of years since I last used Scala, when it was pretty popular. Most of the libraries haven't been updated since that time—or they've only had a few updates.
+
+Finally, I excluded C++. This is mostly because it's been many years since I last wrote in C++, and the organizations I've worked in have not used C++ for NLP or any data science work.
+
+### Python tools
+
+#### Natural Language Toolkit (NLTK)
+
+It would be easy to argue that [Natural Language Toolkit (NLTK)][2] is the most full-featured tool of the ones I surveyed. It implements pretty much any component of NLP you would need, like classification, tokenization, stemming, tagging, parsing, and semantic reasoning. And there's often more than one implementation for each, so you can choose the exact algorithm or methodology you'd like to use. It also supports many languages. However, it represents all data in the form of strings, which is fine for simple constructs but makes it hard to use some advanced functionality. The documentation is also quite dense, but there is a lot of it, as well as [a great book][3]. The library is also a bit slow compared to other tools. Overall, this is a great toolkit for experimentation, exploration, and applications that need a particular combination of algorithms.
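+
+To give a feel for the string-based interface, here is a minimal sketch of tokenizing and stemming with NLTK (assuming `pip install nltk` and a one-time download of the tokenizer data):
+
+```
+import nltk
+from nltk.stem import PorterStemmer
+
+nltk.download('punkt')  # one-time download of the tokenizer models
+
+tokens = nltk.word_tokenize("Natural language processing is fascinating.")
+stems = [PorterStemmer().stem(t) for t in tokens]
+print(tokens)  # plain Python strings, as noted above
+print(stems)
+```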
+
+#### SpaCy
+
+[SpaCy][4] is probably the main competitor to NLTK. It is faster in most cases, but it only has a single implementation for each NLP component. Also, it represents everything as an object rather than a string, which simplifies the interface for building applications. This also helps it integrate with many other frameworks and data science tools, so you can do more once you have a better understanding of your text data. However, SpaCy doesn't support as many languages as NLTK. It does have a simple interface with a simplified set of choices and great documentation, as well as multiple neural models for various components of language processing and analysis. Overall, this is a great tool for new applications that need to be performant in production and don't require a specific algorithm.
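+
+As a contrast with NLTK's strings, here is a minimal SpaCy sketch (assuming `pip install spacy` and the small English model installed via `python -m spacy download en_core_web_sm`):
+
+```
+import spacy
+
+nlp = spacy.load("en_core_web_sm")
+doc = nlp("Apache Spark was created at UC Berkeley.")
+
+# Tokens are rich objects rather than strings
+for token in doc:
+    print(token.text, token.pos_, token.lemma_)
+
+# Named entities come for free from the same pipeline
+for ent in doc.ents:
+    print(ent.text, ent.label_)
+```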
+
+#### TextBlob
+
+[TextBlob][5] is kind of an extension of NLTK. You can access many of NLTK's functions in a simplified manner through TextBlob, and TextBlob also includes functionality from the Pattern library. If you're just starting out, this might be a good tool to use while learning, and it can be used in production for applications that don't need to be overly performant. Overall, TextBlob is used all over the place and is great for smaller projects.
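+
+A tiny sketch of TextBlob's simplified interface (assuming `pip install textblob` plus its bundled corpora):
+
+```
+from textblob import TextBlob
+
+blob = TextBlob("Open source NLP tools are wonderfully easy to try.")
+print(blob.sentiment)      # a Sentiment(polarity=..., subjectivity=...) tuple
+print(blob.noun_phrases)   # noun phrases via a single attribute access
+```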
+
+#### Textacy
+
+This tool may have the best name of any library I've ever used. Say "[Textacy][6]" a few times while emphasizing the "ex" and drawing out the "cy." Not only is it great to say, but it's also a great tool. It uses SpaCy for its core NLP functionality, but it handles a lot of the work before and after the processing. If you were planning to use SpaCy, you might as well use Textacy so you can easily bring in many types of data without having to write extra helper code.
+
+#### PyTorch-NLP
+
+[PyTorch-NLP][7] has been out for just a little over a year, but it has already gained a tremendous community. It is a great tool for rapid prototyping. It's also updated often with the latest research, and top companies and researchers have released many other tools to do all sorts of amazing processing, like image transformations. Overall, PyTorch is targeted at researchers, but it can also be used for prototypes and initial production workloads with the most advanced algorithms available. The libraries being created on top of it might also be worth looking into.
+
+### Node tools
+
+#### Retext
+
+[Retext][8] is part of the [unified collective][9]. Unified is an interface that allows multiple tools and plugins to integrate and work together effectively. Retext is one of three syntaxes used by the unified tool; the others are Remark for markdown and Rehype for HTML. This is a very interesting idea, and I'm excited to see this community grow. Retext doesn't expose a lot of its underlying techniques, but instead uses plugins to achieve the results you might be aiming for with NLP. It's easy to do things like checking spelling, fixing typography, detecting sentiment, or making sure text is readable with simple plugins. Overall, this is an excellent tool and community if you just need to get something done without having to understand everything in the underlying process.
+
+#### Compromise
+
+[Compromise][10] certainly isn't the most sophisticated tool. If you're looking for the most advanced algorithms or the most complete system, this probably isn't the right tool for you. However, if you want a performant tool that has a wide breadth of features and can function on the client side, you should take a look at Compromise. Overall, its name is accurate in that the creators compromised on functionality and accuracy by focusing on a small package with much more specific functionality that benefits from the user understanding more of the context surrounding the usage.
+
+#### Natural
+
+[Natural][11] includes most functions you might expect in a general NLP library. It is mostly focused on English, but some other languages have been contributed, and the community is open to additional contributions. It supports tokenizing, stemming, classification, phonetics, term frequency–inverse document frequency, WordNet, string similarity, and some inflections. It might be most comparable to NLTK, in that it tries to include everything in one package, but it is easier to use and isn't necessarily focused around research. Overall, this is a pretty full library, but it is still in active development and may require additional knowledge of underlying implementations to be fully effective.
+
+#### Nlp.js
+
+[Nlp.js][12] is built on top of several other NLP libraries, including Franc and Brain.js. It provides a nice interface into many components of NLP, like classification, sentiment analysis, stemming, named entity recognition, and natural language generation. It also supports quite a few languages, which is helpful if you plan to work in something other than English. Overall, this is a great general tool with a simplified interface into several other great tools. This will likely take you a long way in your applications before you need something more powerful or more flexible.
+
+### Java tools
+
+#### OpenNLP
+
+[OpenNLP][13] is hosted by the Apache Foundation, so it's easy to integrate it into other Apache projects, like Apache Flink, Apache NiFi, and Apache Spark. It is a general NLP tool that covers all the common processing components of NLP, and it can be used from the command line or within an application as a library. It also has wide support for multiple languages. Overall, OpenNLP is a powerful tool with a lot of features and ready for production workloads if you're using Java.
+
+#### StanfordNLP
+
+[Stanford CoreNLP][14] is a set of tools that provides statistical NLP, deep learning NLP, and rule-based NLP functionality. Many other programming language bindings have been created so this tool can be used outside of Java. It is a very powerful tool created by an elite research institution, but it may not be the best thing for production workloads. This tool is dual-licensed with a special license for commercial purposes. Overall, this is a great tool for research and experimentation, but it may incur additional costs in a production system. The Python implementation might also interest many readers more than the Java version. Also, one of the best Machine Learning courses is taught by a Stanford professor on Coursera. [Check it out][15] along with other great resources.
+
+#### CogCompNLP
+
+[CogCompNLP][16], developed by the University of Illinois, also has a Python library with similar functionality. It can be used to process text, either locally or on remote systems, which can remove a tremendous burden from your local device. It provides processing functions such as tokenization, part-of-speech tagging, chunking, named-entity tagging, lemmatization, dependency and constituency parsing, and semantic role labeling. Overall, this is a great tool for research, and it has a lot of components that you can explore. I'm not sure it's great for production workloads, but it's worth trying if you plan to use Java.
+
+* * *
+
+What are your favorite open source tools and libraries for NLP? Please share in the comments—especially if there's one I didn't include.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/natural-language-processing-tools
+
+作者:[Dan Barker (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/barkerd427
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
+[2]: http://www.nltk.org/
+[3]: http://www.nltk.org/book_1ed/
+[4]: https://spacy.io/
+[5]: https://textblob.readthedocs.io/en/dev/
+[6]: https://readthedocs.org/projects/textacy/
+[7]: https://pytorchnlp.readthedocs.io/en/latest/
+[8]: https://www.npmjs.com/package/retext
+[9]: https://unified.js.org/
+[10]: https://www.npmjs.com/package/compromise
+[11]: https://www.npmjs.com/package/natural
+[12]: https://www.npmjs.com/package/node-nlp
+[13]: https://opennlp.apache.org/
+[14]: https://stanfordnlp.github.io/CoreNLP/
+[15]: https://opensource.com/article/19/2/learn-data-science-ai
+[16]: https://github.com/CogComp/cogcomp-nlp
diff --git a/sources/tech/20190322 Easy means easy to debug.md b/sources/tech/20190322 Easy means easy to debug.md
new file mode 100644
index 0000000000..4b0b4d52d2
--- /dev/null
+++ b/sources/tech/20190322 Easy means easy to debug.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Easy means easy to debug)
+[#]: via: (https://arp242.net/weblog/easy.html)
+[#]: author: (Martin Tournoij https://arp242.net/)
+
+Easy means easy to debug
+======
+
+What does it mean for a framework, library, or tool to be “easy”? There are many possible definitions one could use, but my definition is usually that it’s easy to debug. I often see people advertise a particular program, framework, library, file format, or something else as easy because “look with how little effort I can do task X, this is so easy!” That’s great, but an incomplete picture.
+
+You only write software once, but will almost always go through several debugging cycles. With debugging cycle I don’t mean “there is a bug in the code you need to fix”, but rather “I need to look at this code to fix the bug”. To debug code, you need to understand it, so “easy to debug” by extension means “easy to understand”.
+
+Abstractions which make something easier to write often come at the cost of making things harder to understand. Sometimes this is a good trade-off, but often it’s not. In general, I will happily spend a little bit more effort writing something now if that makes it easier to understand and debug later on, as that’s often a net time-saver.
+
+Simplicity isn’t the only thing that makes programs easier to debug, but it is probably the most important. Good documentation helps too, but unfortunately good documentation is uncommon (note that quality is not measured by word count!).
+
+This is not exactly a novel insight; this is from the 1974 book *The Elements of Programming Style* by Brian W. Kernighan and P. J. Plauger:
+
+> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?
+
+A lot of stuff I see seems to be written “as clever as can be” and is consequently hard to debug. I’ll list a few examples of this pattern below. It’s not my intention to argue that any of these things are bad per se, I just want to highlight the trade-offs in “easy to use” vs. “easy to debug”.
+
+ * When I tried running [Let’s Encrypt][1] a few years ago it required running a daemon as root(!) to automatically rewrite nginx files. I looked at the source a bit to understand how it worked and it was all pretty complex, so I was “let’s not” and opted to just pay €10 to the CA mafia, as not much can go wrong with putting a file in /etc/nginx/, whereas a lot can go wrong with complex Python daemons running as root.
+
+(I don’t know the current state/options for Let’s Encrypt; at a quick glance there may be better/alternative ACME clients that suck less now.)
+
+ * Some people claim that systemd is easier than SysV init.d scripts because it’s easier to write systemd unit files than it is to write shell scripts. In particular, this is the argument Lennart Poettering used in his [systemd myths][2] post (point 5).
+
+I think this completely misses the point. I agree with Poettering that shell scripts are hard – [I wrote an entire post about that][3] – but making the interface easier doesn’t mean the entire system becomes easier. Look at [this issue][4] I encountered and [the fix][5] for it. Does that look easy to you?
+
+ * Many JavaScript frameworks I’ve used can be hard to fully understand. Clever state keeping logic is great and all, until that state won’t work as you expect, and then you better hope there’s a Stack Overflow post or GitHub issue to help you out.
+
+ * Docker is great, right up to the point you get:
+
+```
+ ERROR: for elasticsearch Cannot start service elasticsearch:
+oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:258:
+applying cgroup configuration for process caused \"failed to write 898 to cgroup.procs: write
+/sys/fs/cgroup/cpu,cpuacct/docker/b13312efc203e518e3864fc3f9d00b4561168ebd4d9aad590cc56da610b8dd0e/cgroup.procs:
+invalid argument\""
+```
+
+or
+
+```
+ERROR: for elasticsearch Cannot start service elasticsearch: EOF
+```
+
+And … now what?
+
+ * Many testing libraries can make things harder to debug. Ruby’s rspec is a good example where I’ve occasionally used the library wrong by accident and had to spend quite a long time figuring out what exactly went wrong (as the errors it gave me were very confusing!)
+
+I wrote a bit more about that in my [Testing isn’t everything][6] post.
+
+ * ORM libraries can make database queries a lot easier, at the cost of making things a lot harder to understand once you want to solve a problem.
+
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://arp242.net/weblog/easy.html
+
+作者:[Martin Tournoij][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://arp242.net/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Let%27s_Encrypt
+[2]: http://0pointer.de/blog/projects/the-biggest-myths.html
+[3]: https://arp242.net/weblog/shell-scripting-trap.html
+[4]: https://unix.stackexchange.com/q/185495/33645
+[5]: https://cgit.freedesktop.org/systemd/systemd/commit/?id=6e392c9c45643d106673c6643ac8bf4e65da13c1
+[6]: /weblog/testing.html
+[7]: mailto:martin@arp242.net
+[8]: https://github.com/Carpetsmoker/arp242.net/issues/new
diff --git a/sources/tech/20190322 How to Install OpenLDAP on Ubuntu Server 18.04.md b/sources/tech/20190322 How to Install OpenLDAP on Ubuntu Server 18.04.md
new file mode 100644
index 0000000000..a4325fe74b
--- /dev/null
+++ b/sources/tech/20190322 How to Install OpenLDAP on Ubuntu Server 18.04.md
@@ -0,0 +1,205 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Install OpenLDAP on Ubuntu Server 18.04)
+[#]: via: (https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+How to Install OpenLDAP on Ubuntu Server 18.04
+======
+
+![OpenLDAP][1]
+
+In part one of this short tutorial series, Jack Wallen explains how to install OpenLDAP.
+
+[Creative Commons Zero][2]
+
+The Lightweight Directory Access Protocol (LDAP) allows for the querying and modification of an X.500-based directory service. In other words, LDAP is used over a Local Area Network (LAN) to manage and access a distributed directory service. LDAP’s primary purpose is to provide a set of records in a hierarchical structure. What can you do with those records? The best use-case is user validation/authentication against desktops. If both server and client are set up properly, you can have all your Linux desktops authenticating against your LDAP server. This makes for a great single point of entry so that you can better manage (and control) user accounts.
+
+The most popular iteration of LDAP for Linux is [OpenLDAP][3]. OpenLDAP is a free, open-source implementation of the Lightweight Directory Access Protocol, and makes it incredibly easy to get your LDAP server up and running.
+
+In this three-part series, I’ll be walking you through the steps of:
+
+ 1. Installing OpenLDAP server.
+
+ 2. Installing the web-based LDAP Account Manager.
+
+ 3. Configuring Linux desktops, such that they can communicate with your LDAP server.
+
+
+
+
+In the end, all of your Linux desktop machines (that have been configured properly) will be able to authenticate against a centralized location, which means you (as the administrator) have much more control over the management of users on your network.
+
+In this first piece, I’ll be demonstrating the installation and configuration of OpenLDAP on Ubuntu Server 18.04. All you will need to make this work is a running instance of Ubuntu Server 18.04 and a user account with sudo privileges.
+
+Let’s get to work.
+
+### Update/Upgrade
+
+The first thing you’ll want to do is update and upgrade your server. Do note, if the kernel gets updated, the server will need to be rebooted (unless you have Live Patch, or a similar service running). Because of this, run the update/upgrade at a time when the server can be rebooted.
+
+To update and upgrade Ubuntu, log into your server and run the following commands:
+
+```
+sudo apt-get update
+
+sudo apt-get upgrade -y
+```
+
+When the upgrade completes, reboot the server (if necessary), and get ready to install and configure OpenLDAP.
+
+### Installing OpenLDAP
+
+Since we’ll be using OpenLDAP as our LDAP server software, it can be installed from the standard repository. To install the necessary pieces, log into your Ubuntu Server and issue the following command:
+
+```
+sudo apt-get install slapd ldap-utils -y
+```
+
+During the installation, you’ll first be asked to create an administrator password for the LDAP directory. Type and verify that password (Figure 1).
+
+![password][4]
+
+Figure 1: Creating an administrator password for LDAP.
+
+[Used with permission][5]
+
+### Configuring LDAP
+
+With the installation of the components complete, it’s time to configure LDAP. Fortunately, there’s a handy tool we can use to make this happen. From the terminal window, issue the command:
+
+```
+sudo dpkg-reconfigure slapd
+```
+
+In the first window, hit Enter to select No and continue on. In the second window of the configuration tool (Figure 2), you must type the DNS domain name for your server. This will serve as the base DN (the point from where a server will search for users) for your LDAP directory. In my example, I’ve used example.com (you’ll want to change this to fit your needs).
+
+![domain name][6]
+
+Figure 2: Configuring the domain name for LDAP.
+
+[Used with permission][5]
+
+In the next window, type your organization name (i.e., the name of your company or department). You will then be prompted to (once again) create an administrator password (you can use the same one as you did during the installation). Once you’ve taken care of that, you’ll be asked the following questions:
+
+ * Database backend to use - select **MDB**.
+
+ * Do you want the database to be removed when slapd is purged? - Select **No.**
+
+ * Move old database? - Select **Yes.**
+
+
+
+
+OpenLDAP is now ready for data.
+
+### Adding Initial Data
+
+Now that OpenLDAP is installed and running, it’s time to populate the directory with a bit of initial data. In the second piece of this series, we’ll be installing a web-based GUI that makes it much easier to handle this task, but it’s always good to know how to add data the manual way.
+
+One of the best ways to add data to the LDAP directory is via a text file, which can then be imported with the **ldapadd** command. Create a new file with the command:
+
+```
+nano ldap_data.ldif
+```
+
+In that file, paste the following contents:
+
+```
+dn: ou=People,dc=EXAMPLE,dc=COM
+objectClass: organizationalUnit
+ou: People
+
+dn: ou=Groups,dc=EXAMPLE,dc=COM
+objectClass: organizationalUnit
+ou: Groups
+
+dn: cn=DEPARTMENT,ou=Groups,dc=EXAMPLE,dc=COM
+objectClass: posixGroup
+cn: DEPARTMENT
+gidNumber: 5000
+
+dn: uid=USER,ou=People,dc=EXAMPLE,dc=COM
+objectClass: inetOrgPerson
+objectClass: posixAccount
+objectClass: shadowAccount
+uid: USER
+sn: LASTNAME
+givenName: FIRSTNAME
+cn: FULLNAME
+displayName: DISPLAYNAME
+uidNumber: 10000
+gidNumber: 5000
+userPassword: PASSWORD
+gecos: FULLNAME
+loginShell: /bin/bash
+homeDirectory: USERDIRECTORY
+```
+
+In the above file, every entry in all caps needs to be modified to fit your company needs. Once you’ve modified the above file, save and close it with the [Ctrl]+[x] key combination.
+
+To add the data from the file to the LDAP directory, issue the command:
+
+```
+ldapadd -x -D cn=admin,dc=EXAMPLE,dc=COM -W -f ldap_data.ldif
+```
+
+Remember to alter the dc entries (EXAMPLE and COM) in the above command to match your domain name. After running the command, you will be prompted for the LDAP admin password. When you successfully authenticate to the LDAP server, the data will be added. You can then ensure the data is there by running a search like so:
+
+```
+ldapsearch -x -LLL -b dc=EXAMPLE,dc=COM 'uid=USER' cn gidNumber
+```
+
+Where EXAMPLE and COM are your domain name components and USER is the user to search for. The command should report the entry you searched for (Figure 3).
+
+![search][7]
+
+Figure 3: Our search was successful.
+
+[Used with permission][5]
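+
+To dump every entry under the base DN instead of a single user, broaden the search filter. As before, substitute your own domain components for EXAMPLE and COM:
+
+```
+ldapsearch -x -LLL -b dc=EXAMPLE,dc=COM '(objectClass=*)'
+```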
+
+Now that you have your first entry into your LDAP directory, you can edit the above file to create even more. Or, you can wait until the next entry into the series (installing LDAP Account Manager) and take care of the process with the web-based GUI. Either way, you’re one step closer to having LDAP authentication on your network.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap.png?itok=r9viT8n6 (OpenLDAP)
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.openldap.org/
+[4]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap_1.jpg?itok=vbWScztB (password)
+[5]: /LICENSES/CATEGORY/USED-PERMISSION
+[6]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap_2.jpg?itok=10CSCm6Z (domain name)
+[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap_3.jpg?itok=df2Y65Dv (search)
diff --git a/sources/tech/20190322 How to set up Fedora Silverblue as a gaming station.md b/sources/tech/20190322 How to set up Fedora Silverblue as a gaming station.md
new file mode 100644
index 0000000000..2d794f2d29
--- /dev/null
+++ b/sources/tech/20190322 How to set up Fedora Silverblue as a gaming station.md
@@ -0,0 +1,100 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to set up Fedora Silverblue as a gaming station)
+[#]: via: (https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/)
+[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/)
+
+How to set up Fedora Silverblue as a gaming station
+======
+
+![][1]
+
+This article gives you a step-by-step guide to turning your Fedora Silverblue into an awesome gaming station with the help of Flatpak and Steam.
+
+Note: Do you need the NVIDIA proprietary driver on Fedora 29 Silverblue for a complete experience? Check out [this blog post][2] for pointers.
+
+### Add the Flathub repository
+
+This process starts with a clean Fedora 29 Silverblue installation with a user already created for you.
+
+First, go to flathub.org and enable the Flathub repository on your system. To do this, click the _Quick setup_ button on the main page.
+
+![Quick setup button on flathub.org/home][3]
+
+This redirects you to flatpak.org/setup, where you should click on the Fedora icon.
+
+![Fedora icon on flatpak.org/setup][4]
+
+Now you just need to click on _Flathub repository file._ Open the downloaded file with the _Software Install_ application.
+
+![Flathub repository file button on flatpak.org/setup/Fedora][5]
+
+The GNOME Software application opens. Next, click on the _Install_ button. This action needs _sudo_ permissions, because it installs the Flathub repository for use by the whole system.
+
+![Install button in GNOME Software][6]
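+
+If you prefer the terminal, the same repository can be added system-wide with a single command; the URL below is Flathub’s standard repo file:
+
+```
+$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+```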
+
+### Install the Steam flatpak
+
+You can now search for the _Steam_ flatpak in _GNOME Software_. If you can’t find it, try rebooting, or logging out and back in, in case _GNOME Software_ didn’t read the metadata; that happens automatically when you next log in.
+
+![Searching for Steam][7]
+
+Click on the _Steam_ row and the _Steam_ page opens in _GNOME Software._ Next, click on _Install_.
+
+![Steam page in GNOME Software][8]
+
+And now you have installed _Steam_ flatpak on your system.
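+
+As an aside, the same installation works from the terminal; the application ID below is _Steam_’s Flathub ID:
+
+```
+$ flatpak install flathub com.valvesoftware.Steam
+$ flatpak run com.valvesoftware.Steam
+```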
+
+### Enable Steam Play in Steam
+
+Now that you have _Steam_ installed, launch it and log in. To play Windows games too, you need to enable _Steam Play_ in _Steam._ To enable it, choose _Steam > Settings_ from the menu in the main window.
+
+![Settings button in Steam][9]
+
+Navigate to the _Steam Play_ section. You should see that the option _Enable Steam Play for supported titles_ is already ticked, but it’s recommended you also tick _Enable Steam Play for all other titles_. There are plenty of games that are actually playable, but not whitelisted yet on _Steam_. To see which games are playable, visit [ProtonDB][10] and search for your favorite game. Or just look for the games with the most platinum reports.
+
+![Steam Play settings menu on Steam][11]
+
+If you want to know more about Steam Play, you can read the [article][12] about it here on Fedora Magazine:
+
+> [Play Windows games on Fedora with Steam Play and Proton][12]
+
+### Appendix
+
+You’re now ready to play plenty of games on Linux. Please remember to share your experience with others using the _Contribute_ button on [ProtonDB][10] and report bugs you find on [GitHub][13], because sharing is nice. 🙂
+
+* * *
+
+_Photo by [Hardik Sharma][14] on [Unsplash][15]._
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/
+
+作者:[Michal Konečný][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/zlopez/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/silverblue-gaming-816x345.jpg
+[2]: https://blogs.gnome.org/alexl/2019/03/06/nvidia-drivers-in-fedora-silverblue/
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-29-00.png
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-36-35-1024x713.png
+[5]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-45-12.png
+[6]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-57-37.png
+[7]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-08-21.png
+[8]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-13-59-1024x769.png
+[9]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-30-20.png
+[10]: https://www.protondb.com/
+[11]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-41-53.png
+[12]: https://fedoramagazine.org/play-windows-games-steam-play-proton/
+[13]: https://github.com/ValveSoftware/Proton
+[14]: https://unsplash.com/photos/I7rXyzBNVQM?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[15]: https://unsplash.com/search/photos/video-game-laptop?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/sources/tech/20190322 Printing from the Linux command line.md b/sources/tech/20190322 Printing from the Linux command line.md
new file mode 100644
index 0000000000..75aec13bb3
--- /dev/null
+++ b/sources/tech/20190322 Printing from the Linux command line.md
@@ -0,0 +1,177 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Printing from the Linux command line)
+[#]: via: (https://www.networkworld.com/article/3373502/printing-from-the-linux-command-line.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+Printing from the Linux command line
+======
+
+There's a lot more to printing from the Linux command line than the lp command. Check out some of the many available options.
+
+![Sherry \(CC BY 2.0\)][1]
+
+Printing from the Linux command line is easy. You use the **lp** command to request a print, and **lpq** to see what print jobs are in the queue, but things get a little more complicated when you want to print double-sided or in landscape mode. And there are lots of other things you might want to do, such as printing multiple copies of a document or canceling a print job. Let's check out some options for getting your printouts to look just the way you want them to when you're printing from the command line.
+
+### Displaying printer settings
+
+To view your printer settings from the command line, use the **lpoptions** command. The output should look something like this:
+
+```
+$ lpoptions
+copies=1 device-uri=dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/ finishings=3 job-cancel-after=10800 job-hold-until=no-hold job-priority=50 job-sheets=none,none marker-change-time=1553023232 marker-colors=#000000,#00FFFF,#FF00FF,#FFFF00 marker-levels=18,62,62,63 marker-names='Black\ Cartridge\ HP\ CC530A,Cyan\ Cartridge\ HP\ CC531A,Magenta\ Cartridge\ HP\ CC533A,Yellow\ Cartridge\ HP\ CC532A' marker-types=toner,toner,toner,toner number-up=1 printer-commands=none printer-info='HP Color LaserJet CP2025dn (F47468)' printer-is-accepting-jobs=true printer-is-shared=true printer-is-temporary=false printer-location printer-make-and-model='HP Color LaserJet cp2025dn pcl3, hpcups 3.18.7' printer-state=3 printer-state-change-time=1553023232 printer-state-reasons=none printer-type=167964 printer-uri-supported=ipp://localhost/printers/Color-LaserJet-CP2025dn sides=one-sided
+```
+
+This output is likely to be a little more human-friendly if you turn its blanks into newlines. Notice how many settings are listed.
+
+NOTE: In the output below, some lines have been reconnected to make this output more readable.
+
+```
+$ lpoptions | tr " " '\n'
+copies=1
+device-uri=dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/
+finishings=3
+job-cancel-after=10800
+job-hold-until=no-hold
+job-priority=50
+job-sheets=none,none
+marker-change-time=1553023232
+marker-colors=#000000,#00FFFF,#FF00FF,#FFFF00
+marker-levels=18,62,62,63
+marker-names='Black\ Cartridge\ HP\ CC530A,
+Cyan\ Cartridge\ HP\ CC531A,
+Magenta\ Cartridge\ HP\ CC533A,
+Yellow\ Cartridge\ HP\ CC532A'
+marker-types=toner,toner,toner,toner
+number-up=1
+printer-commands=none
+printer-info='HP Color LaserJet CP2025dn (F47468)'
+printer-is-accepting-jobs=true
+printer-is-shared=true
+printer-is-temporary=false
+printer-location
+printer-make-and-model='HP Color LaserJet cp2025dn pcl3, hpcups 3.18.7'
+printer-state=3
+printer-state-change-time=1553023232
+printer-state-reasons=none
+printer-type=167964
+printer-uri-supported=ipp://localhost/printers/Color-LaserJet-CP2025dn
+sides=one-sided
+```
+
+With the **-v** option, the **lpinfo** command will list drivers and related information.
+
+```
+$ lpinfo -v
+network ipp
+network https
+network socket
+network beh
+direct hp
+network lpd
+file cups-brf:/
+network ipps
+network http
+direct hpfax
+network dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/ <== printer
+network socket://192.168.0.23 <== printer IP
+```
+
+By default, the **lpoptions** command shows the settings of your default printer. Use the **-p** option to specify one of the other available printers.
+
+```
+$ lpoptions -p LaserJet
+```
+
+The **lpstat -p** command displays the status of a printer while **lpstat -p -d** also lists available printers.
+
+```
+$ lpstat -p -d
+printer Color-LaserJet-CP2025dn is idle. enabled since Tue 19 Mar 2019 05:07:45 PM EDT
+system default destination: Color-LaserJet-CP2025dn
+```
+
+### Useful commands
+
+To print a document on the default printer, just use the **lp** command followed by the name of the file you want to print. If the filename includes blanks (rare on Linux systems), either put the name in quotes or start entering the file name and press the tab key to invoke file completion (as shown in the second example below).
+
+```
+$ lp "never leave home angry"
+$ lp never\ leave\ home\ angry
+```
+
+The **lpq** command displays the print queue.
+
+```
+$ lpq
+Color-LaserJet-CP2025dn is ready and printing
+Rank Owner Job File(s) Total Size
+active shs 234 agenda 2048 bytes
+```
+
+With the **-n** option, the lp command allows you to specify the number of copies of a printout you want.
+
+```
+$ lp -n 11 agenda
+```
+
+To cancel a print job, you can use the **cancel** or **lprm** command. If you don't act quickly, you might see this:
+
+```
+$ cancel 229
+cancel: cancel-job failed: Job #229 is already completed - can't cancel.
+```
+
+### Two-sided printing
+
+To print in two-sided mode, you can issue your lp command with a **sides** option that specifies both that you want to print on both sides of the paper and which edge to turn the paper on. The long-edge setting below represents the normal way that you would expect two-sided portrait documents to look.
+
+```
+$ lp -o sides=two-sided-long-edge Notes.pdf
+```
+
+If you want all of your documents to print in two-sided mode, you can change your lp settings by using the **lpoptions** command to change the setting for **sides**.
+
+```
+$ lpoptions -o sides=two-sided-short-edge
+```
+
+To revert to single-sided printing, you would use a command like this one:
+
+```
+$ lpoptions -o sides=one-sided
+```
+
+### Printing in landscape mode
+
+To print in landscape mode, you would use the **landscape** option with the lp command.
+
+```
+$ lp -o landscape penguin.jpg
+```
+
+### CUPS
+
+The printing system used on Linux is the standards-based, open source CUPS, originally short for the **Common Unix Printing System**. It allows a computer to act as a print server.
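+
+If you want to check on CUPS itself, a couple of quick commands help; the service name below assumes a systemd-based distribution:
+
+```
+$ systemctl status cups
+$ lpstat -t
+```
+
+The first confirms the CUPS scheduler is running, and **lpstat -t** summarizes the scheduler, default destination, and all queues in one report.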
+
+Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3373502/printing-from-the-linux-command-line.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/printouts-paper-100791390-large.jpg
+[2]: https://www.facebook.com/NetworkWorld/
+[3]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190325 Backup on Fedora Silverblue with Borg.md b/sources/tech/20190325 Backup on Fedora Silverblue with Borg.md
new file mode 100644
index 0000000000..8aa5c65139
--- /dev/null
+++ b/sources/tech/20190325 Backup on Fedora Silverblue with Borg.md
@@ -0,0 +1,314 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Backup on Fedora Silverblue with Borg)
+[#]: via: (https://fedoramagazine.org/backup-on-fedora-silverblue-with-borg/)
+[#]: author: (Steven Snow https://fedoramagazine.org/author/jakfrost/)
+
+Backup on Fedora Silverblue with Borg
+======
+
+![][1]
+
+When it comes to backing up a Fedora Silverblue system, some of the traditional tools may not function as expected. BorgBackup (Borg) is an alternative that can provide backup capability for your Silverblue-based systems. This how-to explains the steps for using BorgBackup 1.1.8 as a layered package to back up a Fedora Silverblue 29 system.
+
+On a normal Fedora Workstation system, _dnf_ is used to install a package. However, on Fedora Silverblue, _rpm-ostree install_ is used to install new software. This is termed layering on the Silverblue system: since the core ostree is an immutable image, the rpm package is layered onto the core system during the install process, resulting in a new local image with the layered package.
+
+> “BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption.”
+>
+> From the Borg website
+
+Additionally, the main way to interact with Borg is via the command line. Reading the Quick Start guide, it becomes apparent that Borg is well suited to scripting. In fact, some form of shell script is pretty much necessary when performing repeated, thorough backups of a system. A basic script is provided in the [Borg Quick Start guide][2] as a starting point.
+
+### Installing Borg
+
+In a terminal, type the following command to install BorgBackup as a layered package:
+
+```
+$ rpm-ostree install borgbackup
+```
+
+This installs BorgBackup to the Fedora Silverblue system. To use it, reboot into the new ostree with:
+
+```
+$ systemctl reboot
+```
+
+Now Borg is installed, and ready to use.
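+
+You can confirm the layered package is available with a quick version check:
+
+```
+$ borg --version
+```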
+
+### Some notes about Silverblue and its file system, layered packages and flatpaks
+
+#### The file system
+
+Silverblue is an immutable operating system based on ostree, with support for layering RPMs through the use of rpm-ostree. At the user level, this means the path that appears as _/home_ in a flatpak will actually be _/var/home_ to the system. For programs like Borg and other backup tools, this is important to remember, since they often require the actual path; in this example that would be _/var/home_ instead of just _/home_.
+
+Before starting a backup, it’s a good idea to understand where potential data could be stored, and then whether that data should be backed up. Silverblue’s file system layout is very specific with respect to what is writable and what is not. On Silverblue, _/etc_ and _/var_ are the only places that are not immutable, and therefore writable. On a single-user system, the user home directory would typically be a likely choice for data backup, normally excluding Downloads but including Documents and more. Also, _/etc_ is a logical choice for some configuration options you don’t want to go through again. Take note of what to exclude from your home directory and from _/etc_; you need root or sudo privileges to access some files and subdirectories of _/etc_.
+
+#### Flatpaks
+
+Flatpak applications store data in your home directory under _$HOME/.var/app/flatpakapp_, regardless of whether they were installed as user or system. If installed at the user level, there is also data in _$HOME/.local/share/flatpak/app/_; if installed at the system level, it will be found in _/var/lib/flatpak/app_. For the purposes of this article, it was enough to list the installed flatpaks and redirect the output to a file for backing up, reasoning that if there is ever a need to reinstall them, the list file could be used to do so. For a more robust approach, you can examine the flatpak file system layouts [here][3].
+
+#### Layering and rpm-ostree
+
+There is no easy way for a user to retrieve the layered package information aside from the **rpm-ostree status** command, which shows the layered packages of the current and previous ostree commits; if any commits are pinned, they are listed too. Below is the output on my system; note the LayeredPackages label at the end of each commit listing.
+
+![][4]
+
+The **ostree log** command is useful to retrieve a history of commits for the system. Type it in your terminal to see the output.
+
+### Preparing the backup repo
+
+In order to use Borg to back up a system, you need to first initialize a Borg repo. Before initializing, you must decide whether to use encryption and, if so, which mode.
+
+With Borg, the data can be protected using 256-bit AES encryption. The integrity and authenticity of the data, which is encrypted on the client side, are verified using HMAC-SHA256. The encryption modes are listed below.
+
+#### Encryption modes
+
+Hash/MAC | Not encrypted, no auth | Not encrypted, but authenticated | Encrypted (AEAD w/ AES) and authenticated
+---|---|---|---
+SHA-256 | none | authenticated | repokey, keyfile
+BLAKE2b | n/a | authenticated-blake2 | repokey-blake2, keyfile-blake2
+
+The encryption mode decided on was keyfile-blake2, which requires a passphrase to be entered in addition to needing the keyfile.
+
+Borg can use the following compression types, which you can specify at backup creation time.
+
+ * lz4 (super fast, low compression)
+ * zstd (wide range from high speed and low compression to high compression and lower speed)
+ * zlib (medium speed and compression)
+ * lzma (low speed, high compression)
+
+
+
+For compression, lzma was chosen at setting 6, the highest sensible compression level. The initial backup took 4 minutes 59.98 seconds to complete, while subsequent ones have taken less than 20 seconds as a rule.
+
+#### Borg init
+
+To be able to perform backups with Borg, first, create a directory for your Borg repo:
+
+```
+$ mkdir borg_testdir
+```
+
+and then change to it.
+
+```
+$ cd borg_testdir
+```
+
+Next, initialize the Borg repo with the borg init command:
+
+```
+$ borg init -e=keyfile-blake2 .
+```
+
+Borg will prompt for your passphrase, which is case sensitive and must be entered twice at creation. Create a suitable passphrase of reasonable length from alphanumeric characters and symbols. It can be changed later if needed without affecting the keyfile or your encrypted data. The keyfile can and should be exported for backup purposes, along with the passphrase, and stored somewhere secure.
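+
+For example, exporting the key could look like the following, assuming the repo lives in your home directory; the destination path is only an illustration:
+
+```
+$ borg key export ~/borg_testdir /safe/location/borg_testdir.key
+```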
+
+#### Creating a backup
+
+Next, create a test backup of the Documents directory. Remember, on Silverblue the actual path to the user Documents directory is _/var/home/username/Documents_. In practice, it is also suitable to use _~/_ or _$HOME_ to indicate your home directory; the distinction is that the actual path does not change, whereas an environment variable can be changed. From within the Borg repo, type the following command:
+
+```
+$ borg create .::borgtest /var/home/username/Documents
+```
+
+and that will create a backup of the Documents directory named **borgtest**. To break down the command a bit: **create** requires a **repo location**, in this case **.**, since we are in the **top level** of the **repo**. That makes the path **.::borgtest** for the backup name. Finally, **/var/home/username/Documents** is the location of the data we are backing up.
+
+The following command
+
+```
+$ borg list
+```
+
+returns a listing of your backups; after a few days it will look similar to this:
+
+![Output of borg list command in my backup repo.][5]
+
+To delete the test backup, type the following in the terminal:
+
+```
+$ borg delete .::borgtest
+```
+
+Borg will prompt for the encryption passphrase in order to delete the backup.
+
+### Pulling it together into a shell script
+
+As mentioned, Borg is an eminently script-friendly tool, and the Borg documentation links provided are great places to find out more about it. The example script provided by Borg was modified to suit this article. Below is a version with the basic parts that others could use as a starting point if desired. It captures the three pieces of system and app information mentioned earlier: the output of _flatpak list_, _rpm-ostree status_, and _ostree log_ is saved to human-readable files that are given the same names, and thus overwritten, each time. The repo setup had to be changed, since the original example is for a remote server login with ssh and this one was intended to be used locally. The other changes mostly involved correcting directory paths, tailoring the excluded content to suit this system’s home directory, and choosing the compression.
+```
+#!/bin/sh
+
+# This gets the ostree commit data, this file is overwritten each time
+sudo ostree log fedora-workstation:fedora/29/x86_64/silverblue > ostree.log
+
+rpm-ostree status > rpm-ostree-status.lst
+
+# Flatpaks get listed too
+flatpak list > flatpak.lst
+
+# Setting this, so the repo does not need to be given on the commandline:
+export BORG_REPO=/var/home/usernamehere/borg_testdir
+
+# Setting this, so you won't be asked for your repository passphrase: (Caution advised!)
+export BORG_PASSPHRASE='usercomplexpassphrasehere'
+
+# some helpers and error handling:
+info() { printf "\n%s %s\n\n" "$( date )" "$*" >&2; }
+trap 'echo $( date ) Backup interrupted >&2; exit 2' INT TERM
+
+info "Starting backup"
+
+# Backup the most important directories into an archive named after
+# the machine this script is currently running on:
+borg create \
+    --verbose \
+    --filter AME \
+    --list \
+    --stats \
+    --show-rc \
+    --compression auto,lzma,6 \
+    --exclude-caches \
+    --exclude '/var/home/*/borg_testdir' \
+    --exclude '/var/home/*/Downloads/' \
+    --exclude '/var/home/*/.var/' \
+    --exclude '/var/home/*/Desktop/' \
+    --exclude '/var/home/*/bin/' \
+    ::'{hostname}-{now}' \
+    /etc \
+    /var/home/ssnow
+
+backup_exit=$?
+
+info "Pruning repository"
+
+# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
+# archives of THIS machine. The '{hostname}-' prefix is very important to
+# limit prune's operation to this machine's archives and not apply to
+# other machines' archives also:
+borg prune \
+    --list \
+    --prefix '{hostname}-' \
+    --show-rc \
+    --keep-daily 7 \
+    --keep-weekly 4 \
+    --keep-monthly 6
+
+prune_exit=$?
+
+# use highest exit code as global exit code
+global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))
+
+if [ ${global_exit} -eq 0 ]; then
+    info "Backup and Prune finished successfully"
+elif [ ${global_exit} -eq 1 ]; then
+    info "Backup and/or Prune finished with warnings"
+else
+    info "Backup and/or Prune finished with errors"
+fi
+
+exit ${global_exit}
+```
+
+This listing is missing some more excludes that were specific to the test system setup and backup intentions, and it is very basic, with room for customization and improvement. For this test, to write an article, it wasn’t a problem to have the passphrase inside the shell script file. Under normal use, it is better to enter the passphrase each time you perform a backup.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/backup-on-fedora-silverblue-with-borg/
+
+作者:[Steven Snow][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/jakfrost/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/borg-816x345.jpg
+[2]: https://borgbackup.readthedocs.io/en/stable/quickstart.html
+[3]: https://github.com/flatpak/flatpak/wiki/Filesystem
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-18-17-11-21-1024x285.png
+[5]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-18-18-56-03.png
diff --git a/sources/tech/20190325 Contribute at the Fedora Test Day for Fedora Modularity.md b/sources/tech/20190325 Contribute at the Fedora Test Day for Fedora Modularity.md
new file mode 100644
index 0000000000..3de297db06
--- /dev/null
+++ b/sources/tech/20190325 Contribute at the Fedora Test Day for Fedora Modularity.md
@@ -0,0 +1,50 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Contribute at the Fedora Test Day for Fedora Modularity)
+[#]: via: (https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/)
+[#]: author: (Sumantro Mukherjee https://fedoramagazine.org/author/sumantrom/)
+
+Contribute at the Fedora Test Day for Fedora Modularity
+======
+
+![][1]
+
+Modularity lets you keep the right version of an application, language runtime, or other software on your Fedora system even as the operating system is updated. You can read more about Modularity in general on the [Fedora documentation site][2].
+
+The Modularity folks have been working on Modules for everyone. As a result, the Fedora Modularity and QA teams have organized a test day for **Tuesday, March 26, 2019**. Refer to the [wiki page][3] for links to the test images you’ll need to participate. Read on for more information on the test day.
+
+### How do test days work?
+
+A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.
+
+To contribute, you only need to be able to do the following things:
+
+ * Download test materials, which include some large files
+ * Read and follow directions step by step
+
+
+
+The [wiki page][3] for the modularity test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day [web application][4]. If you’re available on or around the day of the event, please do some testing and report your results.
+
+Happy testing, and we hope to see you on test day.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/
+
+作者:[Sumantro Mukherjee][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/sumantrom/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2015/03/test-days-945x400.png
+[2]: https://docs.fedoraproject.org/en-US/modularity/
+[3]: https://fedoraproject.org/wiki/Test_Day:2019-03-26_Modularity_Test_Day
+[4]: http://testdays.fedorainfracloud.org/events/61
diff --git a/sources/tech/20190325 How Open Source Is Accelerating NFV Transformation.md b/sources/tech/20190325 How Open Source Is Accelerating NFV Transformation.md
new file mode 100644
index 0000000000..22f7df8876
--- /dev/null
+++ b/sources/tech/20190325 How Open Source Is Accelerating NFV Transformation.md
@@ -0,0 +1,77 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How Open Source Is Accelerating NFV Transformation)
+[#]: via: (https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation)
+[#]: author: (Pam Baker https://www.linux.com/users/pambaker)
+
+How Open Source Is Accelerating NFV Transformation
+======
+
+![NFV][1]
+
+In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, about the role of open source in innovation for telecommunications service providers.
+
+[Creative Commons Zero][2]
+
+Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of [open source as the path to innovation][3] resonates on many levels.
+
+In anticipation of the upcoming [Open Networking Summit][4], we talked with [Thomas Nadeau][5], Technical Director NFV at Red Hat, who gave a [keynote address][6] at last year’s event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.
+
+One reason for open source’s broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.
+
+“There are projects now, like Kubernetes, that are too big for any one company to do. There's technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”
+
+Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.
+
+**Linux.com: Why is open source central to innovation in general for telecommunications service providers?**
+
+**Nadeau:** The first reason is that the service providers can be in more control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.
+
+And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They're becoming much more flexible, more modular, and open source is the best means to achieve that.
+
+**Linux.com: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.**
+
+**Nadeau:** Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not to just expand, but to exist in today's marketplace. Without open source in that virtualization space, you’re stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.
+
+There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.
+
+NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came "disaggregated VMs" where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it's still NFV.
+
+You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.
+
+But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we're back to square one where you lose 80% of the performance because of the latest SOA model that they've implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it's still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.
+
+**Linux.com: Tell us about the underlying Linux in NFV, and why that combo is so powerful.**
+
+**Nadeau:** Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it's the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it's all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It's secure, it's flexible, and scalable, so operators can really use it as a tool now.
+
+**Linux.com: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?**
+
+**Nadeau:** Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot, if not identical, to their competitors’ businesses.
+
+These telcos are taking a real “in-depth, roll up your sleeves” approach. Now that they understand what they're using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to partner programs that we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.
+
+_Learn more at[Open Networking Summit][4], happening April 3-5 at the San Jose McEnery Convention Center._
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation
+
+作者:[Pam Baker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/pambaker
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/nfv-443852_1920.jpg?itok=uFbzmEPY (NFV)
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.linuxfoundation.org/blog/2018/02/open-source-standards-team-red-hat-measures-open-source-success/
+[4]: https://events.linuxfoundation.org/events/open-networking-summit-north-america-2019/
+[5]: https://www.linkedin.com/in/tom-nadeau/
+[6]: https://onseu18.sched.com/event/Fmpr
diff --git a/sources/tech/20190325 Reducing sysadmin toil with Kubernetes controllers.md b/sources/tech/20190325 Reducing sysadmin toil with Kubernetes controllers.md
new file mode 100644
index 0000000000..80ddb77264
--- /dev/null
+++ b/sources/tech/20190325 Reducing sysadmin toil with Kubernetes controllers.md
@@ -0,0 +1,166 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Reducing sysadmin toil with Kubernetes controllers)
+[#]: via: (https://opensource.com/article/19/3/reducing-sysadmin-toil-kubernetes-controllers)
+[#]: author: (Paul Czarkowski https://opensource.com/users/paulczar)
+
+Reducing sysadmin toil with Kubernetes controllers
+======
+
+Controllers can ease a sysadmin's workload by handling things like creating and managing DNS addresses and SSL certificates.
+
+![][1]
+
+Kubernetes is a platform for reducing toil cunningly disguised as a platform for running containers. The element that allows for both running containers and reducing toil is the Kubernetes concept of a **Controller**.
+
+Most resources in Kubernetes are managed by **kube-controller-manager**, or "controller" for short. A [controller][2] is defined as "a control loop that watches the shared state of a cluster … and makes changes attempting to move the current state toward the desired state." Think of it like this: A Kubernetes controller is to a microservice as a Chef recipe (or an Ansible playbook) is to a monolith.
+
+Each Kubernetes resource is controlled by its own control loop. This is a step forward from previous systems like Chef or Puppet, which both have control loops at the server level, but not the resource level. A controller is a fairly simple piece of code that creates a control loop over a single resource to ensure the resource is behaving correctly. These control loops can stack together to create complex functionality with simple interfaces.
+
+The canonical example of this in action is in how we manage Pods in Kubernetes. A Pod is effectively a running copy of an application that a specific worker node is asked to run. If that application crashes, the kubelet running on that node will start it again. However, if that node crashes, the Pod is not recovered, as the control loop (via the kubelet process) responsible for the resource no longer exists. To make applications more resilient, Kubernetes has the ReplicaSet controller.
+
+The ReplicaSet controller is bundled inside the Kubernetes **controller-manager**, which runs on the Kubernetes master node and contains the controllers for these more advanced resources. The ReplicaSet controller is responsible for ensuring that a set number of copies of your application is always running. To do this, the ReplicaSet controller requests that a given number of Pods is created. It then routinely checks that the correct number of Pods is still running and will request more Pods or destroy existing Pods to do so.
+
+By requesting a ReplicaSet from Kubernetes, you get a self-healing deployment of your application. You can further add lifecycle management to your workload by requesting [a Deployment][3], which is a controller that manages ReplicaSets and provides rolling upgrades by managing multiple versions of your application's ReplicaSets.
+
+These controllers are great for managing Kubernetes resources and fantastic for managing resources outside of Kubernetes. The [Cloud Controller Manager][4] is a grouping of Kubernetes controllers that acts on resources external to Kubernetes, specifically resources that provide functionality to Kubernetes on the underlying cloud infrastructure. This is what drives Kubernetes' ability to do things like having a **LoadBalancer** [Service][5] type create and manage a cloud-specific load-balancer (e.g., an Elastic Load Balancer on AWS).
+
+Furthermore, you can extend Kubernetes by writing a controller that watches for events and annotations and performs extra work, acting on Kubernetes resources or external resources that have some form of programmable API.
+
+To review:
+
+ * Controllers are a fundamental building block of Kubernetes' functionality.
+ * A controller forms a control loop to ensure that the state of a given resource matches the requested state.
+ * Kubernetes provides controllers via Controller Manager and Cloud Controller Manager processes that provide additional resilience and functionality.
+ * The ReplicaSet controller adds resiliency to pods by ensuring the correct number of replicas is running.
+ * A Deployment controller adds rolling upgrade capabilities to ReplicaSets.
+ * You can extend Kubernetes' functionality by writing your own controllers (a minimal sketch of the pattern follows this list).
+
+
+
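+To make the pattern concrete, here is a minimal, illustrative control loop in shell. It polls on a timer instead of using the Kubernetes watch API, and **observe_running** is a hypothetical stand-in for querying the cluster; real controllers are written against client libraries such as client-go, but the reconcile shape is the same:
+
+```
+#!/bin/sh
+# Reconcile loop sketch: compare observed state with desired state and
+# act to converge them. Purely illustrative.
+DESIRED=3
+
+observe_running() {
+    # Hypothetical stand-in for a cluster query, e.g.:
+    # kubectl get pods -l app=nginx --field-selector=status.phase=Running --no-headers | wc -l
+    echo 1
+}
+
+while true; do
+    RUNNING=$(observe_running)
+    if [ "$RUNNING" -lt "$DESIRED" ]; then
+        echo "reconcile: creating $((DESIRED - RUNNING)) replica(s)"
+    elif [ "$RUNNING" -gt "$DESIRED" ]; then
+        echo "reconcile: removing $((RUNNING - DESIRED)) replica(s)"
+    fi
+    sleep 10    # control-loop interval
+done
+```
+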
+### Controllers reduce sysadmin toil
+
+Some of the most common tickets in a sysadmin's queue are for fairly simple tasks that should be automated, but for various reasons are not. For example, creating or updating a DNS record generally requires updating a [zone file][6], but one bad entry and you can take down your entire DNS infrastructure. Or how about those tickets that look like _[SYSAD-42214] Expired SSL Certificate - Production is down_?
+
+[![DNS Haiku][7]][8]
+
+DNS haiku, image by HasturHasturHamster
+
+What if I told you that Kubernetes could manage these things for you by running some additional controllers?
+
+Imagine a world where asking Kubernetes to run applications for you would automatically create and manage DNS addresses and SSL certificates. What a world we live in!
+
+#### Example: External DNS controller
+
+The **[external-dns][9]** controller is a perfect example of Kubernetes treating operations as a microservice. You configure it with your DNS provider, and it will watch resources including Services and Ingress controllers. When one of those resources changes, it will inspect them for annotations that will tell it when it needs to perform an action.
+
+With the **external-dns** controller running in your cluster, you can add the following annotation to a service, and it will go out and create a matching [DNS A record][10] for that resource:
+```
+kubectl annotate service nginx \
+"external-dns.alpha.kubernetes.io/hostname=nginx.example.org."
+```
+You can change other characteristics, such as the DNS record's TTL value:
+```
+kubectl annotate service nginx \
+"external-dns.alpha.kubernetes.io/ttl=10"
+```
+Just like that, you now have automatic DNS management for your applications and services in Kubernetes that reacts to any changes in your cluster to ensure your DNS is correct.
+
+#### Example: Certificate manager operator
+
+Like the **external-dns** controller, the [**cert-manager**][11] will react to changes in resources, but it also comes with a custom resource definition (CRD) that will allow you to request certificates as a resource on their own, not just as a byproduct of an annotation.
+
+**cert-manager** works with [Let's Encrypt][12] and other sources of certificates to request valid, signed Transport Layer Security (TLS) certificates. You can even use it in combination with **external-dns**, like in the following example, which registers **web.example.com**, retrieves a TLS certificate from Let's Encrypt, and stores it in a Secret.
+
+```
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ annotations:
+ certmanager.k8s.io/acme-http01-edit-in-place: "true"
+ certmanager.k8s.io/cluster-issuer: letsencrypt-prod
+ kubernetes.io/tls-acme: "true"
+ name: example
+spec:
+ rules:
+ - host: web.example.com
+ http:
+ paths:
+ - backend:
+ serviceName: example
+ servicePort: 80
+ path: /*
+ tls:
+ - hosts:
+ - web.example.com
+ secretName: example-tls
+```
+
+You can also request a certificate directly from the **cert-manager** CRD, like in the following example. As in the above, it will result in a certificate key pair stored in a Kubernetes Secret:
+```
+apiVersion: certmanager.k8s.io/v1alpha1
+kind: Certificate
+metadata:
+ name: example-com
+ namespace: default
+spec:
+ secretName: example-com-tls
+ issuerRef:
+ name: letsencrypt-staging
+ commonName: example.com
+ dnsNames:
+ - www.example.com
+ acme:
+ config:
+ - http01:
+ ingressClass: nginx
+ domains:
+ - example.com
+ - http01:
+ ingress: my-ingress
+ domains:
+ - www.example.com
+```
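+
+After applying a manifest like the one above, you can follow the request with standard kubectl commands. This is a hedged example; the file name is hypothetical, and it assumes cert-manager's CRDs are installed in the cluster:
+
+```
+$ kubectl apply -f example-com-certificate.yaml
+$ kubectl describe certificate example-com -n default
+$ kubectl get secret example-com-tls -n default
+```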
+
+### Conclusion
+
+This was a quick look at one way Kubernetes is helping enable a new wave of changes in how we operate software. This is one of my favorite topics, and I look forward to sharing more on [Opensource.com][14] and my [blog][15]. I'd also like to hear how you use controllers—message me on Twitter [@pczarkowski][16].
+
+* * *
+
+_This article is based on[Cloud Native Operations - Kubernetes Controllers][17] originally published on Paul Czarkowski's blog._
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/reducing-sysadmin-toil-kubernetes-controllers
+
+作者:[Paul Czarkowski][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/paulczar
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv
+[2]: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
+[3]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
+[4]: https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/
+[5]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
+[6]: https://en.wikipedia.org/wiki/Zone_file
+[7]: https://opensource.com/sites/default/files/uploads/dns_haiku.png (DNS Haiku)
+[8]: https://www.reddit.com/r/sysadmin/comments/4oj7pv/network_solutions_haiku/
+[9]: https://github.com/kubernetes-incubator/external-dns
+[10]: https://en.wikipedia.org/wiki/List_of_DNS_record_types#Resource_records
+[11]: http://docs.cert-manager.io/en/latest/
+[12]: https://letsencrypt.org/
+[13]: http://www.example.com
+[14]: http://Opensource.com
+[15]: https://tech.paulcz.net/blog/
+[16]: https://twitter.com/pczarkowski
+[17]: https://tech.paulcz.net/blog/cloud-native-operations-k8s-controllers/
diff --git a/sources/tech/20190326 An inside look at an IIoT-powered smart factory.md b/sources/tech/20190326 An inside look at an IIoT-powered smart factory.md
new file mode 100644
index 0000000000..52c7c925dd
--- /dev/null
+++ b/sources/tech/20190326 An inside look at an IIoT-powered smart factory.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (An inside look at an IIoT-powered smart factory)
+[#]: via: (https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+An inside look at an IIoT-powered smart factory
+======
+
+### Despite housing some 50 robots and 50 people, Tempo Automation’s gleaming connected factory relies on industrial IoT and looks more like a high-tech startup office than a manufacturing plant.
+
+![Tempo Automation][1]
+
+As someone who’s spent his whole career working in offices, not factories, I had very little idea what a modern “smart factory” powered by the industrial Internet of Things (IIoT) might look like. That’s why I was so interested in [Tempo Automation][2]’s new 42,000-square-foot facility in San Francisco’s trendy Design District.
+
+Frankly, I pictured the company’s facility, which uses IIoT to automatically configure, operate, and monitor the prototyping and low-volume production of printed circuit board assemblies (PCBAs), as a cacophony of robots and conveyor belts attended to by a grizzled band of grease-stained technicians. You know, a 21st-century update of Charlie Chaplin’s 1936 classic _Modern Times_, making equipment for customers in the aerospace, medtech, industrial automation, consumer electronics, and automotive industries. (The company just inked a [new contract with Lockheed Martin][3].)
+
+**[ Learn more about the[industrial Internet of Things][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**
+
+Not exactly. As you can see from the pictures below, despite housing some 50 robots and 50 people, this gleaming “connected factory” looks more like a high-tech startup office, with just as many computers and a few more hard-to-identify machines, including Solder Jet and Stencil Printers, zone reflow ovens, 3D X-ray devices, and many more.
+
+![Tempo Automation office space][6]
+
+![Tempo Automation factory floor][7]
+
+## How Tempo Automation's 'smart factory' works
+
+On the front end, Tempo’s customers upload CAD files with their board designs and Bills of Materials (BOM) listing the required parts to be used. After performing feature extraction on the design and developing a virtual model of the finished product, the Tempo platform (called Tempocom) creates a manufacturing plan and automatically programs the factory’s machines. Tempocom also creates work plans for the factory employees, uploading them to the networked IIoT mobile devices they all carry. Updated in real time based on design and process changes, this “digital traveler” tells workers where to go and what to work on next.
+
+While Tempocom is planning and organizing the internal work of production, the system is also connected to supplier databases, seeking and ordering the parts that will be used in assembly, optimizing for speed of delivery to the Tempo factory.
+
+## Connecting the digital thread
+
+“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained [Shashank Samala][8], Tempo’s co-founder and vice president of product in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.”
+
+Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses [Amazon Web Services (AWS) GovCloud][9] to network everything in a bi-directional feedback loop.
+
+“After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. “This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production. This data is then streamed back through the Tempo secure cloud architecture to the customer as a ‘Production Forensics’ report.”
+
+Samala claimed the system has “streamlined operations, improved collaboration, and simplified remote management and control.”
+
+## Traditional IoT, too
+
+Of course, the Tempo factory isn’t all fancy, cutting-edge IIoT implementations. According to Ryan Saul, vice president of manufacturing, the plant also includes an array of IoT sensors that track temperature, humidity, equipment status, job progress, reported defects, and so on to help engineers and executives understand how the facility is operating.
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-floor-100791923-large.jpg
+[2]: http://www.tempoautomation.com/
+[3]: https://www.businesswire.com/news/home/20190325005097/en/Tempo-Automation-Announces-Contract-Lockheed-Martin
+[4]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
+[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
+[6]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-2-100791921-large.jpg
+[7]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-100791922-large.jpg
+[8]: https://www.linkedin.com/in/shashanksamala/
+[9]: https://aws.amazon.com/govcloud-us/
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190326 Bringing Kubernetes to the bare-metal edge.md b/sources/tech/20190326 Bringing Kubernetes to the bare-metal edge.md
new file mode 100644
index 0000000000..836eac23be
--- /dev/null
+++ b/sources/tech/20190326 Bringing Kubernetes to the bare-metal edge.md
@@ -0,0 +1,72 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Bringing Kubernetes to the bare-metal edge)
+[#]: via: (https://opensource.com/article/19/3/bringing-kubernetes-bare-metal-edge)
+[#]: author: (John Studarus https://opensource.com/users/studarus)
+
+Bringing Kubernetes to the bare-metal edge
+======
+New Kubespray features enable Kubernetes clusters to be deployed across
+next-generation edge locations.
+![cubes coming together to create a larger cube][1]
+
+[Kubespray][2], a community project that provides Ansible playbooks for the deployment and management of Kubernetes clusters, recently added support for the bare-metal cloud [Packet][3]. This allows Kubernetes clusters to be deployed across next-generation edge locations, including [cell-tower based micro datacenters][4].
+
+Packet, which is unique in its bare-metal focus, expands Kubespray's support beyond the usual clouds—Amazon Web Services, Google Compute Engine, Azure, OpenStack, vSphere, and Oracle Cloud Infrastructure. Kubespray removes the complexities of standing up a Kubernetes cluster through automation using Terraform and Ansible. Terraform provisions the infrastructure and installs the prerequisites for the Ansible installation. Terraform provider plugins enable support for a variety of different cloud providers. The Ansible playbook then deploys and configures Kubernetes.
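+
+As a rough sketch of that division of labor (the paths and inventory names here are illustrative; the detailed instructions linked below cover the real steps):
+
+```
+# Fetch Kubespray and start from the sample inventory
+git clone https://github.com/kubernetes-sigs/kubespray.git
+cd kubespray
+cp -r inventory/sample inventory/mycluster
+
+# Terraform provisions the Packet bare-metal servers and prerequisites
+cd contrib/terraform/packet
+terraform init
+terraform apply
+cd ../../..
+
+# The Ansible playbook then deploys and configures Kubernetes
+ansible-playbook -i inventory/mycluster/hosts cluster.yml --become
+```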
+
+Since there are already [detailed instructions online][5] for deploying with Kubespray on Packet, I'll focus on why bare-metal support is important for Kubernetes and what's required to make it happen.
+
+### Why bare metal?
+
+Historically, Kubernetes deployments relied upon the "creature comforts" of a public cloud or a fully managed private cloud to provide virtual machines and networking infrastructure for running Kubernetes. This adds a layer of abstraction (e.g., a hypervisor with virtual machines) that Kubernetes doesn't necessarily need. In fact, Kubernetes began its life on bare metal as Google's Borg.
+
+As we move workloads closer to the end user (in the form of edge computing) and deploy to more diverse environments (including hybrid and on-premises infrastructure of different architectures and sizes), relying on a homogenous public cloud substrate isn't always possible or ideal. For instance, with edge locations being resource constrained, it is more efficient and practical to run Kubernetes directly on bare metal.
+
+### Mind the gaps
+
+Without a full-featured public cloud underneath a bare-metal cluster, some traditional capabilities, such as load balancing and storage orchestration, will need to be managed directly within the Kubernetes cluster. Luckily there are projects, such as [MetalLB][6] and [Rook][7], that provide this support for Kubernetes.
+
+MetalLB, a Layer 2 and Layer 3 load balancer, is integrated into Kubespray, and it's easy to install Rook, which orchestrates Ceph to provide distributed and replicated storage, on a bare-metal cluster. In addition to enabling full functionality, this "bring your own" approach to storage and load balancing removes reliance upon specific cloud services, helping you avoid lock-in with an approach that can be installed anywhere.
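+
+As a sketch of what this "bring your own" approach looks like in practice, MetalLB can be enabled through Kubespray inventory variables before running the playbook (the variable names reflect Kubespray's MetalLB support and may differ between releases; the address range is only an example):
+
+```
+# Append MetalLB settings to the cluster's group_vars (illustrative values)
+cat >> inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml <<'EOF'
+metallb_enabled: true
+metallb_ip_range: "192.0.2.10-192.0.2.50"
+EOF
+```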
+
+Kubespray has support for ARM64 processors. The ARM architecture (which is starting to show up regularly in datacenter-grade hardware, SmartNICs, and other custom accelerators) has a long history in mobile and embedded devices, making it well-suited for edge deployments.
+
+Going forward, I hope to see deeper integration with MetalLB and Rook as well as bare-metal continuous integration (CI) of daily builds atop a number of different hardware configurations. Access to automated bare metal at Packet enables testing and maintaining support across various processor types, storage options, and networking setups. This will help ensure that Kubespray-powered Kubernetes can be deployed and managed confidently across public clouds, bare metal, and edge environments.
+
+### It takes a village
+
+Kubespray is an open source project driven by the community, indebted to its core developers and contributors as well as the folks who assisted with the Packet integration. Contributors include [Maxime Guyot][8] and [Aivars Sterns][9] for the initial commits and code reviews, [Rong Zhang][10] and [Ed Vielmetti][11] for document reviews, as well as [Tomáš Karásek][12] (who maintains the Packet Go library and Terraform provider).
+
+* * *
+
+_John Studarus will present [The Open Micro Edge Data Center][13] at the [Open Infrastructure Summit][14], April 29-May 1 in Denver._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/bringing-kubernetes-bare-metal-edge
+
+作者:[John Studarus][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/studarus
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
+[2]: https://kubespray.io/
+[3]: https://www.packet.com/
+[4]: https://twitter.com/packethost/status/1062147355108085760
+[5]: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/packet.md
+[6]: https://metallb.universe.tf/
+[7]: https://rook.io/
+[8]: https://twitter.com/Miouge
+[9]: https://github.com/Atoms
+[10]: https://github.com/riverzhang
+[11]: https://twitter.com/vielmetti
+[12]: https://t0mk.github.io/
+[13]: https://www.openstack.org/summit/denver-2019/summit-schedule/events/23153/the-open-micro-edge-data-center
+[14]: https://openstack.org/summit
diff --git a/sources/tech/20190326 Changes in SD-WAN Purchase Drivers Show Maturity of the Technology.md b/sources/tech/20190326 Changes in SD-WAN Purchase Drivers Show Maturity of the Technology.md
new file mode 100644
index 0000000000..803b6a993d
--- /dev/null
+++ b/sources/tech/20190326 Changes in SD-WAN Purchase Drivers Show Maturity of the Technology.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Changes in SD-WAN Purchase Drivers Show Maturity of the Technology)
+[#]: via: (https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all)
+[#]: author: (Cliff Grossner https://www.networkworld.com/author/Cliff-Grossner/)
+
+Changes in SD-WAN Purchase Drivers Show Maturity of the Technology
+======
+
+![istock][1]
+
+[SD-WANs][2] have been available for the past five years, but adoption has been light compared to that of the overall WAN market. This should be no surprise, as the technology was immature and customers were dipping their toes in the water first as a test. Recently, however, there have been signs that the market is maturing, coinciding with an acceleration in adoption.
+
+Evidence of the maturation of SD-WANs can be seen in the most recent IHS Markit _Campus LAN and WAN SDN Strategies and Leadership North American Enterprise Survey_. Exhibit 1 shows that the top drivers of SD-WAN deployments are the simplification of WAN provisioning, automation capabilities, and direct cloud connectivity—all of which require an architectural change.
+
+This is in stark contrast to the approach of early adopters, who sought opex and capex savings by shifting to cheap broadband and low-cost branch hardware. The survey data finds that opex savings is now tied for fifth place among the purchase drivers of SD-WAN and that reduced capex ranks last, indicating that cost savings no longer carry the level of importance they did with early adopters.
+
+The shift in purchase drivers indicates companies are looking for SD-WAN to provide more value than legacy WAN.
+
+With [SD-WAN][3], “software defined” means the control plane has been separated from the data plane and abstracted away from the hardware, allowing centralized, distributed, and hybrid control architectures, all under centralized management. This provides many benefits, the biggest of which is making WAN provisioning easier.
+
+![Exhibit 1: Simplification and automation are top drivers for SD-WAN.][4]
+
+With SD-WAN, most mainstream buyers now demand Zero Touch Provisioning, where the SD-WAN appliance automatically calls home when it attaches to the network and pulls its configuration down from a centralized location. Also, changes can be made through a centralized console and then immediately pushed out to every device. This can automate many of the mundane and repetitive tasks associated with running a network.
+
+Such a setup carries many benefits—the most important being that highly skilled network engineers can dedicate more time to innovation and less time to working on tasks associated with “keeping the lights on.”
+
+At present, most resources—time and money—associated with running the WAN are allocated to maintaining the status quo. In the cloud era, however, business leaders embracing digital transformation are looking to their IT organization to help drive innovation and leapfrog the competition. SD-WANs can modernize the network, and the technology will tip the IT resource scale back in favor of innovation.
+
+### Mainstream buyers set new expectations for SD-WAN
+
+With early adopters, technology innovation is key because adopters are generally tech-savvy buyers and are always looking to use the latest and greatest to gain an edge. With mainstream buyers, other concerns arise. Exhibit 2 from the IHS Markit survey shows that technological innovation is now tied for fourth place among the attributes buyers look for in an SD-WAN provider. While innovation is still important, factors such as security, financial stability, and product service and reliability rank higher. And although businesses need a strong technical solution, it cannot come at the expense of security, vendor stability, or quality without putting operations at risk.
+
+It’s not surprising, then, that security turned out to be the overwhelming top evaluation criterion, as SD-WANs enable businesses to implement local internet breakout and cloud on-ramp features. Overall, SD-WANs help make applications perform better, especially as enterprises deploy workloads in off-premises, cloud-service-provider-operated data centers as they build their hybrid and multi-clouds.
+
+Another security capability of SD-WANs is their ability to easily implement segmentation, which enables businesses to establish centrally defined and globally consistent security policies that isolate traffic. For example, a retailer could isolate point-of-sale systems from its guest Wi-Fi network. [SD-WAN vendors][5] can also establish partnerships with well-known security vendors that enable the SD-WAN software to be service chained into application traffic flows, in the process allowing mainstream buyers their choice of security technology.
+
+![Exhibit 2: SD-WAN buyers now want security and financially viable vendors.][6]
+
+### The bottom line
+
+The SD-WAN market is maturing, and the shift from early adopters to mainstream businesses will create a “rising tide” that will benefit all SD-WAN buyers in the WAN ecosystem. As a result, vendors will work to meet demands for greater simplicity and risk reduction, as well as deliver features that provide an integrated connectivity fabric for enterprise edge, hybrid, and multi-clouds.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all
+
+作者:[Cliff Grossner][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Cliff-Grossner/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/istock-998475736-100791932-large.jpg
+[2]: https://www.silver-peak.com/sd-wan
+[3]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[4]: https://images.idgesg.net/images/article/2019/03/chart-1_post-10-100791930-large.jpg
+[5]: https://www.silver-peak.com/sd-wan/choosing-an-sd-wan-vendor
+[6]: https://images.idgesg.net/images/article/2019/03/chart-2_post-10-100791931-large.jpg
diff --git a/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md b/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md
new file mode 100644
index 0000000000..37c14fec39
--- /dev/null
+++ b/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md
@@ -0,0 +1,229 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to use NetBSD on a Raspberry Pi)
+[#]: via: (https://opensource.com/article/19/3/netbsd-raspberry-pi)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+How to use NetBSD on a Raspberry Pi
+======
+
+Experiment with NetBSD, an open source OS with direct lineage back to the original UNIX source code, on your Raspberry Pi.
+
+![][1]
+
+Do you have an old Raspberry Pi lying around gathering dust, maybe after a recent Pi upgrade? Are you curious about [BSD Unix][2]? If you answered "yes" to both of these questions, you'll be pleased to know that the first is the solution to the second, because you can run [NetBSD][3], as far back as the very first release, on a Raspberry Pi.
+
+BSD is the Berkeley Software Distribution of [Unix][4]. In fact, it's the only open source Unix with direct lineage back to the original source code written by Dennis Ritchie and Ken Thompson at Bell Labs. Other modern versions are either proprietary (such as AIX and Solaris) or clever re-implementations (such as Minix and GNU/Linux). If you're used to Linux, you'll feel mostly right at home with BSD, but there are plenty of new commands and conventions to discover. If you're still relatively new to open source, trying BSD is a good way to experience a traditional Unix.
+
+Admittedly, NetBSD isn't an operating system that's perfectly suited for the Pi. It's a minimal install compared to many Linux distributions designed specifically for the Pi, and not all components of recent Pi models are functional under NetBSD yet. However, it's arguably an ideal OS for the older Pi models, since it's lightweight and lovingly maintained. And if nothing else, it's a lot of fun for any die-hard Unix geek to experience another side of the [POSIX][5] world.
+
+### Download NetBSD
+
+There are different versions of BSD. NetBSD has cultivated a reputation for being lightweight and versatile (its website features the tagline "Of course it runs NetBSD"). It offers an image of the latest version of the OS for every version of the Raspberry Pi since the original. To download a version for your Pi, you must first [determine what variant of the ARM architecture your Pi uses][6]. Some information about this is available on the NetBSD site, but for a comprehensive overview, you can also refer to [RPi Hardware History][7].
+
+The Pi I used for this article is, as far as I can tell, a Raspberry Pi Model B Rev 2.0 (with two USB ports and no mounting holes). According to the [Raspberry Pi FAQ][8], this means the architecture is ARMv6, which translates to **earmv6hf** in NetBSD's architecture notation.
+
+![NetBSD on Raspberry Pi][9]
+
+If you're not sure what kind of Pi you have, the good news is that there are only two Pi images, so try **earmv7hf** first; if it doesn't work, fall back to **earmv6hf**.
+
+For the easiest and quickest install, use the binary image instead of an installer. Using the image is the most common method of getting an OS onto your Pi: you copy the image to your SD card and boot it up. There's no install necessary, because the image is a generic installation of the OS, and you've just copied it, bit for bit, onto the media that the Pi uses as its boot drive.
+
+The image files are found in the **binary > gzimg** directories of the NetBSD installation media server, which you can reach from the [front page][3] of NetBSD.org. The image is **rpi.img.gz** , a compressed **.img** file. Download it to your hard drive.
+
+Once you have downloaded the entire image, extract it. If you're running Linux, BSD, or MacOS, you can use the **gunzip** command:
+
+```
+$ gunzip ~/Downloads/rpi.img.gz
+```
+
+If you're working on Windows, you can install the open source [7-Zip][10] archive utility.
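+
+For example, once 7-Zip is installed, you can extract the image from a command prompt (assuming **7z.exe** is on your PATH):
+
+```
+7z x rpi.img.gz
+```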
+
+### Copy the image to your SD card
+
+Once the image file is uncompressed, you must copy it to your Pi's SD card. There are two ways to do this, so use the one that works best for you.
+
+#### 1\. Using Etcher
+
+Etcher is a cross-platform application specifically designed to copy OS images to USB drives and SD cards. Download it from [Etcher.io][11] and launch it.
+
+In the Etcher interface, select the image file on your hard drive and the SD card you want to flash, then click the Flash button.
+
+![Etcher][12]
+
+That's it.
+
+#### 2\. Using the dd command
+
+On Linux, BSD, or MacOS, you can use the **dd** command to copy the image to your SD card.
+
+ 1. First, insert your SD card into a card reader. Don't mount the card to your system because **dd** needs the device to be disengaged to copy data onto it.
+
+ 2. Run **dmesg | tail** to find out where the card is located without it being mounted. On MacOS, use **diskutil list**.
+
+ 3. Copy the image file to the SD card:
+
+```
+$ sudo dd if=~/Downloads/rpi.img of=/dev/mmcblk0 bs=2M status=progress
+```
+
+Before doing this, you _must be sure_ you have the correct location of the SD card. If you copy the image file to the incorrect device, you could lose data. If you are at all unsure about this, use Etcher instead!
+
+
+
+
+When either **dd** or Etcher has written the image to the SD card, place the card in your Pi and power it on.
+
+### First boot
+
+The first time it's booted, NetBSD detects that the SD card's filesystem does not occupy all the free space available and resizes the filesystem accordingly.
+
+![Booting NetBSD on Raspberry Pi][13]
+
+Once that's finished, the Pi reboots and presents a login prompt. Log into your NetBSD system using **root** as the user name. No password is required.
+
+### Set up a user account
+
+First, set a password for the root user:
+
+```
+# passwd
+```
+
+Then create a user account for yourself with the **-m** option to prompt NetBSD to create a home directory and the **-G wheel** option to add your account to the wheel group so that you can become the administrative user (root) as needed:
+
+```
+# useradd -m -G wheel seth
+```
+
+Use the **passwd** command again to set a password for your user account:
+
+```
+# passwd seth
+```
+
+Log out, and then log back in with your new credentials.
+
+### Add software to NetBSD
+
+If you've ever used a Pi, you probably know that the way to add more software to your system is with a special command like **apt** or **dnf** (depending on whether you prefer to run [Raspbian][14] or [FedBerry][15] on your Pi). On NetBSD, use the **pkg_add** command. But some setup is required before the command knows where to go to get the packages you want to install.
+
+There are ready-made (pre-compiled) packages for NetBSD on NetBSD's servers using the scheme **[ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/[PORT]/[VERSION]/All][16]**. Replace PORT with the architecture you are using, either **earmv6hf** or **earmv7hf**. Replace VERSION with the NetBSD release you are using; at the time of this writing, that's **8.0**.
+
+Place this value in a file called **/etc/pkg_install.conf**. Since that's a system file outside your user folder, you must invoke root privileges to create it:
+
+```
+$ su -
+
+# echo "PKG_PATH=" >> /etc/pkg_install.conf
+```
+
+Now you can install packages from the NetBSD software distribution. A good first candidate is Bash, commonly the default shell on a Linux (and Mac) system. Also, if you're not already a Vi text editor user, you may want to try something more intuitive such as [Jove][17] or [Nano][18]:
+
+```
+# pkg_add -v bash jove nano
+# exit
+$
+```
+
+Unlike many Linux distributions ([Slackware][19] being a notable exception), NetBSD does very little configuration on your behalf, and this is considered a feature. So, to use Bash, Jove, or Nano as your default toolset, you must set the configuration yourself.
+
+You can set many of your preferences dynamically using environment variables, which are special variables that your whole system can access. For instance, most applications in Unix know that if there is a **VISUAL** or **EDITOR** variable set, the value of those variables should be used as the default text editor. You can set these two variables temporarily, just for your current login session:
+
+```
+$ export EDITOR=nano
+$ export VISUAL=nano
+```
+
+Or you can make them permanent by adding them to the default NetBSD **.profile** file:
+
+```
+$ sed -i 's/EDITOR=vi/EDITOR=nano/' ~/.profile
+```
+
+Load your new settings:
+
+```
+$ . ~/.profile
+```
+
+To make Bash your default shell, use the **chsh** (change shell) command, which now loads into your preferred editor. Before running **chsh** , though, make sure you know where Bash is located:
+
+```
+$ which bash
+/usr/pkg/bin/bash
+```
+
+Set the value for **shell** in the **chsh** entry to **/usr/pkg/bin/bash** , then save the document.
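+
+For reference, the entry that **chsh** presents looks something like this (a sketch; the exact fields NetBSD shows may vary):
+
+```
+# Changing user database information for seth.
+Shell: /usr/pkg/bin/bash
+```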
+
+### Add sudo
+
+The **pkg_add** command is a privileged command, which means to use it, you must become the root user with the **su** command. If you prefer, you can also set up the **sudo** command, which allows certain users to use their own password to execute administrative tasks.
+
+First, install it:
+
+```
+# pkg_add -v sudo
+```
+
+Then edit the **sudo** configuration file with the **visudo** command. The configuration must always be edited through **visudo**, and it must be run as root:
+
+```
+$ su
+# SUDO_EDITOR=nano visudo
+```
+
+Once you are in the editor, find the line allowing members of the wheel group to execute any command, and uncomment it (by removing **#** from the beginning of the line):
+
+```
+### Uncomment to allow members of group wheel to execute any command
+%wheel ALL=(ALL) ALL
+```
+
+Save the document as described in Nano's bottom menu panel and exit the root shell.
+
+Now you can use **pkg_add** with **sudo** instead of becoming root:
+
+```
+$ sudo pkg_add -v fluxbox
+```
+
+### Net gain
+
+NetBSD is a full-featured Unix operating system, and now that you have it set up on your Pi, you can explore every nook and cranny. It happens to be a pretty lightweight OS, so even an old Pi with a 700MHz processor and 256MB of RAM can run it with ease. If this article has sparked your interest and you have an old Pi sitting in a drawer somewhere, try it out!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/netbsd-raspberry-pi
+
+作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82
+[2]: https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
+[3]: http://netbsd.org/
+[4]: https://en.wikipedia.org/wiki/Unix
+[5]: https://en.wikipedia.org/wiki/POSIX
+[6]: http://wiki.netbsd.org/ports/evbarm/raspberry_pi
+[7]: https://elinux.org/RPi_HardwareHistory
+[8]: https://www.raspberrypi.org/documentation/faqs/
+[9]: https://opensource.com/sites/default/files/uploads/pi.jpg (NetBSD on Raspberry Pi)
+[10]: https://www.7-zip.org/
+[11]: https://www.balena.io/etcher/
+[12]: https://opensource.com/sites/default/files/uploads/etcher_0.png (Etcher)
+[13]: https://opensource.com/sites/default/files/uploads/boot.png (Booting NetBSD on Raspberry Pi)
+[14]: http://raspbian.org/
+[15]: http://fedberry.org/
+[16]: ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/%5BPORT%5D/%5BVERSION%5D/All%3E
+[17]: https://opensource.com/article/17/1/jove-lightweight-alternative-vim
+[18]: https://www.nano-editor.org/
+[19]: http://www.slackware.com/
diff --git a/sources/tech/20190326 Today-s Retailer is Turning to the Edge for CX.md b/sources/tech/20190326 Today-s Retailer is Turning to the Edge for CX.md
new file mode 100644
index 0000000000..babc54c0f7
--- /dev/null
+++ b/sources/tech/20190326 Today-s Retailer is Turning to the Edge for CX.md
@@ -0,0 +1,52 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Today’s Retailer is Turning to the Edge for CX)
+[#]: via: (https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all)
+[#]: author: (Cindy Waxer https://www.networkworld.com/author/Cindy-Waxer/)
+
+Today’s Retailer is Turning to the Edge for CX
+======
+
+### Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the U.S. Census.
+
+![iStock][1]
+
+Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the [U.S. Census][2]. That’s putting enormous pressure on retailers to meet new consumer expectations around real-time access to merchandise and order information. In fact, 85.3% of shoppers expect retailers to provide associates with handheld or fixed devices to check inventory and price within a store, a nearly 51% increase over 2017, according to a [survey from SOTI][3].
+
+With an eye on transforming the customer experience of spending time in a store, retailers are investing aggressively in compute power located closer to the buyer, also known as [edge computing][4].
+
+So what new and innovative technologies are edge environments supporting? Here’s where retail is headed with customer service and how edge computing will help them get there.
+
+**Face forward** : Facial recognition technology is on the rise in retail as brands search for new ways to engage customers. Take CaliBurger, for example. The restaurant chain recently tested out self-ordering kiosks that use AI and facial-recognition technology to identify registered customers and pull up their loyalty accounts and order preferences. By automatically displaying a customer’s most popular purchases, the system aims to help patrons complete their orders in seconds flat for greater speed and convenience.
+
+**Customer experience on display** : Forget about traditional counter displays. Savvy retailers are experimenting with high-tech, in-store digital signage solutions to attract consumers and gather valuable data. For instance, Glass Media’s projection-based, end-to-end digital retail signage combines display technology, a cloud-based IoT platform, and data analytic capabilities. Through projection, the solution can influence customers at the point-of-decision.
+
+**Backroom access** : Tracking inventory manually requires substantial human resources. IoT-powered backroom technologies such as RFID, real-time point of sale (POS), and smart shelving systems promise to change that by improving the accuracy of inventory tracking throughout the supply chain. These automated solutions can track and reorder items automatically, eliminating the need for humans to take inventory and reducing the risk of product shortages.
+
+**Robots to the rescue** : Hoping to transform the branch experience, HSBC recently unveiled Pepper, a concierge robot whose job is to help customers with simple tasks, from answering commonly asked questions to directing them to available tellers. Pepper also acts as an online banking station where customers can log into their mobile banking account or access information about products. By putting Pepper on the payroll, HSBC hopes to reduce customer wait times and free up its “human” bankers.
+
+These innovative technologies provide retailers with unique opportunities to enhance customer experience, develop new revenue streams, and boost customer loyalty. But many of them require edge computing to work properly. Bandwidth-intensive content and vast volumes of data can lead to latency issues, outages, and other IT headaches. Fortunately, by placing computing power and storage capabilities directly on the edge of the network, edge computing can help retailers deliver the best customer experience possible.
+
+To find out more about how edge computing is transforming the customer experience in retail, visit [APC.com][5].
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all
+
+作者:[Cindy Waxer][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Cindy-Waxer/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/istock-508154656-100791924-large.jpg
+[2]: https://ycharts.com/indicators/ecommerce_sales_as_percent_retail_sales
+[3]: https://www.soti.net/resources/newsroom/2019/annual-connected-retailer-survey-new-soti-survey-reveals-us-consumers-prefer-speed-and-convenience-when-shopping-with-limited-human-interaction/
+[4]: https://www.hpe.com/us/en/servers/edgeline-iot-systems.html?pp=false&jumpid=ps_83cqske5um_aid-510380402&gclid=CjwKCAjw6djYBRB8EiwAoAF6oWwk-M6LWcfCbbZ331fXhEHShXGbLWoSwTIzue6mxQg4gDvYx59XZxoC_4oQAvD_BwE&gclsrc=aw.ds
+[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
diff --git a/sources/tech/20190327 Cisco forms VC firm looking to weaponize fledgling technology companies.md b/sources/tech/20190327 Cisco forms VC firm looking to weaponize fledgling technology companies.md
new file mode 100644
index 0000000000..2a0dde5fb3
--- /dev/null
+++ b/sources/tech/20190327 Cisco forms VC firm looking to weaponize fledgling technology companies.md
@@ -0,0 +1,66 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cisco forms VC firm looking to weaponize fledgling technology companies)
+[#]: via: (https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Cisco forms VC firm looking to weaponize fledgling technology companies
+======
+
+### Decibel, an investment firm focused on early stage funding for enterprise-product startups, will back technologies related to Cisco's core interests.
+
+![BrianaJackson / Getty][1]
+
+Cisco this week stepped deeper into the venture capital world by announcing Decibel, an early-stage investment firm that will focus on bringing enterprise-oriented startups to market.
+
+Veteran VC groundbreaker and former general partner at New Enterprise Associates [Jon Sakoda][2] will lead Decibel. Sakoda had been with NEA since 2006 and focused on startup investments in software and Internet companies.
+
+**[ Now see [7 free network tools you must have][3]. ]**
+
+Of Decibel, Sakoda said: “We want to invest in companies that are helping our customers use innovation as a weapon in the game to transform their respective industries.”
+
+“Decibel combines the speed, agility, and independent risk-taking traditionally found in the best VC firms, while offering differentiated access to the scale, entrepreneurial talent, and deep customer relationships found in one of the largest tech companies in the world,” [Sakoda said][4]. “This approach is an industry first and provides a unique way for entrepreneurs to get access to unparalleled resources at a time and stage when they need it most.”
+
+“As one of the most prolific strategic venture capitalists in the world, Cisco already has a view into future technologies shaping our markets through our rich portfolio of companies,” wrote Rob Salvagno, vice president of Corporate Development and Cisco Investments in a [blog about Decibel][5]. “But we realized we could do even more by engaging with the startup community earlier in its lifecycle.”
+
+Indeed, Cisco already has an investment arm, Cisco Investments, that focuses on later-stage startups, the company says. Cisco said this arm invests $200 to $300 million annually, and it will continue its charter of investing and partnering with best-in-class companies in core and adjacent markets.
+
+Cisco didn’t talk about how much money would be involved in Decibel, but according to a [CNBC report][6], Cisco is setting up Decibel as an independent firm with a separate pool of cash, an unusual model for corporate investors. The fund hasn’t closed yet, but a [Securities and Exchange Commission filing][7] from October indicated that Sakoda was setting out to [raise $500 million][8], CNBC wrote.
+
+**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][9] ]**
+
+Decibel plans to invest anywhere from $5 million to $15 million in each startup in its portfolio, Cisco says.
+
+“Cisco has a culture of leveraging both internal and external innovation – accelerating our rich internal development capabilities by our ability to also partner, invest and acquire,” Salvagno said.
+
+He said the company recognizes that significant innovation happens outside of the walls of Cisco. Cisco has acquired more than 200 companies, and more than one in eight Cisco employees joined as a result of an acquisition. "We have a deep bench of acquired founders, many of which play leadership roles within the company today, which continues to reinforce this entrepreneurial spirit," Salvagno said.
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/money_salary_magnet_flying-money_money-magnet-by-brianajackson-getty-100787974-large.jpg
+[2]: https://twitter.com/jonsakoda
+[3]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
+[4]: https://www.decibel.vc/the-blast/announcingdecibel
+[5]: https://blogs.cisco.com/news/cisco-fuels-innovation-engine-with-investment-in-new-early-stage-vc-fund
+[6]: https://www.cnbc.com/2019/03/26/cisco-introduces-decibel-an-early-stage-venture-firm-with-jon-sakoda.html
+[7]: https://www.sec.gov/Archives/edgar/data/1754260/000175426018000002/xslFormDX01/primary_doc.xml
+[8]: https://www.cnbc.com/2018/10/08/cisco-lead-investor-jon-sakoda-catalyst-labs-500-million.html
+[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190327 How to make a Raspberry Pi gamepad.md b/sources/tech/20190327 How to make a Raspberry Pi gamepad.md
new file mode 100644
index 0000000000..694c09d4c9
--- /dev/null
+++ b/sources/tech/20190327 How to make a Raspberry Pi gamepad.md
@@ -0,0 +1,235 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to make a Raspberry Pi gamepad)
+[#]: via: (https://opensource.com/article/19/3/gamepad-raspberry-pi)
+[#]: author: (Leon Anavi https://opensource.com/users/leon-anavi)
+
+How to make a Raspberry Pi gamepad
+======
+
+This DIY retro video game controller for the Raspberry Pi is fun and not difficult to build but requires some time.
+
+![Raspberry Pi Gamepad device][1]
+
+From time to time, I get nostalgic about the video games I played during my childhood in the late '80s and the '90s. Although most of my old computers and game consoles are long gone, my Raspberry Pi can still deliver my retro-gaming fix. I enjoy the simple games included in Raspbian, and the open source RetroPie project helped me turn my Raspberry Pi into an advanced retro-gaming machine.
+
+But, for a more authentic experience, like back in the "old days," I needed a gamepad. There are a lot of options on the market for USB gamepads and joysticks, but as an open source enthusiast, maker, and engineer, I prefer doing it the hard way. So, I made my own simple open source hardware gamepad, which I named the [ANAVI Play pHAT][2]. I designed it as an add-on board for Raspberry Pi using an [EEPROM][3] and a devicetree binary overlay I created for mapping the keys.
+
+### Get the gamepad buttons and EEPROM
+
+There are a huge variety of gamepads available for purchase, and some of them are really complex. However, it's not hard to make a gamepad similar to the iconic NES controller using the design I created.
+
+The gamepad uses eight "momentary" buttons (i.e., switches that are active only while they're pushed): four tactile (tact) switches for movement (Up, Down, Left, Right), two tact buttons for A and B, and two smaller tact buttons for Select and Start. I used [through-hole][4] tact switches: six 6x6x4.3mm switches for movement and the A and B buttons, and two 3x6x4.3mm switches for the Start and Select buttons.
+
+While the gamepad's primary purpose is to play retro games, the add-on board is large enough to include home-automation features, such as monitoring temperature, humidity, light, or barometric pressure, that you can use when you're not playing games. I added three slots for attaching [I2C][5] sensors to the primary I2C bus on physical pins 3 and 5.
+
+The most interesting and important part of the hardware design is the EEPROM (electrically erasable programmable read-only memory). A through-hole mounted EEPROM is easier to flash on a breadboard and solder to the gamepad. An article in the [MagPi magazine][6] recommends CAT24C32 EEPROM; if that model isn't available, try to find a model with similar technical specifications. All Raspberry Pi models and versions released after 2014 (Raspberry Pi B+ and newer) have a secondary I2C bus on physical pins 27 and 28.
+
+Once you have this hardware, use a breadboard to check that it works.
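+
+One quick way to confirm the EEPROM is wired correctly on the breadboard is to scan the bus with **i2cdetect** from the i2c-tools package (a sketch; the bus number depends on your wiring, and a CAT24C32 typically answers at address 0x50):
+
+```
+sudo apt-get install -y i2c-tools
+sudo i2cdetect -y 1
+```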
+
+### Create the printed circuit board
+
+The next step is to create a printed circuit board (PCB) design and have it manufactured. As an open source enthusiast, I believe that free and open source software should be used for creating open source hardware. I rely on [KiCad][7], electronic design automation (EDA) software available under the GPLv3+ license. KiCad works on Windows, MacOS, and GNU/Linux. (I use KiCad version 5 on Ubuntu 18.04.)
+
+KiCad allows you to create PCBs with up to 32 copper layers plus 14 fixed-purpose technical layers. It also has an integrated 3D viewer. It's actively developed, including many contributions by CERN developers, and used for industrial applications; for example, Olimex uses KiCad to design complex PCBs with multiple layers, like the one in its [TERES-I][8] DIY open source hardware laptop.
+
+The KiCad workflow includes three major steps:
+
+ * Designing the schematics in the schematic layout editor
+ * Drawing the edge cuts, placing the components, and routing the tracks in the PCB layout editor
+ * Exporting Gerber and drill files for manufacture
+
+
+
+If you haven't designed PCBs before, keep in mind there is a steep learning curve. Go through the [examples and user's guides][9] provided by KiCad to learn how to work with the schematic and the PCB layout editor. (If you are not in the mood to do everything from scratch, you can just clone the ANAVI Play pHAT project in my [GitHub repository][10].)
+
+![KiCad schematic][11]
+
+In KiCad's schematic layout editor, connect the Raspberry Pi's GPIOs to the buttons, the slots for sensors to the primary I2C, and the EEPROM to the secondary I2C. Assign an appropriate footprint to each component. Perform an electrical rule check and, if there are no errors, generate the [netlist][12], which describes an electronic circuit's connectivity.
+
+Open the PCB layout editor. It contains several layers. Read the netlist. All components and tracks must be on the front and bottom copper layers (F.Cu and B.Cu), and the board's form must be created in the Edge.Cuts layer. Any text, including button labels, must be on the silkscreen layers.
+
+![Printable circuit board design][13]
+
+Finally, export the Gerber and drill files that you'll send to the company that will produce your PCB. The Gerber format is the de facto industry standard for PCBs. It is an open ASCII vector format for 2D binary images; simply explained, it is like a PDF for PCB manufacturing.
+
+There are numerous companies that can make a simple two-layer board like the gamepad's. For a few prototypes, you can count on [OSHPark in the US][14] or [Aisler in Europe][15]. There are also a lot of Chinese manufacturers, such as JLCPCB, PCBWay, ALLPCB, Seeed Studio, and many more. Alternatively, if you prefer to skip the hassle of PCB manufacturing and sourcing components, you can order the [ANAVI Play pHAT maker kit from Crowd Supply][2] and solder all the through-hole components on your own.
+
+### Understanding devicetree
+
+[Devicetree][16] is a specification for a software data structure that describes the hardware components. Its purpose is to allow the compiled Linux kernel to handle a variety of different hardware configurations within a wider architecture family. The bootloader loads the devicetree into memory and passes it to the Linux kernel.
+
+The devicetree includes three components:
+
+ * Devicetree source (DTS)
+ * Devicetree blob (DTB) and overlay (DTBO)
+ * Devicetree compiler (DTC)
+
+
+
+The DTC creates binaries from a textual source. Devicetree overlays allow a central DTB to be overlaid on the devicetree. Overlays include a number of fragments.
+
+For several years, a devicetree has been required for all new ARM systems on a chip (SoCs), including Broadcom SoCs in all Raspberry Pi models and versions. With the default bootloader in Raspberry Pi's popular Raspbian distribution, DTO can be set in the configuration file ( **config.txt** ) on the FAT partition of a bootable microSD card using the keyword **device_tree=**.
+
+Since 2014, the Raspberry Pi's pin header has been extended to 40 pins. Pins 27 and 28 are dedicated for a secondary I2C bus. This way, the DTBO can be automatically loaded from an EEPROM attached to these pins. Furthermore, additional system information can be saved in the EEPROM. This feature is among the Raspberry Pi Foundation's requirements for any Raspberry Pi HAT (hardware attached on top) add-on board. On Raspbian and other GNU/Linux distributions for Raspberry Pi, the information from the EEPROM can be seen from userspace at **/proc/device-tree/hat/** after booting.
+
+In my opinion, the devicetree is one of the most fascinating features added in the Linux ecosystem over the past decade. Creating devicetree blobs and overlays is an advanced task and requires some background knowledge. However, it's possible to create a devicetree binary overlay for the Raspberry Pi add-on board and flash it on an appropriate EEPROM. The devicetree binary overlay defines the Linux key codes for each key of the gamepad. The result is a gamepad for Raspberry Pi with keys that work as soon as you boot Raspbian.
+
+#### Creating the DTBO
+
+There are three major steps to create a devicetree binary overlay for the gamepad:
+
+ * Creating the devicetree source with mapping for the keys based on the Linux key codes
+ * Compiling the devicetree binary overlay using the devicetree compiler
+ * Creating an **.eep** file and flashing it on an EEPROM using the open source tools provided by the Raspberry Pi Foundation
+
+
+
+Linux key codes are defined in the file **/usr/include/linux/input-event-codes.h**. The device source file should describe which Raspberry Pi GPIO pin is connected to which hardware button and which Linux key code should be triggered when the button is pressed. In this gamepad, GPIO17 (pin 11) is connected to the tactile button for Right, GPIO4 (pin 7) to Left, GPIO22 (pin 15) to Up, GPIO27 (pin 13) to Down, GPIO5 (pin 29) to Start, GPIO6 (pin 31) to Select, GPIO19 (pin 35) to A, and GPIO26 (pin 37) to B.
+
+Please note there is a difference between the GPIO numbers and the physical position of the pin on the header. For convenience, all pins are located on the second row of the Raspberry Pi's 40-pin header. This approach makes it easier to route the printed circuit board in KiCad.
+
+The entire devicetree source for the gamepad is [available on GitHub][17]. As an example, the following is a short code snippet that demonstrates how GPIO17, corresponding to physical pin 11 on the Raspberry Pi, is mapped to the tact button for Right:
+
+```
+button@17 {
+label = "right";
+linux,code = <106>;
+gpios = <&gpio 17 1>;
+};
+```
+
+To compile the DTS directly on the Raspberry Pi, install the devicetree compiler on Raspbian by executing the following commands in the terminal:
+
+```
+sudo apt-get update
+sudo apt-get install device-tree-compiler
+```
+
+Run DTC and provide as arguments the name of the output DTBO and the path to the source file. For example:
+
+```
+dtc -I dts -O dtb -o anavi-play-phat.dtbo anavi-play-phat.dts
+```
+
+The Raspberry Pi Foundation provides a [GitHub repository with the mechanical, hardware, and software specifications for HATs][18]. It also includes three very convenient tools:
+
+ * **eepmake:** Creates an **.eep** file from a text file with settings
+ * **eepdump:** Useful for debugging, as it dumps a binary **.eep** file as human-readable text
+ * **eepflash:** Writes or reads an **.eep** binary image to/from an EEPROM
+
+
+
+The **eeprom_settings.txt** file can be used as a template. [The Raspberry Pi Foundation][19] and [MagPi magazine][6] have helpful articles and tutorials, so I won't go into too many details. As I wrote above, the recommended EEPROM is CAT24C32, but it can be replaced with any other EEPROM with the same technical specifications. Using an EEPROM with an eight-pin, through-hole, dual in-line (DIP) package is easier for hobbyists to flash because it can be done with a breadboard. The following example command creates a file ready to be flashed on the EEPROM using the **eepmake** tool from the Raspberry Pi GitHub repository:
+
+```
+./eepmake settings.txt settings.eep anavi-play-phat.dtbo
+```
+
+Before proceeding with flashing, ensure that the EEPROM is connected properly to the primary I2C bus (pins 3 and 5) on the Raspberry Pi. (You can consult the MagPi magazine article linked above for a discussion on wiring schematics.) Then run the following command and follow the onscreen instructions to flash the **.eep** file on the EEPROM:
+
+```
+sudo ./eepflash.sh -w -f=settings.eep -t=24c32
+```
+
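+To double-check the write, you can read the EEPROM contents back and dump them as human-readable text with the same tools (a sketch; **-r** reads instead of writes, and the file names are arbitrary):
+
+```
+sudo ./eepflash.sh -r -f=readback.eep -t=24c32
+./eepdump readback.eep readback.txt
+```
+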
+Before soldering the EEPROM to the printed circuit board, move it to the secondary I2C bus on the breadboard and test it to ensure it works as expected. If you detect any issues while testing the EEPROM on the breadboard, correct the settings files, move it back to the primary I2C bus, and flash it again.
+
+### Testing the gamepad
+
+Now comes the fun part! It is time to test the add-on board using Raspbian, which you can [download][20] from RaspberryPi.org. After booting, open a terminal and enter the following commands:
+
+```
+cat /proc/device-tree/hat/product
+cat /proc/device-tree/hat/vendor
+```
+
+The output should be similar to this:
+
+![Testing output][21]
+
+If it is, congratulations! The data from the EEPROM has been read successfully.
+
+The next step is to verify that the keys on the Play pHAT are set properly and working. In a terminal or a text editor, press each of the eight buttons and verify they are acting as configured.
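+
+If you prefer to watch the raw input events while you test, the **evtest** utility can help (assuming the evtest package is available in Raspbian; run it and select the gamepad's event device from the list it prints):
+
+```
+sudo apt-get install -y evtest
+sudo evtest
+```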
+
+Finally, it is time to play games! By default, Raspbian's desktop includes [Python Games][22]. Launch them from the application menu. Make an audio output selection and pick a game from the list. My favorite is Wormy, a Snake-like game. As a former Symbian mobile application developer, I find playing Wormy brings back memories of the glorious days of Nokia.
+
+### Retro gaming with RetroPie
+
+![RetroPie with the Play pHAT][23]
+
+Raspbian is amazing, but [RetroPie][24] offers so much more for retro games fans. It is a GNU/Linux distribution optimized for playing retro games and combines the open source projects RetroArch and Emulation Station. It's available for Raspberry Pi, the [Odroid][25] C1/C2, and personal computers running Debian or Ubuntu. It provides emulators for loading ROMs—the digital versions of game cartridges. Keep in mind that no ROMs are included in RetroPie due to copyright issues. You will have to [find appropriate ROMs and copy them][26] to the Raspberry Pi after booting RetroPie.
+
+The open source hardware gamepad works fine in RetroPie's menus, but I discovered that the keys fail after launching some games and emulators. After debugging, I found a solution to ensuring they work in the game emulators: add a Python script for additional software emulation of the keys. [The script is available on GitHub.][27] Here's how to get it and install Python on RetroPie:
+
+```
+sudo apt-get update
+sudo apt-get install -y python-pip
+sudo pip install evdev
+cd ~
+git clone https://github.com/AnaviTechnology/anavi-examples.git
+```
+
+Finally, add the following line to **/etc/rc.local** so it will be executed automatically when RetroPie boots:
+
+```
+sudo python /home/pi/anavi-examples/anavi-play-phat/anavi-play-gamepad.py &
+```
+
+That's it! After following these steps, you can create an entirely open source hardware gamepad as an add-on board for any Raspberry Pi model with a 40-pin header and use it with Raspbian and RetroPie!
+
+### What's next?
+
+Combining free and open source software with open source hardware is fun and not difficult, but it requires a significant amount of time. After creating the open source hardware gamepad in my spare time, I ran a modest crowdfunding campaign at [Crowd Supply][2] for low-volume manufacturing in my hometown in Plovdiv, Bulgaria. [The Open Source Hardware Association][28] certified the ANAVI Play pHAT as an open source hardware project under [BG000007][29]. Even [the acrylic enclosures][30] that protect the board from dust are open source hardware created with the free and open source software OpenSCAD.
+
+![Game pad in acrylic enclosure][31]
+
+If you enjoyed reading this article, I encourage you to try creating your own open source hardware add-on board for Raspberry Pi with KiCad. If you don't have enough spare time, you can order an [ANAVI Play pHAT maker kit][2], grab your soldering iron, and assemble the through-hole components. If you're not comfortable with the soldering iron, you can just order a fully assembled version.
+
+Happy retro gaming everybody! Next time someone irritably asks what you can learn from playing vintage computer games, tell them about Raspberry Pi, open source hardware, Linux, and devicetree.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/gamepad-raspberry-pi
+
+作者:[Leon Anavi][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/leon-anavi
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/gamepad_raspberrypi_hardware.jpg?itok=W16gOnay (Raspberry Pi Gamepad device)
+[2]: https://www.crowdsupply.com/anavi-technology/anavi-play-phat
+[3]: https://en.wikipedia.org/wiki/EEPROM
+[4]: https://en.wikipedia.org/wiki/Through-hole_technology
+[5]: https://en.wikipedia.org/wiki/I%C2%B2C
+[6]: https://www.raspberrypi.org/magpi/make-your-own-hat/
+[7]: http://kicad-pcb.org/
+[8]: https://www.olimex.com/Products/DIY-Laptop/
+[9]: http://kicad-pcb.org/help/getting-started/
+[10]: https://github.com/AnaviTechnology/anavi-play-phat
+[11]: https://opensource.com/sites/default/files/uploads/kicad-schematic.png (KiCad schematic)
+[12]: https://en.wikipedia.org/wiki/Netlist
+[13]: https://opensource.com/sites/default/files/uploads/circuitboard.png (Printable circuit board design)
+[14]: https://oshpark.com/
+[15]: https://aisler.net/
+[16]: https://www.devicetree.org/
+[17]: https://github.com/AnaviTechnology/hats/blob/anavi/eepromutils/anavi-play-phat.dts
+[18]: https://github.com/raspberrypi/hats
+[19]: https://www.raspberrypi.org/blog/introducing-raspberry-pi-hats/
+[20]: https://www.raspberrypi.org/downloads/
+[21]: https://opensource.com/sites/default/files/uploads/testing-output.png (Testing output)
+[22]: https://www.raspberrypi.org/documentation/usage/python-games/
+[23]: https://opensource.com/sites/default/files/uploads/retropie.jpg (RetroPie with the Play pHAT)
+[24]: https://retropie.org.uk/
+[25]: https://www.hardkernel.com/product-category/odroid-board/
+[26]: https://opensource.com/article/19/1/retropie
+[27]: https://github.com/AnaviTechnology/anavi-examples/blob/master/anavi-play-phat/anavi-play-gamepad.py
+[28]: https://www.oshwa.org/
+[29]: https://certification.oshwa.org/bg000007.html
+[30]: https://github.com/AnaviTechnology/anavi-cases/tree/master/anavi-play-phat
+[31]: https://opensource.com/sites/default/files/uploads/gamepad-acrylic.jpg (Game pad in acrylic enclosure)
diff --git a/sources/tech/20190327 Identifying exceptional user experience (UX) in IoT platforms.md b/sources/tech/20190327 Identifying exceptional user experience (UX) in IoT platforms.md
new file mode 100644
index 0000000000..f7c49381f4
--- /dev/null
+++ b/sources/tech/20190327 Identifying exceptional user experience (UX) in IoT platforms.md
@@ -0,0 +1,126 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Identifying exceptional user experience (UX) in IoT platforms)
+[#]: via: (https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all)
+[#]: author: (Steven Hilton https://www.networkworld.com/author/Steven-Hilton/)
+
+Identifying exceptional user experience (UX) in IoT platforms
+======
+
+### Examples of excellent IoT platform UX from the perspectives of 5 typical IoT platform personas.
+
+![Leo Wolfert / Getty Images][1]
+
+Enterprises are inundated with information about IoT platforms’ features and capabilities. But to find a long-lived IoT platform that minimizes ongoing development costs, enterprises must focus on exceptional user experience (UX) for 5 types of IoT platform users.
+
+Marketing and sales literature from IoT platform vendors is filled with information about IoT platform features. And no doubt, enterprises choosing to buy IoT platform services need to understand the actual capabilities of IoT platforms – preferably by [testing a variety of IoT platforms][2] – before making a purchase decision.
+
+However, it is a lot harder to gauge the quality of an IoT platform UX than itemizing an IoT platform’s features. Having excellent UX leads to lower platform deployment and management costs and higher customer satisfaction and retention. So enterprises should make UX one of their top criteria when selecting an IoT platform.
+
+[RELATED: Storage tank operator turns to IoT for energy savings][3]
+
+One of the ways to determine excellent IoT platform UX is to simulate the tasks conducted by typical IoT platform users. By completing these tasks, it becomes readily apparent when an IoT platform is exceptional or annoyingly bad.
+
+In this blog, I describe excellent IoT platform UX from the perspectives of five typical IoT platform users or personas.
+
+## Persona 1: platform administrator
+
+A platform administrator’s primary role is to configure, monitor, and maintain the functionality of an IoT platform. A platform administrator is typically an IT employee responsible for maintaining and configuring the various data management, device management, access control, external integration, and monitoring services that comprise an IoT platform.
+
+Typical platform administrator tasks include:
+
+ * configuration of the on-platform data visualization and data aggregation tools
+ * configuration of available device management functionality or execution of in-bulk device management tasks
+ * configuration and creation of on-platform complex event processing (CEP) workflows
+ * management and configuration of platform service orchestration
+
+
+
+Enterprises should pick IoT platforms with superlative access to on-platform configuration functionality with an emphasis on declarative interfaces for configuration management. Although many platform administrators are capable of working with RESTful API endpoints, good UX design should not require that platform administrators use third-party tools to automate basic functionality or execute bulk tasks. Some programmatic interfaces, such as SQL syntax for limiting monitoring views or dashboards for setting event processing trigger criteria, are acceptable and expected, although a fully declarative solution that maintains similar functionality is preferred.
+
+## Persona 2: platform operator
+
+A platform operator’s primary role is to leverage an IoT platform to execute common day-to-day business-centric operations and services. While the responsibilities of a platform operator will vary based on enterprise vertical and use case, all platform operators conduct business tasks rather than IoT domain tasks.
+
+Typical platform operator tasks include:
+
+ * visualizing and aggregating on-platform data to view business KPIs
+ * using device management functionality on a per-device basis
+ * creating, managing, and monitoring per-device and per-location event processing rules
+ * executing self-service administrative tasks, such as enrolling downstream operators
+
+
+
+Enterprises should pick IoT platforms centered on excellent ease-of-use for a business user. In general, the UX should be focused on providing information immediately required for the execution of day-to-day operational tasks while removing more complex functionality. These platforms should have easy access to well-defined and well-constrained operational functions or data visualization. An effective UX should enable easy creation and modification of data views, graphs, dashboards, and other visualizations by allowing operators to select devices using a declarative interface rather than SQL or other programmatic interfaces.
+
+## Persona 3: hardware and systems developer
+
+A hardware and systems developer’s primary role is the integration and configuration of IoT assets into an IoT platform. The hardware and systems developer possesses very specific, detailed knowledge about IoT hardware (e.g., specific microcontroller units, embedded platforms, or PLC/SCADA control systems), and leverages this knowledge to enable protocol and asset compatibility with northbound platform services.
+
+Typical hardware and systems developer tasks include:
+
+ * designing and implementing firmware for IoT assets based on either standardized IoT SDKs or platform-specific SDKs
+ * updating firmware or software packages over deployment lifecycles
+ * integrating manufacturer-specific protocol adapters into either IoT assets or the northbound platform
+
+
+
+Enterprises should pick IoT platforms that allow hardware and systems developers to most efficiently design and implement low-level device and protocol functionality. An effective developer experience provides well-documented and fully featured SDKs supporting a variety of languages and device architectures to enable integration with various types of IoT hardware.
+
+## Persona 4: platform and backend developer
+
+A platform and backend developer’s primary role is to execute customer-specific application logic and integrations within an IoT deployment. Customer-specific logic may include on-platform or on-edge custom applications, such as those used for analytics, data aggregation and normalization, or any type of event processing workflow. In addition, a platform and backend developer is responsible for integrating the IoT platform with external databases, analytic solutions, or business systems such as MES, ERP, or CRM applications.
+
+Typical platform and backend developer tasks include:
+
+ * integrating streaming data from the IoT platform into external systems and applications
+ * configuring inbound and outbound platform actions and interactions with external systems
+ * configuring complex code-based event processing capabilities beyond the scope of a platform administrator’s knowledge or ability
+ * debugging low-level platform functionalities that require coding to detect or resolve
+
+
+
+Enterprises should pick IoT platforms that provide access to well-documented, full-featured platform-level SDKs for application or service development. A best-in-class platform UX should provide real-time logging tools, debugging tools, and indexed and searchable access to all platform logs. Finally, a platform and backend developer is particularly dependent upon high-quality, platform-level documentation, especially for platform APIs.
+
+## Persona 5: user interface and experience (UI/UX) developer
+
+A UI/UX developer’s primary role is to design the various operator interfaces and monitoring views for an IoT platform. In more complex IoT deployments, various operator audiences will need to be addressed, including solution domain experts such as a factory manager; role-specific experts such as an equipment operator or factory technician; and business experts such as a supply-chain analyst or company executive.
+
+Typical UI/UX developer tasks include:
+
+ * building and maintaining customer-specific dashboards and monitoring views on either the IoT platform or edge devices
+ * designing, implementing, and maintaining various operator consoles for a variety of operator audiences and customer-specific use cases
+ * ensuring good user experience for customers over the lifetime of an IoT implementation
+
+
+
+Enterprises should pick IoT platforms that provide an exceptional variety and quality of UI/UX tools, such as dashboarding frameworks for on-platform monitoring solutions that are declaratively or programmatically customizable, as well as various widget and display blocks to help the developer rapidly implement customer-specific views. An IoT platform must also provide a UI/UX developer with appropriate debugging and logging tools for monitoring and operator console frameworks and platform APIs. Finally, a best-in-class platform should provide a sample dashboard, operator console, and on-edge monitoring implementation in order to enable the UI/UX developer to quickly become familiar with platform paradigms and best practices.
+
+Enterprises should make UX one of their top criteria when selecting an IoT platform. Having excellent UX allows enterprises to minimize platform deployment and management costs. At the same time, excellent UX allows enterprises to more readily launch new solutions to market, thereby increasing customer satisfaction and retention.
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][4]**
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all
+
+作者:[Steven Hilton][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Steven-Hilton/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_by_leowolfert_gettyimages-689799380_2400x1600-100788464-large.jpg
+[2]: https://www.machnation.com/2018/09/25/announcing-mit-e-2-0-hands-on-benchmarking-for-iot-cloud-edge-and-analytics-platforms/
+[3]: https://www.networkworld.com/article/3169384/internet-of-things/storage-tank-operator-turns-to-iot-for-energy-savings.html#tk.nww-fsb
+[4]: /contributor-network/signup.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190327 IoT roundup- Keeping an eye on energy use and Volkswagen teams with AWS.md b/sources/tech/20190327 IoT roundup- Keeping an eye on energy use and Volkswagen teams with AWS.md
new file mode 100644
index 0000000000..016c5151fb
--- /dev/null
+++ b/sources/tech/20190327 IoT roundup- Keeping an eye on energy use and Volkswagen teams with AWS.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS)
+[#]: via: (https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS
+======
+
+### This week's roundup features new tech from MIT, big news in the automotive sector and a handy new level of centralization from a smaller IoT-focused company.
+
+![Getty Images][1]
+
+Much of what’s exciting about IoT technology has to do with getting data from a huge variety of sources into one place so it can be mined for insight, but the sensors used to gather that data are frequently legacy devices from the early days of industrial automation or cheap, lightweight, SoC-based gadgets without a lot of sophistication of their own.
+
+Researchers at MIT have devised a system that can gather a certain slice of data from unsophisticated devices that are grouped on the same electrical circuit without adding sensors to each device.
+
+**[ Check out our[corporate guide to addressing IoT security][2]. ]**
+
+The technology is called non-intrusive load monitoring (NILM). It sits directly on the electrical circuits of a given building, vehicle, or other piece of infrastructure; identifies devices based on their power usage; and sends alerts when there are irregularities.
+
+It seems likely to make IIoT-related waves once it’s out of testing and onto the market.
+
+NILM was recently tested, said MIT’s news service, on a U.S. Coast Guard cutter based in Boston, where it was attached to the outside of an electrical wire “at a single point, without requiring any cutting or splicing of wires.”
+
+Two such connections allowed the scientists to monitor roughly 20 separate devices on an electrical circuit, and the system was able to detect an anomalous amount of energy use from a component of the ship’s diesel engines known as a jacket water heater.
+
+“[C]rewmembers were skeptical about the reading but went to check it anyway. The heaters are hidden under protective metal covers, but as soon as the cover was removed from the suspect device, smoke came pouring out, and severe corrosion and broken insulation were clearly revealed,” the MIT report stated. Two other important but slightly less critical faults were also detected by the system.
+
+It’s easy to see why NILM could prove to be an attractive technology for IIoT use in the future. It sounds as though it’s very simple to install, can operate without any kind of Internet connection (though most implementers will probably want to connect it to a wider monitoring setup for a more holistic picture of their systems) and does all of its computational work locally. It can even be used for general energy audits. What, in short, is not to like?
+
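+For illustration only, here is a toy Go sketch of the alerting idea, with invented numbers and none of MIT's actual signal processing: flag a device whose power draw strays too far from its baseline.
+
+```
+package main
+
+import "fmt"
+
+// anomalous reports whether a reading falls outside a tolerance band
+// around the device's baseline draw. Real NILM first disaggregates
+// individual devices from a shared circuit; that step is omitted here.
+func anomalous(baselineWatts, readingWatts, tolerance float64) bool {
+    diff := readingWatts - baselineWatts
+    if diff < 0 {
+        diff = -diff
+    }
+    return diff > baselineWatts*tolerance
+}
+
+func main() {
+    // A heater that nominally draws 1000 W but now pulls 1400 W breaks
+    // a 25% tolerance band and triggers an alert.
+    if anomalous(1000, 1400, 0.25) {
+        fmt.Println("alert: irregular power draw on monitored device")
+    }
+}
+```
+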
+**Volkswagen teams up with Amazon**
+
+AWS has got a new flagship client for its growing IoT services in the form of the Volkswagen Group, which [announced][3] that AWS is going to design and build the Volkswagen Industrial Cloud, a floor-to-ceiling industrial IoT implementation aimed at improving uptime, flexibility, productivity and vehicle quality.
+
+Real-time data from all 122 of VW’s manufacturing plants around the world will be available to the system; everything from part tracking to comparative analysis of efficiency to even deeper forms of analytics will take place in the company’s “data lake,” as the announcement calls it. Oh, and machine learning is part of it, too.
+
+The German carmaker clearly believes that AWS’s technology can provide a lot of help to its operations across the board, [even in the wake of a partnership with Microsoft for Azure-based cloud services announced last year.][4]
+
+**IoT-in-a-box**
+
+IoT can be very complicated. While individual components of any given implementation are often quite simple, each implementation usually contains a host of technologies that have to work in close concert. That means a lot of orchestration work has to go into making this stuff work.
+
+Enter Digi International, which rolled out an IoT-in-a-box package called Digi Foundations earlier this month. The idea is to take a lot of the logistical legwork out of IoT implementations by integrating cloud-connection software and edge-computing capabilities into the company’s core industrial router business. Foundations, which is packaged as a software subscription that adds these capabilities and more to the company’s devices, also includes a built-in management layer, allowing for simplified configuration and monitoring.
+
+OK, so it’s not quite all-in-one, but it’s still an impressive level of integration, particularly from a company that many might not have heard of before. It’s also a potential bellwether for other smaller firms upping their technical sophistication in the IoT sector.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home7-100768495-large.jpg
+[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
+[3]: https://www.volkswagen-newsroom.com/en/press-releases/volkswagen-and-amazon-web-services-to-develop-industrial-cloud-4780
+[4]: https://www.volkswagenag.com/en/news/2018/09/volkswagen-and-microsoft-announce-strategic-partnership.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190327 Standardizing WASI- A system interface to run WebAssembly outside the web.md b/sources/tech/20190327 Standardizing WASI- A system interface to run WebAssembly outside the web.md
new file mode 100644
index 0000000000..e473614955
--- /dev/null
+++ b/sources/tech/20190327 Standardizing WASI- A system interface to run WebAssembly outside the web.md
@@ -0,0 +1,347 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Standardizing WASI: A system interface to run WebAssembly outside the web)
+[#]: via: (https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/)
+[#]: author: (Lin Clark https://twitter.com/linclark)
+
+Standardizing WASI: A system interface to run WebAssembly outside the web
+======
+
+Today, we announce the start of a new standardization effort — WASI, the WebAssembly system interface.
+
+**Why:** Developers are starting to push WebAssembly beyond the browser, because it provides a fast, scalable, secure way to run the same code across all machines.
+
+But we don’t yet have a solid foundation to build upon. Code outside of a browser needs a way to talk to the system — a system interface. And the WebAssembly platform doesn’t have that yet.
+
+**What:** WebAssembly is an assembly language for a conceptual machine, not a physical one. This is why it can be run across a variety of different machine architectures.
+
+Just as WebAssembly is an assembly language for a conceptual machine, WebAssembly needs a system interface for a conceptual operating system, not any single operating system. This way, it can be run across all different OSs.
+
+This is what WASI is — a system interface for the WebAssembly platform.
+
+We aim to create a system interface that will be a true companion to WebAssembly and last the test of time. This means upholding the key principles of WebAssembly — portability and security.
+
+**Who:** We are chartering a WebAssembly subgroup to focus on standardizing [WASI][1]. We’ve already gathered interested partners, and are looking for more to join.
+
+Here are some of the reasons that we, our partners, and our supporters think this is important:
+
+### Sean White, Chief R&D Officer of Mozilla
+
+“WebAssembly is already transforming the way the web brings new kinds of compelling content to people and empowers developers and creators to do their best work on the web. Up to now that’s been through browsers, but with WASI we can deliver the benefits of WebAssembly and the web to more users, more places, on more devices, and as part of more experiences.”
+
+### Tyler McMullen, CTO of Fastly
+
+“We are taking WebAssembly beyond the browser, as a platform for fast, safe execution of code in our edge cloud. Despite the differences in environment between our edge and browsers, WASI means WebAssembly developers won’t have to port their code to each different platform.”
+
+### Myles Borins, Node Technical Steering Committee director
+
+“WebAssembly could solve one of the biggest problems in Node — how to get close-to-native speeds and reuse code written in other languages like C and C++ like you can with native modules, while still remaining portable and secure. Standardizing this system interface is the first step towards making that happen.”
+
+### Laurie Voss, co-founder of npm
+
+“npm is tremendously excited by the potential WebAssembly holds to expand the capabilities of the npm ecosystem while hugely simplifying the process of getting native code to run in server-side JavaScript applications. We look forward to the results of this process.”
+
+So that’s the big news! 🎉
+
+There are currently 3 implementations of WASI:
+
+
++ [wasmtime](https://github.com/CraneStation/wasmtime), Mozilla’s WebAssembly runtime
++ [Lucet](https://www.fastly.com/blog/announcing-lucet-fastly-native-webassembly-compiler-runtime), Fastly’s WebAssembly runtime
++ [a browser polyfill](https://wasi.dev/polyfill/)
+
+
+You can see WASI in action in this video:
+
+
+
+And if you want to learn more about our proposal for how this system interface should work, keep reading.
+
+### What’s a system interface?
+
+Many people talk about languages like C giving you direct access to system resources. But that’s not quite true.
+
+These languages don’t have direct access to do things like open or create files on most systems. Why not?
+
+Because these system resources — such as files, memory, and network connections — are too important for stability and security.
+
+If one program unintentionally messes up the resources of another, then it could crash the program. Even worse, if a program (or user) intentionally messes with the resources of another, it could steal sensitive data.
+
+[![A frowning terminal window indicating a crash, and a file with a broken lock indicating a data leak][2]][3]
+
+So we need a way to control which programs and users can access which resources. People figured this out pretty early on, and came up with a way to provide this control: protection ring security.
+
+With protection ring security, the operating system basically puts a protective barrier around the system’s resources. This is the kernel. The kernel is the only thing that gets to do operations like creating a new file or opening a file or opening a network connection.
+
+The user’s programs run outside of this kernel in something called user mode. If a program wants to do anything like open a file, it has to ask the kernel to open the file for it.
+
+[![A file directory structure on the left, with a protective barrier in the middle containing the operating system kernel, and an application knocking for access on the right][4]][5]
+
+This is where the concept of the system call comes in. When a program needs to ask the kernel to do one of these things, it asks using a system call. This gives the kernel a chance to figure out which user is asking. Then it can see if that user has access to the file before opening it.
+
+On most devices, this is the only way that your code can access the system’s resources — through system calls.
+
+[![An application asking the operating system to put data into an open file][6]][7]
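+
+For illustration, here is a minimal Go sketch (Linux-only, using the golang.org/x/sys/unix package; the file path is arbitrary) showing that even a trivial file read is really two polite requests to the kernel:
+
+```
+package main
+
+import (
+    "fmt"
+
+    "golang.org/x/sys/unix"
+)
+
+func main() {
+    // open(2): ask the kernel to open the file. The kernel checks our
+    // permissions before handing back a file descriptor.
+    fd, err := unix.Open("/etc/hostname", unix.O_RDONLY, 0)
+    if err != nil {
+        fmt.Println("kernel said no:", err) // e.g. EACCES or ENOENT
+        return
+    }
+    defer unix.Close(fd)
+
+    // read(2): another system call; user code never touches the disk itself.
+    buf := make([]byte, 64)
+    n, err := unix.Read(fd, buf)
+    if err != nil {
+        fmt.Println("read failed:", err)
+        return
+    }
+    fmt.Printf("kernel copied %d bytes into our buffer\n", n)
+}
+```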
+
+The operating system makes the system calls available. But if each operating system has its own system calls, wouldn’t you need a different version of the code for each operating system? Fortunately, you don’t.
+
+How is this problem solved? Abstraction.
+
+Most languages provide a standard library. While coding, the programmer doesn’t need to know what system they are targeting. They just use the interface.
+
+Then, when compiling, your toolchain picks which implementation of the interface to use based on what system you’re targeting. This implementation uses functions from the operating system’s API, so it’s specific to the system.
+
+This is where the system interface comes in. For example, `printf` being compiled for a Windows machine could use the Windows API to interact with the machine. If it’s being compiled for Mac or Linux, it will use POSIX instead.
+
+[![The interface for putc being translated into two different implementations, one implemented using POSIX and one implemented using Windows APIs][8]][9]
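+
+A hedged Go sketch of the same pattern follows: two files carrying build tags, so the toolchain compiles whichever implementation matches the target OS. The putLine helper and file names are invented, but Go's own standard library selects its per-OS syscall code the same way.
+
+```
+// putline_unix.go (only built for non-Windows targets)
+//go:build !windows
+
+package main
+
+import "golang.org/x/sys/unix"
+
+// putLine reaches the kernel through the POSIX write(2) system call.
+func putLine(s string) { unix.Write(1, []byte(s+"\n")) }
+```
+
+```
+// putline_windows.go (only built for Windows targets)
+//go:build windows
+
+package main
+
+import "golang.org/x/sys/windows"
+
+// putLine reaches the kernel through the Windows WriteFile API instead.
+func putLine(s string) {
+    h, _ := windows.GetStdHandle(windows.STD_OUTPUT_HANDLE) // errors dropped for brevity
+    var n uint32
+    windows.WriteFile(h, []byte(s+"\r\n"), &n, nil)
+}
+```
+
+A shared main.go can simply call putLine("hello"); the build swaps the implementation underneath without the caller changing at all.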
+
+This poses a problem for WebAssembly, though.
+
+With WebAssembly, you don’t know what kind of operating system you’re targeting even when you’re compiling. So you can’t use any single OS’s system interface inside the WebAssembly implementation of the standard library.
+
+[![an empty implementation of putc][10]][11]
+
+I’ve talked before about how WebAssembly is [an assembly language for a conceptual machine][12], not a real machine. In the same way, WebAssembly needs a system interface for a conceptual operating system, not a real operating system.
+
+But there are already runtimes that can run WebAssembly outside the browser, even without having this system interface in place. How do they do it? Let’s take a look.
+
+### How is WebAssembly running outside the browser today?
+
+The first tool for producing WebAssembly was Emscripten. It emulates a particular OS system interface, POSIX, on the web. This means that the programmer can use functions from the C standard library (libc).
+
+To do this, Emscripten created its own implementation of libc. This implementation was split in two — part was compiled into the WebAssembly module, and the other part was implemented in JS glue code. This JS glue would then call into the browser, which would then talk to the OS.
+
+[![A Rube Goldberg machine showing how a call goes from a WebAssembly module, into Emscripten's JS glue code, into the browser, into the kernel][13]][14]
+
+Most of the early WebAssembly code was compiled with Emscripten. So when people started wanting to run WebAssembly without a browser, they started by making Emscripten-compiled code run.
+
+So these runtimes needed to create their own implementations for all of these functions that were in the JS glue code.
+
+There’s a problem here, though. The interface provided by this JS glue code wasn’t designed to be a standard, or even a public facing interface. That wasn’t the problem it was solving.
+
+For example, for a function that would be called something like `read` in an API that was designed to be a public interface, the JS glue code instead uses `_system3(which, varargs)`.
+
+[![A clean interface for read, vs a confusing one for system3][15]][16]
+
+The first parameter, `which`, is an integer which is always the same as the number in the name (so 3 in this case).
+
+The second parameter, `varargs`, holds the arguments to use. It’s called `varargs` because you can have a variable number of them. But WebAssembly doesn’t provide a way to pass in a variable number of arguments to a function. So instead, the arguments are passed in via linear memory. This isn’t type safe, and it’s also slower than it would be if the arguments could be passed in using registers.
+
+That was fine for Emscripten running in the browser. But now runtimes are treating this as a de facto standard, implementing their own versions of the JS glue code. They are emulating an internal detail of an emulation layer of POSIX.
+
+This means they are re-implementing choices (like passing arguments in as heap values) that made sense based on Emscripten’s constraints, even though these constraints don’t apply in their environments.
+
+[![A more convoluted Rube Goldberg machine, with the JS glue and browser being emulated by a WebAssembly runtime][17]][18]
+
+If we’re going to build a WebAssembly ecosystem that lasts for decades, we need solid foundations. This means our de facto standard can’t be an emulation of an emulation.
+
+But what principles should we apply?
+
+### What principles does a WebAssembly system interface need to uphold?
+
+There are two important principles that are baked into WebAssembly:
+
+ * portability
+ * security
+
+
+
+We need to maintain these key principles as we move to outside-the-browser use cases.
+
+As it is, POSIX and Unix’s Access Control approach to security don’t quite get us there. Let’s look at where they fall short.
+
+### Portability
+
+POSIX provides source code portability. You can compile the same source code with different versions of libc to target different machines.
+
+[![One C source file being compiled to multiple binaries][19]][20]
+
+But WebAssembly needs to go one step beyond this. We need to be able to compile once and run across a whole bunch of different machines. We need portable binaries.
+
+[![One C source file being compiled to a single binary][21]][22]
+
+This kind of portability makes it much easier to distribute code to users.
+
+For example, if Node’s native modules were written in WebAssembly, then users wouldn’t need to run node-gyp when they install apps with native modules, and developers wouldn’t need to configure and distribute dozens of binaries.
+
+### Security
+
+When a line of code asks the operating system to do some input or output, the OS needs to determine if it is safe to do what the code asks.
+
+Operating systems typically handle this with access control that is based on ownership and groups.
+
+For example, the program might ask the OS to open a file. A user has a certain set of files that they have access to.
+
+When the user starts the program, the program runs on behalf of that user. If the user has access to the file — either because they are the owner or because they are in a group with access — then the program has that same access, too.
+
+[![An application asking to open a file that is relevant to what it's doing][23]][24]
+
+This protects users from each other. That made a lot of sense when early operating systems were developed. Systems were often multi-user, and administrators controlled what software was installed. So the most prominent threat was other users taking a peek at your files.
+
+That has changed. Systems now are usually single user, but they are running code that pulls in lots of other, third-party code of unknown trustworthiness. Now the biggest threat is that the code that you yourself are running will turn against you.
+
+For example, let’s say that the library you’re using in an application gets a new maintainer (as often happens in open source). That maintainer might have your interest at heart… or they might be one of the bad guys. And if they have access to do anything on your system — for example, open any of your files and send them over the network — then their code can do a lot of damage.
+
+[![An evil application asking for access to the users bitcoin wallet and opening up a network connection][25]][26]
+
+This is why using third-party libraries that can talk directly to the system can be dangerous.
+
+WebAssembly’s way of doing security is different. WebAssembly is sandboxed.
+
+This means that code can’t talk directly to the OS. But then how does it do anything with system resources? The host (which might be a browser, or might be a wasm runtime) puts functions in the sandbox that the code can use.
+
+This means that the host can limit what a program can do on a program-by-program basis. It doesn’t just let the program act on behalf of the user, calling any system call with the user’s full permissions.
+
+Just having a mechanism for sandboxing doesn’t make a system secure in and of itself — the host can still put all of the capabilities into the sandbox, in which case we’re no better off — but it at least gives hosts the option of creating a more secure system.
+
+[![A runtime placing safe functions into the sandbox with an application][27]][28]
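+
+As a toy illustration (plain Go, not a real wasm host; every name here is invented), the host hands sandboxed code a struct of capabilities and simply omits anything it does not trust the code with:
+
+```
+package main
+
+import "fmt"
+
+// Imports is the set of functions the host chooses to place in the sandbox.
+type Imports struct {
+    Log      func(msg string)                  // harmless, so always granted
+    ReadFile func(name string) ([]byte, error) // powerful, granted selectively
+}
+
+// runGuest stands in for sandboxed code: it can only call what it was given.
+func runGuest(imp Imports) {
+    imp.Log("hello from the sandbox")
+    if imp.ReadFile == nil {
+        imp.Log("no file capability was granted, so no file access, period")
+    }
+}
+
+func main() {
+    // Program-by-program policy: this guest gets logging but nothing else.
+    runGuest(Imports{
+        Log: func(msg string) { fmt.Println("[guest]", msg) },
+    })
+}
+```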
+
+In any system interface we design, we need to uphold these two principles. Portability makes it easier to develop and distribute software, and providing the tools for hosts to secure themselves or their users is an absolute must.
+
+### What should this system interface look like?
+
+Given those two key principles, what should the design of the WebAssembly system interface be?
+
+That’s what we’ll figure out through the standardization process. We do have a proposal to start with, though:
+
+ * Create a modular set of standard interfaces
+ * Start with standardizing the most fundamental module, wasi-core
+
+
+
+[![Multiple modules encased in the WASI standards effort][29]][30]
+
+What will be in wasi-core?
+
+wasi-core will contain the basics that all programs need. It will cover much of the same ground as POSIX, including things such as files, network connections, clocks, and random numbers.
+
+And it will take a very similar approach to POSIX for many of these things. For example, it will use POSIX’s file-oriented approach, where you have system calls such as open, close, read, and write and everything else basically provides augmentations on top.
+
+But wasi-core won’t cover everything that POSIX does. For example, the process concept does not map clearly onto WebAssembly. And beyond that, it doesn’t make sense to say that every WebAssembly engine needs to support process operations like `fork`. But we also want to make it possible to standardize `fork`.
+
+This is where the modular approach comes in. This way, we can get good standardization coverage while still allowing niche platforms to use only the parts of WASI that make sense for them.
+
+[![Modules filled in with possible areas for standardization, such as processes, sensors, 3D graphics, etc][31]][32]
+
+Languages like Rust will use wasi-core directly in their standard libraries. For example, Rust’s `open` is implemented by calling `__wasi_path_open` when it’s compiled to WebAssembly.
+
+For C and C++, we’ve created a [wasi-sysroot][33] that implements libc in terms of wasi-core functions.
+
+[![The Rust and C implementations of openat with WASI][34]][35]
+
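+As a concrete taste of what a standard library implemented in terms of wasi-core feels like from a high-level language, here is a minimal Go sketch. Note the hedge: Go only gained its wasip1 port in Go 1.21, years after this article, so the build line is a present-day assumption rather than part of the proposal.
+
+```
+// Build: GOOS=wasip1 GOARCH=wasm go build -o demo.wasm
+// Run:   wasmtime --dir=. demo.wasm   (--dir preopens the directory)
+package main
+
+import (
+    "fmt"
+    "os"
+)
+
+func main() {
+    // Compiled for wasip1, this lowers to wasi-core style calls such as
+    // path_open, fd_write, and fd_close instead of native system calls.
+    if err := os.WriteFile("hello.txt", []byte("hello, WASI\n"), 0o644); err != nil {
+        fmt.Println("write failed; was this directory preopened?", err)
+        return
+    }
+    fmt.Println("wrote hello.txt through the WASI system interface")
+}
+```
+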
+We expect compilers like Clang to be ready to interface with the WASI API, and complete toolchains like the Rust compiler and Emscripten to use WASI as part of their system implementations.
+
+How does the user’s code call these WASI functions?
+
+The runtime that is running the code passes the wasi-core functions in as imports.
+
+[![A runtime placing an imports object into the sandbox][36]][37]
+
+This gives us portability, because each host can have its own implementation of wasi-core that is specifically written for its platform — from WebAssembly runtimes like Mozilla’s wasmtime and Fastly’s Lucet, to Node, or even the browser.
+
+It also gives us sandboxing because the host can choose which wasi-core functions to pass in — so, which system calls to allow — on a program-by-program basis. This preserves security.
+
+[![Three runtimes—wasmtime, Node, and the browser—passing their own implementations of wasi_fd_open into the sandbox][39]][40]
+
+WASI gives us a way to extend this security even further. It brings in more concepts from capability-based security.
+
+Traditionally, if code needs to open a file, it calls `open` with a string, which is the path name. Then the OS does a check to see if the code has permission (based on the user who started the program).
+
+With WASI, if you’re calling a function that needs to access a file, you have to pass in a file descriptor, which has permissions attached to it. This could be for the file itself, or for a directory that contains the file.
+
+This way, you can’t have code that randomly asks to open `/etc/passwd`. Instead, the code can only operate on the directories that are passed in to it.
+
+[![Two evil apps in sandboxes. The one on the left is using POSIX and succeeds at opening a file it shouldn't have access to. The other is using WASI and can't open the file.][41]][42]
+
+This makes it possible to safely give sandboxed code more access to different system calls — because the capabilities of these system calls can be limited.
+
+And this happens on a module-by-module basis. By default, a module doesn’t have any access to file descriptors. But if code in one module has a file descriptor, it can choose to pass that file descriptor to functions it calls in other modules. Or it can create more limited versions of the file descriptor to pass to the other functions.
+
+So the runtime passes in the file descriptors that an app can use to the top level code, and then file descriptors get propagated through the rest of the system on an as-needed basis.
+
+[![The runtime passing a directory to the app, and then then app passing a file to a function][43]][44]
+
+This gets WebAssembly closer to the principle of least privilege, where a module can only access the exact resources it needs to do its job.
+
+These concepts come from capability-oriented systems, like CloudABI and Capsicum. One problem with capability-oriented systems is that it is often hard to port code to them. But we think this problem can be solved.
+
+If code already uses `openat` with relative file paths, compiling the code will just work.
+
+If code uses `open` and migrating to the `openat` style is too much up-front investment, WASI can provide an incremental solution. With [libpreopen][45], you can create a list of file paths that the application legitimately needs access to. Then you can use `open`, but only with those paths.
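+
+Here is a hedged Go approximation of that openat discipline (Linux, golang.org/x/sys/unix; the paths are invented). It shows only the shape of the idea: a plain Linux openat still accepts absolute paths, whereas a WASI host actually enforces that nothing outside the preopened directory can even be named.
+
+```
+package main
+
+import (
+    "fmt"
+
+    "golang.org/x/sys/unix"
+)
+
+func main() {
+    // The "capability": a descriptor for one directory, much as a WASI
+    // runtime would preopen it for a module.
+    dirfd, err := unix.Open("/srv/appdata", unix.O_RDONLY|unix.O_DIRECTORY, 0)
+    if err != nil {
+        fmt.Println("no capability, no access:", err)
+        return
+    }
+    defer unix.Close(dirfd)
+
+    // Relative paths resolve against dirfd, so well-behaved code only
+    // touches files under the directory it was handed.
+    fd, err := unix.Openat(dirfd, "notes.txt", unix.O_RDONLY, 0)
+    if err != nil {
+        fmt.Println("open failed:", err)
+        return
+    }
+    defer unix.Close(fd)
+    fmt.Println("opened notes.txt relative to the granted directory")
+}
+```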
+
+### What’s next?
+
+We think wasi-core is a good start. It preserves WebAssembly’s portability and security, providing a solid foundation for an ecosystem.
+
+But there are still questions we’ll need to address after wasi-core is fully standardized. Those questions include:
+
+ * asynchronous I/O
+ * file watching
+ * file locking
+
+
+
+This is just the beginning, so if you have ideas for how to solve these problems, [join us][1]!
+
+--------------------------------------------------------------------------------
+
+via: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/
+
+作者:[Lin Clark][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twitter.com/linclark
+[b]: https://github.com/lujun9972
+[1]: https://wasi.dev/
+[2]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-01_crash-data-leak-1-500x220.png
+[3]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-01_crash-data-leak-1.png
+[4]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-02-protection-ring-sec-1-500x298.png
+[5]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-02-protection-ring-sec-1.png
+[6]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-03-syscall-1-500x227.png
+[7]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-03-syscall-1.png
+[8]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/02-01-implementations-1-500x267.png
+[9]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/02-01-implementations-1.png
+[10]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/02-02-implementations-1-500x260.png
+[11]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/02-02-implementations-1.png
+[12]: https://hacks.mozilla.org/2017/02/creating-and-working-with-webassembly-modules/
+[13]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-01-emscripten-1-500x329.png
+[14]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-01-emscripten-1.png
+[15]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-02-system3-1-500x179.png
+[16]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-02-system3-1.png
+[17]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-03-emulation-1-500x341.png
+[18]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-03-emulation-1.png
+[19]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-01-portability-1-500x375.png
+[20]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-01-portability-1.png
+[21]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-02-portability-1-500x484.png
+[22]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-02-portability-1.png
+[23]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-03-access-control-1-500x224.png
+[24]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-03-access-control-1.png
+[25]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-04-bitcoin-1-500x258.png
+[26]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-04-bitcoin-1.png
+[27]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-05-sandbox-1-500x278.png
+[28]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-05-sandbox-1.png
+[29]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-01-wasi-1-500x419.png
+[30]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-01-wasi-1.png
+[31]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-02-wasi-1-500x251.png
+[32]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-02-wasi-1.png
+[33]: https://github.com/CraneStation/wasi-sysroot
+[34]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-03-open-imps-1-500x229.png
+[35]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-03-open-imps-1.png
+[36]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-04-imports-1-500x285.png
+[37]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-04-imports-1.png
+[38]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-05-sec-port-1.png
+[39]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-05-sec-port-2-500x705.png
+[40]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-05-sec-port-2.png
+[41]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-06-openat-path-1-500x192.png
+[42]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-06-openat-path-1.png
+[43]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-07-file-perms-1-500x423.png
+[44]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-07-file-perms-1.png
+[45]: https://github.com/musec/libpreopen
diff --git a/sources/tech/20190328 As memory prices plummet, PCIe is poised to overtake SATA for SSDs.md b/sources/tech/20190328 As memory prices plummet, PCIe is poised to overtake SATA for SSDs.md
new file mode 100644
index 0000000000..3dfb93eec7
--- /dev/null
+++ b/sources/tech/20190328 As memory prices plummet, PCIe is poised to overtake SATA for SSDs.md
@@ -0,0 +1,85 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (As memory prices plummet, PCIe is poised to overtake SATA for SSDs)
+[#]: via: (https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+As memory prices plummet, PCIe is poised to overtake SATA for SSDs
+======
+
+### Taiwan vendors believe PCIe and SATA will achieve price and market share parity by year's end.
+
+![Intel SSD DC P6400 Series][1]
+
+A collapse in price for NAND flash memory and a shrinking gap between the prices of PCI Express-based and SATA-based [solid-state drives][2] (SSDs) mean the shift to PCI Express SSDs will accelerate in 2019, with the newer, faster format replacing the old by year's end.
+
+According to the Taiwanese tech publication DigiTimes (the stories are now archived and unavailable without a subscription), falling NAND flash prices continue to drag down SSD prices, which will drive the adoption of SSDs in enterprise and data-center applications. This, in turn, will further drive the adoption of PCIe drives, which are a superior format to SATA.
+
+**[ Read also:[Backup vs. archive: Why it’s important to know the difference][3] ]**
+
+## SATA vs. PCI Express
+
+SATA was introduced in 2001 as a replacement for the IDE interface, which had a much larger cable and was slower. But SATA is a legacy HDD connection and not fast enough for NAND flash memory.
+
+I used to review SSDs, and it was always the same when it came to benchmarking, with the drives scoring within a few milliseconds of each other regardless of the memory used. The SATA interface was the bottleneck. A SATA SSD is like a one-lane highway with no speed limit.
+
+PCIe is several times faster and has much more parallelism, so its throughput is much better suited to the NAND format. It comes in two physical formats: an [add-in card][4] that plugs into a PCIe slot and M.2, which is about the size of a [stick of gum][5] and sits on the motherboard. PCIe is most widely used in servers, while M.2 is in consumer devices.
+
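+(For rough scale: SATA III tops out at 6Gbps, around 600MB/s after encoding overhead, while a PCIe 3.0 x4 link offers close to 4GB/s.)
+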
+There used to be a significant price difference between PCIe and SATA drives with the same capacity, but they have come into parity thanks to Moore’s Law, said Jim Handy, principal analyst with Objective Analysis, who follows the memory market.
+
+“The controller used to be a big part of the price of an SSD. But complexity has not grown with transistor count. It can have a lot of transistors, and it doesn’t cost more. SATA got more complicated, but PCIe has not. PCIe is very close to the same price as SATA, and [the controller] was the only thing that justified the price diff between the two,” he said.
+
+**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][6] ]**
+
+DigiTimes estimates that the price drop for NAND flash chips will cause global shipments of SSDs to surge 20 to 25 percent in 2019, and PCIe SSDs are expected to emerge as a new mainstream offering by the end of 2019 with a market share of 50 percent, matching SATA SSDs.
+
+## SSD and NAND memory prices already falling
+
+Market sources told DigiTimes that the unit price for 512GB PCIe SSDs fell 11 percent sequentially in the first quarter of 2019, while SATA SSD prices dropped 9 percent. They added that the current average unit price for 512GB SSDs is now equal to that of 256GB SSDs from one year ago, with prices continuing to drop.
+
+According to DRAMeXchange, NAND flash contract prices will continue falling but at a slower rate in the second quarter of 2019. Memory makers are cutting production to avoid losing any more profits.
+
+“We’re in a price collapse. For over a year I’ve been saying the destination for NAND is 8 cents per gigabyte, and some spot markets are 6 cents. It was 30 cents a year ago. Contract pricing is around 15 cents now, it had been 25 to 27 cents last year,” said Handy.
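+
+For rough scale, at a 15-cent contract price the raw NAND in a 512GB drive costs about $77, versus roughly $128 to $138 at last year's 25-to-27-cent prices.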
+
+A contract price is what it sounds like: a memory maker like Samsung or Micron signs a contract with an SSD maker like Toshiba or Kingston for X amount at Y cents per gigabyte. Spot prices are one-off deals, often at the end of a quarter (like now), when a vendor anxious to unload excess inventory holds a fire sale for a drive maker that needs supply on short notice.
+
+DigiTimes’s contacts aren’t the only ones who foresee this. Handy attended a Samsung analyst event a few months back where the company presented its projection that PCIe SSDs would outsell SATA by the end of this year, and not just in the enterprise but everywhere.
+
+**More about backup and recovery:**
+
+ * [Backup vs. archive: Why it’s important to know the difference][3]
+ * [How to pick an off-site data-backup method][7]
+ * [Tape vs. disk storage: Why isn’t tape dead yet?][8]
+ * [The correct levels of backup save time, bandwidth, space][9]
+
+
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/12/intel-ssd-p4600-series1-100782098-large.jpg
+[2]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
+[3]: https://www.networkworld.com/article/3285652/storage/backup-vs-archive-why-its-important-to-know-the-difference.html
+[4]: https://www.newegg.com/Product/Product.aspx?Item=N82E16820249107
+[5]: https://www.newegg.com/Product/Product.aspx?Item=20-156-199&cm_sp=SearchSuccess-_-INFOCARD-_-m.2+-_-20-156-199-_-2&Description=m.2+
+[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[7]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
+[8]: https://www.networkworld.com/article/3315156/storage/tape-vs-disk-storage-why-isnt-tape-dead-yet.html
+[9]: https://www.networkworld.com/article/3302804/storage/the-correct-levels-of-backup-save-time-bandwidth-space.html
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190328 Can Better Task Stealing Make Linux Faster.md b/sources/tech/20190328 Can Better Task Stealing Make Linux Faster.md
new file mode 100644
index 0000000000..bae14a2f5c
--- /dev/null
+++ b/sources/tech/20190328 Can Better Task Stealing Make Linux Faster.md
@@ -0,0 +1,133 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Can Better Task Stealing Make Linux Faster?)
+[#]: via: (https://www.linux.com/blog/can-better-task-stealing-make-linux-faster)
+[#]: author: (Oracle )
+
+Can Better Task Stealing Make Linux Faster?
+======
+
+_Oracle Linux kernel developer Steve Sistare contributes this discussion on kernel scheduler improvements._
+
+### Load balancing via scalable task stealing
+
+The Linux task scheduler balances load across a system by pushing waking tasks to idle CPUs, and by pulling tasks from busy CPUs when a CPU becomes idle. Efficient scaling is a challenge on both the push and pull sides on large systems. For pulls, the scheduler searches all CPUs in successively larger domains until an overloaded CPU is found, and pulls a task from the busiest group. This is very expensive, costing tens to hundreds of microseconds on large systems, so search time is limited by the average idle time, and some domains are not searched. Balance is not always achieved, and idle CPUs go unused.
+
+I have implemented an alternate mechanism that is invoked after the existing search in idle_balance() limits itself and finds nothing. I maintain a bitmap of overloaded CPUs, where a CPU sets its bit when its runnable CFS task count exceeds 1. The bitmap is sparse, with a limited number of significant bits per cacheline. This reduces cache contention when many threads concurrently set, clear, and visit elements. There is a bitmap per last-level cache. When a CPU becomes idle, it searches the bitmap to find the first overloaded CPU with a migratable task, and steals it. This simple stealing yields a higher CPU utilization than idle_balance() alone, because the search is cheap, costing 1 to 2 microseconds, so it may be called every time the CPU is about to go idle. Stealing does not offload the globally busiest queue, but it is much better than running nothing at all.
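+
+As a toy user-space illustration of the idea (the real implementation is kernel C with a sparse, cacheline-aware layout; all names here are invented), here is a Go sketch of the overloaded-CPU bitmap and the cheap victim scan:
+
+```
+package main
+
+import (
+    "fmt"
+    "math/bits"
+    "sync/atomic"
+)
+
+// One word covering 64 CPUs. The kernel patch spreads significant bits
+// across cachelines to cut contention; this toy version skips that.
+var overloaded uint64
+
+// setOverloaded marks a CPU whose runnable CFS task count exceeds 1.
+func setOverloaded(cpu int) {
+    for {
+        old := atomic.LoadUint64(&overloaded)
+        if atomic.CompareAndSwapUint64(&overloaded, old, old|(1<<uint(cpu))) {
+            return
+        }
+    }
+}
+
+// clearOverloaded clears the bit when the CPU is no longer overloaded.
+func clearOverloaded(cpu int) {
+    for {
+        old := atomic.LoadUint64(&overloaded)
+        if atomic.CompareAndSwapUint64(&overloaded, old, old&^(1<<uint(cpu))) {
+            return
+        }
+    }
+}
+
+// findVictim is what an idle CPU runs: a cheap scan for the first
+// overloaded CPU, instead of an expensive search of every runqueue.
+func findVictim() int {
+    w := atomic.LoadUint64(&overloaded)
+    if w == 0 {
+        return -1 // nothing to steal
+    }
+    return bits.TrailingZeros64(w)
+}
+
+func main() {
+    setOverloaded(7)
+    setOverloaded(42)
+    fmt.Println(findVictim()) // prints 7
+}
+```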
+
+### Results
+
+Stealing improves utilization with only a modest CPU overhead in scheduler code. In the following experiment, hackbench is run with varying numbers of groups (40 tasks per group), and the delta in /proc/schedstat is shown for each run, averaged per CPU, augmented with these non-standard stats:
+
+ * %find - percent of time spent in old and new functions that search for idle CPUs and tasks to steal and set the overloaded CPUs bitmap.
+ * steal - number of times a task is stolen from another CPU.
+
+Elapsed time improves by 8 to 36%, costing at most 0.4% more find time.
+
+![load balancing][1]
+
+[Used with permission][2]
+
+CPU busy utilization is close to 100% for the new kernel, as shown by the green curve in the following graph, versus the orange curve for the baseline kernel:
+
+![][3]
+
+Stealing improves Oracle database OLTP performance by up to 9% depending on load, and we have seen some nice improvements for mysql, pgsql, gcc, java, and networking. In general, stealing is most helpful for workloads with a high context switch rate.
+
+### The code
+
+As of this writing, this work is not yet upstream, but the latest patch series is at [https://lkml.org/lkml/2018/12/6/1253][4]. If your kernel is built with CONFIG_SCHED_DEBUG=y, you can verify that it contains the stealing optimization using:
+
+```
+# grep -q STEAL /sys/kernel/debug/sched_features && echo Yes
+Yes
+```
+
+If you try it, note that stealing is disabled for systems with more than 2 NUMA nodes, because hackbench regresses on such systems, as I explain in [https://lkml.org/lkml/2018/12/6/1250][5]. However, I suspect this effect is specific to hackbench and that stealing will help other workloads on many-node systems. To try it, reboot with the kernel parameter `sched_steal_node_limit=8` (or larger).
+
+### Future work
+
+After the basic stealing algorithm is pushed upstream, I am considering the following enhancements:
+
+ * If stealing within the last-level cache does not find a candidate, steal across LLCs and NUMA nodes.
+ * Maintain a sparse bitmap to identify stealing candidates in the RT scheduling class. Currently pull_rt_task() searches all run queues.
+ * Remove the core and socket levels from idle_balance(), as stealing handles those levels. Remove idle_balance() entirely when stealing across LLCs is supported.
+ * Maintain a bitmap to identify idle cores and idle CPUs, for push balancing.
+
+
+
+_This article originally appeared at[Oracle Developers Blog][6]._
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/can-better-task-stealing-make-linux-faster
+
+作者:[Oracle][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-load-balancing.png?itok=2Uk1yALt (load balancing)
+[2]: /LICENSES/CATEGORY/USED-PERMISSION
+[3]: https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b7a700fe-edc3-4ea0-876a-c91e1850b59b/Image/00c074f4282bcbaf0c10dd153c5dfa76/steal_graph.png
+[4]: https://lkml.org/lkml/2018/12/6/1253
+[5]: https://lkml.org/lkml/2018/12/6/1250
+[6]: https://blogs.oracle.com/linux/can-better-task-stealing-make-linux-faster
diff --git a/sources/tech/20190328 Elizabeth Warren-s right-to-repair plan fails to consider data from IoT equipment.md b/sources/tech/20190328 Elizabeth Warren-s right-to-repair plan fails to consider data from IoT equipment.md
new file mode 100644
index 0000000000..1ae1222f6e
--- /dev/null
+++ b/sources/tech/20190328 Elizabeth Warren-s right-to-repair plan fails to consider data from IoT equipment.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment)
+[#]: via: (https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment
+======
+
+### Senator and presidential candidate Elizabeth Warren suggests national legislation focused on farm equipment. But that’s only a first step. The data collected by that equipment must also be considered.
+
+![Thinkstock][1]
+
+There’s a surprising battle being fought on America’s farms between farmers and the companies that sell them tractors, combines, and other farm equipment. The outcome of that fight could have far-reaching implications for the internet of things (IoT) — and now Massachusetts senator and Democratic presidential candidate Elizabeth Warren has weighed in with a proposal that could shift the balance of power in this largely under-the-radar struggle.
+
+## Right to repair farm equipment
+
+Here’s the story: As part of a new plan to support family farms, Warren came out in support of a national right-to-repair law for farm equipment. That might not sound like a big deal, but it raises the stakes in a long-simmering fight between farmers and equipment makers over who really controls access to the equipment — and to the increasingly critical data gathered by the IoT capabilities built into it.
+
+**[ Also read:[Right-to-repair smartphone ruling loosens restrictions on industrial, farm IoT][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
+
+[Warren’s proposal reportedly][4] calls for making all diagnostics tools and manuals freely available to the equipment owners, as well as independent repair shops — not just vendors and their authorized agents — and focuses solely on farm equipment.
+
+That’s a great start, and kudos to Warren for being by far the most prominent politician to weigh in on the issue.
+
+## Part of a much bigger IoT data issue
+
+But Warren's proposal merely scratches the surface of the much larger issue of who actually controls the equipment and devices that consumers and businesses buy. Even more important, it doesn’t address the critical data gathered by IoT sensors in everything ranging from smartphones, wearables, and smart-home devices to private and commercial vehicles and aircraft to industrial equipment.
+
+And as many farmers can tell you, this isn’t some academic argument. That data has real value — not to mention privacy implications. For farmers, it’s GPS-equipped smart sensors tracking everything — from temperature to moisture to soil acidity — that can determine the most efficient times to plant and harvest crops. For consumers, it might be data that affects their home or auto insurance rates, or even divorce cases. For manufacturers, it might cover everything from which equipment needs maintenance to potential issues with raw materials or finished products.
+
+The solution is simple: IoT users need consistent regulations that ensure free access to what is really their own data, and give them the option to share that data with the equipment vendors — if they so choose and on their own terms.
+
+At the very least, users need clear statements of the rules, so they know exactly what they’re getting — and not getting — when they buy IoT-enhanced devices and equipment. And if they’re being honest, most equipment vendors would likely admit that clear rules would benefit them as well, by creating a level playing field, reducing potential liabilities, and helping them avoid unhappy customers.
+
+Sen. Warren made headlines earlier this month by proposing to ["break up" tech giants][5] such as Amazon, Apple, and Facebook. If she really wants to help technology buyers, prioritizing the right-to-repair and the associated right to own your own data seems like a more effective approach.
+
+**[ Now read this:[Big trouble down on the IoT farm][6] ]**
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2017/03/ai_agriculture_primary-100715481-large.jpg
+[2]: https://www.networkworld.com/article/3317696/the-recent-right-to-repair-smartphone-ruling-will-also-affect-farm-and-industrial-equipment.html
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://appleinsider.com/articles/19/03/27/presidential-candidate-elizabeth-warren-focusing-right-to-repair-on-farmers-not-tech
+[5]: https://www.nytimes.com/2019/03/08/us/politics/elizabeth-warren-amazon.html
+[6]: https://www.networkworld.com/article/3262631/big-trouble-down-on-the-iot-farm.html
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190328 Microsoft introduces Azure Stack for HCI.md b/sources/tech/20190328 Microsoft introduces Azure Stack for HCI.md
new file mode 100644
index 0000000000..0400f4db04
--- /dev/null
+++ b/sources/tech/20190328 Microsoft introduces Azure Stack for HCI.md
@@ -0,0 +1,63 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Microsoft introduces Azure Stack for HCI)
+[#]: via: (https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Microsoft introduces Azure Stack for HCI
+======
+
+### Azure Stack is great for your existing hardware, so Microsoft is covering the bases with a turnkey solution.
+
+![Thinkstock/Microsoft][1]
+
+Microsoft has introduced Azure Stack HCI Solutions, a new implementation of its on-premises Azure product specifically for [Hyper Converged Infrastructure][2] (HCI) hardware.
+
+[Azure Stack][3] is an on-premises version of Microsoft’s Azure cloud service. It gives companies a chance to migrate to an Azure environment within the confines of their own enterprise rather than into Microsoft’s data centers. Once you have migrated your apps and infrastructure to Azure Stack, moving between your systems and Microsoft’s cloud service is easy.
+
+HCI is the latest trend in server hardware. It uses scale-out hardware systems and a full software-defined platform to handle [virtualization][4] and management. It’s designed to reduce the complexity of deployment and ongoing management, since everything ships fully integrated, hardware and software.
+
+**[ Read also: [12 most powerful hyperconverged infrastructure vendors][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]**
+
+It makes sense for Microsoft to take this step. Azure Stack was aimed at enterprises running it on their existing hardware; now you can deploy a whole new hardware setup to run Azure in-house, complete with Hyper-V-based software-defined compute, storage, and networking.
+
+The Windows Admin Center is the main management tool for Azure Stack HCI. It connects to other Azure tools, such as Azure Monitor, Azure Security Center, Azure Update Management, Azure Network Adapter, and Azure Site Recovery.
+
+“We are bringing our existing HCI technology into the Azure Stack family for customers to run virtualized applications on-premises with direct access to Azure management services such as backup and disaster recovery,” wrote Julia White, corporate vice president of Microsoft Azure, in a [blog post announcing Azure Stack HCI][7].
+
+It’s not so much a new product launch as a rebranding. When Microsoft launched Server 2016, it introduced a version called Windows Server Software-Defined Data Center (SDDC), built on the Hyper-V hypervisor, and Microsoft says as much in a [FAQ][8] accompanying the announcement.
+
+"Azure Stack HCI is the evolution of Windows Server Software-Defined (WSSD) solutions previously available from our hardware partners. We brought it into the Azure Stack family because we have started to offer new options to connect seamlessly with Azure for infrastructure management services,” the company said.
+
+Microsoft introduced Azure Stack in 2017, but it was not the first to offer an on-premises cloud option. That distinction goes to [OpenStack][9], a joint project between Rackspace and NASA built on open-source code. Amazon followed with its own product, called [Outposts][10].
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2017/08/5_microsoft-azure-100733132-large.jpg
+[2]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence.html
+[3]: https://www.networkworld.com/article/3207748/microsoft-introduces-azure-stack-its-answer-to-openstack.html
+[4]: https://www.networkworld.com/article/3234795/what-is-virtualization-definition-virtual-machine-hypervisor.html
+[5]: https://www.networkworld.com/article/3112622/hardware/12-most-powerful-hyperconverged-infrastructure-vendors.htmll
+[6]: https://www.networkworld.com/newsletters/signup.html
+[7]: https://azure.microsoft.com/en-us/blog/enabling-customers-hybrid-strategy-with-new-microsoft-innovation/
+[8]: https://azure.microsoft.com/en-us/blog/announcing-azure-stack-hci-a-new-member-of-the-azure-stack-family/
+[9]: https://www.openstack.org/
+[10]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190328 Motorola taps freed-up wireless spectrum for enterprise LTE networks.md b/sources/tech/20190328 Motorola taps freed-up wireless spectrum for enterprise LTE networks.md
new file mode 100644
index 0000000000..ce38f54f79
--- /dev/null
+++ b/sources/tech/20190328 Motorola taps freed-up wireless spectrum for enterprise LTE networks.md
@@ -0,0 +1,68 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Motorola taps freed-up wireless spectrum for enterprise LTE networks)
+[#]: via: (https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Motorola taps freed-up wireless spectrum for enterprise LTE networks
+======
+
+### Citizens Broadband Radio Service (CBRS) is developing. Out of the gate, Motorola is creating a land mobile radio (LMR) system that includes enterprise-level, voice handheld devices and fast, private data networks.
+
+![Jiraroj Praditcharoenkul / Getty Images][1]
+
+In a move that could upend how workers access data in the enterprise, Motorola has announced a broadband product that it says will deliver data at double the capacity and four times the range of Wi-Fi for end users. The handheld, walkie-talkie-like device, called Mototrbo Nitro, will, importantly, also include a voice channel. “Business-critical voice with private broadband data,” as [Motorola describes it on its website][2].
+
+The company sees the product being used in traditional voice-communications environments where workers move around, such as factories and warehouses, that increasingly need data supplementation, too. One example could be a shop floor where a repair manual, with an included video demonstration, is delivered electronically. The video could even be two-way.
+
+**[ Also read:[Wi-Fi 6 is coming to a router near you][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
+
+The product takes advantage of upcoming Citizens Broadband Radio Service (CBRS) spectrum. That’s a swath of radio bandwidth that’s being released by the Federal Communications Commission (FCC) in the 3.5GHz band. It’s a frequency chunk that is also expected to be used heavily for 5G. In this case, though, Motorola is creating a private LTE network for the enterprise.
+
+The CBRS band marks the first time broadband spectrum has been made publicly available, [Motorola explains in a white paper][5] (PDF): organizations don’t have to buy licenses, yet they can get access to useful spectrum. The FCC has [proposed a tiered sharing system][6] in which auction winners get priority access licenses, but others have some access, too. The non-prioritized open access could be used by any enterprise for whatever it likes, such as internet of things (IoT) deployments or private networks.
+
+## Motorola's pitch for using a private broadband network
+
+Why a private broadband network and not simply cell phones? One giveaway line is in Motorola’s promotional video: “Without sacrificing control,” it says. What it means is that the firm thinks there’s a market for companies that want to run entire business communications systems — data and voice — without involvement from possibly nosy mobile network operators. [I’ve written before about how control over security is prompting large industrials to explore private networks][7] more. In this case, though, Motorola manages the network for the enterprise.
+
+Motorola also points to potentially limited or intermittent onsite coverage, and to congestion, on public, commercial, single-platform voice and data networks. That’s particularly the case in factories, [Motorola says in an ebook][8]: heavy machinery containing radio-unfriendly metal can hinder Wi-Fi and cellular, it claims, and traditional Land Mobile Radios (LMRs), such as walkie-talkies and vehicle-mounted mobile radios, don’t handle data natively. In particular, it says that if you want to get into artificial intelligence (AI) and analytics, you need a voice and fast-data communications setup that can evolve.
+
+## Industrial IoT uses for Motorola's Nitro network
+
+Industrial IoT will be another beneficiary, Motorola says. It says its CBRS Nitro network could include instant notifications of equipment failures that traditional products can’t provide. It also suggests merging fixed security cameras with “photos and videos of broken machines and sending real-time video to an expert.”
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][9] ]**
+
+Motorola also suggests that by separating consumer Wi-Fi (as is offered in hospitality and transport verticals, for example) from business-critical systems, one reduces traffic congestion risks.
+
+The highly complicated CBRS band-sharing system is still not through its government testing. “However, we could deploy customer systems under an experimental license,” a Motorola representative told me.
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_robotic_arm_gear_engineer_tablet_by_jiraroj_praditcharoenkul_gettyimages-1091790364_2400x1600-100788459-large.jpg
+[2]: https://www.motorolasolutions.com/en_us/products/two-way-radios/mototrbo/nitro.html
+[3]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://www.motorolasolutions.com/content/dam/msi/docs/products/mototrbo/nitro/cbrs-white-paper.pdf
+[6]: https://www.networkworld.com/article/3300339/private-lte-using-new-spectrum-approaching-market-readiness.html
+[7]: https://www.networkworld.com/article/3319176/private-5g-networks-are-coming.html
+[8]: https://img04.en25.com/Web/MotorolaSolutionsInc/%7B293ce809-fde0-4619-8507-2b42076215c3%7D_radio_evolution_eBook_Nitro_03.13.19_MS_V3.pdf?elqTrackId=850d56c6d53f4013afa2290a66d6251f&elqaid=2025&elqat=2
+[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190328 Robots in Retail are Real- and so is Edge Computing.md b/sources/tech/20190328 Robots in Retail are Real- and so is Edge Computing.md
new file mode 100644
index 0000000000..f62317ae54
--- /dev/null
+++ b/sources/tech/20190328 Robots in Retail are Real- and so is Edge Computing.md
@@ -0,0 +1,48 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Robots in Retail are Real… and so is Edge Computing)
+[#]: via: (https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all)
+[#]: author: (Wendy Torell https://www.networkworld.com/author/Wendy-Torell/)
+
+Robots in Retail are Real… and so is Edge Computing
+======
+
+### I’ve seen plenty of articles touting the promise of edge computing technologies like AI and robotics in retail brick & mortar, but it wasn’t until this past weekend that I had my first encounter with an actual robot in a retail store.
+
+![Getty][1]
+
+I’ve seen plenty of articles touting the promise of [edge computing][2] technologies like AI and robotics in retail brick & mortar, but it wasn’t until this past weekend that I had my first encounter with an actual robot in a retail store. I was doing my usual weekly grocery shopping at my local Stop & Shop, and who comes strolling down the aisle, but…. Marty… the autonomous robot. He was friendly looking with his big googly eyes and was wearing a sign that explained he was there for safety, and that he was monitoring the aisles to report spills, debris, and other hazards to employees to improve my shopping experience. He caught the attention of most of the shoppers.
+
+At the National Retail Federation conference in NY that I attended in January, this was a topic of one of the [panel sessions][3]. It all makes sense… a positive customer experience is critical to retail success. But employee-to-customer (human-to-human) interaction has also been proven important. That’s where Marty comes in… to free up resources spent on tedious, time-consuming tasks so that personnel can spend more time directly helping customers.
+
+**Use cases for robots in stores**
+
+Robots have been used by retailers on manufacturing floors and in distribution warehouses to improve productivity and optimize business processes along the supply chain. But it is only more recently that we’re seeing them make their way into the retail storefront, where they are in contact with customers. Alerting employees to hazards in the aisles is just one of many use cases for the robots. They can also be used to scan and re-stock shelves, or as general information sources and greeters upon entering the store to guide your shopping experience. But how does a retailer justify the investment in this type of technology? Determining your ROI isn’t as cut and dried as in a warehouse environment, for example, where costs are directly tied to the number of staff, time to complete tasks, etc… I guess time will tell for the retailers that are giving it a go.
+
+**What does it mean for the IT equipment on premises ([micro data center][4])?**
+
+Robotics are one of the many ways retail stores are being digitized. Video analytics is another big one, used to analyze facial expressions for customer satisfaction, obtain customer demographics as input to product development, or ensure queue lines don’t get too long. My colleague, Patrick Donovan, wrote a detailed [blog post][5] about our trip to NRF and the impact on the physical infrastructure in the stores. In a nutshell, the equipment on premises is becoming more mission critical, more integrated with business applications in the cloud, more tied to positive customer experiences… and with that comes the need for a more secure, more available, more manageable edge. But this is easier said than done in an environment that generally has no IT staff on premises, and with hundreds or potentially thousands of stores spread out geographically. So how do we address this?
+
+We answer this question in a white paper that Patrick and I are currently writing titled “An Integrated Ecosystem to Solve Edge Computing Infrastructure Challenges”. Here’s a hint, (1) an integrated ecosystem of partners, and (2) an integrated micro data center that emerges from the ecosystem. I’ll be sure to comment on this blog with the link when the white paper becomes publicly available! In the meantime, explore our [edge computing][2] landing page to learn more.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all
+
+作者:[Wendy Torell][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Wendy-Torell/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/gettyimages-828488368-1060x445-100792228-large.jpg
+[2]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
+[3]: https://stores.org/2019/01/15/why-is-there-a-robot-in-my-store/
+[4]: https://www.apc.com/us/en/solutions/business-solutions/micro-data-centers.jsp
+[5]: https://blog.apc.com/2019/02/06/4-thoughts-edge-computing-infrastructure-retail-sector/
diff --git a/sources/tech/20190329 How to submit a bug report with Bugzilla.md b/sources/tech/20190329 How to submit a bug report with Bugzilla.md
new file mode 100644
index 0000000000..ee778410e7
--- /dev/null
+++ b/sources/tech/20190329 How to submit a bug report with Bugzilla.md
@@ -0,0 +1,102 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to submit a bug report with Bugzilla)
+[#]: via: (https://opensource.com/article/19/3/bug-reporting)
+[#]: author: (David Both https://opensource.com/users/dboth)
+
+How to submit a bug report with Bugzilla
+======
+
+Submitting bug reports is an easy way to give back, and it helps everyone.
+
+![][1]
+
+I spend a lot of time doing research for my books and [Opensource.com][2] articles. Sometimes this leads me to discover bugs in the software I use, including Fedora and the Linux kernel. As a long-time Linux user and sysadmin, I have benefited greatly from GNU/Linux, and I like to give back. I am not a C language programmer, so I don't create fixes and submit them with bug reports, as some people do. But a way I can return some value to the Linux community is by reporting bugs.
+
+Product maintainers use a lot of tools to let their users search for existing bugs and report new ones. Bugzilla is a popular tool, and I use the Red Hat [Bugzilla][3] website to report Fedora-related bugs because I primarily use Fedora on the systems I'm responsible for. It's an easy process, but it may seem daunting if you have never done it before. So let's start with the basics.
+
+### Start with a search
+
+Even though it's tempting, never assume that seemingly anomalous behavior is the result of a bug. I always start with a search of relevant websites, such as the [Fedora wiki][4], the [CentOS wiki][5], and the documentation for the distro I'm using. I also try to check the various distro listservs.
+
+If it appears that no one has encountered this problem before (or if they have, they haven't reported it as a bug), I go to the Red Hat Bugzilla site and begin searching for a bug report that might come close to matching the symptoms I encountered.
+
+You can search the Red Hat Bugzilla site without an account. Go to the Bugzilla site and click on the [Advanced Search tab][6].
+
+![Searching for a bug][7]
+
+For example, if you want to search for bug reports related to Fedora's Rescue mode kernel, enter the following data in the Advanced Search form.
+
+Field | Logic | Data or Selection
+---|---|---
+Summary | Contains the string | Rescue mode kernel
+Classification | | Fedora
+Product | | Fedora
+Component | | grub2
+Status | | New + Assigned
+
+Then press **Search**. This returns a list of one bug with the ID 1654337 (which happens to be a bug I reported).
+
+![Bug report list][8]
+
+Click on the ID to view my bug report details. I entered as much relevant data as possible in the top section of the report. In the comments, I described the problem and included supporting files, other relevant comments (such as the fact that the problem occurred on multiple motherboards), and the steps to reproduce the problem.
+
+![Bug report details][9]
+
+The more information you can provide here that pertains to the bug, such as symptoms, the hardware and software environments (if they are applicable), other software that was running at the time, kernel and distro release levels, and so on, the easier it will be to determine where to assign your bug. In this case, I originally chose the kernel component, but it was quickly changed to the GRUB2 component because the problem occurred before the kernel loaded.
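+
+You can also run this kind of search from a script. Bugzilla exposes a REST API, and the sketch below queries Red Hat Bugzilla for open grub2 bugs. It is a minimal sketch in Python: the /rest/bug endpoint and field names follow the standard Bugzilla REST API, but treat the exact parameters as assumptions to verify against your Bugzilla instance's documentation.
+
+```
+import requests
+
+# Hypothetical sketch: search Red Hat Bugzilla via its REST API.
+# Verify endpoint and parameter names against your instance's docs.
+params = {
+    "product": "Fedora",
+    "component": "grub2",
+    "status": ["NEW", "ASSIGNED"],
+    "summary": "Rescue mode kernel",   # substring match on the summary
+    "limit": 10,
+}
+resp = requests.get("https://bugzilla.redhat.com/rest/bug", params=params)
+resp.raise_for_status()
+for bug in resp.json().get("bugs", []):
+    print(bug["id"], bug["summary"])
+```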
+
+### How to submit a bug report
+
+The Red Hat [Bugzilla][3] website requires an account to submit new bugs or comment on old ones. It is easy to sign up. On Bugzilla's main page, click **Open a New Account** and fill in the requested information. After you verify your email address, you can fill in the rest of the information to create your account.
+
+_**Advisory:**_ _Bugzilla is a working website that people count on for support. I strongly suggest not creating an account unless you intend to submit bug reports or comment on existing bugs._
+
+To demonstrate how to submit a bug report, I'll use a fictional example of creating a bug against the Xfce4-terminal emulator in Fedora. _Please do not do this unless you have a real bug to report._
+
+Log into your account and click on **New** in the menu bar or the **File a Bug** button. You'll need to select a classification for the bug to continue the process. This will narrow down some of the choices on the next page.
+
+The following image shows how I filled out the required fields (and a couple of others that are not required).
+
+![Reporting a bug][10]
+
+When you type a short problem description in the **Summary** field, Bugzilla displays a list of other bugs that might match yours. If one matches, click **Add Me to the CC List** to receive emails when changes are made to the bug.
+
+If none match, fill in the information requested in the **Description** field. Add as much information as you can, including error messages and screen captures that illustrate the problem. Be sure to describe the exact steps needed to reproduce the problem and how reproducible it is: does it fail every time, every second, third, or fourth time, at random, or something else? If it happened only once, it's very unlikely anyone will be able to reproduce the problem you observed.
+
+When you finish adding as much information as you can, press **Submit Bug**.
+
+### Be kind
+
+Bug reporting websites are not for asking questions—they are for searching and reporting bugs. That means you must have performed some work on your own to conclude that there really is a bug. There are many wikis, listservs, and Q&A websites that are appropriate for asking questions. Use sites like Bugzilla to search for existing bug reports on the problem you have found.
+
+Be sure you submit your bugs on the correct bug reporting website. For example, only submit bugs about Red Hat products on the Red Hat Bugzilla, and submit bugs about LibreOffice by following [LibreOffice's instructions][11].
+
+Reporting bugs is not difficult, and it is an important way to participate.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/bug-reporting
+
+作者:[David Both (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dboth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bug-insect-butterfly-diversity-inclusion-2.png?itok=TcC9eews
+[2]: http://Opensource.com
+[3]: https://bugzilla.redhat.com/
+[4]: https://fedoraproject.org/wiki/
+[5]: https://wiki.centos.org/
+[6]: https://bugzilla.redhat.com/query.cgi?format=advanced
+[7]: https://opensource.com/sites/default/files/uploads/bugreporting-1.png (Searching for a bug)
+[8]: https://opensource.com/sites/default/files/uploads/bugreporting-2.png (Bug report list)
+[9]: https://opensource.com/sites/default/files/uploads/bugreporting-4.png (Bug report details)
+[10]: https://opensource.com/sites/default/files/uploads/bugreporting-3.png (Reporting a bug)
+[11]: https://wiki.documentfoundation.org/QA/BugReport
diff --git a/sources/tech/20190329 Russia demands access to VPN providers- servers.md b/sources/tech/20190329 Russia demands access to VPN providers- servers.md
new file mode 100644
index 0000000000..0c950eb04f
--- /dev/null
+++ b/sources/tech/20190329 Russia demands access to VPN providers- servers.md
@@ -0,0 +1,77 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Russia demands access to VPN providers’ servers)
+[#]: via: (https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all)
+[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)
+
+Russia demands access to VPN providers’ servers
+======
+
+### 10 VPN service providers have been ordered to link their servers in Russia to the state censorship agency by April 26
+
+![Getty Images][1]
+
+The Russian censorship agency Roskomnadzor has ordered 10 [VPN][2] service providers to link their servers in Russia to its network in order to stop users within the country from reaching banned sites.
+
+If they fail to comply, their services will be blocked, according to a machine translation of the order.
+
+[RELATED: Best VPN routers for small business][3]
+
+The 10 VPN providers are ExpressVPN, HideMyAss!, Hola VPN, IPVanish, Kaspersky Secure Connection, KeepSolid, NordVPN, OpenVPN, TorGuard, and VyprVPN.
+
+In response, at least five of the 10 – ExpressVPN, IPVanish, KeepSolid, NordVPN, and TorGuard – say they are tearing down their servers in Russia but will continue to offer their services to Russian customers who can reach the providers’ servers located outside of Russia. A sixth provider, Kaspersky Lab, which is based in Moscow, says it will comply with the order. The other four could not be reached for this article.
+
+IPVanish characterized the order as another phase of “Russia’s censorship agenda” dating back to 2017 when the government enacted a law forbidding the use of VPNs to access blocked Web sites.
+
+“Up until recently, however, they had done little to enforce such rules,” IPVanish [says in its blog][4]. “These new demands mark a significant escalation.”
+
+The reactions of those not complying are similar. TorGuard says it has taken steps to remove all its physical servers from Russia. It is also cutting off its business with data centers in the region.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**
+
+“We would like to be clear that this removal of servers was a voluntary decision by TorGuard management and no equipment seizure occurred,” [TorGuard says in its blog][6]. “We do not store any logs so even if servers were compromised it would be impossible for customer’s data to be exposed.”
+
+TorGuard says it is deploying more servers in adjacent countries to protect fast download speeds for customers in the region.
+
+IPVanish says it has faced similar demands from Russia before and responded similarly. In 2016, a new Russian law required online service providers to store customers’ private data for a year. “In response, [we removed all physical server presence in Russia][7], while still offering Russians encrypted connections via servers outside of Russian borders,” the company says. “That decision was made in accordance with our strict zero-logs policy.”
+
+KeepSolid says it had no servers in Russia, but it will not comply with the order to link with Roskomnadzor's network. KeepSolid says it will [draw on its experience dealing with the Great Firewall of China][8] to fight the Russian censorship attempt. "Our team developed a special [KeepSolid Wise protocol][9] which is designed for use in countries where the use of VPN is blocked," a spokesperson for the company said in an email statement.
+
+NordVPN says it’s shutting down all its Russian servers, and all of them will be shredded as of April 1. [The company says in a blog][10] that some of its customers who connected to its Russian servers without using the NordVPN application will have to reconfigure their devices to ensure their security. Those customers using the app won’t have to do anything differently because the option to connect to Russia via the app has been removed.
+
+ExpressVPN is also not complying with the order. "As a matter of principle, ExpressVPN will never cooperate with efforts to censor the internet by any country," said the company's vice president Harold Li in an email. But he said that blocking traffic will be ineffective: "We expect that Russian internet users will still be able to find means of accessing the sites and services they want, albeit perhaps with some additional effort."
+
+Kaspersky Lab says it will comply with the Russian order and responded to emailed questions about its reaction with this written statement:
+
+“Kaspersky Lab is aware of the new requirements from Russian regulators for VPN providers operating in the country. These requirements oblige VPN providers to restrict access to a number of websites that were listed and prohibited by the Russian Government in the country’s territory. As a responsible company, Kaspersky Lab complies with the laws of all the countries where it operates, including Russia. At the same time, the new requirements don’t affect the main purpose of Kaspersky Secure Connection which protects user privacy and ensures confidentiality and protection against data interception, for example, when using open Wi-Fi networks, making online payments at cafes, airports or hotels. Additionally, the new requirements are relevant to VPN use only in Russian territory and do not concern users in other countries.”
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all
+
+作者:[Tim Greene][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Tim-Greene/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/10/ipsecurity-protocols-network-security-vpn-100775457-large.jpg
+[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
+[3]: http://www.networkworld.com/article/3002228/router/best-vpn-routers-for-small-business.html#tk.nww-fsb
+[4]: https://nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/
+[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[6]: https://torguard.net/blog/why-torguard-has-removed-all-russian-servers/
+[7]: https://blog.ipvanish.com/ipvanish-removes-russian-vpn-servers-from-moscow/
+[8]: https://www.vpnunlimitedapp.com/blog/what-roskomnadzor-demands-from-vpns/
+[9]: https://www.vpnunlimitedapp.com/blog/keepsolid-wise-a-smart-solution-to-get-total-online-freedom/
+[10]: https://nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190329 ShadowReader- Serverless load tests for replaying production traffic.md b/sources/tech/20190329 ShadowReader- Serverless load tests for replaying production traffic.md
new file mode 100644
index 0000000000..3d7f7eaf0c
--- /dev/null
+++ b/sources/tech/20190329 ShadowReader- Serverless load tests for replaying production traffic.md
@@ -0,0 +1,176 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (ShadowReader: Serverless load tests for replaying production traffic)
+[#]: via: (https://opensource.com/article/19/3/shadowreader-serverless)
+[#]: author: (Yuki Sawa https://opensource.com/users/yukisawa1/users/yongsanchez)
+
+ShadowReader: Serverless load tests for replaying production traffic
+======
+This open source tool recreates serverless production conditions to
+pinpoint causes of memory leaks and other errors that aren't visible in
+the QA environment.
+![Traffic lights at night][1]
+
+While load testing has become more accessible, configuring load tests that faithfully re-create production conditions can be difficult. A good load test must use a set of URLs that are representative of production traffic and achieve request rates that mimic real users. Even performing distributed load tests requires the upkeep of a fleet of servers.
+
+[ShadowReader][2] aims to solve these problems. It gathers URLs and request rates straight from production logs and replays them using AWS Lambda. Being serverless, it is more cost-efficient and performant than traditional distributed load tests; in practice, it has scaled beyond 50,000 requests per minute.
+
+At Edmunds, we have been able to utilize these capabilities to solve problems, such as Node.js memory leaks that were happening only in production, by recreating the same conditions in our QA environment. We're also using it daily to generate load for pre-production canary deployments.
+
+The memory leak we faced in our Node.js application confounded our engineering team: because it occurred only in our production environment, we could not reproduce it in QA until we introduced ShadowReader to replay production traffic into QA.
+
+### The incident
+
+On Christmas Eve 2017, we suffered an incident in which response time jumped across the board and error rates tripled, impacting many users of our website.
+
+![Christmas Eve 2017 incident][3]
+
+![Christmas Eve 2017 incident][4]
+
+Monitoring during the incident helped identify and resolve the issue quickly, but we still needed to understand the root cause.
+
+At Edmunds, we leverage a robust continuous delivery (CD) pipeline that releases new updates to production multiple times a day. We also dynamically scale up our applications to accommodate peak traffic and scale down to save costs. Unfortunately, this had the side effect of masking a memory leak.
+
+In our investigation, we saw that the memory leak had existed for weeks, since early December. Memory usage would climb to 60%, along with a slow increase in 99th percentile response time.
+
+Between our CD pipeline and autoscaling events, long-running containers were frequently being shut down and replaced by newer ones. This inadvertently masked the memory leak until December, when we decided to stop releasing software to ensure stability during the holidays.
+
+![Slow increase in 99th percentile response time][5]
+
+### Our CD pipeline
+
+At a glance, Edmunds' CD pipeline looks like this:
+
+ 1. Unit test
+ 2. Build a Docker image for the application
+ 3. Integration test
+ 4. Load test/performance test
+ 5. Canary release
+
+
+
+The solution is fully automated and requires no manual cutover. The final step is a canary deployment directly into the live website, allowing us to release multiple times a day.
+
+For our load testing, we leveraged custom tooling built on top of JMeter. It takes random samples of production URLs and can simulate various percentages of traffic. Unfortunately, however, our load tests were not able to reproduce the memory leak in any of our pre-production environments.
+
+### Solving the memory leak
+
+When looking at the memory patterns in QA, we noticed there was a very healthy pattern. Our initial hypothesis was that our JMeter load testing in QA was unable to simulate production traffic in a way that allows us to predict how our applications will perform.
+
+While the load test takes samples from production URLs, it can't precisely simulate the URLs customers use and the exact frequency of calls (i.e., the burst rate).
+
+Our first step was to re-create the problem in QA. We used a new tool called ShadowReader, a project that evolved out of our hackathons. While many projects we considered were product-focused, this was the only operations-centric one. It is a load-testing tool that runs on AWS Lambda and can replay production traffic and usage patterns against our QA environment.
+
+The results it returned were immediate:
+
+![QA results in ShadowReader][6]
+
+Knowing that we could re-create the problem in QA, we took the additional step of pointing ShadowReader at our local environment, as this allowed us to trigger Node.js heap dumps. After analyzing the contents of the dumps, it was obvious the memory leak was coming from two excessively large objects containing only strings. At the time the snapshot was taken, these objects contained 373MB and 63MB of strings!
+
+![Heap dumps show source of memory leak][7]
+
+We found that both objects were temporary lookup caches containing metadata to be used on the client side. Neither of these caches was ever intended to be persisted on the server side. The user's browser cached only its own metadata, but on the server side, it cached the metadata for all users. This is why we were unable to reproduce the leak with synthetic testing. Synthetic tests always resulted in the same fixed set of metadata in the server-side caches. The leak surfaced only when we had a sufficient amount of unique metadata being generated from a variety of users.
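+
+For illustration, here is a minimal sketch in Python of the bug class we hit (the real application was Node.js, and all names here are hypothetical): a long-lived server-side cache that is never evicted, so it grows with every unique piece of metadata.
+
+```
+# Hypothetical sketch of the leak pattern (the real app was Node.js).
+# The cache lives for the life of the server process and never evicts,
+# so memory grows with every unique key.
+lookup_cache = {}
+
+def get_client_metadata(user_key, build_metadata):
+    if user_key not in lookup_cache:
+        lookup_cache[user_key] = build_metadata(user_key)  # grows forever
+    return lookup_cache[user_key]
+
+# Synthetic tests reuse a fixed set of keys, so the cache stays small;
+# replayed production traffic supplies endless unique keys, exposing the leak.
+```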
+
+Once we identified the problem, we were able to remove the large caches that we observed in the heap dumps. We've since instrumented the application to start collecting metrics that can help detect issues like this faster.
+
+![Collecting metrics][8]
+
+After making the fix in QA, we saw that the memory usage was constant and the leak was plugged.
+
+![Graph showing memory leak fixed][9]
+
+### What is ShadowReader?
+
+ShadowReader is a serverless load-testing framework powered by AWS Lambda and S3 to replay production traffic. It mimics real user traffic by replaying URLs from production at the same rate as the live website. We are happy to announce that after months of internal usage, we have released it as open source!
+
+#### Features
+
+ * ShadowReader mimics real user traffic by replaying user requests (URLs). It can also replay certain headers, such as True-Client-IP and User-Agent, along with the URL.
+ * It is more efficient cost- and performance-wise than traditional distributed load tests that run on a fleet of servers. Managing a fleet of servers for distributed load testing can cost $1,000 or more per month; with a serverless stack, it can be reduced to $100 per month by provisioning compute resources on demand.
+ * We've scaled it up to 50,000 requests per minute, but it should be able to handle more than 100,000 reqs/min.
+ * New load tests can be spun up and stopped instantly, unlike traditional load-testing tools, which can take many minutes to generate the test plan and distribute the test data to the load-testing servers.
+ * It can ramp traffic up or down by a percentage value to function as a more traditional load test.
+ * Its plugin system enables you to switch out plugins to change its behavior. For instance, you can switch from past replay (i.e., replays past requests) to live replay (i.e., replays requests as they come in).
+ * Currently, it can replay logs from the [Application Load Balancer][10] and [Classic Load Balancer][11] Elastic Load Balancers (ELBs), and support for other load balancers is coming soon.
+
+
+### How it works
+
+ShadowReader is composed of four different Lambdas: a Parser, an Orchestrator, a Master, and a Worker.
+
+![ShadowReader architecture][12]
+
+When a user visits a website, a load balancer (in this case, an ELB) typically routes the request. As the ELB routes the request, it will log the event and ship it to S3.
+
+Next, ShadowReader triggers a Parser Lambda every minute via a CloudWatch event, which parses the latest access (ELB) logs on S3 for that minute, then ships the parsed URLs into another S3 bucket.
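+
+A minimal sketch in Python of that parsing step might look like the following (the field layout follows AWS's documented Classic Load Balancer access-log format; the helper itself is an illustrative assumption, not ShadowReader's actual code):
+
+```
+# Hypothetical sketch: pull the timestamp, method, and URL out of a
+# Classic ELB access-log line, whose quoted request field looks like
+# "GET https://example.com:443/path?q=1 HTTP/1.1".
+import shlex
+
+def parse_elb_line(line):
+    fields = shlex.split(line)   # respects the quoted request field
+    timestamp = fields[0]
+    method, url, _protocol = fields[11].split()
+    return timestamp, method, url
+```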
+
+On the other side of the system, ShadowReader also triggers an Orchestrator Lambda every minute. This Lambda holds the configurations and state of the system.
+
+The Orchestrator then invokes a Master Lambda function. From the Orchestrator, the Master receives information on which time slice to replay and downloads the respective data from the S3 bucket of parsed URLs (deposited there by the Parser).
+
+The Master Lambda divides the load-test URLs into smaller batches, then invokes and passes each batch into a Worker Lambda. If 800 requests must be sent out, then eight Worker Lambdas will be invoked, each one handling 100 URLs.
+
+Finally, the Worker receives the URLs passed from the Master and starts load-testing the chosen test environment.
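+
+To make the fan-out concrete, here is a minimal sketch in Python of what the Master's batching step might look like (the function names, batch size, and payload shape are illustrative assumptions, not ShadowReader's actual code), using boto3's asynchronous Lambda invocation:
+
+```
+import json
+import boto3
+
+lambda_client = boto3.client("lambda")
+BATCH_SIZE = 100  # assumed batch size: 800 URLs -> 8 Worker invocations
+
+def fan_out(urls, worker_function="shadowreader-worker"):
+    # Split the load-test URLs into batches and invoke one Worker Lambda
+    # per batch. InvocationType="Event" makes the calls asynchronous,
+    # so all Workers fire in parallel.
+    for i in range(0, len(urls), BATCH_SIZE):
+        batch = urls[i:i + BATCH_SIZE]
+        lambda_client.invoke(
+            FunctionName=worker_function,
+            InvocationType="Event",
+            Payload=json.dumps({"urls": batch}),
+        )
+```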
+
+### The bigger picture
+
+The challenge of reproducibility in load testing serverless infrastructure becomes increasingly important as we move from steady-state application sizing to on-demand models. While ShadowReader is designed and used with Edmunds' infrastructure in mind, any application leveraging ELBs can take full advantage of it. Soon, it will have support to replay the traffic of any service that generates traffic logs.
+
+As the project moves forward, we would love to see it evolve to be compatible with next-generation serverless runtimes such as Knative. We also hope to see other open source communities build similar toolchains for their infrastructure as serverless becomes more prevalent.
+
+### Getting started
+
+If you would like to test drive ShadowReader, check out the [GitHub repo][2]. The README contains how-to guides and a batteries-included [demo][13] that will deploy all the necessary resources to try out live replay in your AWS account.
+
+We would love to hear what you think and welcome contributions. See the [contributing guide][14] to get started!
+
+* * *
+
+_This article is based on "[How we fixed a Node.js memory leak by using ShadowReader to replay production traffic into QA][15]," published on the Edmunds Tech Blog with the help of Carlos Macasaet, Sharath Gowda, and Joey Davis. Yuki Sawa also presented this as "[ShadowReader—Serverless load tests for replaying production traffic][16]" at [SCaLE 17x][17], March 7-10 in Pasadena, Calif._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/shadowreader-serverless
+
+作者:[Yuki Sawa][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/yukisawa1/users/yongsanchez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys (Traffic lights at night)
+[2]: https://github.com/edmunds/shadowreader
+[3]: https://opensource.com/sites/default/files/uploads/shadowreader_incident1_0.png (Christmas Eve 2017 incident)
+[4]: https://opensource.com/sites/default/files/uploads/shadowreader_incident2.png (Christmas Eve 2017 incident)
+[5]: https://opensource.com/sites/default/files/uploads/shadowreader_99thpercentile.png (Slow increase in 99th percentile response time)
+[6]: https://opensource.com/sites/default/files/uploads/shadowreader_qa.png (QA results in ShadowReader)
+[7]: https://opensource.com/sites/default/files/uploads/shadowreader_heapdumps.png (Heap dumps show source of memory leak)
+[8]: https://opensource.com/sites/default/files/uploads/shadowreader_code.png (Collecting metrics)
+[9]: https://opensource.com/sites/default/files/uploads/shadowreader_leakplugged.png (Graph showing memory leak fixed)
+[10]: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
+[11]: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html
+[12]: https://opensource.com/sites/default/files/uploads/shadowreader_architecture.png (ShadowReader architecture)
+[13]: https://github.com/edmunds/shadowreader#live-replay
+[14]: https://github.com/edmunds/shadowreader/blob/master/CONTRIBUTING.md
+[15]: https://technology.edmunds.com/2018/08/25/Investigating-a-Memory-Leak-and-Introducing-ShadowReader/
+[16]: https://www.socallinuxexpo.org/scale/17x/speakers/yuki-sawa
+[17]: https://www.socallinuxexpo.org/
diff --git a/sources/tech/20190401 Build and host a website with Git.md b/sources/tech/20190401 Build and host a website with Git.md
new file mode 100644
index 0000000000..32a07d3490
--- /dev/null
+++ b/sources/tech/20190401 Build and host a website with Git.md
@@ -0,0 +1,226 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Build and host a website with Git)
+[#]: via: (https://opensource.com/article/19/4/building-hosting-website-git)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+Build and host a website with Git
+======
+Publishing your own website is easy if you let Git help you out. Learn
+how in the first article in our series about little-known Git uses.
+![web development and design, desktop and browser][1]
+
+[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git.
+
+Creating a website used to be both sublimely simple and a form of black magic all at once. Back in the old days of Web 1.0 (that's not what anyone actually called it), you could just open up any website, view its source code, and reverse engineer the HTML—with all its inline styling and table-based layout—and you felt like a programmer after an afternoon or two. But there was still the matter of getting the page you created on the internet, which meant dealing with servers and FTP and webroot directories and file permissions. While the modern web has become far more complex since then, self-publication can be just as easy (or easier!) if you let Git help you out.
+
+### Create a website with Hugo
+
+[Hugo][3] is an open source static site generator. Static sites are what the web used to be built on (if you go back far enough, they were _all_ the web was). There are several advantages to static sites: they're relatively easy to write because you don't have to code them, they're relatively secure because no code executes on the pages, and they can be quite fast because there's no processing aside from transferring whatever you have on the page.
+
+Hugo isn't the only static site generator out there. [Grav][4], [Pico][5], [Jekyll][6], [Podwrite][7], and many others provide an easy way to create a full-featured website with minimal maintenance. Hugo happens to be one with GitLab integration built in, which means you can generate and host your website with a free GitLab account.
+
+Hugo has some pretty big fans, too. For instance, if you've ever gone to the Let's Encrypt website, then you've used a site built with Hugo.
+
+![Let's Encrypt website][8]
+
+#### Install Hugo
+
+Hugo is cross-platform, and you can find installation instructions for MacOS, Windows, Linux, OpenBSD, and FreeBSD in [Hugo's getting started resources][9].
+
+If you're on Linux or BSD, it's easiest to install Hugo from a software repository or ports tree. The exact command varies depending on what your distribution provides, but on Fedora you would enter:
+
+```
+$ sudo dnf install hugo
+```
+
+Confirm you have installed it correctly by opening a terminal and typing:
+
+```
+$ hugo help
+```
+
+This prints all the options available for the **hugo** command. If you don't see that, you may have installed Hugo incorrectly or need to [add the command to your path][10].
+
+#### Create your site
+
+To build a Hugo site, you must have a specific directory structure, which Hugo will generate for you by entering:
+
+```
+$ hugo new site mysite
+```
+
+You now have a directory called **mysite**, and it contains the default directories you need to build a Hugo website.
+
+Git is your interface to get your site on the internet, so change directory to your new **mysite** folder and initialize it as a Git repository:
+
+```
+$ cd mysite
+$ git init .
+```
+
+Hugo is pretty Git-friendly, so you can even use Git to install a theme for your site. Unless you plan on developing the theme you're installing, you can use the **\--depth** option to clone only the latest state of the theme's source (substituting the Git URL of the theme you've chosen):
+
+```
+$ git clone --depth 1 \
+  <theme-git-url> themes/mero
+```
+
+
+Now create some content for your site:
+
+```
+$ hugo new posts/hello.md
+```
+
+Use your favorite text editor to edit the **hello.md** file in the **content/posts** directory. Hugo accepts Markdown files and converts them to themed HTML files at publication, so your content must be in [Markdown format][11].
+
+If you want to include images in your post, create a folder called **images** in the **static** directory. Place your images into this folder and reference them in your markup using the absolute path starting with **/images**. For example, with a hypothetical image file:
+
+```
+![A sunset over the water](/images/sunset.jpg)
+```
+
+#### Choose a theme
+
+You can find more themes at [themes.gohugo.io][12], but it's best to stay with a basic theme while testing. The canonical Hugo test theme is [Ananke][13]. Some themes have complex dependencies, and others don't render pages the way you might expect without complex configuration. The Mero theme used in this example comes bundled with a detailed **config.toml** configuration file, but (for the sake of simplicity) I'll provide just the basics here. Open the file called **config.toml** in a text editor and add three configuration parameters:
+
+```
+languageCode = "en-us"
+title = "My website on the web"
+theme = "mero"
+
+[params]
+ author = "Seth Kenlon"
+ description = "My hugo demo"
+```
+
+#### Preview your site
+
+You don't have to put anything on the internet until you're ready to publish it. While you work, you can preview your site by launching the local-only web server that ships with Hugo.
+
+```
+$ hugo server --buildDrafts --disableFastRender
+```
+
+Open a web browser and navigate to **http://localhost:1313** (Hugo's default local address) to see your work in progress.
+
+### Publish with Git to GitLab
+
+To publish and host your site on GitLab, create a repository for the contents of your site.
+
+To create a repository in GitLab, click on the **New Project** button in your GitLab Projects page. Create an empty repository called **yourGitLabUsername.gitlab.io** , replacing **yourGitLabUsername** with your GitLab user name or group name. You must use this scheme as the name of your project. If you want to add a custom domain later, you can.
+
+Do not include a license or a README file (because you've started a project locally, adding these now would make pushing your data to GitLab more complex, and you can always add them later).
+
+Once you've created the empty repository on GitLab, add it as the remote location for the local copy of your Hugo site, which is already a Git repository:
+
+```
+$ git remote add origin git@gitlab.com:skenlon/mysite.git
+```
+
+Create a GitLab site configuration file called **.gitlab-ci.yml** and enter these options:
+
+```
+image: monachus/hugo
+
+variables:
+ GIT_SUBMODULE_STRATEGY: recursive
+
+pages:
+ script:
+ - hugo
+ artifacts:
+ paths:
+ - public
+ only:
+ - master
+```
+
+The **image** parameter defines the container image GitLab uses to build your site. The other parameters are instructions telling GitLab's servers what actions to execute when you push new code to your remote repository. For more information on GitLab's CI/CD (Continuous Integration and Delivery) options, see the [CI/CD section of GitLab's docs][14].
+
+#### Set the excludes
+
+Your Git repository is configured, the commands to build your site on GitLab's servers are set, and your site is ready to publish. For your first Git commit, you must take a few extra precautions so you're not version-controlling files you don't intend to version-control.
+
+First, add the **/public** directory that Hugo creates when building your site to your **.gitignore** file. You don't need to manage the finished site in Git; all you need to track are your source Hugo files.
+
+```
+$ echo "/public" >> .gitignore
+```
+
+You can't maintain a Git repository within a Git repository without creating a Git submodule. For the sake of keeping this simple, move the embedded **.git** directory so that the theme is just a theme.
+
+Note that you _must_ add your theme files to your Git repository so GitLab will have access to the theme. Without committing your theme files, your site cannot successfully build.
+
+```
+$ mv themes/mero/.git ~/.local/share/Trash/files/
+```
+
+Alternately, use a **trash** command such as [Trashy][15]:
+
+```
+$ trash themes/mero/.git
+```
+
+Now you can add all the contents of your local project directory to Git and push it to GitLab:
+
+```
+$ git add .
+$ git commit -m 'hugo init'
+$ git push -u origin HEAD
+```
+
+### Go live with GitLab
+
+Once your code has been pushed to GitLab, take a look at your project page. An icon indicates GitLab is processing your build. It might take several minutes the first time you push your code, so be patient. However, don't be _too_ patient, because the icon doesn't always update reliably.
+
+![GitLab processing your build][16]
+
+While you're waiting for GitLab to assemble your site, go to your project settings and find the **Pages** panel. Once your site is ready, its URL will be provided for you. The URL is **yourGitLabUsername.gitlab.io/yourProjectName**. Navigate to that address to view the fruits of your labor.
+
+![Previewing Hugo site][17]
+
+If your site fails to assemble correctly, GitLab provides insight into the CI/CD pipeline logs. Review the error message for an indication of what went wrong.
+
+### Git and the web
+
+Hugo (or Jekyll or similar tools) is just one way to leverage Git as your web publishing tool. With server-side Git hooks, you can design your own Git-to-web pipeline with minimal scripting. With the community edition of GitLab, you can self-host your own GitLab instance, or you can use an alternative like [Gitolite][18] or [Gitea][19] and use this article as inspiration for a custom solution.
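+
+As a taste of what a hook-based pipeline can look like, here is a minimal sketch of a server-side **post-receive** hook; the repository path, work tree, webroot, and branch name are assumptions to adapt to your own server:
+
+```
+#!/bin/sh
+# post-receive: Git runs this in the bare repository after each push.
+# Assumed layout: a bare repo, a separate work tree for the site sources,
+# and Hugo writing the finished site into the web server's docroot.
+
+WORKTREE=/srv/git/mysite-worktree
+PUBLIC=/var/www/html
+
+# Force-check-out the pushed master branch into the work tree
+git --git-dir="$PWD" --work-tree="$WORKTREE" checkout -f master
+
+# Rebuild the site, publishing straight to the webroot
+hugo --source "$WORKTREE" --destination "$PUBLIC"
+```
+
+Have fun!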
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/building-hosting-website-git
+
+作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh (web development and design, desktop and browser)
+[2]: https://git-scm.com/
+[3]: http://gohugo.io
+[4]: http://getgrav.org
+[5]: http://picocms.org/
+[6]: https://jekyllrb.com
+[7]: http://slackermedia.info/podwrite/
+[8]: https://opensource.com/sites/default/files/uploads/letsencrypt-site.jpg (Let's Encrypt website)
+[9]: https://gohugo.io/getting-started/installing
+[10]: https://opensource.com/article/17/6/set-path-linux
+[11]: https://commonmark.org/help/
+[12]: https://themes.gohugo.io/
+[13]: https://themes.gohugo.io/gohugo-theme-ananke/
+[14]: https://docs.gitlab.com/ee/ci/#overview
+[15]: http://slackermedia.info/trashy
+[16]: https://opensource.com/sites/default/files/uploads/hugo-gitlab-cicd.jpg (GitLab processing your build)
+[17]: https://opensource.com/sites/default/files/uploads/hugo-demo-site.jpg (Previewing Hugo site)
+[18]: http://gitolite.com
+[19]: http://gitea.io
diff --git a/sources/tech/20190401 Meta Networks builds user security into its Network-as-a-Service.md b/sources/tech/20190401 Meta Networks builds user security into its Network-as-a-Service.md
new file mode 100644
index 0000000000..777108f639
--- /dev/null
+++ b/sources/tech/20190401 Meta Networks builds user security into its Network-as-a-Service.md
@@ -0,0 +1,87 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Meta Networks builds user security into its Network-as-a-Service)
+[#]: via: (https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all)
+[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
+
+Meta Networks builds user security into its Network-as-a-Service
+======
+
+### Meta Networks has a unique approach to the security of its Network-as-a-Service. A tight security perimeter is built around every user and the specific resources each person needs to access.
+
+![MF3d / Getty Images][1]
+
+Network-as-a-Service (NaaS) is growing in popularity and availability for those organizations that don’t want to host their own LAN or WAN, or that want to complement or replace their traditional network with something far easier to manage.
+
+With NaaS, a service provider creates a multi-tenant wide area network comprised of geographically dispersed points of presence (PoPs) connected via high-speed Tier 1 carrier links that create the network backbone. The PoPs peer with cloud services to facilitate customer access to cloud applications such as SaaS offerings, as well as to infrastructure services from the likes of Amazon, Google and Microsoft. User organizations connect to the network from whatever facilities they have — data centers, branch offices, or even individual client devices — typically via SD-WAN appliances and/or VPNs.
+
+Numerous service providers now offer Network-as-a-Service. As the network backbone and the PoPs become more of a commodity, the providers are distinguishing themselves on other value-added services, such as integrated security or WAN optimization.
+
+**[ Also read:[What to consider when deploying a next generation firewall][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3]. ]**
+
+Ever since its launch about a year ago, [Meta Networks][4] has staked out security as its primary value-add. What’s different about the Meta NaaS is the philosophy that the network is built around users, not around specific sites or offices. Meta Networks does this by building a software-defined perimeter (SDP) for each user, giving workers micro-segmented access to only the applications and network resources they need. The vendor was a little ahead of its time with SDP, but the market is starting to catch up. Companies are beginning to show interest in SDP as a VPN replacement or VPN alternative.
+
+Meta NaaS has a zero-trust architecture where each user is bound by an SDP. Each user has a unique, fixed identity no matter from where they connect to this network. The SDP security framework allows one-to-one network connections that are dynamically created on demand between the user and the specific resources they need to access. Everything else on the NaaS is invisible to the user. No access is possible unless it is explicitly granted, and it’s continuously verified at the packet level. This model effectively provides dynamically provisioned secure network segmentation.
+
+## SDP tightly controls access to specific resources
+
+This approach works very well when a company wants to securely connect employees, contractors, and external partners to specific resources on the network. For example, one of Meta Networks’ customers is Via Transportation, a New York-based company that has a ride-sharing platform. The company operates its own ride-sharing services in various cities in North America and Europe, and it licenses its technology to other transit systems around the world.
+
+Via’s operations are completely cloud-native, and so it has no legacy-style site-based WAN to connect its 400-plus employees and contractors to their cloud-based applications. Via’s partners, primarily transportation operators in different cities and countries, also need controlled access to specific portions of Via’s software platform to manage rideshares. Giving each group of users access to the applications they need — and _only_ to the ones they specifically need — was a challenge using a VPN. Using the Meta NaaS instead gives Via more granular control over who has what access.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**
+
+Via’s employees with managed devices connect to the Meta NaaS using client software on the device, and they are authenticated using Okta and a certificate. Contractors and customers with unmanaged devices use a browser-based access solution from Meta that doesn’t require installation or setup. New users can be on-boarded quickly and assigned granular access policies based on their role. Integration with Okta provides information that facilitates identity-based access policies. Once users connect to the network, they can see only the applications and network resources that their policy allows; everything else is invisible to them under the SDP architecture.
+
+For Via, there are several benefits to the Meta NaaS approach. First and foremost, the company doesn’t have to own or operate its own WAN infrastructure. Everything is a managed service located in the cloud — the same business model that Via itself espouses. Next, this solution scales easily to support the company’s growth. Meta’s security integrates with Via’s existing identity management system, so identities and access policies can be centrally managed. And finally, the software-defined perimeter hides resources from unauthorized users, creating security by obscurity.
+
+## Tightening security even further
+
+Meta Networks further tightens the security around the user by doing device posture checks — “NAC lite,” if you will. A customer can define the criteria that devices have to meet before they are allowed to connect to the NaaS. For example, the check could be whether a security certificate is installed, if a registry key is set to a specific value, or if anti-virus software is installed and running. It’s one more way to enforce company policies on network access.
+
+When end users use the browser-based method to connect to the Meta NaaS, all activity is recorded in a rich log so that everything can be audited, but also to set alerts and look for anomalies. This data can be exported to a SIEM if desired, but Meta has its own notification and alert system for security incidents.
+
+Meta Networks recently implemented some new features around management, including smart groups and support for the System for Cross-Domain Identity Management (SCIM) protocol. The smart groups feature provides the means to add an extra notation or tag to elements such as devices, services, network subnets or segments, and basically everything that’s in the system. These tags can then be applied to policy. For example, a customer could label some of their services as a production, staging, or development environment. Then a policy could be implemented to say that only sales people can access the production environment. Smart groups are just one more way to get even more granular about policy.
+
+The SCIM support makes on-boarding new users simple. SCIM is a protocol that is used to synchronize and provision users and identities from a third-party identity provider such as Okta, Azure AD, or OneLogin. A customer can use SCIM to provision all the users from the IdP into the Meta system, synchronize in real time the groups and attributes, and then use that information to build the access policies inside Meta NaaS.
+
+These and other security features fit into Meta Networks’ vision that the security perimeter goes with you no matter where you are, and the perimeter includes everything that was formerly delivered through the data center. It is delivered through the cloud to your client device with always-on security. It’s a broad approach to SDP and a unique approach to NaaS.
+
+**Reviews: 4 free, open-source network monitoring tools**
+
+ * [Icinga: Enterprise-grade, open-source network-monitoring that scales][6]
+ * [Nagios Core: Network-monitoring software with lots of plugins, steep learning curve][7]
+ * [Observium open-source network monitoring tool: Won’t run on Windows but has a great user interface][8]
+ * [Zabbix delivers effective no-frills network monitoring][9]
+
+
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all
+
+作者:[Linda Musthaler][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Linda-Musthaler/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/10/firewall_network-security_lock_padlock_cyber-security-100776989-large.jpg
+[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://www.metanetworks.com/
+[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[6]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb
+[7]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html
+[8]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb
+[9]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190401 Top Ten Reasons to Think Outside the Router -2- Simplify and Consolidate the WAN Edge.md b/sources/tech/20190401 Top Ten Reasons to Think Outside the Router -2- Simplify and Consolidate the WAN Edge.md
new file mode 100644
index 0000000000..8177390648
--- /dev/null
+++ b/sources/tech/20190401 Top Ten Reasons to Think Outside the Router -2- Simplify and Consolidate the WAN Edge.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge)
+[#]: via: (https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all)
+[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
+
+Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge
+======
+
+![istock][1]
+
+We’re now nearing the end of our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as [Silver Peak][2] counts down the *Top Ten Reasons to Think Outside the Router*. Click for the [#3][3], [#4][4], [#5][5], [#6][6], [#7][7], [#8][8], [#9][9] and [#10][10] reasons to retire traditional branch routers.
+
+_The #2 reason it’s time to retire branch routers: conventional router-centric WAN architectures are rigid and complex to manage!_
+
+### **Challenges of conventional WAN edge architecture**
+
+A conventional WAN edge architecture consists of a disparate array of devices, including routers, firewalls, WAN optimization appliances, wireless controllers and so on. This architecture was born in the era when applications were hosted exclusively in the data center. With this model, deploying new applications, provisioning new policies or making policy changes is an arduous and time-consuming task. Configuration, deployment and management require specialized on-premises IT expertise to manually program and configure each device through its own management interface, often using an arcane CLI. This process has hit the wall in the cloud era, proving too slow, complex, error-prone, costly and inefficient.
+
+As cloud-first enterprises increasingly migrate applications and infrastructure to the cloud, the traditional WAN architecture is no longer efficient. IT is now faced with a new set of challenges when it comes to connecting users securely and directly to the applications that run their businesses:
+
+ * How do you manage and consistently apply QoS and security policies across the distributed enterprise?
+ * How do you intelligently automate traffic steering across multiple WAN transport services based on application type and unique requirements?
+ * How do you deliver the highest quality of experiences to users when running applications over broadband, especially voice and video?
+ * How do you quickly respond to continuously changing business requirements?
+
+
+
+These are just some of the new challenges facing IT teams in the cloud era. To be successful, enterprises will need to shift toward a business-first networking model where top-down business intent drives how the network behaves. And they would be well served to deploy a business-driven unified [SD-WAN][11] edge platform to transform their networks from a business constraint to a business accelerant.
+
+### **Shifting toward a business-driven WAN edge platform**
+
+A business-driven WAN edge platform is designed to enable enterprises to realize the full transformation promise of the cloud. It is a model where top-down business intent is the driver, not bottom-up technology constraints. It’s outcome-oriented, utilizing automation, artificial intelligence (AI) and machine learning to get smarter every day. Through this continuous adaptation, and the ability to improve the performance of underlying transport and applications, it delivers the highest quality of experience to end users. This is in stark contrast to the router-centric model, where application policies must be shoehorned to fit within the constraints of the network. A business-driven, top-down approach continuously stays in compliance with business intent and centrally defined security policies.
+
+### **A unified platform for simplifying and consolidating the WAN Edge**
+
+Achieving a business-driven architecture requires a unified platform, designed from the ground up as one system, uniting [SD-WAN][12], [firewall][13], [segmentation][14], [routing][15], [WAN optimization][16], and application visibility and control in a single platform. Furthermore, it requires [centralized orchestration][17] with complete observability of the entire wide area network through a single pane of glass.
+
+The use case “[Simplifying WAN Architecture][18]” describes in detail key capabilities of the Silver Peak [Unity EdgeConnect™][19] SD-WAN edge platform. It illustrates how EdgeConnect enables enterprises to simplify branch office WAN edge infrastructure and streamline deployment, configuration and ongoing management.
+
+![][20]
+
+### **Business and IT outcomes of a business-driven SD-WAN**
+
+ * Accelerates deployment, leveraging consistent hardware, software and cloud delivery models
+ * Saves up to 40 percent on hardware, software, installation, management and maintenance costs when replacing traditional routers
+ * Protects existing investment in security through simplified service chaining with our broadest ecosystem partners: [Check Point][21], [Forcepoint][22], [McAfee][23], [OPAQ][24], [Palo Alto Networks][25], [Symantec][26] and [Zscaler][27].
+ * Reduces footprint by 75 percent as it unifies network functions into a single platform
+ * Saves more than 50 percent on WAN optimization costs by selectively applying it when and where it’s needed, on an application-by-application basis
+ * Accelerates time-to-resolution of application or network performance bottlenecks from days to minutes with simple, visual application and WAN analytics
+
+
+
+Calculate your [ROI][28] today and learn why the time is now to [think outside the router][29] and deploy the business-driven Silver Peak EdgeConnect SD-WAN edge platform!
+
+![][30]
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all
+
+作者:[Rami Rammaha][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Rami-Rammaha/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/silverpeak_main-100792490-large.jpg
+[2]: https://www.silver-peak.com/why-silver-peak
+[3]: http://blog.silver-peak.com/think-outside-the-router-reason-3-mpls-contract-renewal
+[4]: http://blog.silver-peak.com/top-ten-reasons-to-think-outside-the-router-4-broadband-is-used-only-for-failover
+[5]: http://blog.silver-peak.com/think-outside-the-router-reason-5-manual-cli-based-configuration-and-management
+[6]: http://blog.silver-peak.com/https-blog-silver-peak-com-think-outside-the-router-reason-6
+[7]: http://blog.silver-peak.com/think-outside-the-router-reason-7-exorbitant-router-support-and-maintenance-costs
+[8]: http://blog.silver-peak.com/think-outside-the-router-reason-8-garbled-voip-pixelated-video
+[9]: http://blog.silver-peak.com/think-outside-router-reason-9-sub-par-saas-performance
+[10]: http://blog.silver-peak.com/think-outside-router-reason-10-its-getting-cloudy
+[11]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[12]: https://www.silver-peak.com/sd-wan
+[13]: https://www.silver-peak.com/products/unity-edge-connect/orchestrated-security-policies
+[14]: https://www.silver-peak.com/resource-center/centrally-orchestrated-end-end-segmentation
+[15]: https://www.silver-peak.com/products/unity-edge-connect/bgp-routing
+[16]: https://www.silver-peak.com/products/unity-boost
+[17]: https://www.silver-peak.com/products/unity-orchestrator
+[18]: https://www.silver-peak.com/use-cases/simplifying-wan-architecture
+[19]: https://www.silver-peak.com/products/unity-edge-connect
+[20]: https://images.idgesg.net/images/article/2019/04/sp_linkthrough-copy-100792505-large.jpg
+[21]: https://www.silver-peak.com/resource-center/check-point-silver-peak-securing-internet-sd-wan
+[22]: https://www.silver-peak.com/company/tech-partners/forcepoint
+[23]: https://www.silver-peak.com/company/tech-partners/mcafee
+[24]: https://www.silver-peak.com/company/tech-partners/opaq-networks
+[25]: https://www.silver-peak.com/resource-center/palo-alto-networks-and-silver-peak
+[26]: https://www.silver-peak.com/company/tech-partners/symantec
+[27]: https://www.silver-peak.com/resource-center/zscaler-and-silver-peak-solution-brief
+[28]: https://www.silver-peak.com/sd-wan-interactive-roi-calculator
+[29]: https://www.silver-peak.com/think-outside-router
+[30]: https://images.idgesg.net/images/article/2019/04/roi-100792506-large.jpg
diff --git a/sources/tech/20190402 3 Essentials for Achieving Resiliency at the Edge.md b/sources/tech/20190402 3 Essentials for Achieving Resiliency at the Edge.md
new file mode 100644
index 0000000000..38cbc70e94
--- /dev/null
+++ b/sources/tech/20190402 3 Essentials for Achieving Resiliency at the Edge.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (3 Essentials for Achieving Resiliency at the Edge)
+[#]: via: (https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all)
+[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
+
+3 Essentials for Achieving Resiliency at the Edge
+======
+
+### Edge computing requires different thinking and management to ensure the always-on availability that users have come to demand.
+
+![iStock][1]
+
+> “The IT industry has done a good job of making robust data centers that are highly manageable, highly secure, with redundant systems,” [says Kevin Brown][2], SVP Innovation and CTO for Schneider Electric’s Secure Power Division.
+
+However, he continues, companies then connect these data centers to messy edge closets and server rooms, which over time have become "micro mission-critical data centers" in their own right — making system availability vital. If these sites are not designed and managed correctly, the results can be disastrous, with users unable to connect to business-critical applications.
+
+To avoid unacceptable downtime, companies should incorporate three essential ingredients into their edge computing deployments: remote management, physical security, and rapid deployments.
+
+**Remote management**
+
+Depending on the company’s size, staff could be managing several — or many — edge sites. Not only is this time-consuming and costly, it’s also complex, especially if protocols differ from site to site.
+
+While some organizations might deploy traditional remote monitoring technology to manage these sites, it’s important to note that these tools don’t provide real-time status updates, are largely reactive rather than proactive, and are sometimes limited in terms of data output.
+
+Coupled with the need to overcome these limitations, the economics for managing edge sites necessitate that organizations consider a digital, or cloud-based, solution. In addition to cost savings, these platforms provide:
+
+ * Simplification in monitoring across edge sites
+ * Real-time visibility, right down to any device on the network
+ * Predictive analytics, including data-driven intelligence and recommendations to ensure proactive service delivery
+
+
+
+**Physical security**
+
+Small, local edge computing sites are often situated within larger corporate or wide-open spaces, sometimes in highly accessible, shared offices and public areas. And sometimes they’re set up on-the-fly for a time-sensitive project.
+
+However, when there is no dedicated location and open racks are unsecured, the risks of malicious and accidental incidents escalate.
+
+To prevent unauthorized access to IT equipment at edge computing sites, proper physical security is critical and requires:
+
+ * Physical space monitoring, with environmental sensors for temperature and humidity
+ * Access control, with biometric sensors as an option
+ * Audio and video surveillance and monitoring with recording
+ * Installation of IT equipment within a secure enclosure, if possible
+
+
+
+**Rapid deployments**
+
+The [benefits of edge computing][3] are significant, especially the ability to bring bandwidth-intensive computing closer to the user, which leads to faster speed to market and greater productivity.
+
+Create a holistic plan that will enable the company to quickly deploy edge sites, while ensuring resiliency and reliability. That means having a standardized, repeatable process including:
+
+ * Pre-configured, integrated equipment that combines server, storage, networking, and software in a single enclosure — a prefabricated micro data center, if you will
+ * Designs that specify supporting racks, UPSs, PDUs, cable management, airflow practices, and cooling systems
+
+
+
+These best practices as well as a balanced, systematic approach to edge computing deployments will ensure the always-on availability that today’s employees and users have come to expect.
+
+Learn how to enable resiliency within your edge computing deployment at [APC.com][4].
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all
+
+作者:[Anne Taylor][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Anne-Taylor/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/istock-900882382-100792635-large.jpg
+[2]: https://www.youtube.com/watch?v=IfsCTFSH6Jc
+[3]: https://www.networkworld.com/article/3342455/how-edge-computing-will-bring-business-to-the-next-level.html
+[4]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
diff --git a/sources/tech/20190402 Automate password resets with PWM.md b/sources/tech/20190402 Automate password resets with PWM.md
new file mode 100644
index 0000000000..0bc7012c21
--- /dev/null
+++ b/sources/tech/20190402 Automate password resets with PWM.md
@@ -0,0 +1,94 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Automate password resets with PWM)
+[#]: via: (https://opensource.com/article/19/4/automate-password-resets-pwm)
+[#]: author: (James Mawson https://opensource.com/users/dxmjames)
+
+Automate password resets with PWM
+======
+PWM puts responsibility for password resets in users' hands, freeing IT
+for more pressing tasks.
+![Password][1]
+
+One of the things that can be "death by a thousand cuts" for any IT team's sanity and patience is constantly being asked to reset passwords.
+
+The best way we've found to handle this is to ditch your hashing algorithms and store your passwords in plaintext so that your users can retrieve them at any time.
+
+Ha! I am, of course, kidding. That's a terrible idea.
+
+When your users forget their passwords, you'll still need to reset them. But is there a way to break free from the monotonous, repetitive task of doing it manually?
+
+### PWM puts password resets in users' hands
+
+[PWM][2] is an open source ([GPLv2][3]) [JavaServer Pages][4] application that provides a webpage where users can submit their own password resets. If certain conditions are met—which you can configure—PWM will send a password reset instruction to whichever directory service you've connected it to.
+
+![PWM password reset screen][5]
+
+One thing that's great about PWM is it's very easy to add it to an existing network. If you're largely happy with what you've already built—just sick of processing password requests manually—you can just throw PWM into the mix.
+
+PWM works with any implementation of [LDAP][6] and is written to run on [Apache Tomcat][7]. Once you get it up and running, you can administer it through a browser-based dashboard.
+
+### Why PWM is better than Microsoft SSPR
+
+As much as our team prefers open source, we still have to deal with Windows networks. Of course, Microsoft has its own password-reset tool, called Self Service Password Reset (SSPR). But I prefer PWM, and not just because of a general preference for open source. I believe PWM is better for my use case for the following reasons:
+
+ * **SSPR has a very complex licensing system**. You need different products depending on what servers you're running and whose metal they're running on. This is a constraint on your flexibility and a whole extra pain in the neck when it's time to move to new architecture. For [the busy admin who wants to go home on time][8], it's extra bureaucracy to get the purchase approved. PWM just works on what it's configured to work on at no cost.
+
+ * **PWM is not just for Windows**. It works with any kind of LDAP server. So, it's one less part you need to worry about if you ever stop using Windows for a certain role. It also means that, once you've gotten the hang of it, you have something in your bag of tricks that you can use in many different environments.
+
+ * **PWM is easy to install**. If you know how to install Linux as a virtual machine—and, let's face it, if you're running a network, you probably do—then you're already most of the way there.
+
+
+
+
+PWM can run on Windows, but we prefer to include it in a Windows network by running it on a Linux virtual machine, [for example, Ubuntu Server 16.04][9].
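+
+On a VM like that, getting a first instance running is mostly a matter of putting the PWM web application into Tomcat. Here's a minimal sketch assuming a recent Ubuntu with the **tomcat9** package; the **pwm.war** filename depends on the release you download from the PWM project:
+
+```
+# Install a Java runtime and Tomcat from the distribution repositories
+$ sudo apt install default-jre tomcat9
+
+# Drop the PWM web application into Tomcat's webapps directory;
+# Tomcat unpacks .war files automatically
+$ sudo cp pwm.war /var/lib/tomcat9/webapps/
+$ sudo systemctl restart tomcat9
+```
+
+Once Tomcat is back up, PWM's browser-based configuration wizard should be reachable at **http://your-server:8080/pwm**.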
+
+### Risks and rewards of automation
+
+Password resets are an attack vector, so be thoughtful about where and how you use PWM. Automating your password resets can mean an attacker is potentially just one unencrypted email connection away from resetting a password.
+
+To some extent, automating your password resets trades a bit of security for some convenience. So maybe this isn't the right way to handle C-suite user accounts that approve large payments.
+
+On the other hand, manual resets are not 100% secure either—they can be gamed with targeted attacks like spear phishing and social engineering. It's much easier to fall for these scams if your team gets frequent reset requests and is sick of dealing with them. You may benefit from automating the bulk of lower-risk requests so you can focus on protecting the higher-risk accounts manually; this is possible given the time you can save using PWM.
+
+Some of the risks associated with shifting resets to users can be mitigated with PWM's built-in features, such as insisting users verify their password reset request by email or SMS. You can also make PWM accessible only on the intranet.
+
+![PWM configuration options][10]
+
+PWM doesn't store any passwords, so that's one less headache. It does, however, store answers to users' secret questions in a MySQL database that can be configured to be stored locally or on a separate server, depending on your preference.
+
+There are a ton of ways to make PWM look and feel like a polished part of your team's infrastructure. With a little bit of CSS know-how, you can customize the user interface for your business' branding. There are also more options for implementation than you can shake a stick at.
+
+### Wrapping up
+
+PWM is a great open source project: it's actively developed, and it has a helpful online community. It's a great alternative to Microsoft's Azure SSPR solution for small to midsized businesses that have to keep a tight grip on the purse strings, and it slots neatly into any existing Active Directory infrastructure. It also saves IT's time by outsourcing this mundane task to users.
+
+I advise every network admin to dive in and have a look at the cool stuff PWM offers. Check out the [getting started resources][11] and reach out to the community if you have any questions.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/automate-password-resets-pwm
+
+作者:[James Mawson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dxmjames
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/password.jpg?itok=ec6z6YgZ (Password)
+[2]: https://github.com/pwm-project/pwm
+[3]: https://github.com/pwm-project/pwm/blob/master/LICENSE
+[4]: https://www.oracle.com/technetwork/java/index-jsp-138231.html
+[5]: https://opensource.com/sites/default/files/uploads/pwm_password-reset.png (PWM password reset screen)
+[6]: https://opensource.com/business/14/5/top-4-open-source-ldap-implementations
+[7]: http://tomcat.apache.org/
+[8]: https://opensource.com/article/18/7/tools-admin
+[9]: https://blog.dxmtechsupport.com.au/adding-pwm-password-reset-tool-to-windows-network/
+[10]: https://opensource.com/sites/default/files/uploads/pwm-configuration.png (PWM configuration options)
+[11]: https://github.com/pwm-project/pwm#links
diff --git a/sources/tech/20190402 How to Install and Configure Plex on Ubuntu Linux.md b/sources/tech/20190402 How to Install and Configure Plex on Ubuntu Linux.md
new file mode 100644
index 0000000000..8b5010a2ec
--- /dev/null
+++ b/sources/tech/20190402 How to Install and Configure Plex on Ubuntu Linux.md
@@ -0,0 +1,202 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Install and Configure Plex on Ubuntu Linux)
+[#]: via: (https://itsfoss.com/install-plex-ubuntu)
+[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
+
+How to Install and Configure Plex on Ubuntu Linux
+======
+
+If you are a media hog with a big collection of movies, photos or music, the capabilities below would be very handy.
+
+ * Share media with family and other people.
+ * Access media from different devices and platforms.
+
+
+
+Plex ticks all of those boxes and more. Plex is a client-server media player system with additional features. Plex supports a wide array of platforms, both for the server and the player. No wonder it is considered one of the [best media servers for Linux][1].
+
+Note: Plex is not a completely open source media player. We have covered it because it is one of the most frequently [requested tutorials][2].
+
+### Install Plex on Ubuntu
+
+For this guide, I am installing Plex on Elementary OS, an Ubuntu-based distribution. You can still follow along if you are installing it on a headless Linux machine.
+
+Go to the Plex [downloads][3] page, select Ubuntu 64-bit (I would not recommend installing it on a 32-bit CPU) and download the .deb file.
+
+![][4]
+
+[Download Plex][3]
+
+You can [install the .deb file][5] by just clicking on the package. If it does not work, you can use an installer like **Eddy** or **[GDebi][6].**
+
+You can also install it via the terminal using dpkg as shown below.
+
+#### Install Plex on a headless Linux system
+
+For a [headless system][7], you can use **wget** to download the .deb package. This example uses the link that was current for Ubuntu at the time of writing; be sure to use the up-to-date version supplied on the Plex website.
+
+```
+wget https://downloads.plex.tv/plex-media-server-new/1.15.1.791-8bec0f76c/debian/plexmediaserver_1.15.1.791-8bec0f76c_amd64.deb
+```
+
+The above command downloads the 64-bit .deb package. Once downloaded, install the package using the following command.
+
+```
+sudo dpkg -i plexmediaserver*.deb
+```
+
+#### Enable version upgrades for Plex
+
+The .deb installation does create an entry in the sources.list.d directory, but [repository updates][8] are not enabled by default and the contents of _plexmediaserver.list_ are commented out. This means that if there is a new Plex version available, your system will not be able to update your Plex install.
+
+To enable repository updates, you can either remove the # from the line starting with deb or run the following command.
+
+```
+echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
+```
+
+The above command updates the entry in the sources.list.d directory.
+
+We also need to add Plex’s public key to facilitate secure and safe downloads. You can try running the command below; unfortunately, this **did not work for me** and the [GPG][9] key was not added.
+
+```
+curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
+```
+
+To fix this issue, I found the key hash in the error message after running _sudo apt-get update._
+
+![][10]
+
+```
+97203C7B3ADCA79D
+```
+
+The above hash can be used to fetch the key from a keyserver. Run the commands below to add the key.
+
+```
+# fetch the key by its ID from a public keyserver (assuming the key is
+# published there); the ID comes from the error message above
+gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 97203C7B3ADCA79D
+```
+
+```
+gpg --export --armor 97203C7B3ADCA79D | sudo apt-key add -
+```
+
+You should see an **OK** once the key is added.
+
+Run the below command to verify that the repository is added to the sources list successfully.
+
+```
+sudo apt update
+```
+
+To update Plex to the newest version available on the repository, run the below [apt-get command][11].
+
+```
+sudo apt-get --only-upgrade install plexmediaserver
+```
+
+Once installed, the Plex service starts running automatically. You can check whether it’s running with this command in a terminal.
+
+```
+systemctl status plexmediaserver
+```
+
+If the service is running properly you should see something like this.
+
+![Check the status of Plex Server][12]
+
+### Configuring Plex as a Media Server
+
+The Plex server is accessible on ports 32400 and 32401. Navigate to **localhost:32400** or **localhost:32401** using a browser. You should replace ‘localhost’ with the IP address of the machine running the Plex server if you are going headless.
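+
+If the machine runs a firewall such as **ufw**, you may need to open those ports before another device can reach the server. For example:
+
+```
+$ sudo ufw allow 32400/tcp
+$ sudo ufw allow 32401/tcp
+```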
+
+The first time, you are required to sign up for or log in to your Plex account.
+
+![Plex Login Page][13]
+
+Now you can go ahead and give a friendly name to your Plex Server. This name will be used to identify the server over the network. You can also have multiple Plex servers identified by different names on the same network.
+
+![Plex Server Setup][14]
+
+Now it is finally time to add all your collections to the Plex library. Here your collections will automatically get indexed and organized.
+
+You can click the add library button to add all your collections.
+
+![Add Media Library][15]
+
+![][16]
+
+Navigate to the location of the media you want to add to Plex.
+
+![][17]
+
+You can add multiple folders and different types of media.
+
+When you are done, you are taken to a very slick-looking Plex UI. You can already see the contents of your libraries showing up on the home screen. It automatically selects a thumbnail and fills in the metadata.
+
+![][18]
+
+You can head over to the settings and configure some of the options. You can create new users (**only with Plex Pass**), adjust the transcoding settings, set scheduled library updates, and more.
+
+If you have a public IP assigned to your router by the ISP, you can also enable Remote Access. This means you can be traveling and still access your libraries at home, provided your Plex server is running all the time.
+
+Now you are all set up and ready, but how do you access your media? Yes, you can access it through your browser, but Plex has a presence on almost every platform you can think of, including Android Auto.
+
+### Accessing Your Media and Plex Pass
+
+You can access your media either by using the web browser (the same address you used earlier) or Plex’s suite of apps. The web browser experience is pretty good on computers and could be better on phones.
+
+Plex apps provide a much better experience. But the iOS and Android apps need to be activated with a [Plex Pass][19]. Without activation, you are limited to one minute of video playback, and images are watermarked.
+
+Plex Pass is a premium subscription service which activates the mobile apps and enables more features. You can also individually activate your apps tied to a particular phone for a cheaper price. You can also create multiple users and set permissions with the Plex Pass which is a very handy feature.
+
+You can check out all the benefits of Plex Pass [here][19].
+
+_Note: Plex Media Player is free on all platforms other than the Android and iOS apps._
+
+**Conclusion**
+
+That’s about all you need to know for the first-time configuration. Go ahead and explore the Plex UI; it also gives you access to free online content like podcasts and music through Tidal.
+
+There are alternatives to Plex, like [Jellyfin][20], which is free, but its native apps are in beta and on the road to being published in the app stores. You can also use a NAS with any of the freely available media centers like Kodi, OpenELEC or even VLC media player.
+
+Here is an article listing the [best Linux media servers.][1]
+
+Let us know your experience with Plex and what you use for your media sharing needs.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-plex-ubuntu
+
+作者:[Chinmay][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/chinmay/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/best-linux-media-server/
+[2]: https://itsfoss.com/request-tutorial/
+[3]: https://www.plex.tv/media-server-downloads/
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/downloads-plex.png?ssl=1
+[5]: https://itsfoss.com/install-deb-files-ubuntu/
+[6]: https://itsfoss.com/gdebi-default-ubuntu-software-center/
+[7]: https://www.lions-wing.net/lessons/servers/home-server.html
+[8]: https://itsfoss.com/ubuntu-repositories/
+[9]: https://www.gnupg.org/
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/Screenshot-from-2019-03-26-07-21-05-1.png?ssl=1
+[11]: https://itsfoss.com/apt-get-linux-guide/
+[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/check-plex-service.png?ssl=1
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/plex-home-page.png?ssl=1
+[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Plex-server-setup.png?ssl=1
+[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/add-library.png?ssl=1
+[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/add-plex-library.png?ssl=1
+[17]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/add-plex-folder.png?ssl=1
+[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/Screenshot-from-2019-03-17-22-27-56.png?ssl=1
+[19]: https://www.plex.tv/plex-pass/
+[20]: https://jellyfin.readthedocs.io/en/latest/
diff --git a/sources/tech/20190402 Intel-s Agilex FPGA family targets data-intensive workloads.md b/sources/tech/20190402 Intel-s Agilex FPGA family targets data-intensive workloads.md
new file mode 100644
index 0000000000..686a2be6a4
--- /dev/null
+++ b/sources/tech/20190402 Intel-s Agilex FPGA family targets data-intensive workloads.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Intel's Agilex FPGA family targets data-intensive workloads)
+[#]: via: (https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all)
+[#]: author: (Marc Ferranti https://www.networkworld.com)
+
+Intel's Agilex FPGA family targets data-intensive workloads
+======
+Agilex processors are the first Intel FPGAs to use 10nm manufacturing, achieving a performance boost for AI, financial and IoT workloads
+![Intel][1]
+
+After teasing out details about the technology for a year and half under the code name Falcon Mesa, Intel has unveiled the Agilex family of FPGAs, aimed at data-center and network applications that are processing increasing amounts of data for AI, financial, database and IoT workloads.
+
+The Agilex family, expected to start appearing in devices in the third quarter, is part of a new wave of more easily programmable FPGAs that is beginning to take an increasingly central place in computing as data centers are called on to handle an explosion of data.
+
+**Learn about edge networking**
+
+ * [How edge networking and IoT will reshape data centers][2]
+ * [Edge computing best practices][3]
+ * [How edge computing can help secure the IoT][4]
+
+
+
+FPGAs, or field programmable gate arrays, are built around a matrix of configurable logic blocks (CLBs) linked via programmable interconnects that can be programmed after manufacturing – and even reprogrammed after being deployed in devices – to run algorithms written for specific workloads. They can thus be more efficient on a performance-per-watt basis than general-purpose CPUs, even while driving higher performance.
+
+### Accelerated computing takes center stage
+
+CPUs can be packaged with FPGAs, offloading specific tasks to them and enhancing overall data-center and network efficiency. The concept, known as accelerated computing, is increasingly viewed by data-center and network managers as a cost-efficient way to handle increasing data and network traffic.
+
+"This data is creating what I call an innovation race across from the edge to the network to the cloud," said Dan McNamara, general manager of the Programmable Solutions Group (PSG) at Intel. "We believe that we’re in the largest adoption phase for FPGAs in our history."
+
+The Agilex family is the first line of FPGAs developed from the ground up in the wake of [Intel’s $16.7 billion 2015 acquisition of Altera.][5] It's the first FPGA line to be made with Intel's 10nm manufacturing process, which adds billions of transistors to the FPGAs compared to earlier generations. Along with Intel's second-generation HyperFlex architecture, it helps give Agilex 40 percent higher performance than the company's current high-end FPGA family, the Stratix 10 line, Intel says.
+
+HyperFlex architecture includes additional registers – places on a processor that temporarily hold data – called Hyper-Registers, located everywhere throughout the core fabric to enhance bandwidth as well as area and power efficiency.
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]**
+
+### Memory coherency is key
+
+Agilex FPGAs are also the first processors to support [Compute Express Link (CXL), a high-speed interconnect][7] designed to maintain memory coherency among CPUs like Intel's second-generation Xeon Scalable processors and purpose-built accelerators like FPGAs and GPUs. It ensures that different processors don't clash when trying to write to the same memory space, essentially allowing CPUs and accelerators to share memory.
+
+"By having this CXL bus you can actually write applications that will use all the real memory so what that does is it simplifies the programming model in large memory workloads," said Patrick Moorhead, founder and principal at Moor Insights & Strategy.
+
+The ability to integrate FPGAs, other accelerators and CPUs is key to Intel's accelerated computing strategy for the data center. Intel calls it "any to any" integration.
+
+### 'Any-to-any' integration is crucial for the data center
+
+The Agilex family uses embedded multi-die interconnect bridge (EMIB) packaging technology to integrate, for example, Xeon Scalable CPUs or ASICs – special-function processors that are not reprogrammable – alongside FPGA fabric. Intel last year bought eASIC, a maker of structured ASICs, which the company describes as an intermediary technology between FPGAs and ASICs. The idea is to deliver products that offer a mix of functionality to achieve optimal cost and performance efficiency for data-intensive workloads.
+
+Intel underscored the importance of processor integration for the data center by unveiling Agilex on Tuesday at its Data Centric Innovation Day in San Francisco, when it also discussed plans for its second generation Xeon Scalable line.
+
+Traditionally, FPGAs were mainly used in embedded devices, communications equipment and in hyperscale data centers, and not sold directly to enterprises. But several products based on Intel Stratix 10 and Arria 10 FPGAs are now being sold to enterprises, including in Dell EMC and Fujitsu off-the-shelf servers.
+
+Making FPGAs easier to program is key to making them more mainstream. "What's really, really important is the software story," said Intel's McNamara. "None of this really matters if we can't generate more users and make it easier to program FPGA's."
+
+Intel's Quartus Prime design tool will be available for Agilex hardware developers but the real breakthrough for FPGA software development will be Intel's OneAPI concept, announced in December.
+
+"OneAPI is an effort by Intel to be able to have programmers write to OneAPI and OneAPI determines the best piece of silicon to run it on," Moorhead said. "I lovingly refer to it as the magic API; this is the big play I always thought Intel was gonna be working on ever since it bought Altera. The first thing I expect to happen are the big enterprise developers like SAP and Oracle to write to Agilex, then smaller ISVs, then custom enterprise applications."
+
+![][8]
+
+Intel plans three different product lines in the Agilex family – from low to high end, the F-, I- and M-series – aimed at different applications and processing requirements. The Agilex family, depending on the series, supports PCIe (peripheral component interconnect express) Gen 5, and different types of memory including DDR5 RAM, HBM (high-bandwidth memory) and Optane DC persistent memory. It will offer up to 112G bps transceiver data rates and a greater mix of arithmetic precision for AI, including bfloat16 number format.
+
+In addition to accelerating server-based workloads like AI, genomics, financial and database applications, FPGAs play an important part in networking. Their cost-per-watt efficiency makes them suitable for edge networks, IoT devices as well as deep packet inspection. In addition, they can be used in 5G base stations; as 5G standards evolve, they can be reprogrammed. Once 5G standards are hardened, the "any to any" integration will allow processing to be offloaded to special-purpose ASICs for ultimate cost efficiency.
+
+### Agilex will compete with Xilinx's ACAPs
+
+Agilex will likely vie with Xilinx's upcoming [Versal product family][9], due out in devices in the second half of the year. Xilinx competed for years with Altera in the FPGA market, and with Versal has introduced what it says is [a new product category, the Adaptive Compute Acceleration Platform (ACAP)][10]. Versal ACAPs will be made using TSMC's 7nm manufacturing process technology, though because Intel achieves high transistor density, the number of transistors offered by Agilex and Versal chips will likely be equivalent, noted Moorhead.
+
+Though Agilex and Versal differ in details, the essential pitch is similar: the programmable processors offer a wider variety of programming options than prior generations of FPGA, work with CPUs to accelerate data-intensive workloads, and offer memory coherence. Rather than CXL, though, the Versal family uses the Cache Coherent Interconnect for Accelerators (CCIX) fabric.
+
+Neither Intel nor Xilinx has for the moment announced OEM support for Agilex or Versal products that will be sold to the enterprise, but that should change as the year progresses.
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all
+
+作者:[Marc Ferranti][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/agilex-100792596-large.jpg
+[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
+[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
+[5]: https://www.networkworld.com/article/2903454/intel-could-strengthen-its-server-product-stack-with-altera.html
+[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[7]: https://www.networkworld.com/article/3359254/data-center-giants-announce-new-high-speed-interconnect.html
+[8]: https://images.idgesg.net/images/article/2019/04/agilex-family-100792597-large.jpg
+[9]: https://www.xilinx.com/news/press/2018/xilinx-unveils-versal-the-first-in-a-new-category-of-platforms-delivering-rapid-innovation-with-software-programmability-and-scalable-ai-inference.html
+[10]: https://www.networkworld.com/article/3263436/fpga-maker-xilinx-aims-range-of-software-programmable-chips-at-data-centers.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190402 Manage your daily schedule with Git.md b/sources/tech/20190402 Manage your daily schedule with Git.md
new file mode 100644
index 0000000000..8f5d7d89bb
--- /dev/null
+++ b/sources/tech/20190402 Manage your daily schedule with Git.md
@@ -0,0 +1,240 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Manage your daily schedule with Git)
+[#]: via: (https://opensource.com/article/19/4/calendar-git)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+Manage your daily schedule with Git
+======
+Treat time like source code and maintain your calendar with the help of
+Git.
+![website design image][1]
+
+[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at using Git to keep track of your calendar.
+
+### Keep track of your schedule with Git
+
+What if time itself were but source code that could be managed and version controlled? While proving or disproving such a theory is probably beyond the scope of this article, it happens that you can treat time like source code and manage your daily schedule with the help of Git.
+
+The reigning champion for calendaring is the [CalDAV][3] protocol, which drives popular open source calendaring applications like [NextCloud][4] as well as popular closed source ones. There's nothing wrong with CalDAV (commenters, take heed). But it's not for everyone, and besides, there's nothing less inspiring than a monoculture.
+
+Because I have no interest in becoming invested in largely GUI-dependent CalDAV clients (although if you're looking for a good terminal CalDAV viewer, see [khal][5]), I started investigating text-based alternatives. Text-based calendaring has all the usual benefits of working in [plaintext][6]. It's lightweight, it's highly portable, and as long as it's structured, it's easy to parse and beautify (whatever _beauty_ means to you).
+
+And best of all, it's exactly what Git was designed to manage.
+
+### Org mode, not in a scary way
+
+If you don't impose structure on your plaintext, it quickly falls into a pandemonium of off-the-cuff thoughts and devil-may-care notation. Luckily, a markup syntax exists for calendaring, and it's contained in the venerable productivity Emacs mode, [Org mode][7] (which, admit it, you've been meaning to start using anyway).
+
+The amazing thing about Org mode that many people don't realize is [you don't need to know or even use Emacs][8] to take advantage of conventions established by Org mode. You get a lot of great features if you _do_ use Emacs, but if Emacs intimidates you, then you can implement a Git-based Org-mode calendaring system without so much as installing Emacs.
+
+The only part of Org mode that you need to know is its syntax. Org-mode syntax is low-maintenance and fairly intuitive. The biggest difference in calendaring with Org mode instead of a GUI calendaring app is the workflow: instead of going to a calendar and finding the day you want to schedule a task, you create a list of tasks and then assign each one a day and time.
+
+Lists in Org mode use asterisks (*) as bullets. Here's my gaming task list:
+
+```
+* Gaming
+** Build Stardrifter character
+** Read Stardrifter rules
+** Stardrifter playtest
+
+** Blue Planet @ Mike's
+
+** Run Rappan Athuk
+*** Purchase hard copy
+*** Skim Rappan Athuk
+*** Build Rappan Athuk maps in maptool
+*** Sort Rappan Athuk tokens
+```
+
+If you're familiar with [CommonMark][9] or Markdown, you'll notice that instead of using whitespace to create a subtask, Org mode favors the more explicit use of additional bullets. Whatever your background with lists, this is an intuitive and easy way to build a list, and it obviously is not inherently tied to Emacs (although using Emacs provides you with shortcuts so you can rearrange your list quickly).
+
+To turn your list into scheduled tasks or events in a calendar, go back through and add the keywords **SCHEDULED** and, optionally, **:CATEGORY:**.
+
+```
+* Gaming
+:CATEGORY: Game
+** Build Stardrifter character
+SCHEDULED: <2019-03-22 18:00-19:00>
+** Read Stardrifter rules
+SCHEDULED: <2019-03-22 19:00-21:00>
+** Stardrifter playtest
+SCHEDULED: <2019-03-25 09:00-13:00>
+** Blue Planet @ Mike's
+SCHEDULED: <2019-03-18 18:00-23:00 +1w>
+
+and so on...
+```
+
+The **SCHEDULED** keyword marks the entry as an event that you expect to be notified about and the optional **:CATEGORY:** keyword is an arbitrary tagging system for your own use (and in Emacs, you can color-code entries according to category).
+
+For a repeating event, you can use notation such as **+1w** to create a weekly event or **+2w** for a fortnightly event, and so on.
+
+All the fancy markup available for Org mode is [documented][10], so don't hesitate to find more tricks to help it fit your needs.
+
+### Put it into Git
+
+Without Git, your Org-mode appointments are just a file on your local machine. It's the 21st century, though, so you at least need your calendar on your mobile phone, if not on all of your personal computers. You can use Git to publish your calendar for yourself and others.
+
+First, create a directory for your **.org** files. I store mine in **~/cal**.
+
+```
+$ mkdir ~/cal
+```
+
+Change into your directory and make it a Git repository:
+
+```
+$ cd cal
+$ git init
+```
+
+Move your **.org** files to your local Git repo. In practice, I maintain one **.org** file per category.
+
+```
+$ mv ~/*.org ~/cal
+$ ls
+Game.org Meal.org Seth.org Work.org
+```
+
+Stage and commit your files:
+
+```
+$ git add *.org
+$ git commit -m 'cal init'
+```
+
+### Create a Git remote
+
+To make your calendar available from anywhere, you must have a Git repository on the internet. Your calendar is plaintext, so any Git repository will do. You can put your calendar on [GitLab][11] or any other public Git hosting service (even proprietary ones), and as long as your host allows it, you can even mark the repository as private. If you don't want to post your calendar to a server you don't control, it's easy to host a Git repository yourself, either using a bare repository for a single user or using a frontend service like [Gitolite][12] or [Gitea][13].
+
+In the interest of simplicity, I'll assume a self-hosted bare Git repository. You can create a bare remote repository on any server you have SSH access to with one Git command:
+
+```
+$ ssh -p 22122 seth@example.com
+[remote]$ mkdir cal.git
+[remote]$ cd cal.git
+[remote]$ git init --bare
+[remote]$ exit
+```
+
+This bare repository can serve as your calendar's home on the internet.
+
+Set it as the remote source for your local (on your computer, not your server) Git repository:
+
+```
+$ git remote add origin seth@example.com:/home/seth/cal.git
+```
+
+And then push your calendar data to the server:
+
+```
+$ git push -u origin HEAD
+```
+
+With your calendar in a Git repository, it's available to you on any device running Git. That means you can make updates and changes to your schedule and push your changes upstream so it updates everywhere.
+
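+For example, a typical day-to-day sync cycle might look like this (a sketch only; it assumes the **origin** remote configured above and uses **Game.org** as the file being edited):
+
+```
+$ git add Game.org
+$ git commit -m 'reschedule playtest'
+$ git push
+
+$ git pull    # on another machine, to pick up the latest changes
+```
+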
+I use this method to keep my calendar in sync between my work laptop and my home workstation. Since I use Emacs every day for most of the day, being able to view and edit my calendar in Emacs is a major convenience. The same is true for most people with a mobile device, so the next step is to set up an Org-mode calendaring system on a mobile.
+
+### Mobile Git
+
+Since your calendar data is in plaintext, strictly speaking, you can "use" it on any device that can read a text file. That's part of the beauty of this system; you're never without, at the very least, your raw data. But to integrate your calendar on a mobile device the way you'd expect a modern calendar to work, you need two components: a mobile Git client and a mobile Org-mode viewer.
+
+#### Git client for mobile
+
+[MGit][15] is a good Git client for Android. There are Git clients for iOS, as well.
+
+Once you've installed MGit (or a similar Git client), you must clone your calendar repository so your phone has a copy. To access your server from your mobile device, you must set up an SSH key for authentication. MGit can generate and store a key for you, which you must add to your server's **~/.ssh/authorized_keys** file or to your SSH keys in the settings of your hosted Git account.
+
+You must do this manually. MGit does not have an interface to log into your server or hosted Git account. If you do not do this, your mobile device cannot reach your server to fetch your calendar data.
+
+I did it by copying the key file I generated in MGit to my laptop over [KDE Connect][16] (but you can do the same over Bluetooth, or with an SD card reader, or a USB cable, depending on your preferred method of accessing data on your phone). I copied the key (a file called **calkey**) to my server with this command:
+
+```
+$ cat calkey | ssh seth@example.com "cat >> /home/seth/.ssh/authorized_keys"
+```
+
+You may have a different way of doing it, but if you ever set your server up for passwordless login, this is exactly the same process. If you're using a hosted Git service like GitLab, you must copy and paste the contents of your key file into your user account's SSH Key panel.
+
+![Adding key file data to GitLab][17]
+
+Once that's done, your mobile device can authenticate to your server, but it still needs to know where to go to find your calendar data. Different apps may use different notation, but MGit uses plain old Git-over-SSH. That means if you're using a non-standard SSH port, you must specify the SSH port to use:
+
+```
+$ git clone ssh://seth@example.com:22122/home/seth/cal.git
+```
+
+![Specifying SSH port in MGit][18]
+
+If you use a different app, it may use a different syntax that allows you to provide a port in a special field or drop the **ssh://** prefix. Refer to the app documentation if you experience issues.
+
+Clone the repository to your phone.
+
+![Cloned repositories][19]
+
+Few Git apps are set to automatically update the repository. There are a few apps you can use to automate pulls, or you can set up Git hooks to push updates from your server—but I won't get into that here. For now, after you make an update to your calendar, be sure to pull new changes manually in MGit (or if you change events on your phone, push the changes to your server).
+
+![MGit push/pull settings][20]
+
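+If you do want to automate the pull on a machine you control, one minimal approach (a sketch only, assuming the repository lives in **~/cal** and that **cron** is available there) is to schedule a regular **git pull**:
+
+```
+$ crontab -e
+
+# then add a line like this to fetch calendar updates every 30 minutes
+*/30 * * * * cd $HOME/cal && git pull --quiet
+```
+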
+#### Mobile calendar
+
+There are a few different apps that provide frontends for Org mode on a mobile device. [Orgzly][21] is a great open source Android app that provides an interface for Org mode's greatest features, from the Agenda mode to the TODO lists. Install and launch it.
+
+From the Main menu, choose Settings > Sync Repositories and select the directory containing your calendar files (i.e., the Git repository you cloned from your server).
+
+Give Orgzly a moment to import the data, then use Orgzly's [hamburger][22] menu to select the Agenda view.
+
+![Orgzly's agenda view][23]
+
+In Orgzly's Settings > Reminders menu, you can choose which event types trigger a notification on your phone. You can get notifications for **SCHEDULED** tasks, **DEADLINE** tasks, or anything with an event time assigned to it. If you use your phone as your taskmaster, you'll never miss an event with Org mode and Orgzly.
+
+![Orgzly notification][24]
+
+Orgzly isn't just a parser. You can edit and update events, and even mark events **DONE**.
+
+![Orgzly to-do list][25]
+
+### Designed for and by you
+
+The important thing to understand about using Org mode and Git is that both applications are highly flexible, and it's expected that you'll customize how and what they do so they will adapt to your needs. If something in this article is an affront to how you organize your life or manage your weekly schedule, but you like other parts of what this proposal offers, then throw out the part you don't like. You can use Org mode in Emacs if you want, or you can just use it as calendar markup. You can set your phone to pull Git data right off your computer at the end of the day instead of a server on the internet, or you can configure your computer to sync calendars whenever your phone is plugged in, or you can manage it daily as you load up your phone with all the stuff you need for the workday. It's up to you, and that's the most significant thing about Git, about Org mode, and about open source.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/calendar-git
+
+作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-design-monitor-website.png?itok=yUK7_qR0 (website design image)
+[2]: https://git-scm.com/
+[3]: https://tools.ietf.org/html/rfc4791
+[4]: http://nextcloud.com
+[5]: https://github.com/pimutils/khal
+[6]: https://plaintextproject.online/
+[7]: https://orgmode.org
+[8]: https://opensource.com/article/19/1/productivity-tool-org-mode
+[9]: https://commonmark.org/
+[10]: https://orgmode.org/manual/
+[11]: http://gitlab.com
+[12]: http://gitolite.com/gitolite/index.html
+[13]: https://gitea.io/en-us/
+[15]: https://f-droid.org/en/packages/com.manichord.mgit
+[16]: https://community.kde.org/KDEConnect
+[17]: https://opensource.com/sites/default/files/uploads/gitlab-add-key.jpg (Adding key file data to GitLab)
+[18]: https://opensource.com/sites/default/files/uploads/mgit-0.jpg (Specifying SSH port in MGit)
+[19]: https://opensource.com/sites/default/files/uploads/mgit-1.jpg (Cloned repositories)
+[20]: https://opensource.com/sites/default/files/uploads/mgit-2.jpg (MGit push/pull settings)
+[21]: https://f-droid.org/en/packages/com.orgzly/
+[22]: https://en.wikipedia.org/wiki/Hamburger_button
+[23]: https://opensource.com/sites/default/files/uploads/orgzly-agenda.jpg (Orgzly's agenda view)
+[24]: https://opensource.com/sites/default/files/uploads/orgzly-cal-notify.jpg (Orgzly notification)
+[25]: https://opensource.com/sites/default/files/uploads/orgzly-cal-todo.jpg (Orgzly to-do list)
diff --git a/sources/tech/20190402 What are Ubuntu Repositories- How to enable or disable them.md b/sources/tech/20190402 What are Ubuntu Repositories- How to enable or disable them.md
new file mode 100644
index 0000000000..dc0961a66d
--- /dev/null
+++ b/sources/tech/20190402 What are Ubuntu Repositories- How to enable or disable them.md
@@ -0,0 +1,189 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What are Ubuntu Repositories? How to enable or disable them?)
+[#]: via: (https://itsfoss.com/ubuntu-repositories)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+What are Ubuntu Repositories? How to enable or disable them?
+======
+
+_**This detailed article tells you about the various repositories, like universe and multiverse, in Ubuntu, and how to enable or disable them.**_
+
+So, you are trying to follow a tutorial from the web, installing some software using the apt-get command, and it throws you an error:
+
+```
+E: Unable to locate package xyz
+```
+
+You are surprised because the package should surely be available. You search on the internet and come across a solution that says you have to enable the universe or multiverse repository to install that package.
+
+**You can enable universe and multiverse repositories in Ubuntu using the commands below:**
+
+```
+sudo add-apt-repository universe multiverse
+sudo apt update
+```
+
+You enabled the universe and multiverse repositories, but do you know what these repositories are? How do they play a role in installing packages? Why are there several repositories?
+
+I’ll answer all these questions in detail here.
+
+### The concept of repositories in Ubuntu
+
+Okay, so you already know that to [install software in Ubuntu][1], you can use the [apt command][2]. This is the same [APT package manager][3] that Ubuntu Software Center utilizes underneath. So all the software (except Snap packages) that you see in the Software Center are basically from APT.
+
+Have you ever wondered where the apt program installs these programs from? How does it know which packages are available and which are not?
+
+Apt basically works on repositories. A repository is nothing but a server that contains a set of software. Ubuntu provides a set of repositories so that you won’t have to search the internet for the installation files of the software you need. This centralized way of providing software is one of the main strong points of Linux.
+
+The APT package manager gets the repository information from the /etc/apt/sources.list file and from the files in the /etc/apt/sources.list.d directory. Repository information is usually in the following format:
+
+```
+deb http://us.archive.ubuntu.com/ubuntu/ bionic main
+```
+
+In fact, you can [go to the above server address][4] and see how the repository is structured.
+
+When you [update Ubuntu using the apt update command][5], the apt package manager gets the information about the available packages (and their version info) from the repositories and stores it in a local cache. You can see this in the /var/lib/apt/lists directory.
+
+Keeping this information locally speeds up the search process because you don’t have to go through the network and search the database of available packages just to check if a certain package is available or not.
+
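+You can take a peek at this cache yourself. The file names below are only illustrative; the exact entries depend on which repositories and architectures you have enabled:
+
+```
+ls /var/lib/apt/lists
+us.archive.ubuntu.com_ubuntu_dists_bionic_main_binary-amd64_Packages
+us.archive.ubuntu.com_ubuntu_dists_bionic_universe_binary-amd64_Packages
+...
+```
+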
+Now that you know how repositories play an important role, let's see why Ubuntu provides several of them.
+
+### Ubuntu Repositories: Main, Universe, Multiverse, Restricted and Partner
+
+![][6]
+
+Software in the Ubuntu repositories is divided into five categories: main, universe, multiverse, restricted and partner.
+
+Why does Ubuntu do that? Why not put all the software into one single repository? To answer this question, let’s see what these repositories are:
+
+#### **Main**
+
+When you install Ubuntu, this is the repository enabled by default. The main repository consists of only FOSS (free and open source software) that can be distributed freely without any restrictions.
+
+Software in this repository is fully supported by the Ubuntu developers, and Ubuntu will provide security updates for it until your system reaches end of life.
+
+#### **Universe**
+
+This repository also consists of free and open source software, but Ubuntu doesn’t guarantee regular security updates for software in this category.
+
+Software in this category is packaged and maintained by the community. The Universe repository has a vast amount of open source software, and thus it gives you access to a huge number of packages via the apt package manager.
+
+#### **Multiverse**
+
+Multiverse contains software that is not FOSS. Due to licensing and legal issues, Ubuntu cannot enable this repository by default and cannot provide fixes and updates.
+
+It’s up to you to decide whether you want to use the Multiverse repository and to check whether you have the right to use the software.
+
+#### **Restricted**
+
+Ubuntu tries to provide only free and open source software, but that’s not always possible, especially when it comes to supporting hardware.
+
+The restricted repository consists of proprietary drivers.
+
+#### **Partner**
+
+This repository consists of proprietary software packaged by Ubuntu for its partners. Earlier, Ubuntu used to provide Skype through this repository.
+
+#### Third party repositories and PPA (Not provided by Ubuntu)
+
+The above five repositories are provided by Ubuntu. You can also add third-party repositories (it’s up to you if you want to do it) to access more software or to access newer versions of a software package (as Ubuntu might provide an older version of the same software).
+
+For example, if you add the repository provided by [VirtualBox][7], you can get the latest version of VirtualBox. It will add a new entry in your sources.list.
+
+You can also install additional applications using a PPA (Personal Package Archive). I have written about [what a PPA is and how it works][8] in detail, so please read that article.
+
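+As a quick illustration (the PPA name here is just an example), adding a PPA and then refreshing the local cache looks like this:
+
+```
+sudo add-apt-repository ppa:libreoffice/ppa
+sudo apt update
+```
+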
+Tip
+
+Try NOT to add anything other than Ubuntu’s repositories to your sources.list file. You should keep this file in pristine condition because if you mess it up, you won’t be able to update your system or (at times) even install new packages.
+
+### Add universe, multiverse and other repositories
+
+As I had mentioned earlier, only the Main repository is enabled by default when you install Ubuntu. To access more software, you can add the additional repositories.
+
+Let me show you how to do it in command line first and then I’ll show you the GUI ways as well.
+
+To enable Universe repository, use:
+
+```
+sudo add-apt-repository universe
+```
+
+To enable Restricted repository, use:
+
+```
+sudo add-apt-repository restricted
+```
+
+To enable Multiverse repository, use this command:
+
+```
+sudo add-apt-repository multiverse
+```
+
+You must run the sudo apt update command after adding a repository so that your system creates the local cache with package information.
+
+If you want to **remove a repository** , simply add -r like **sudo add-apt-repository -r universe**.
+
+Graphically, go to Software & Updates and you can enable the repositories here:
+
+![Adding Universe, Restricted and Multiverse repositories][9]
+
+You’ll find the option to enable partner repository in the Other Software tab.
+
+![Adding Partner repository][10]
+
+To disable a repository, simply uncheck the box.
+
+### Bonus Tip: How to know which repository a package belongs to?
+
+Ubuntu has a dedicated website that provides you with information about all the packages available in the Ubuntu archive. Go to Ubuntu Packages website.
+
+[Ubuntu Packages][11]
+
+You can search for a package name in the search field. You can select if you are looking for a particular Ubuntu release or a particular repository. I prefer using the ‘any’ option in both fields.
+
+![][12]
+
+It will show you all the matching packages, Ubuntu releases and the repository information.
+
+![][13]
+
+As you can see above, the package tor is available in the Universe repository for various Ubuntu releases.
+
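+You can also check this from the command line. The apt-cache policy command prints the candidate versions of a package along with the repository each version comes from; the output below is a trimmed sketch, and the version number is only illustrative:
+
+```
+apt-cache policy tor
+tor:
+  Installed: (none)
+  Candidate: 0.3.2.10-1
+  Version table:
+     0.3.2.10-1 500
+        500 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
+```
+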
+### Conclusion
+
+I hope this article helped you in understanding the concept of repositories in Ubuntu.
+
+If you have any questions or suggestions, please feel free to leave a comment below. If you liked the article, please share it on social media sites like Reddit and Hacker News.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-repositories
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/remove-install-software-ubuntu/
+[2]: https://itsfoss.com/apt-command-guide/
+[3]: https://wiki.debian.org/Apt
+[4]: http://us.archive.ubuntu.com/ubuntu/
+[5]: https://itsfoss.com/update-ubuntu/
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu-repositories.png?resize=800%2C450&ssl=1
+[7]: https://itsfoss.com/install-virtualbox-ubuntu/
+[8]: https://itsfoss.com/ppa-guide/
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/enable-repositories-ubuntu.png?resize=800%2C490&ssl=1
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/enable-partner-repository-ubuntu.png?resize=800%2C490&ssl=1
+[11]: https://packages.ubuntu.com
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/search-packages-ubuntu-archive.png?ssl=1
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/search-packages-ubuntu-archive-1.png?resize=800%2C454&ssl=1
diff --git a/sources/tech/20190402 When Wi-Fi is mission-critical, a mixed-channel architecture is the best option.md b/sources/tech/20190402 When Wi-Fi is mission-critical, a mixed-channel architecture is the best option.md
new file mode 100644
index 0000000000..29a73998d7
--- /dev/null
+++ b/sources/tech/20190402 When Wi-Fi is mission-critical, a mixed-channel architecture is the best option.md
@@ -0,0 +1,90 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (When Wi-Fi is mission-critical, a mixed-channel architecture is the best option)
+[#]: via: (https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all)
+[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
+
+When Wi-Fi is mission-critical, a mixed-channel architecture is the best option
+======
+
+### Multi-channel is the norm for Wi-Fi today, but it’s not always the best choice. Single-channel and hybrid APs offer compelling alternatives when reliable Wi-Fi is a must.
+
+![Getty Images][1]
+
+I’ve worked with a number of companies that have implemented digital projects only to see them fail. The ideation was correct, the implementation was sound, and the market opportunity was there. The weak link? The Wi-Fi network.
+
+For example, a large hospital wanted to improve clinician response times to patient alarms by having telemetry information sent to mobile devices. Without the system, the only way a nurse would know about a patient alarm is from an audible alert. And with all the background noise, it’s often tough to discern where noises are coming from. The problem was the Wi-Fi network in the hospital had not been upgraded in years and caused messages to be significantly delayed in their delivery, often taking four to five minutes to deliver. The long delivery times caused a lack of confidence in the system, so many clinicians stopped using it and went back to manual alerting. As a result, the project was considered a failure.
+
+I’ve seen similar examples in manufacturing, K-12 education, entertainment, and other industries. Businesses are competing on the basis of customer experience, and that’s driven from the ever-expanding, ubiquitous wireless edge. Great Wi-Fi doesn’t necessarily mean market leadership, but bad Wi-Fi will have a negative impact on customers and employees. And in today’s competitive climate, that’s a recipe for disaster.
+
+**[ Read also:[Wi-Fi site-survey tips: How to avoid interference, dead spots][2] ]**
+
+## Wi-Fi performance historically inconsistent
+
+The problem with Wi-Fi is that it’s inherently flaky. I’m sure everyone reading this has experienced the typical flaws with failed downloads, dropped connections, inconsistent performance, and lengthy wait times to connect to public hot spots.
+
+Picture sitting in a conference prior to a keynote address and being able to tweet, send email, browse the web, and do other things with no problem. Then the keynote speaker comes on stage and the entire audience starts snapping pics, uploading those pictures, and streaming things – and the Wi-Fi stops working. I find this to be the norm more than the exception, underscoring the need for [no-compromise Wi-Fi][3].
+
+The question for network professionals is how to get to a place where the Wi-Fi is rock solid 100% of the time. Some say that just beefing up the existing network will do that, and it might, but in some cases, the type of Wi-Fi might not be appropriate.
+
+The most commonly deployed type of Wi-Fi is multi-channel, also known as micro-cell, where each client connects to the access point (AP) using a radio channel. A high-quality experience is based on two things: good signal strength and minimal interference. Several things can cause interference, such as APs being too close, layout issues, or interference from other equipment. To minimize interference, businesses invest a significant amount of time and money in [site surveys to plan the optimal channel map][2], but even when that’s done well, Wi-Fi glitches can still happen.
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][4] ]**
+
+## Multi-channel Wi-Fi not always the best choice
+
+For many carpeted offices, multi-channel Wi-Fi is likely to be solid, but there are some environments where external circumstances will impact performance. A good example of this is a multi-tenant building in which there are multiple Wi-Fi networks transmitting on the same channel and interfering with one another. Another example is a hospital campus where many workers move between APs. Each client tries to connect to the best AP, causing it to continually disconnect and reconnect, resulting in dropped sessions. Then there are environments such as schools, airports, and conference facilities where there is a high number of transient devices, and multi-channel can struggle to keep up.
+
+## Single-channel Wi-Fi offers better reliability but with a performance hit
+
+What’s a network manager to do? Is inconsistent Wi-Fi just a fait accompli? Multi-channel is the norm, but it isn’t designed for dynamic physical environments or those where reliable connectivity is a must.
+
+Several years ago an alternative architecture was proposed that would solve these problems. As the name suggests, “single channel” Wi-Fi uses a single radio channel for all APs in the network. Think of this as a single Wi-Fi fabric that operates on one channel. With this architecture, the placement of APs is irrelevant because they all utilize the same channel, so they won’t interfere with one another. This has an obvious simplicity advantage: if coverage is poor, there’s no need to do another expensive site survey. Instead, just drop in APs where they are needed.
+
+One of the disadvantages of single-channel is that aggregate network throughput is lower than multi-channel because only one channel can be used. This might be fine in environments where reliability trumps performance, but many organizations want both.
+
+## Hybrid APs offer the best of both worlds
+
+There has been recent innovation from the manufacturers of single-channel systems that mixes channel architectures, creating a “best of both worlds” deployment that offers the throughput of multi-channel with the reliability of single-channel. For example, Allied Telesis offers Hybrid APs that can operate in multi-channel and single-channel mode simultaneously. That means some Wi-Fi clients can be assigned to multi-channel for maximum throughput, while others can use single-channel for a seamless roaming experience.
+
+A practical use-case of such a mix might be a logistics facility where the office staff uses multi-channel, but the fork-lift operators use single-channel for continuous connectivity as they move throughout the warehouse.
+
+Wi-Fi was once a network of convenience, but now it is perhaps the most mission-critical of all networks. A traditional multi-channel system might work, but due diligence should be done to see how it functions under a heavy load. IT leaders need to understand how important Wi-Fi is to digital transformation initiatives and do the proper testing to ensure it’s not the weak link in the infrastructure chain and choose the best technology for today’s environment.
+
+**Reviews: 4 free, open-source network monitoring tools:**
+
+ * [Icinga: Enterprise-grade, open-source network-monitoring that scales][5]
+ * [Nagios Core: Network-monitoring software with lots of plugins, steep learning curve][6]
+ * [Observium open-source network monitoring tool: Won’t run on Windows but has a great user interface][7]
+ * [Zabbix delivers effective no-frills network monitoring][8]
+
+
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all
+
+作者:[Zeus Kerravala][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Zeus-Kerravala/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/09/tablet_graph_wifi_analytics-100771638-large.jpg
+[2]: https://www.networkworld.com/article/3315269/wi-fi-site-survey-tips-how-to-avoid-interference-dead-spots.html
+[3]: https://www.alliedtelesis.com/blog/no-compromise-wi-fi
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[5]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb
+[6]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html
+[7]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb
+[8]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190402 Zero-trust- microsegmentation networking.md b/sources/tech/20190402 Zero-trust- microsegmentation networking.md
new file mode 100644
index 0000000000..864bd8eea4
--- /dev/null
+++ b/sources/tech/20190402 Zero-trust- microsegmentation networking.md
@@ -0,0 +1,137 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Zero-trust: microsegmentation networking)
+[#]: via: (https://www.networkworld.com/article/3384748/zero-trust-microsegmentation-networking.html#tk.rss_all)
+[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
+
+Zero-trust: microsegmentation networking
+======
+
+### Microsegmentation gives administrators the control to set granular policies in order to protect the application environment.
+
+![Aaron Burson \(CC0\)][1]
+
+The transformation to the digital age has introduced significant changes to cloud and data center environments. This has compelled organizations to innovate more quickly than ever before. This, however, brings with it both advantages and disadvantages.
+
+The network and security need to keep up with this rapid pace of change. If you cannot match the speed of the [digital age,][2] then ultimately bad actors will become a hazard. Therefore, organizations must move to a [zero-trust environment][3]: default deny, with least-privilege access. In today’s evolving digital world, this is the primary key to success.
+
+Ideally, a comprehensive solution must provide protection across all platforms, including legacy servers, VMs, services in public clouds, on-premise, off-premise, hosted, managed or self-managed. We are going to stay hybrid for a long time; therefore, we need to equip our architecture with [zero-trust][4].
+
+**[ Don’t miss[customer reviews of top remote access tools][5] and see [the most powerful IoT companies][6] . | Get daily insights by [signing up for Network World newsletters][7]. ]**
+
+We need the ability to support all of these hybrid environments and to analyze at the process, data-flow, and infrastructure level. As a matter of fact, there is never just one element to analyze within a network in order to create an effective security posture.
+
+Adequately securing such an environment requires a solution with key components such as appropriate visibility, microsegmentation, and breach detection. Let's learn more about one of these primary elements: zero-trust microsegmentation networking.
+
+There are a variety of microsegmentation vendors, all with competing platforms: SDN-based, container-centric, and network-based appliances (physical or virtual), to name just a few.
+
+## What is microsegmentation?
+
+Microsegmentation is the ability to put a wrapper around the access control for each component of an application. The days are gone when we could just impose a block on source/destination addresses and port numbers, or higher up in the stack on protocols such as HTTP or HTTPS.
+
+As communication patterns become more complex, isolating the communication flows between entities by following microsegmentation principles has become a necessity.
+
+## Why is microsegmentation important?
+
+Microsegmentation gives administrators the control to set granular policies in order to protect the application environment. It defines the rules and policies as to how an application can communicate within its tier. The policies are granular (a lot more granular than what we had before), restricting communication to only those hosts that are allowed to communicate.
+
+Eventually, this reduces the available attack surface and completely locks down the ability of bad actors to move laterally within the application infrastructure. Why? Because it governs the application’s activity at a granular level, thereby improving the entire security posture. Traditional zone-based networking no longer cuts it in today’s [digital world][8].
+
+## General networking
+
+Let's start with the basics. We all know that with security, you are only as strong as your weakest link. As a result, enterprises have begun to further segment networks into microsegments. Some call them nanosegments.
+
+But first, let’s recap on what we actually started with in the initial stage – nothing! We had IP addresses that were used for connectivity, but unfortunately they have no built-in authentication mechanism. Why? Because it wasn't a requirement back then.
+
+Network connectivity based on network routing protocols was primarily used for sharing resources. A printer, 30 years ago, could cost the same as a house, so connectivity and the sharing of resources were important. The authentication of the communication endpoints was not considered significant.
+
+## Broadcast domains
+
+As networks grew in size, virtual LANs (VLANs) were introduced to divide the broadcast domains and improve network performance. A broadcast domain is a logical division of a computer network. All nodes can reach each other by sending a broadcast at the data link layer. When the broadcast domain swells, the network performance takes a hit.
+
+Over time the role of the VLAN grew to be used as a security tool, but it was never meant to be in that space. VLANs were used to improve performance, not to isolate resources. The problem with VLANs is that there is no intra-VLAN filtering. They have a very broad level of access and trust. If bad actors gain access to one segment in the zone, they should not be allowed to try and compromise another device within that zone, but with VLANs, this is a strong possibility.
+
+Hence, VLANs offer bad actors a pretty large attack surface to play with and move across laterally without inspection. Lateral movements are really hard to detect with traditional architectures.
+
+Therefore, enterprises were forced to switch to microsegmentation, which further segments networks within the zone. However, the whole area of virtualization complicates the segmentation process. A virtualized server may have only a single physical network port, but it supports numerous logical networks where services and applications reside across multiple security zones.
+
+Thus, microsegmentation needs to work both at the physical network layer and within the virtualized networking layer. As you are aware, there has been a change in the traffic pattern. The good thing about microsegmentation is that it controls both the “north & south” and the “east & west” movement of traffic, further isolating the size of broadcast domains.
+
+## Microsegmentation – a multi-stage process
+
+Implementing microsegmentation is a multi-stage process. There are certain prerequisites that must be met before the implementation. Firstly, you need to fully understand the communication patterns, map the flows, and map all the application dependencies.
+
+Only once this is done can you enable microsegmentation in a platform-agnostic manner across all the environments. Segmenting your network appropriately creates a dark network until the administrator turns on the lights. Authentication is performed first, and then access is granted to the communicating entities operating with zero-trust and least-privilege access.
+
+Once you are connecting the entities, they need to run through a number of technologies in order to be fully connected. There is no one-off check with microsegmentation; it’s rather a continuous process to make sure that both entities are doing what they are supposed to do.
+
+This ensures that everyone is doing what they are entitled to do. You want to reduce the unnecessary cross-talk to an absolute minimum and only allow communication that is a complete necessity.
+
+## How do you implement microsegmentation?
+
+Firstly, you need strong visibility not just at the traffic flow level but also at the process and data contextual level. Without granular application visibility, it's impossible to map and fully understand what normal traffic flows and irregular application communication patterns look like.
+
+Visibility cannot be mapped out manually, as there could be hundreds of workloads. Therefore, an automatic approach must be taken. Manual mapping is more prone to errors and is inefficient. The visibility also needs to be in real-time. A static snapshot of the application architecture, even if it's down to a process level, will not tell you anything about the behaviors that are sanctioned or unsanctioned.
+
+You also need to make sure that you are not under-segmenting, similar to what we had in the old days. Primarily, microsegmentation must manage communication workflows all the way up to Layer 7 of the Open Systems Interconnection (OSI) model. Layer 4 microsegmentation focuses only on the transport layer. If you are only segmenting the network at Layer 4, then you are widening your attack surface, thereby opening the network to compromise.
+
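+To make the distinction concrete, here is a minimal sketch of a classic Layer 4 rule (assuming iptables on a Linux gateway; the addresses and port are hypothetical). It can pin down hosts and ports, but it says nothing about which process or application is actually talking:
+
+```
+# allow the app tier to reach the database tier on the PostgreSQL port
+iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.10 -p tcp --dport 5432 -j ACCEPT
+# drop everything else between the two tiers
+iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.10 -j DROP
+```
+
+A Layer 7 microsegmentation platform would additionally verify which process opened the connection and what it is doing, which is exactly the gap that port-based rules leave open.
+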
+Segmenting right up to the application layer means you are locking down the lateral movements, open ports, and protocols. It enables you to restrict access to the source and destination process rather than source and destination port numbers.
+
+## Security issues with hybrid cloud
+
+Since the [network perimeter][9] has been removed, it has become difficult to bolt on traditional security tools. Traditionally, we could position a static perimeter around the network infrastructure. However, this is not an available option today, as we have a mixture of containerized applications and, for example, legacy database servers. We have legacy systems communicating with the containerized world.
+
+Hybrid enables organizations to use different types of cloud architectures, including on-premise systems and new technologies such as containers. We are going to have hybrid clouds in the coming times, and that will change the way we think about networking. Hybrid forces organizations to rethink their network architectures.
+
+When you attach the microsegmentation policies to the workload itself, the policies go with the workload. It then does not matter whether the entity moves on-premise or to the cloud. If the workload auto-scales up and down or horizontally, the policy needs to go with the workload. Even if you go deeper than the workload, into the process level, you can set even more granular controls for microsegmentation.
+
+## Identity
+
+However, this is the point where identity becomes a challenge. If things are scaling and becoming dynamic, you can’t tie policies to IP addresses. Rather than using IP addresses as the base for microsegmentation, policies are based on logical (not physical) attributes.
+
+With microsegmentation, the workload identity is based on logical attributes, such as the multi-factor authentication (MFA), transport layer security (TLS) certificate, the application service, or the use of a logical label associated with the workload.
+
+These are what are known as logical attributes. Ultimately the policies map to IP addresses, but they are set using the logical attributes, not the physical ones. As we progress in this technological era, the IP address is less relevant. Named data networking is one perfect example.
+
+Other identity methods for microsegmentation are TLS certificates. If the traffic is encrypted with a different TLS certificate or from an invalid source, it automatically gets dropped, even if it comes from the right location. It will get blocked as it does not have the right identity.
+
+You can even extend that further and look inside the actual payload. If an entity is allowed to do a hypertext transfer protocol (HTTP) POST to a record, and it tries to perform any other operation, it will get blocked.
+
+## Policy enforcement
+
+Practically, all of these policies can be implemented and enforced in different places throughout the network. However, if you enforce them in only one place, that point in the network can become compromised and become an entry door for the bad actor. If you enforce at, for example, 10 different network points, then even if 2 of them are subverted, the other 8 will still protect you.
+
+Zero-trust microsegmentation ensures that you can enforce in different points throughout the network and also with different mechanics.
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][10]**
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384748/zero-trust-microsegmentation-networking.html#tk.rss_all
+
+作者:[Matt Conran][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Matt-Conran/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/07/hive-structured_windows_architecture_connections_connectivity_network_lincoln_park_pavilion_chicago_by_aaron_burson_cc0_via_unsplash_1200x800-100765880-large.jpg
+[2]: https://youtu.be/AnMQH_noNDo
+[3]: https://network-insight.net/2018/10/zero-trust-networking-ztn-want-ghosted/
+[4]: https://network-insight.net/2018/09/embrace-zero-trust-networking/
+[5]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
+[6]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
+[7]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
+[8]: https://network-insight.net/2017/10/internet-things-iot-dissolving-cloud/
+[9]: https://network-insight.net/2018/09/software-defined-perimeter-zero-trust/
+[10]: /contributor-network/signup.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190403 Intel unveils an epic response to AMD-s server push.md b/sources/tech/20190403 Intel unveils an epic response to AMD-s server push.md
new file mode 100644
index 0000000000..826cd9d413
--- /dev/null
+++ b/sources/tech/20190403 Intel unveils an epic response to AMD-s server push.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Intel unveils an epic response to AMD’s server push)
+[#]: via: (https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Intel unveils an epic response to AMD’s server push
+======
+
+### Intel introduced more than 50 new Xeon Scalable Processors for servers that cover a variety of workloads.
+
+![Intel][1]
+
+Intel on Tuesday introduced its second-generation Xeon Scalable Processors for servers, developed under the codename Cascade Lake, and it’s clear AMD has lit a fire under a once complacent company.
+
+These new Xeon SP processors max out at 28 cores and 56 threads, a bit shy of AMD’s Epyc server processors with 32 cores and 64 threads, but independent benchmarks are still to come, which may show Intel having a lead in single-core performance.
+
+And for absolute overkill, there is the Xeon SP Platinum 9200 Series, which sports 56 cores and 112 threads. It will also require up to 400W of power, more than twice what the high-end Xeons usually consume.
+
+**[ Now read:[What is quantum computing (and why enterprises should care)][2] ]**
+
+The new processors were unveiled at a big event at Intel’s headquarters in Santa Clara, California, and live-streamed on the web. [Newly minted CEO][3] Bob Swan kicked off the event, saying the new processors were the “first truly data-centric portfolio for our customers.”
+
+“For the last several years, we have embarked on a journey to transform from a PC-centric company to a data-centric computing company and build the silicon processors with our partners to help our customers prosper and grow in an increasingly data-centric world,” he added.
+
+He also said the move to a data-centric world isn’t just CPUs, but a suite of accelerant technologies, including the [Agilex FPGA processors][4], Optane memory, and more.
+
+This launch is the largest Xeon launch in the company’s history, with more than 50 processor designs across the Xeon 8200 and 9200 lines. While something like that can lead to confusion, many of these are specific to certain workloads instead of general-purpose processors.
+
+**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][5] ]**
+
+Cascade Lake chips are the replacement for the previous Skylake platform, and the mainstream Cascade Lake chips have the same architecture as the Purley motherboard used by Skylake. Like the current Xeon Scalable processors, they have up to 28 cores with up to 38.5 MB of L3 cache, but speeds and feeds have been bumped up.
+
+The Cascade Lake generation supports the new UPI (Ultra Path Interconnect) high-speed interconnect, up to six memory channels, AVX-512 support, and up to 48 PCIe lanes. Memory capacity has been doubled, from 768GB to 1.5TB of memory per socket. They work in the same socket as Purley motherboards and are built on a 14nm manufacturing process.
+
+Some of the new Xeons, however, can access up to 4.5TB of memory per processor: 1.5TB of memory and 3TB of Optane memory, the new persistent memory that sits between DRAM and NAND flash memory and acts as a massive cache for both.
+
+## Built-in fixes for Meltdown and Spectre vulnerabilities
+
+Most important, though, is that these new Xeons have built-in fixes for the Meltdown and Spectre vulnerabilities. There are existing fixes for the exploits, but they have the effect of reducing performance, which varies based on workload. Intel showed a slide at the event that shows the company is using a combination of firmware and software mitigation.
+
+New features also include Intel Deep Learning Boost (DL Boost), a technology developed to accelerate vector computing that Intel said makes this the first CPU with built-in inference acceleration for AI workloads. It works with the AVX-512 extension, which should make it ideal for machine learning scenarios.
+
+Most of the new Xeons are available now, except for the 9200 Platinum, which is coming in the next few months. Many Intel partners – Dell, Cray, Cisco, Supermicro – all have new products, with Supermicro launching more than 100 new products built around Cascade Lake.
+
+## Intel also rolls out Xeon D-1600 series processors
+
+In addition to its hot rod Xeons, Intel also rolled out the Xeon D-1600 series processors, a low power variant based on a completely different architecture. Xeon D-1600 series processors are designed for space and/or power constrained environments, such as edge network devices and base stations.
+
+Along with the new Xeons and FPGA chips, Intel also announced the Intel Ethernet 800 series adapter, which supports 25, 50 and 100 Gigabit transfer speeds.
+
+Thank you, AMD. This is what competition looks like.
+
+Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/intel-xeon-family-1-100792811-large.jpg
+[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
+[3]: https://www.networkworld.com/article/3336921/intel-promotes-swan-to-ceo-bumps-off-itanium-and-eyes-mellanox.html
+[4]: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html
+[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[6]: https://www.facebook.com/NetworkWorld/
+[7]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190403 Top Ten Reasons to Think Outside the Router -1- It-s Time for a Router Refresh.md b/sources/tech/20190403 Top Ten Reasons to Think Outside the Router -1- It-s Time for a Router Refresh.md
new file mode 100644
index 0000000000..72d566a7d0
--- /dev/null
+++ b/sources/tech/20190403 Top Ten Reasons to Think Outside the Router -1- It-s Time for a Router Refresh.md
@@ -0,0 +1,101 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top Ten Reasons to Think Outside the Router #1: It’s Time for a Router Refresh)
+[#]: via: (https://www.networkworld.com/article/3386116/top-ten-reasons-to-think-outside-the-router-1-it-s-time-for-a-router-refresh.html#tk.rss_all)
+[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
+
+Top Ten Reasons to Think Outside the Router #1: It’s Time for a Router Refresh
+======
+
+![istock][1]
+
+We’re now at the end of our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as [Silver Peak][2] counts down the _Top Ten Reasons to Think Outside the Router._ Click for the [#2][3], [#3][4], [#4][5], [#5][6], [#6][7], [#7][8], [#8][9], [#9][10] and [#10][11] reasons to retire traditional branch routers.
+
+_**The #1 reason it’s time to retire conventional routers at the branch: your branch routers are coming due for a refresh – the perfect time to evaluate new options.**_
+
+Your WAN architecture is due for a branch router refresh! You’re under immense pressure to advance your organization’s digital transformation initiatives and deliver a high quality of experience to your users and customers. Your applications – the SaaS apps, at least – are all cloud-based. You know you need to move more quickly to keep pace with changing business requirements and realize the transformational promise of the cloud. And you’re dealing with shifting traffic patterns and an insatiable appetite for more bandwidth at branch sites to support your users and applications. Finally, you know your IT budget for networking isn’t going to increase.
+
+_So, what’s next?_ You really only have three options when it comes to refreshing your WAN. You can continue trying to stretch your conventional router-centric model. You can choose a basic [SD-WAN][12] model that may or may not be good enough. Or you can take a new approach and deploy a business-driven SD-WAN edge platform.
+
+### **The pitfalls of a router-centric model**
+
+![][13]
+
+The router-centric approach worked well when enterprise applications were hosted in the data center, before the advent of the cloud. All traffic was routed directly from branch offices to the data center. With the emergence of the cloud, businesses were forced to conform to the constraints of the network when deploying new applications or making network changes. This is a bottom-up, device-centric approach in which the network becomes a bottleneck to the business.
+
+A router-centric approach requires manual device-by-device configuration that results in endless hours of manual programming, making it extremely difficult for network administrators to scale without experiencing major challenges in configuration, outages, and troubleshooting. Any change that arises when deploying a new application or modifying a QoS or security policy once again requires manually programming every router at every branch across the network. Re-programming is time-consuming and requires a complex, cumbersome CLI, further adding to the inefficiencies of the model. In short, the router-centric WAN has hit the wall.
+
+### **Basic SD-WAN, a step in the right direction**
+
+![][14]
+
+In this model, businesses realize the benefit of foundational features, but basic SD-WAN falls short of the goal of a fully automated, business-driven network. It is unable to provide what the business really needs, including the ability to deliver the best Quality of Experience for users.
+
+Basic SD-WAN features include the ability to use multiple forms of transport, path selection, centralized management, zero-touch provisioning, and encrypted VPN overlays. However, a basic SD-WAN falls short in many areas:
+
+ * Limited end-to-end orchestration of WAN edge network functions
+ * Rudimentary path selection with traffic steering limited to pre-defined rules
+ * Long fail-over times in response to WAN transport outages
+ * Inability to use links when they experience brownouts due to link congestion or packet loss
+ * Fixed application definitions and manually scripted ACLs to control traffic steering across the internet
+
+### **The solution: shift to a business-first networking model**
+
+![][15]
+
+In this model, the network enables the business. The WAN is transformed into a business accelerant that is fully automated and continuous, giving every application the resources it truly needs while delivering 10x the bandwidth for the same budget – ultimately achieving the highest quality of experience to users and IT alike. With a business-first networking model, the network functions (SD-WAN, firewall, segmentation, routing, WAN optimization and application visibility and control) are unified in a single platform and are centrally orchestrated and managed. Top-down business intent is the driver, enabling businesses to unlock the full transformational promise of the cloud.
+
+The business-driven [Silver Peak® EdgeConnect™ SD-WAN][16] edge platform was built for the cloud, enabling enterprises to liberate their applications from the constraints of existing WAN approaches. EdgeConnect offers the following advanced capabilities:
+
+1\. Automates traffic steering and security policy enforcement based on business intent instead of TCP/IP addresses, delivering the highest Quality of Experience for users
+
+2\. Actively embraces broadband to increase application performance and availability while lowering costs
+
+3\. Securely and directly connects branch users to SaaS and IaaS cloud services
+
+4\. Increases operational efficiency while increasing business agility and time-to-market via centralized orchestration
+
+Silver Peak has more than 1,000 enterprise customer deployments across a range of vertical industries. Bentley Systems, [Nuffield Health][17] and [Solis Mammography][18] have all realized tangible business outcomes from their EdgeConnect deployments.
+
+![][19]
+
+Learn why the time is now to [think outside the router][20]!
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386116/top-ten-reasons-to-think-outside-the-router-1-it-s-time-for-a-router-refresh.html#tk.rss_all
+
+作者:[Rami Rammaha][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Rami-Rammaha/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/istock-478729482-100792542-large.jpg
+[2]: https://www.silver-peak.com/why-silver-peak
+[3]: http://blog.silver-peak.com/think-outside-the-router-reason-2-simplify-and-consolidate-the-wan-edge
+[4]: http://blog.silver-peak.com/think-outside-the-router-reason-3-mpls-contract-renewal
+[5]: http://blog.silver-peak.com/top-ten-reasons-to-think-outside-the-router-4-broadband-is-used-only-for-failover
+[6]: http://blog.silver-peak.com/think-outside-the-router-reason-5-manual-cli-based-configuration-and-management
+[7]: http://blog.silver-peak.com/https-blog-silver-peak-com-think-outside-the-router-reason-6
+[8]: http://blog.silver-peak.com/think-outside-the-router-reason-7-exorbitant-router-support-and-maintenance-costs
+[9]: http://blog.silver-peak.com/think-outside-the-router-reason-8-garbled-voip-pixelated-video
+[10]: http://blog.silver-peak.com/think-outside-router-reason-9-sub-par-saas-performance
+[11]: http://blog.silver-peak.com/think-outside-router-reason-10-its-getting-cloudy
+[12]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[13]: https://images.idgesg.net/images/article/2019/04/1_router-centric-vs-business-first-100792538-medium.jpg
+[14]: https://images.idgesg.net/images/article/2019/04/2_basic-sd-wan-vs-business-first-100792539-medium.jpg
+[15]: https://images.idgesg.net/images/article/2019/04/3_bus-first-networking-model-100792540-large.jpg
+[16]: https://www.silver-peak.com/products/unity-edge-connect
+[17]: https://www.silver-peak.com/resource-center/nuffield-health-deploys-uk-wide-sd-wan-silver-peak
+[18]: https://www.silver-peak.com/resource-center/national-leader-mammography-services-accelerates-access-life-critical-scans
+[19]: https://images.idgesg.net/images/article/2019/04/4_real-world-business-outcomes-100792541-large.jpg
+[20]: https://www.silver-peak.com/think-outside-router
diff --git a/sources/tech/20190403 Use Git as the backend for chat.md b/sources/tech/20190403 Use Git as the backend for chat.md
new file mode 100644
index 0000000000..e564bbc6e7
--- /dev/null
+++ b/sources/tech/20190403 Use Git as the backend for chat.md
@@ -0,0 +1,141 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Use Git as the backend for chat)
+[#]: via: (https://opensource.com/article/19/4/git-based-chat)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+Use Git as the backend for chat
+======
+GIC is a prototype chat application that showcases a novel way to use Git.
+![Team communication, chat][1]
+
+[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at GIC, a Git-based chat application.
+
+### Meet GIC
+
+While the authors of Git probably expected frontends to be created for Git, they undoubtedly never expected Git would become the backend for, say, a chat client. Yet, that's exactly what developer Ephi Gabay did with his experimental proof-of-concept [GIC][3]: a chat client written in [Node.js][4] using Git as its backend database.
+
+GIC is by no means intended for production use. It's purely a programming exercise, but it's one that demonstrates the flexibility of open source technology. What's astonishing is that the client consists of just 300 lines of code, excluding the Node libraries and Git itself. And that's one of the best things about the chat client and about open source: the ability to build upon existing work. Seeing is believing, so you should give GIC a look for yourself.
+
+### Get set up
+
+GIC uses Git as its engine, so you need an empty Git repository to serve as its chatroom and logger. The repository can be hosted anywhere, as long as you and anyone who needs access to the chat service have access to it. For instance, you can set up a Git repository on a free Git hosting service like GitLab and grant chat users contributor access to the Git repository. (They must be able to make commits to the repository, because each chat message is a literal commit.)
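+
+GIC creates these commits for you, but as a rough sketch of the underlying idea, you could post a "message" by hand from any clone with an empty commit (the message text here is made up):
+
+```
+$ git commit --allow-empty -m 'Hey, is anyone else in this room?'
+$ git push origin HEAD
+```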
+
+If you're hosting it yourself, create a centrally located bare repository. Each user in the chat must have an account on the server where the bare repository is located. You can create accounts specific to Git with Git hosting software like [Gitolite][5] or [Gitea][6], or you can give them individual user accounts on your server, possibly using **git-shell** to restrict their access to Git.
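+
+As a minimal self-hosted sketch, the following creates a bare repository matching the path used in the configuration example later in this article and restricts a (hypothetical) user named alice to Git operations. Note that **git-shell** must be listed in **/etc/shells** on some distributions:
+
+```
+$ sudo git init --bare /home/gitchat/chatdemo.git
+$ sudo usermod --shell /usr/bin/git-shell alice
+```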
+
+Performance is best on a self-hosted instance. Whether you host your own or you use a hosting service, the Git repository you create must have an active branch, or GIC won't be able to make commits as users chat because there is no Git HEAD. The easiest way to ensure that a branch is initialized and active is to commit a README or license file upon creation. If you don't do that, you can create and commit one after the fact:
+
+```
+$ echo "chat logs" > README
+$ git add README
+$ git commit -m 'just creating a HEAD ref'
+$ git push -u origin HEAD
+```
+
+### Install GIC
+
+Since GIC is based on Git and written in Node.js, you must first install Git, Node.js, and the Node package manager, npm (which should be bundled with Node). The command to install these differs depending on your Linux or BSD distribution, but here's an example command on Fedora:
+
+```
+$ sudo dnf install git nodejs
+```
+
+If you're not running Linux or BSD, follow the installation instructions on [git-scm.com][7] and [nodejs.org][8].
+
+There's no install process, as such, for GIC. Each user (Alice and Bob, in this example) must clone the repository to their hard drive:
+
+```
+$ git clone https://github.com/ephigabay/GIC GIC
+```
+
+Change directory into the GIC directory and install the Node.js dependencies with **npm**:
+
+```
+$ cd GIC
+$ npm install
+```
+
+Wait for the Node modules to download and install.
+
+### Configure GIC
+
+The only configuration GIC requires is the location of your Git chat repository. Edit the **config.js** file:
+
+```
+module.exports = {
+  gitRepo: 'seth@example.com:/home/gitchat/chatdemo.git',
+  messageCheckInterval: 500,
+  branchesCheckInterval: 5000
+};
+```
+
+Test your connection to the Git repository before trying GIC, just to make sure your configuration is sane:
+
+```
+$ git clone --quiet seth@example.com:/home/gitchat/chatdemo.git > /dev/null
+```
+
+Assuming you receive no errors, you're ready to start chatting.
+
+### Chat with Git
+
+From within the GIC directory, start the chat client:
+
+```
+$ npm start
+```
+
+When the client first launches, it must clone the chat repository. Since it's nearly an empty repository, it won't take long. Type a message and press Enter to send it.
+
+![GIC][10]
+
+A Git-based chat client. What will they think of next?
+
+As the greeting message says, a branch in Git serves as a chatroom or channel in GIC. There's no way to create a new branch from within the GIC UI, but if you create one in another terminal session or in a web UI, it shows up immediately in GIC. It wouldn't take much to patch some IRC-style commands into GIC.
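+
+For example, you could create a new "chatroom" from another terminal inside your clone (the branch name is arbitrary), and it appears in GIC as soon as it reaches the remote:
+
+```
+$ git branch random
+$ git push origin random
+```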
+
+After chatting for a while, take a look at your Git repository. Since the chat happens in Git, the repository itself is also a chat log:
+
+```
+$ git log --pretty=format:"%p %cn %s"
+4387984 Seth Kenlon Hey Chani, did you submit a talk for All Things Open this year?
+36369bb Chani No I didn't get a chance. Did you?
+[...]
+```
+
+### Exit GIC
+
+Not since Vim has there been an application as difficult to stop as GIC. You see, there is no way to stop GIC. It will continue to run until it is killed. When you're ready to stop GIC, open another terminal tab or window and issue this command:
+
+```
+$ kill `pgrep npm`
+```
+
+GIC is a novelty. It's a great example of how an open source ecosystem encourages and enables creativity and exploration and challenges us to look at applications from different angles. Try GIC out. Maybe it will give you ideas. At the very least, it's a great excuse to spend an afternoon with Git.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/git-based-chat
+
+作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
+[2]: https://git-scm.com/
+[3]: https://github.com/ephigabay/GIC
+[4]: https://nodejs.org/en/
+[5]: http://gitolite.com
+[6]: http://gitea.io
+[7]: http://git-scm.com
+[8]: http://nodejs.org
+[9]: mailto:seth@example.com
+[10]: https://opensource.com/sites/default/files/uploads/gic.jpg (GIC)
diff --git a/sources/tech/20190404 9 features developers should know about Selenium IDE.md b/sources/tech/20190404 9 features developers should know about Selenium IDE.md
new file mode 100644
index 0000000000..b099da68e2
--- /dev/null
+++ b/sources/tech/20190404 9 features developers should know about Selenium IDE.md
@@ -0,0 +1,158 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (9 features developers should know about Selenium IDE)
+[#]: via: (https://opensource.com/article/19/4/features-selenium-ide)
+[#]: author: (Al Sargent https://opensource.com/users/alsargent)
+
+9 features developers should know about Selenium IDE
+======
+The new Selenium IDE brings the benefits of functional test automation
+to many IT professionals—and to frontend developers specifically.
+![magnifying glass on computer screen][1]
+
+There has long been a stigma associated with using record-and-playback tools for testing rather than scripted QA automation tools like [Selenium Webdriver][2], [Cypress][3], and [WebdriverIO][4].
+
+Record-and-playback tools are perceived to suffer from many issues, including a lack of cross-browser support, no way to run scripts in parallel or from CI build scripts, poor support for responsive web apps, and no way to quickly diagnose frontend bugs.
+
+Needless to say, it's been something of a rough road for these tools, and after Selenium IDE [went end-of-life][5] in 2017, many thought the road for record and playback would end altogether.
+
+Well, it turns out this perception was wrong. Not long after the Selenium IDE project was discontinued, my colleagues at [Applitools approached the Selenium open source community][6] to see how they could help.
+
+Since then, much of Selenium IDE's code has been revamped. The code is now freely available on GitHub under an Apache 2.0 license, managed by the Selenium community, and supported by [two full-time engineers][7], one of whom literally wrote the book on [Selenium testing][8].
+
+![Selenium IDE's GitHub repository][9]
+
+The new Selenium IDE brings the benefits of functional test automation to many IT professionals—and to frontend developers specifically. Here are nine things developers should know about the new Selenium IDE.
+
+### 1\. Selenium IDE is now cross-browser
+
+When the record-and-playback tool first came out in 2006, Firefox was the shiny new browser it hitched its wagon to, and it remained that way for a decade. No more! Selenium IDE is now available as a [Google Chrome Extension][10] and [Firefox Add-on][11].
+
+Even better, Selenium IDE can run its tests on Selenium WebDriver servers by using Selenium IDE's new command-line test runner, [SIDE Runner][12]. SIDE Runner blends elements of Selenium IDE and Selenium Webdriver. It takes a Selenium IDE script, saved as a [**.side** file][13], and runs it using browser drivers such as [ChromeDriver][14], [EdgeDriver][15], Firefox's [Geckodriver][16], [IEDriver][17], and [SafariDriver][18].
+
+SIDE Runner and the other drivers above are available as [straightforward npm installs][12]. Here's what it looks like in action.
+
+![SIDE Runner][19]
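+
+If you want to try SIDE Runner yourself, the basic flow is an npm install followed by pointing the runner at a saved project file. This is a minimal sketch; the **.side** file name is a placeholder, and you only need the driver packages for the browsers you test against:
+
+```
+$ npm install -g selenium-side-runner
+$ npm install -g chromedriver
+$ selenium-side-runner my-project.side
+```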
+
+### 2\. No more brittle functional tests
+
+For years, brittleness has been an issue for functional tests—whether you record them or code them by hand. Now that developers are releasing new features more frequently, their user interface (UI) code is constantly changing as well. When a UI changes, object locators often change, too.
+
+Selenium IDE fixes that by capturing multiple object locators when you record your script. During playback, if Selenium IDE can't find one locator, it tries each of the other locators until it finds one that works. Your test will fail only if none of the locators work. This doesn't guarantee scripts will always play back, but it does insulate scripts against numerous changes. As you can see below, Selenium IDE captures linkText, an XPath expression, and CSS-based locators.
+
+![Selenium IDE captures linkText, an xPath expression, and CSS-based locators][20]
+
+### 3\. Conditional logic to handle UI features
+
+When testing web apps, scripts have to handle intermittent UI elements that can randomly appear in your app. These come in the form of cookie notices, popups for special offers, quote requests, newsletter subscriptions, paywall notifications, adblocker requests, and more.
+
+Conditional logic is a great way to handle these intermittent UI features. Developers can easily insert conditional logic—also called control flow—into Selenium IDE scripts. [Here are details][21] and how it looks.
+
+![Selenium IDE's Conditional logic][22]
+
+### 4\. Support for embedded code
+
+As broad as the new [Selenium IDE API][23] is, it doesn't do everything. For this reason, Selenium IDE has **[execute script][24]** and **[execute async script][25]** commands that let your script call a JavaScript snippet.
+
+This gives developers a tremendous amount of freedom to take advantage of JavaScript's flexibility and wide range of libraries. To use it, click on the test step where you want JavaScript to run, choose **Insert New Command**, and enter **execute script** or **execute async script** in the command field, as shown below.
+
+![Selenium IDE's command line][26]
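+
+As an illustrative (not official) example of filling in those fields, a step that captures the page title into a variable might look like this, where the value field names the variable that receives the script's return value:
+
+```
+Command: execute script
+Target:  return document.title;
+Value:   pageTitle
+```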
+
+### 5\. Selenium IDE runs from CI build scripts
+
+Because SIDE Runner is called from the command line, you can easily fit it into CI build scripts, so long as the CI server can call **selenium-side-runner** and upload the **.side** file (the test script) as a build artifact. For example, here's how to upload an input file in [Jenkins][27], [Travis][28], and [CircleCI][29].
+
+This means Selenium IDE can be better integrated into the software development technology stack. In addition, the scripts created by less-technical QA team members—including business analysts—can run with every build. This helps better align QA with the developer so fewer bugs escape into production.
+
+### 6\. Support for third-party plugins
+
+Imagine companies building plugins to have Selenium IDE do all kinds of things, like uploading scripts to a functional testing cloud, a load testing cloud, or a production application monitoring service.
+
+Plenty of companies have integrated Selenium Webdriver into their offerings, and I bet the same will happen with Selenium IDE. You can also [build your own Selenium IDE plugin][30].
+
+### 7\. Visual UI testing
+
+Speaking of new plugins, Applitools introduced a new Selenium IDE plugin to add artificial intelligence-powered visual validations to the equation. It's available through the [Chrome][31] and [Firefox][32] stores via a three-second install; just plug in the Applitools API key and go.
+
+Visual checkpoints are a great way to ensure a UI renders correctly. Rather than a bunch of assert statements on all the UI elements—which would be a pain to maintain—one visual checkpoint checks all your page elements.
+
+Best of all, visual AI looks at a web app the same way a human does, ignoring minor differences. This means fewer fake bugs to frustrate a development team.
+
+### 8\. Visually test responsive web apps
+
+When testing the visual layout of [responsive web apps][33], it's best to do it on a wide range of screen sizes (also called viewports) to ensure nothing appears out of whack. It's all too easy for responsive web bugs to creep in, and when they do, the problems can range from merely cosmetic to business stopping.
+
+When you use visual UI testing for Selenium IDE, you can visually test your webpages on the Applitools [Visual Grid][34], which has more than 100 combinations of browsers, emulated devices, and viewport sizes.
+
+Once tests run on the Visual Grid, developers can easily check the test results on all the various combinations.
+
+![Selenium IDE's Visual Grid][35]
+
+### 9\. Responsive web bugs have nowhere to hide
+
+Selenium IDE can help pinpoint the cause of frontend bugs. Every Selenium IDE script that's run with the Visual Grid can be analyzed with Applitools' [Root Cause Analysis][36]. It's no longer enough to find a bug—developers also need to fix it.
+
+When a visual bug is discovered, it can be clicked on and just the relevant (not all) Document Object Model (DOM) and CSS differences will be displayed.
+
+![Finding visual bugs][37]
+
+In summary, like many emerging technologies in software development, Selenium IDE is part of a larger trend: making life simpler for technical professionals so they can spend more of their time and effort creating code and getting even faster feedback.
+
+* * *
+
+_This article is based on[16 reasons why to use Selenium IDE in 2019 (and 2 why not)][38] originally published on the Applitools blog._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/features-selenium-ide
+
+作者:[Al Sargent][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/alsargent
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
+[2]: https://www.seleniumhq.org/projects/webdriver/
+[3]: https://www.cypress.io/
+[4]: https://webdriver.io/
+[5]: https://seleniumhq.wordpress.com/2017/08/09/firefox-55-and-selenium-ide/
+[6]: https://seleniumhq.wordpress.com/2018/08/06/selenium-ide-tng/
+[7]: https://github.com/SeleniumHQ/selenium-ide/graphs/contributors
+[8]: http://davehaeffner.com/
+[9]: https://opensource.com/sites/default/files/uploads/selenium_ide_github_graphic_1.png (Selenium IDE's GitHub repository)
+[10]: https://chrome.google.com/webstore/detail/selenium-ide/mooikfkahbdckldjjndioackbalphokd
+[11]: https://addons.mozilla.org/en-US/firefox/addon/selenium-ide/
+[12]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/command-line-runner/
+[13]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/command-line-runner/#launching-the-runner
+[14]: http://chromedriver.chromium.org/
+[15]: https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/
+[16]: https://github.com/mozilla/geckodriver
+[17]: https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver
+[18]: https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari
+[19]: https://opensource.com/sites/default/files/uploads/selenium_ide_side_runner_2.png (SIDE Runner)
+[20]: https://opensource.com/sites/default/files/uploads/selenium_ide_linktext_3.png (Selenium IDE captures linkText, an xPath expression, and CSS-based locators)
+[21]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/control-flow/
+[22]: https://opensource.com/sites/default/files/uploads/selenium_ide_conditional_logic_4.png (Selenium IDE's Conditional logic)
+[23]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/
+[24]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/#execute-script
+[25]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/#execute-async-script
+[26]: https://opensource.com/sites/default/files/uploads/selenium_ide_command_line_5.png (Selenium IDE's command line)
+[27]: https://stackoverflow.com/questions/27491789/how-to-upload-a-generic-file-into-a-jenkins-job
+[28]: https://docs.travis-ci.com/user/uploading-artifacts/
+[29]: https://circleci.com/docs/2.0/artifacts/
+[30]: https://www.seleniumhq.org/selenium-ide/docs/en/plugins/plugins-getting-started/
+[31]: https://chrome.google.com/webstore/detail/applitools-for-selenium-i/fbnkflkahhlmhdgkddaafgnnokifobik
+[32]: https://addons.mozilla.org/en-GB/firefox/addon/applitools-for-selenium-ide/
+[33]: https://en.wikipedia.org/wiki/Responsive_web_design
+[34]: https://applitools.com/visualgrid
+[35]: https://opensource.com/sites/default/files/uploads/selenium_ide_visual_grid_6.png (Selenium IDE's Visual Grid)
+[36]: https://applitools.com/root-cause-analysis
+[37]: https://opensource.com/sites/default/files/uploads/seleniumice_rootcauseanalysis_7.png (Finding visual bugs)
+[38]: https://applitools.com/blog/why-selenium-ide-2019
diff --git a/sources/tech/20190404 Edge Computing is Key to Meeting Digital Transformation Demands - and Partnerships Can Help Deliver Them.md b/sources/tech/20190404 Edge Computing is Key to Meeting Digital Transformation Demands - and Partnerships Can Help Deliver Them.md
new file mode 100644
index 0000000000..b2f8a59ab4
--- /dev/null
+++ b/sources/tech/20190404 Edge Computing is Key to Meeting Digital Transformation Demands - and Partnerships Can Help Deliver Them.md
@@ -0,0 +1,72 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Edge Computing is Key to Meeting Digital Transformation Demands – and Partnerships Can Help Deliver Them)
+[#]: via: (https://www.networkworld.com/article/3387140/edge-computing-is-key-to-meeting-digital-transformation-demands-and-partnerships-can-help-deliver-t.html#tk.rss_all)
+[#]: author: (Rob McKernan https://www.networkworld.com/author/Rob-McKernan/)
+
+Edge Computing is Key to Meeting Digital Transformation Demands – and Partnerships Can Help Deliver Them
+======
+
+### Organizations in virtually every vertical industry are undergoing a digital transformation in an attempt to take advantage of edge computing technology
+
+![Getty Images][1]
+
+Organizations in virtually every vertical industry are undergoing a digital transformation in an attempt to take advantage of [edge computing][2] technology to make their businesses more efficient, innovative and profitable. In the process, they’re coming face to face with challenges ranging from time to market to reliability of IT infrastructure.
+
+It’s a complex problem, especially when you consider the scope of what digital transformation entails. “Digital transformation is not simply a list of IT projects, it involves completely rethinking how an organization uses technology to pursue new revenue streams, products, services, and business models,” as the [research firm IDC says][3].
+
+Companies will be spending more than $650 billion per year on digital transformation efforts by 2024, a CAGR of more than 18.5% from 2018, according to the research firm [Market Research Engine][4].
+
+The drivers behind all that spending include Internet of Things (IoT) technology, which involves collecting data from machines and sensors covering every aspect of the organization. That is contributing to Big Data – the treasure trove of data that companies mine to find the keys to efficiency, opportunity and more. Artificial intelligence and machine learning are crucial to that effort, helping companies make sense of the mountains of data they’re creating and consuming, and to find opportunities.
+
+**Requirements for Edge Computing**
+
+All of these trends are creating the need for more and more compute power and data storage. And much of it needs to be close to the source of the data, and to those employees who are working with it. In other words, it’s driving the need for companies to build edge data centers or edge computing sites.
+
+Physically, these edge computing sites bear little resemblance to large, centralized data centers, but they have many of the same requirements in terms of performance, reliability, efficiency and security. Given they are typically in locations with few, if any, IT personnel, these data centers must have a high degree of automation and remote management capabilities. And to meet business requirements, they must be built quickly.
+
+**Answering the Call at the Edge**
+
+These are complex requirements, but if companies are to meet time-to-market goals and deal with the lack of IT personnel at the edge, they demand simple solutions.
+
+One solution is integration. We’re seeing this already in the IT space, with vendors delivering hyper-converged infrastructure that combines servers, storage, networking and software that is tightly integrated and delivered in a single enclosure. This saves IT groups valuable time in terms of procuring and configuring equipment and makes it far easier to manage over the long term.
+
+Now we’re seeing the same strategy applied to edge data centers. Prefabricated, modular data centers are an ideal solution for delivering edge data center capacity quickly and reliably. All the required infrastructure – power, cooling, racks, UPSs – can be configured and installed in a factory and delivered as a single, modular unit to the data center site (or multiple modules, depending on requirements).
+
+Given they’re built in a factory under controlled conditions, modular data centers are more reliable over the long haul. They can be configured with management software built-in, enabling remote management capabilities and a high degree of automation. And they can be delivered in weeks or months, not years – and in whatever size is required, including small “micro” data centers.
+
+Few companies, however, have all the components required to deliver a complete, functional data center, not to mention the expertise required to install and configure it. So, it takes effective partnerships to deliver complete edge data center solutions.
+
+**Tech Data Partnership Delivers at the Edge**
+
+APC by Schneider Electric has a long history of partnering to deliver complete solutions that address customer needs. Of the thousands of partnerships it has established over the years, the [25-year partnership][5] with [Tech Data][6] is particularly relevant for the digital transformation era.
+
+Tech Data is a $36.8 billion, Fortune 100 company that has established itself as the world’s leading end-to-end IT distributor. Power and physical infrastructure specialists from Tech Data team up with their counterparts from APC to deliver innovative solutions, including modular and [micro data centers][7]. Many of these solutions are pre-certified by major alliance partners, including IBM, HPE, Cisco, Nutanix, Dell EMC and others.
+
+To learn more, [access the full story][8] that explains how the Tech Data and APC partnership helps deliver [Certainty in a Connected World][9] and effective edge computing solutions that meet today’s time to market requirements.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387140/edge-computing-is-key-to-meeting-digital-transformation-demands-and-partnerships-can-help-deliver-t.html#tk.rss_all
+
+作者:[Rob McKernan][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Rob-McKernan/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/gettyimages-494323751-942x445-100792905-large.jpg
+[2]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
+[3]: https://www.idc.com/getdoc.jsp?containerId=US43985717
+[4]: https://www.marketresearchengine.com/digital-transformation-market
+[5]: https://www.apc.com/us/en/partners-alliances/partners/tech-data-and-apc-partnership-drives-edge-computing-success/full-resource.jsp
+[6]: https://www.techdata.com/
+[7]: https://www.apc.com/us/en/solutions/business-solutions/micro-data-centers.jsp
+[8]: https://www.apc.com/us/en/partners-alliances/partners/tech-data-and-apc-partnership-drives-edge-computing-success/index.jsp
+[9]: https://www.apc.com/us/en/who-we-are/certainty-in-a-connected-world.jsp
diff --git a/sources/tech/20190404 Intel formally launches Optane for data center memory caching.md b/sources/tech/20190404 Intel formally launches Optane for data center memory caching.md
new file mode 100644
index 0000000000..3ec4b4600e
--- /dev/null
+++ b/sources/tech/20190404 Intel formally launches Optane for data center memory caching.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Intel formally launches Optane for data center memory caching)
+[#]: via: (https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Intel formally launches Optane for data center memory caching
+======
+
+### Intel formally launched the Optane persistent memory product line, which includes 3D Xpoint memory technology. The Intel-only solution is meant to sit between DRAM and NAND and to speed up performance.
+
+![Intel][1]
+
+As part of its [massive data center event][2] on Tuesday, Intel formally launched the Optane persistent memory product line. It had been out for a while, but the previous generation of Xeon server processors could not fully utilize it. The new Xeon 8200 and 9200 lines take full advantage of it.
+
+And since Optane is an Intel product (co-developed with Micron), that means AMD and Arm server processors are out of luck.
+
+As I have [stated in the past][3], Optane DC Persistent Memory uses 3D Xpoint memory technology that Intel developed with Micron Technology. 3D Xpoint is a non-volatile memory type that is much faster than solid-state drives (SSD), almost at the speed of DRAM, but it has the persistence of NAND flash.
+
+**[ Read also:[Why NVMe? Users weigh benefits of NVMe-accelerated flash storage][4] and [IDC’s top 10 data center predictions][5] | Get regularly scheduled insights [Sign up for Network World newsletters][6] ]**
+
+The first 3D Xpoint products were SSDs called Intel’s ["ruler,"][7] because they were designed in a long, thin format similar to the shape of a ruler. They were designed that way to fit in 1U server carriages. As part of Tuesday’s announcement, Intel introduced the new Intel SSD D5-P4326 'Ruler' SSD, using quad-level cell (QLC) 3D NAND memory, with up to 1PB of storage in a 1U design.
+
+Optane DC Persistent Memory will be available in DIMM capacities from 128GB up to 512GB initially. That’s two to four times what you can get with DRAM, said Navin Shenoy, executive vice president and general manager of Intel’s Data Center Group, who keynoted the event.
+
+“We expect system capacity in a server system to scale to 4.5 terabytes per socket or 36 TB in an 8-socket system. That’s three times larger than what we were able to do with the first-generation of Xeon Scalable,” he said.
+
+## Intel Optane memory uses and speed
+
+Optane runs in two different modes: Memory Mode and App Direct Mode. Memory mode is what I have been describing to you, where Optane memory exists “above” the DRAM and acts as a cache. In App Direct mode, the DRAM and Optane DC Persistent Memory are pooled together to maximize the total capacity. Not every workload is ideal for this kind of configuration, so it should be used in applications that are not latency-sensitive. The primary use case for Optane, as Intel is promoting it, is Memory Mode.
+
+When 3D Xpoint was initially announced a few years back, Intel claimed it was 1,000 times faster than NAND, with 1,000 times the endurance and 10 times the density potential of DRAM. Well, that was a little exaggerated, but it does have some intriguing elements.
+
+Optane memory, when accessed in contiguous 256-byte (four-cacheline) operations, can achieve read speeds of 8.3GB/sec and write speeds of 3.0GB/sec. Compare that with the read/write speed of 500 or so MB/sec for a SATA SSD, and you can see the performance gain. Optane, remember, sits in the memory tier, so it caches frequently accessed SSD content.
+
+This is the key takeaway of Optane DC. It will keep very large data sets very close to memory, and hence the CPU, with low latency, while at the same time minimizing the need to access the slower storage subsystem, whether it’s SSD or HDD. It now offers the possibility of putting multiple terabytes of data very close to the CPU for much faster access.
+
+## One challenge with Optane memory
+
+The only real challenge is that Optane goes into DIMM slots, which is where memory goes. Now some motherboards come with as many as 16 DIMM slots per CPU socket, but that’s still board real estate that the customer and OEM provider will need to balance out: Optane vs. memory. There are some Optane drives in PCI Express format, which alleviate the memory crowding on the motherboard.
+
+3D Xpoint also offers higher endurance than traditional NAND flash memory due to the way it writes data. Intel promises a five-year warranty with its Optane, while a lot of SSDs offer only three years.
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/06/intel-optane-persistent-memory-100760427-large.jpg
+[2]: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html
+[3]: https://www.networkworld.com/article/3279271/intel-launches-optane-the-go-between-for-memory-and-storage.html
+[4]: https://www.networkworld.com/article/3290421/why-nvme-users-weigh-benefits-of-nvme-accelerated-flash-storage.html
+[5]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb
+[6]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
+[7]: https://www.theregister.co.uk/2018/02/02/ruler_and_miniruler_ssd_formats_look_to_banish_diskstyle_drives/
+[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190404 Why blockchain (might be) coming to an IoT implementation near you.md b/sources/tech/20190404 Why blockchain (might be) coming to an IoT implementation near you.md
new file mode 100644
index 0000000000..f5915aebe7
--- /dev/null
+++ b/sources/tech/20190404 Why blockchain (might be) coming to an IoT implementation near you.md
@@ -0,0 +1,79 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Why blockchain (might be) coming to an IoT implementation near you)
+[#]: via: (https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html#tk.rss_all)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+Why blockchain (might be) coming to an IoT implementation near you
+======
+
+![MF3D / Getty Images][1]
+
+Companies have found that IoT partners well with a host of other popular enterprise computing technologies of late, and blockchain – the innovative system of distributed trust most famous for underpinning cryptocurrencies – is no exception. Yet while the two phenomena can be complementary in certain circumstances, those expecting an explosion of blockchain-enabled IoT technologies probably shouldn’t hold their breath.
+
+Blockchain technology can be counter-intuitive to understand at a basic level, but it’s probably best thought of as a sort of distributed ledger keeping track of various transactions. Every “block” on the chain contains transactional records or other data to be secured against tampering, and is linked to the previous one by a cryptographic hash, which means that any tampering with the block will invalidate that connection. The nodes – which can be largely anything with a CPU in it – communicate via a decentralized, peer-to-peer network to share data and ensure the validity of the data in the chain.
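+
+As a toy illustration of that linkage (and nothing like a production implementation), you can chain records together with an ordinary hashing tool: each new "block" embeds the hash of the previous one, so editing an earlier record no longer matches the hash recorded in the block that follows:
+
+```
+$ echo "alice pays bob 5" > block1
+$ sha256sum block1 > block1.hash
+$ { cat block1.hash; echo "bob pays carol 2"; } > block2
+$ sha256sum block2 > block2.hash
+```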
+
+**[ Also see[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3].]**
+
+The system works because all the blocks have to agree with each other on the specifics of the data that they’re safeguarding, according to Nir Kshetri, a professor of management at the University of North Carolina – Greensboro. If someone attempts to alter a previous transaction on a given node, the rest of the data on the network pushes back. “The old record of the data is still there,” said Kshetri.
+
+That’s a powerful security technique – absent a bad actor successfully controlling all of the nodes on a given blockchain (the [famous “51% attack][4]”), the data protected by that blockchain can’t be falsified or otherwise fiddled with. So it should be no surprise that the use of blockchain is an attractive option to companies in some corners of the IoT world.
+
+Part of the reason for that, over and above the bare fact of blockchain’s ability to securely distribute trusted information across a network, is its place in the technology stack, according to Jay Fallah, CTO and co-founder of NXMLabs, an IoT security startup.
+
+“Blockchain stands at a very interesting intersection. Computing has accelerated in the last 15 years [in terms of] storage, CPU, etc, but networking hasn’t changed that much until recently,” he said. “[Blockchain]’s not a network technology, it’s not a data technology, it’s both.”
+
+### **Blockchain and IoT**
+
+Where blockchain makes sense as a part of the IoT world depends on who you speak to and what they are selling, but the closest thing to a general summation may have come from Allison Clift-Jennings, CEO of enterprise blockchain vendor Filament.
+
+“Anywhere where you've got people who are kind of wanting to trust each other, and have very archaic ways of doing it, that is usually a good place to start with use cases,” she said.
+
+One example, culled directly from Filament’s own customer base, is used car sales. Filament’s working with “a major Detroit automaker” to create a trusted-vehicle history platform, based on a device that plugs into the diagnostic port of a used car, pulls information from there, and writes that data to a blockchain. Just like that, there’s an immutable record of a used car’s history, including whether its airbags have ever been deployed, whether it’s been flooded, and so on. No unscrupulous used car lot or duplicitous former owner could change the data, and even unplugging the device would mean that there’s a suspicious blank period in the records.
+
+Most of present-day blockchain IoT implementation is about trust and the validation of data, according to Elvira Wallis, senior vice president and global head of IoT at SAP.
+
+“Most of the use cases that we have come across are in the realm of tracking and tracing items,” she said, giving the example of a farm-to-fork tracking system for high-end foodstuffs, using blockchain nodes mounted on crates and trucks, allowing for the creation of an un-fudgeable record of an item’s passage through transport infrastructure. (e.g., how long has this steak been refrigerated at such-and-such a temperature, how far has it traveled today, and so on.)
+
+### **Is using blockchain with IoT a good idea?**
+
+Different vendors sell different blockchain-based products for different use cases, which use different implementations of blockchain technology, some of which don’t bear much resemblance to the classic, linear, mined-transaction blockchain used in cryptocurrency.
+
+That means it’s a capability that you’d buy from a vendor for a specific use case, at this point. Few client organizations have the in-house expertise to implement a blockchain security system, according to 451 Research senior analyst Csilla Zsigri.
+
+The idea with any intelligent application of blockchain technology is to play to its strengths, she said, creating a trusted platform for critical information.
+
+“That’s where I see it really adding value, just in adding a layer of trust and validation,” said Zsigri.
+
+Yet while the basic idea of blockchain-enabled IoT applications is fairly well understood, it’s not applicable to every IoT use case, experts agree. Applying blockchain to non-transactional systems – although there are exceptions, including NXM Labs’ blockchain-based configuration product for IoT devices – isn’t usually the right move.
+
+If there isn’t a need to share data between two different parties – as opposed to simply moving data from sensor to back-end – blockchain doesn’t generally make sense, since it doesn’t really do anything for the key value-add present in most IoT implementations today: data analysis.
+
+“We’re still in kind of the early dial-up era of blockchain today,” said Clift-Jennings. “It’s slower than a typical database, it often isn't even readable, it often doesn't have a query engine tied to it. You don't really get privacy, by nature of it.”
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html#tk.rss_all
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/chains_binary_data_blockchain_security_by_mf3d_gettyimages-941175690_2400x1600-100788434-large.jpg
+[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[4]: https://bitcoinist.com/51-percent-attack-hackers-steals-18-million-bitcoin-gold-btg-tokens/
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190405 5 open source tools for teaching young children to read.md b/sources/tech/20190405 5 open source tools for teaching young children to read.md
new file mode 100644
index 0000000000..c3a1fe82c8
--- /dev/null
+++ b/sources/tech/20190405 5 open source tools for teaching young children to read.md
@@ -0,0 +1,97 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 open source tools for teaching young children to read)
+[#]: via: (https://opensource.com/article/19/4/early-literacy-tools)
+[#]: author: (Laura B. Janusek https://opensource.com/users/lbjanusek)
+
+5 open source tools for teaching young children to read
+======
+Early literacy apps give kids a foundation in letter recognition,
+alphabet sequencing, word finding, and more.
+![][1]
+
+Anyone who sees a child using a tablet or smartphone observes their seemingly innate ability to scroll through apps and swipe through screens, flexing those "digital native" muscles. According to [Common Sense Media][2], the percentage of US households in which 0- to 8-year-olds have access to a smartphone has grown from 52% in 2011 to 98% in 2017. While the debates around age guidelines and screen time surge, it's hard to deny that children are developing familiarity and skills with technology at an unprecedented rate.
+
+This rise in early technical literacy may be astonishing, but what about _traditional_ literacy, the good old-fashioned ability to read? What does the intersection of early literacy development and early tech use look like? Let's explore some open source tools for early learners that may help develop both of these critical skill sets.
+
+### Balancing risks and rewards
+
+But first, a disclaimer: Guidelines for technology use, especially for young children, are [constantly changing][3]. Organizations like the American Academy of Pediatrics, Common Sense Media, Zero to Three, and PBS Kids are continually conducting research and publishing recommendations. One position that all of these and other organizations can agree on is that plopping a child in front of a screen with unmonitored content for an unlimited amount of time is highly inadvisable.
+
+Even setting kids up with educational content or tools for extended periods of time may have risks. And on the flip side, research on the benefits of education technologies is often limited or unavailable. In short, there are many cases in which we don't know for certain if educational technology use at a young age is beneficial, detrimental, or simply neutral.
+
+But if screen time is available to your child or student, it's logical to infer that educational resources would be preferable to simpler pop-the-bubble or slice-the-fruit games, or to platforms that could house inappropriate content or online predators. While we may not be able to prove that education apps will make a child's test scores soar, we can at least take comfort in their generally being safer and more age-appropriate than the internet at large.
+
+That said, if you're open to exploring early-education technologies, there are many reasons to look to open source options. Open source technologies are not only free but open to collaborative improvement. In many cases, they are created by developers who are educators or parents themselves, and they're a great way to avoid in-app purchases, advertisements, and paid upgrades. Open source programs can often be downloaded and installed on your device and accessed without an internet connection. Plus, the idea of [open source in education][4] is a growing trend, and there are countless resources to [learn more][5] about the concept.
+
+But for now, let's check out some open source tools for early literacy in action!
+
+### Childsplay
+
+![Childsplay screenshot][6]
+
+Let's start simple. [Childsplay][7], licensed under the GPLv2, is the most basic of the resources on this list. It's a compilation of just over a dozen educational games for young learners, four of which are specific to letter recognition, including memory games and an activity where the learner identifies a spoken letter.
+
+### eduActiv8
+
+![eduActiv8 screenshot][8]
+
+[eduActiv8][9] started in 2011 as a personal project for the developer's son, "whose thirst for learning and knowledge inspired the creation of this educational program." It includes activities for building basic math and early literacy skills, including a variety of spelling, matching, and listening activities. Games include filling in missing letters in the alphabet, unscrambling letters to form a word, matching words to images, and completing mazes by connecting letters in the correct order. eduActiv8 was written in [Python][10] and is available under the GPLv3.
+
+### GCompris
+
+![GCompris screenshot][11]
+
+[GCompris][12] is an open source behemoth (licensed under the GPLv3) of early educational activities. A French software engineer started it in 2000, and it now includes over 130 educational games in nearly 20 languages. Tailored for learners under age 10, it includes activities for letter recognition and drawing, alphabet sequencing, vocabulary building, and games like hangman to identify missing letters in words, plus activities for learning braille. It also includes games in math and music, plus classics from tic-tac-toe to chess.
+
+### Feed the Monster
+
+![Feed the Monster screenshot][13]
+
+The quality of the playful "monster" graphics in [Feed the Monster][14] definitely sets it apart from the others on this list, plus it supports nearly 40 languages! The app includes activities for sorting letters to form words, memory games to match words to images, and letter-tracing writing activities. The app is developed by Curious Learning, which states: "We create, localize, distribute, and optimize open source mobile software so every child can learn to read." While Feed the Monster's offerings are geared toward early readers, Curious Learning's roadmap suggests it's headed toward a more robust personalized literacy platform built on a foundation of research with MIT, Tufts, and Georgia State University.
+
+### Syntax Untangler
+
+![Syntax Untangler screenshot][15]
+
+[Syntax Untangler][16] is the outlier of this group. Developed by a technologist at the University of Wisconsin–Madison under the GPLv2, the application is "particularly designed for training language learners to recognize and parse linguistic features." Examples show the software being used for foreign language learning, but anyone can use it to create language identification games, including games for early literacy activities like letter recognition. It could also be applied to later literacy skills, like identifying parts of speech in complex sentences or literary techniques in poetry or fiction.
+
+### Wrapping up
+
+Access to [literary environments][17] has been shown to impact literacy and attitudes towards reading. Why not strive to create a digital literary environment for our kids by filling our devices with educational technologies, just like our shelves are filled with books?
+
+Now it's your turn! What open source literacy tools have you used? Comment below to share.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/early-literacy-tools
+
+Author: [Laura B. Janusek][a]
+Selected by: [lujun9972][b]
+Translator: [译者ID](https://github.com/译者ID)
+Proofreader: [校对者ID](https://github.com/校对者ID)
+
+This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
+
+[a]: https://opensource.com/users/lbjanusek
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa
+[2]: https://www.commonsensemedia.org/research/the-common-sense-census-media-use-by-kids-age-zero-to-eight-2017?action
+[3]: https://www.businessinsider.com/smartphone-use-young-kids-toddlers-limits-science-2018-3
+[4]: /article/18/1/best-open-education
+[5]: https://opensource.com/resources/open-source-education
+[6]: https://opensource.com/sites/default/files/uploads/cp_flashcards.gif (Childsplay screenshot)
+[7]: http://www.childsplay.mobi/
+[8]: https://opensource.com/sites/default/files/uploads/eduactiv8.jpg (eduActiv8 screenshot)
+[9]: https://www.eduactiv8.org/
+[10]: /article/17/11/5-approaches-learning-python
+[11]: https://opensource.com/sites/default/files/uploads/gcompris2.png (GCompris screenshot)
+[12]: https://gcompris.net/index-en.html
+[13]: https://opensource.com/sites/default/files/uploads/feedthemonster.png (Feed the Monster screenshot)
+[14]: https://www.curiouslearning.org/
+[15]: https://opensource.com/sites/default/files/uploads/syntaxuntangler.png (Syntax Untangler screenshot)
+[16]: https://courses.dcs.wisc.edu/untangler/
+[17]: http://www.jstor.org/stable/41386459
diff --git a/sources/tech/20190405 File sharing with Git.md b/sources/tech/20190405 File sharing with Git.md
new file mode 100644
index 0000000000..13f95b8287
--- /dev/null
+++ b/sources/tech/20190405 File sharing with Git.md
@@ -0,0 +1,234 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (File sharing with Git)
+[#]: via: (https://opensource.com/article/19/4/file-sharing-git)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+File sharing with Git
+======
+SparkleShare is an open source, Git-based, Dropbox-style file sharing
+application. Learn more in our series about little-known uses of Git.
+![][1]
+
+[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at SparkleShare, which uses Git as the backbone for file sharing.
+
+### Git for file sharing
+
+One of the nice things about Git is that it's inherently distributed. It's built to share. Even if you're sharing a repository just with other computers on your own network, Git brings transparency to the act of getting files from a shared location.
+
+As interfaces go, Git is pretty simple. It varies from user to user, but the common incantation when sitting down to get some work done is just **git pull** or maybe the slightly more complex **git pull && git checkout -b my-branch**. Still, for some people, the idea of _entering a command_ into their computer at all is confusing or bothersome. Computers are meant to make life easy, and they're good at repetitive tasks, so there are easier ways to share files with Git.
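+
+For readers who don't mind the terminal, the manual routine implied by a Git-backed shared folder looks something like this (the directory, file name, and commit message here are purely illustrative):
+
+```
+$ cd ~/shared-docs        # a clone of the shared repository
+$ git pull                # pick up everyone else's changes
+$ cp ~/report.txt .       # drop in the file to share
+$ git add report.txt
+$ git commit -m "Add report"
+$ git push                # publish it for everyone else
+```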
+
+### SparkleShare
+
+The [SparkleShare][3] project is a cross-platform, open source, Dropbox-style file sharing application based on Git. It automates all Git commands, triggering the add, commit, push, and pull processes with the simple act of dragging and dropping a file into a specially designated SparkleShare directory. Because it is based on Git, you get fast, diff-based pushes and pulls, and you inherit all the benefits of Git version control and backend infrastructure (like Git hooks). It can be entirely self-hosted, or you can use it with Git hosting services like [GitLab][4], GitHub, Bitbucket, and others. Furthermore, because it's basically just a frontend to Git, you can access your SparkleShare files on devices that may not have a SparkleShare client but do have Git clients.
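+
+SparkleShare itself is a desktop application, not a shell script, but the loop it automates can be sketched in a few lines. This is only an illustration of the idea, not SparkleShare's actual implementation, and it assumes the **inotifywait** tool from inotify-tools is installed:
+
+```
+#!/bin/sh
+# Sketch of the watch-and-sync idea behind SparkleShare -- not its
+# real implementation. Watches a repository clone and syncs changes.
+cd "$HOME/SparkleShare/my-project" || exit 1
+while inotifywait -r --exclude '\.git' -e modify,create,delete,move .; do
+    git add --all
+    git commit -m "Sync $(date -u +%FT%TZ)"
+    git pull --rebase
+    git push
+done
+```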
+
+Just as you get all the benefits of Git, you also get all the usual Git restrictions: It's impractical to use SparkleShare to store hundreds of photos, music files, and videos, because Git is designed and optimized for text. Git certainly has the capability to store large binary files, but it is designed to track history, so once a file is added, it's nearly impossible to completely remove it. This somewhat limits the usefulness of SparkleShare for some people, but it makes it ideal for many workflows, including [calendaring][5].
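+
+You can see this for yourself with a quick experiment in a scratch repository (the file name and size here are arbitrary):
+
+```
+$ git init scratch && cd scratch
+$ dd if=/dev/urandom of=big.bin bs=1M count=50   # a 50 MB binary file
+$ git add big.bin && git commit -m "Add binary"
+$ git rm big.bin && git commit -m "Remove binary"
+$ du -sh .git   # gone from the working tree, not from history
+```
+
+The deleted file no longer appears in the working tree, but its data remains reachable from the first commit, which is why the repository stays large.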
+
+#### Installing SparkleShare
+
+SparkleShare is cross-platform, with installers for Windows and Mac available from its [website][6]. For Linux, there's a [Flatpak][7] in your software installer, or you can run these commands in a terminal:
+
+
+```
+$ sudo flatpak remote-add flathub