[@TimSweeneyEpic][8] will probably like this 😊 [pic.twitter.com/7mt9fXt7TH][9]
+>
+> — Lutris Gaming (@LutrisGaming) [April 17, 2019][10]
+
+As an avid gamer and Linux user, I immediately jumped upon this news and installed Lutris to run Epic Games on it.
+
+**Note:** _I used [Ubuntu 19.04][11] to test the Epic Games Store for Linux._
+
+### Using Epic Games Store on Linux with Lutris
+
+To install Epic Games Store on your Linux system, make sure that you have [Lutris][4] installed along with its prerequisites, Wine and Python 3. So, first [install Wine on Ubuntu][12] (or whichever Linux distribution you are using) and then [download Lutris from its website][13].
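+
+If you prefer the command line, the Lutris project also documented a PPA for Ubuntu at the time of writing. A rough sketch of that route is shown below; the PPA name is taken from the Lutris instructions and may change, so verify it on the Lutris download page first:
+
+```
+sudo add-apt-repository ppa:lutris-team/lutris
+sudo apt update
+sudo apt install lutris
+```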
+
+#### Installing Epic Games Store
+
+Once the installation of Lutris is successful, simply launch it.
+
+When I tried this, I encountered a problem: nothing happened when I tried to launch it using the GUI. However, when I typed **lutris** in the terminal to launch it instead, I noticed an error that looked like this:
+
+![][15]
+
+Thanks to Abhishek, I learned that this is a common issue (you can check that on [GitHub][16]).
+
+So, to fix it, all I had to do was type a command in the terminal:
+
+```
+export LC_ALL=C
+```
+
+Just copy it and run it in your terminal if you face the same issue. After that, you will be able to open Lutris.
+
+**Note:** _You’ll have to enter this command every time you launch Lutris, so it is better to add it to your .bashrc or your list of environment variables._
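+
+For example, a minimal way to make this persistent, assuming Bash is your login shell, is to append the export to your ~/.bashrc:
+
+```
+echo 'export LC_ALL=C' >> ~/.bashrc
+source ~/.bashrc
+```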
+
+Once that is done, simply launch Lutris and search for “**Epic Games Store**” as shown in the image below:
+
+![Epic Games Store in Lutris][17]
+
+Here, I have it installed already; in your case, you will get the option to “Install” it, and it will then automatically ask you to install the required packages it needs. You just have to proceed in order to install it successfully. That’s it – no rocket science involved.
+
+#### Playing a Game on Epic Games Store
+
+![Epic Games Store][18]
+
+Now that we have Epic Games store via Lutris on Linux, simply launch it and log in to your account to get started.
+
+But, does it really work?
+
+_Yes, the Epic Games Store does work._ **But not all the games do.**
+
+Well, I haven’t tried everything, but I grabbed a free game (Transistor – a turn-based ARPG) to check whether it works.
+
+![Transistor – Epic Games Store][19]
+
+Unfortunately, it didn’t. It says “Running” when I launch it, but then nothing happens.
+
+As of now, I’m not aware of any solutions to that – so I’ll try to keep you guys updated if I find a fix.
+
+**Wrapping Up**
+
+It’s good to see the gaming scene on Linux improve thanks to solutions like Lutris. However, there’s still a lot of work to be done.
+
+Getting a game to run hassle-free on Linux is still a challenge. There can be issues like the one I encountered, or similar ones. But things are going in the right direction – even if there are still rough edges.
+
+What do you think of Epic Games Store on Linux via Lutris? Have you tried it yet? Let us know your thoughts in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/epic-games-lutris-linux/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/linux-gaming-guide/
+[2]: https://itsfoss.com/steam-play/
+[3]: https://itsfoss.com/steam-play-proton/
+[4]: https://lutris.net/
+[5]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-lutris-linux-800x450.png
+[6]: https://www.epicgames.com/store/en-US/
+[7]: https://twitter.com/EpicGames?ref_src=twsrc%5Etfw
+[8]: https://twitter.com/TimSweeneyEpic?ref_src=twsrc%5Etfw
+[9]: https://t.co/7mt9fXt7TH
+[10]: https://twitter.com/LutrisGaming/status/1118552969816018948?ref_src=twsrc%5Etfw
+[11]: https://itsfoss.com/ubuntu-19-04-release-features/
+[12]: https://itsfoss.com/install-latest-wine/
+[13]: https://lutris.net/downloads/
+[14]: https://itsfoss.com/ubuntu-mate-entroware/
+[15]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-error.jpg
+[16]: https://github.com/lutris/lutris/issues/660
+[17]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-epic-games-store-800x520.jpg
+[18]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-800x450.jpg
+[19]: https://itsfoss.com/wp-content/uploads/2019/04/transistor-game-epic-games-store-800x410.jpg
+[20]: https://itsfoss.com/skpe-alpha-linux/
diff --git a/sources/tech/20190423 How to identify same-content files on Linux.md b/sources/tech/20190423 How to identify same-content files on Linux.md
new file mode 100644
index 0000000000..8d9b34b30a
--- /dev/null
+++ b/sources/tech/20190423 How to identify same-content files on Linux.md
@@ -0,0 +1,260 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to identify same-content files on Linux)
+[#]: via: (https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+How to identify same-content files on Linux
+======
+Copies of files sometimes represent a big waste of disk space and can cause confusion if you want to make updates. Here are six commands to help you identify these files.
+![Vinoth Chandar \(CC BY 2.0\)][1]
+
+In a recent post, we looked at [how to identify and locate files that are hard links][2] (i.e., that point to the same disk content and share inodes). In this post, we'll check out commands for finding files that have the same _content_ , but are not otherwise connected.
+
+Hard links are helpful because they allow files to exist in multiple places in the file system while not taking up any additional disk space. Copies of files, on the other hand, sometimes represent a big waste of disk space and run some risk of causing some confusion if you want to make updates. In this post, we're going to look at multiple ways to identify these files.
+
+**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][3] ]**
+
+### Comparing files with the diff command
+
+Probably the easiest way to compare two files is to use the **diff** command. The output will show you the differences between the two files. The < and > signs indicate whether the extra lines are in the first (<) or second (>) file provided as arguments. In this example, the extra lines are in backup.html.
+
+```
+$ diff index.html backup.html
+2438a2439,2441
+>
+> That's all there is to report.
+>
+```
+
+If diff shows no output, that means the two files are the same.
+
+```
+$ diff home.html index.html
+$
+```
+
+The only drawbacks to diff are that it can only compare two files at a time, and you have to identify the files to compare. Some commands we will look at in this post can find the duplicate files for you.
+
+### Using checksums
+
+The **cksum** (checksum) command computes checksums for files. A checksum is a mathematical reduction of the contents to a lengthy number (like 2819078353 in the output below; the second column is the file size in bytes). While checksums are not absolutely unique, the chance that files which are not identical in content would produce the same checksum is extremely small.
+
+```
+$ cksum *.html
+2819078353 228029 backup.html
+4073570409 227985 home.html
+4073570409 227985 index.html
+```
+
+In the example above, you can see how the second and third files yield the same checksum and can be assumed to be identical.
+
+### Using the find command
+
+While the find command doesn't have an option for finding duplicate files, it can be used to search files by name or type and run the cksum command. For example:
+
+```
+$ find . -name "*.html" -exec cksum {} \;
+4073570409 227985 ./home.html
+2819078353 228029 ./backup.html
+4073570409 227985 ./index.html
+```
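+
+To avoid comparing the checksums by eye, you can pipe that output through a small awk filter that flags any file whose checksum has already appeared. This is only a sketch and assumes the file names contain no spaces:
+
+```
+$ find . -name "*.html" -exec cksum {} \; | awk 'seen[$1]++ { print "possible duplicate:", $3 }'
+```
+
+With the sample files above, this would flag ./index.html, since ./home.html appears first with the same checksum.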
+
+### Using the fslint command
+
+The **fslint** command can be used to specifically find duplicate files. Note that we give it a starting location. The command can take quite some time to complete if it needs to run through a large number of files. Here's output from a very modest search. Note how it lists the duplicate files and also looks for other issues, such as empty directories and bad IDs.
+
+```
+$ fslint .
+-----------------------------------file name lint
+-------------------------------Invalid utf8 names
+-----------------------------------file case lint
+----------------------------------DUPlicate files <==
+home.html
+index.html
+-----------------------------------Dangling links
+--------------------redundant characters in links
+------------------------------------suspect links
+--------------------------------Empty Directories
+./.gnupg
+----------------------------------Temporary Files
+----------------------duplicate/conflicting Names
+------------------------------------------Bad ids
+-------------------------Non Stripped executables
+```
+
+You may have to install **fslint** on your system. You will probably have to add it to your search path, as well:
+
+```
+$ export PATH=$PATH:/usr/share/fslint/fslint
+```
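+
+On Ubuntu and other Debian-based systems, fslint was available in the standard repositories at the time of writing (package names can vary between distributions, so treat this as a starting point):
+
+```
+$ sudo apt install fslint
+```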
+
+### Using the rdfind command
+
+The **rdfind** command will also look for duplicate (same content) files. The name stands for "redundant data find," and the command is able to determine, based on file dates, which files are the originals — which is helpful if you choose to delete the duplicates, as it will remove the newer files.
+
+```
+$ rdfind ~
+Now scanning "/home/shark", found 12 files.
+Now have 12 files in total.
+Removed 1 files due to nonunique device and inode.
+Total size is 699498 bytes or 683 KiB
+Removed 9 files due to unique sizes from list.2 files left.
+Now eliminating candidates based on first bytes:removed 0 files from list.2 files left.
+Now eliminating candidates based on last bytes:removed 0 files from list.2 files left.
+Now eliminating candidates based on sha1 checksum:removed 0 files from list.2 files left.
+It seems like you have 2 files that are not unique
+Totally, 223 KiB can be reduced.
+Now making results file results.txt
+```
+
+You can also run this command in "dryrun" (i.e., only report the changes that might otherwise be made).
+
+```
+$ rdfind -dryrun true ~
+(DRYRUN MODE) Now scanning "/home/shark", found 12 files.
+(DRYRUN MODE) Now have 12 files in total.
+(DRYRUN MODE) Removed 1 files due to nonunique device and inode.
+(DRYRUN MODE) Total size is 699352 bytes or 683 KiB
+Removed 9 files due to unique sizes from list.2 files left.
+(DRYRUN MODE) Now eliminating candidates based on first bytes:removed 0 files from list.2 files left.
+(DRYRUN MODE) Now eliminating candidates based on last bytes:removed 0 files from list.2 files left.
+(DRYRUN MODE) Now eliminating candidates based on sha1 checksum:removed 0 files from list.2 files left.
+(DRYRUN MODE) It seems like you have 2 files that are not unique
+(DRYRUN MODE) Totally, 223 KiB can be reduced.
+(DRYRUN MODE) Now making results file results.txt
+```
+
+The rdfind command also provides options for things such as ignoring empty files (-ignoreempty) and following symbolic links (-followsymlinks). Check out the man page for explanations.
+
+```
+-ignoreempty ignore empty files
+-minsize ignore files smaller than specified size
+-followsymlinks follow symbolic links
+-removeidentinode remove files referring to identical inode
+-checksum identify checksum type to be used
+-deterministic determines how to sort files
+-makesymlinks turn duplicate files into symbolic links
+-makehardlinks replace duplicate files with hard links
+-makeresultsfile create a results file in the current directory
+-outputname provide name for results file
+-deleteduplicates delete/unlink duplicate files
+-sleep set sleep time between reading files (milliseconds)
+-n, -dryrun display what would have been done, but don't do it
+```
+
+Note that the rdfind command offers an option to delete duplicate files with the **-deleteduplicates true** setting. Hopefully the command's modest problem with grammar won't irritate you. ;-)
+
+```
+$ rdfind -deleteduplicates true .
+...
+Deleted 1 files. <==
+```
+
+You will likely have to install the rdfind command on your system. It's probably a good idea to experiment with it to get comfortable with how it works.
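+
+For example, assuming your distribution packages it under the same name (which is the case for most major ones):
+
+```
+$ sudo apt install rdfind      # Debian/Ubuntu
+$ sudo dnf install rdfind      # Fedora
+```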
+
+### Using the fdupes command
+
+The **fdupes** command also makes it easy to identify duplicate files and provides a large number of useful options — like **-r** for recursion. In its simplest form, it groups duplicate files together like this:
+
+```
+$ fdupes ~
+/home/shs/UPGRADE
+/home/shs/mytwin
+
+/home/shs/lp.txt
+/home/shs/lp.man
+
+/home/shs/penguin.png
+/home/shs/penguin0.png
+/home/shs/hideme.png
+```
+
+Here's an example using recursion. Note that many of the duplicate files are important (users' .bashrc and .profile files) and should clearly not be deleted.
+
+```
+# fdupes -r /home
+/home/shark/home.html
+/home/shark/index.html
+
+/home/dory/.bashrc
+/home/eel/.bashrc
+
+/home/nemo/.profile
+/home/dory/.profile
+/home/shark/.profile
+
+/home/nemo/tryme
+/home/shs/tryme
+
+/home/shs/arrow.png
+/home/shs/PNGs/arrow.png
+
+/home/shs/11/files_11.zip
+/home/shs/ERIC/file_11.zip
+
+/home/shs/penguin0.jpg
+/home/shs/PNGs/penguin.jpg
+/home/shs/PNGs/penguin0.jpg
+
+/home/shs/Sandra_rotated.png
+/home/shs/PNGs/Sandra_rotated.png
+```
+
+The fdupes command's many options are listed below. Use the **fdupes -h** command, or read the man page for more details.
+
+```
+-r --recurse recurse
+-R --recurse: recurse through specified directories
+-s --symlinks follow symlinked directories
+-H --hardlinks treat hard links as duplicates
+-n --noempty ignore empty files
+-f --omitfirst omit the first file in each set of matches
+-A --nohidden ignore hidden files
+-1 --sameline list matches on a single line
+-S --size show size of duplicate files
+-m --summarize summarize duplicate files information
+-q --quiet hide progress indicator
+-d --delete prompt user for files to preserve
+-N --noprompt when used with --delete, preserve the first file in set
+-I --immediate delete duplicates as they are encountered
+-p --permissions don't consider files with different owner/group or
+ permission bits as duplicates
+-o --order=WORD order files according to specification
+-i --reverse reverse order while sorting
+-v --version display fdupes version
+-h --help displays help
+```
+
+The fdupes command is another one that you're likely to have to install and work with for a while to become familiar with its many options.
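+
+As with rdfind, installation is usually a one-liner from your distribution's repositories, assuming the standard package name:
+
+```
+$ sudo apt install fdupes      # Debian/Ubuntu
+$ sudo dnf install fdupes      # Fedora
+```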
+
+### Wrap-up
+
+Linux systems provide a good selection of tools for locating and potentially removing duplicate files, along with options for where you want to run your search and what you want to do with duplicate files when you find them.
+
+**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][4] ]**
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/chairs-100794266-large.jpg
+[2]: https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html
+[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[4]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190425 Automate backups with restic and systemd.md b/sources/tech/20190425 Automate backups with restic and systemd.md
new file mode 100644
index 0000000000..46c71ae313
--- /dev/null
+++ b/sources/tech/20190425 Automate backups with restic and systemd.md
@@ -0,0 +1,132 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Automate backups with restic and systemd)
+[#]: via: (https://fedoramagazine.org/automate-backups-with-restic-and-systemd/)
+[#]: author: (Link Dupont https://fedoramagazine.org/author/linkdupont/)
+
+Automate backups with restic and systemd
+======
+
+![][1]
+
+Timely backups are important. So much so that [backing up software][2] is a common topic of discussion, even [here on the Fedora Magazine][3]. This article demonstrates how to automate backups with **restic** using only systemd unit files.
+
+For an introduction to restic, be sure to check out our article [Use restic on Fedora for encrypted backups][4]. Then read on for more details.
+
+Two systemd services are required in order to automate taking snapshots and keeping data pruned. The first service runs the _backup_ command on a regular schedule. The second service takes care of data pruning.
+
+If you’re not familiar with systemd at all, there’s never been a better time to learn. Check out [the series on systemd here at the Magazine][5], starting with this primer on unit files:
+
+> [systemd unit file basics][6]
+
+If you haven’t installed restic already, note it’s in the official Fedora repositories. To install use this command [with sudo][7]:
+
+```
+$ sudo dnf install restic
+```
+
+### Backup
+
+First, create the _~/.config/systemd/user/restic-backup.service_ file. Copy and paste the text below into the file for best results.
+
+```
+[Unit]
+Description=Restic backup service
+[Service]
+Type=oneshot
+ExecStart=restic backup --verbose --one-file-system --tag systemd.timer $BACKUP_EXCLUDES $BACKUP_PATHS
+ExecStartPost=restic forget --verbose --tag systemd.timer --group-by "paths,tags" --keep-daily $RETENTION_DAYS --keep-weekly $RETENTION_WEEKS --keep-monthly $RETENTION_MONTHS --keep-yearly $RETENTION_YEARS
+EnvironmentFile=%h/.config/restic-backup.conf
+```
+
+This service references an environment file in order to load secrets (such as _RESTIC_PASSWORD_ ). Create the _~/.config/restic-backup.conf_ file. Copy and paste the content below for best results. This example uses BackBlaze B2 buckets. Adjust the ID, key, repository, and password values accordingly.
+
+```
+BACKUP_PATHS="/home/rupert"
+BACKUP_EXCLUDES="--exclude-file /home/rupert/.restic_excludes --exclude-if-present .exclude_from_backup"
+RETENTION_DAYS=7
+RETENTION_WEEKS=4
+RETENTION_MONTHS=6
+RETENTION_YEARS=3
+B2_ACCOUNT_ID=XXXXXXXXXXXXXXXXXXXXXXXXX
+B2_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+RESTIC_REPOSITORY=b2:XXXXXXXXXXXXXXXXXX:/
+RESTIC_PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+```
+
+Now that the service is installed, reload systemd: _systemctl --user daemon-reload_. Try running the service manually to create a backup: _systemctl --user start restic-backup_.
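+
+Spelled out as commands (the unit name corresponds to the file created above):
+
+```
+$ systemctl --user daemon-reload
+$ systemctl --user start restic-backup.service
+```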
+
+Because the service is a _oneshot_ , it will run once and exit. After verifying that the service runs and creates snapshots as desired, set up a timer to run this service regularly. For example, to run the _restic-backup.service_ daily, create _~/.config/systemd/user/restic-backup.timer_ as follows. Again, copy and paste this text:
+
+```
+[Unit]
+Description=Backup with restic daily
+[Timer]
+OnCalendar=daily
+Persistent=true
+[Install]
+WantedBy=timers.target
+```
+
+Enable it by running this command:
+
+```
+$ systemctl --user enable --now restic-backup.timer
+```
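+
+To verify that the timer is scheduled, you can list the user timers; the restic-backup.timer entry should appear along with its next run time:
+
+```
+$ systemctl --user list-timers
+```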
+
+### Prune
+
+While the main service runs the _forget_ command to only keep snapshots within the keep policy, the data is not actually removed from the restic repository. The _prune_ command inspects the repository and current snapshots, and deletes any data not associated with a snapshot. Because _prune_ can be a time-consuming process, it is not necessary to run every time a backup is run. This is the perfect scenario for a second service and timer. First, create the file _~/.config/systemd/user/restic-prune.service_ by copying and pasting this text:
+
+```
+[Unit]
+Description=Restic backup service (data pruning)
+[Service]
+Type=oneshot
+ExecStart=restic prune
+EnvironmentFile=%h/.config/restic-backup.conf
+```
+
+Similarly to the main _restic-backup.service_ , _restic-prune_ is a oneshot service and can be run manually. Once the service has been set up, create and enable a corresponding timer at _~/.config/systemd/user/restic-prune.timer_ :
+
+```
+[Unit]
+Description=Prune data from the restic repository monthly
+[Timer]
+OnCalendar=monthly
+Persistent=true
+[Install]
+WantedBy=timers.target
+```
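+
+Then enable it the same way as the backup timer:
+
+```
+$ systemctl --user enable --now restic-prune.timer
+```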
+
+That’s it! Restic will now run daily and prune data monthly.
+
+* * *
+
+_Photo by _[ _Samuel Zeller_][8]_ on _[_Unsplash_][9]_._
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/automate-backups-with-restic-and-systemd/
+
+作者:[Link Dupont][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/linkdupont/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/restic-systemd-816x345.jpg
+[2]: https://restic.net/
+[3]: https://fedoramagazine.org/?s=backup
+[4]: https://fedoramagazine.org/use-restic-encrypted-backups/
+[5]: https://fedoramagazine.org/series/systemd-series/
+[6]: https://fedoramagazine.org/systemd-getting-a-grip-on-units/
+[7]: https://fedoramagazine.org/howto-use-sudo/
+[8]: https://unsplash.com/photos/JuFcQxgCXwA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[9]: https://unsplash.com/search/photos/archive?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/sources/tech/20190425 Debian has a New Project Leader.md b/sources/tech/20190425 Debian has a New Project Leader.md
new file mode 100644
index 0000000000..00f114b907
--- /dev/null
+++ b/sources/tech/20190425 Debian has a New Project Leader.md
@@ -0,0 +1,106 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Debian has a New Project Leader)
+[#]: via: (https://itsfoss.com/debian-project-leader-election/)
+[#]: author: (Shirish https://itsfoss.com/author/shirish/)
+
+Debian has a New Project Leader
+======
+
+Like every year, the Debian Secretary announced a call for nominations for the post of Debian Project Leader (commonly known as DPL) in early March. Soon, five candidates shared their nominations. One of the DPL candidates backed out due to personal reasons, leaving us with [four candidates][1], as can be seen in the Nomination section of the Vote page.
+
+### Sam Hartman, the new Debian Project Leader
+
+![][2]
+
+While I will not go into much detail, as Sam has already outlined his position on his [platform][3], it is good to see that most Debian developers recognize that it’s no longer just technical excellence which needs to be looked at. I do hope he is able to create more teams, which would leave more time in the DPL’s hands and mean less stress going forward.
+
+As he has shared, he will also be looking into helping the other DPL candidates, all of whom presented initiatives to make Debian better.
+
+Apart from this, there have been some excellent suggestions, for example modernizing the debian-installer, giving lists.debian.org a [Mailman 3][4] instance, modernizing Debian packaging, and many more.
+
+While a year is probably too short a time for any of the deliverables that Debian people are thinking of, some sort of push or start should enable Debian to reach greater heights than today.
+
+### A brief history of DPL elections
+
+In the beginning, Debian was similar to many distributions which have a [BDFL][5], although from the very start Debian had a sort of rolling leadership. While I won’t go through the whole history, in October 1998 the idea of having a Debian Constitution [germinated][6].
+
+After quite a bit of discussion between Debian users, contributors, developers etc. [Debian 1.0 Constitution][7] was released on December 2nd, 1998. One of the big changes was that it formalised the selection of Debian Project Leader via elections.
+
+From 1998 till 2019, 13 Debian project leaders have been elected, with Sam Hartman being the latest (2019).
+
+Before Sam, [Chris Lamb][8] was DPL in 2017 and stood for re-election in 2018. One of the biggest changes in Chris’s tenure was a greater impetus on outreach than ever before. This made it possible to have many more mini-DebConfs all around the world, increasing the number of Debian users and potential Debian Developers.
+
+### Duties and Responsibilities of the Debian Project Leader
+
+![][10]
+
+The Debian Project Leader (DPL) is a non-monetary position, which means that the DPL doesn’t get a salary or any monetary benefits in the traditional sense, but it is a prestigious position.
+
+Curious what a DPL does? Here are some of the duties, responsibilities, prestige and perks associated with this position.
+
+#### Travelling
+
+As the DPL is the public face of the project, she/he is supposed to travel to many places in the world to talk about Debian. While the travel may be a perk, it is offset by the fact that the DPL is not paid for the time spent articulating Debian’s position in various free software and other communities. Travel, language, and the politics of free software are also some of the stress points that any DPL has to go through.
+
+#### Communication
+
+A DPL is expected to have excellent verbal and non-verbal communication skills, as she/he is expected to share Debian’s vision of computing with technical and non-technical people. As she/he is also expected to weigh in on many a sensitive matter, the Project Leader has to make choices about which communications should be made public and which should be private.
+
+#### Budgeting
+
+Quite a bit of the time, the Debian Project Leader has to look into the finances along with the Secretary and take a call on various initiatives mooted by the larger community. The Project Leader has to ask questions and then make informed decisions on them.
+
+#### Delegation
+
+One of the important tasks of the DPL is to delegate different tasks to suitable people. Some sensitive delegations include ftp-master, ftp-assistant, list-managers, debian-mirror, debian-infrastructure and so on.
+
+#### Influence
+
+Last but not least, just like in any other election, the people who contest for DPL have a platform where they share their ideas about where they would like to see the Debian project heading and how they would go about doing it.
+
+This is by no means an exhaustive list. I would suggest reading Lucas Nussbaum’s [mail][11], in which he outlines some more responsibilities of a Debian Project Leader.
+
+**In the end…**
+
+I wish Sam Hartman all the best. I look forward to seeing how Debian grows under his leadership.
+
+I also hope that you learned a few non-technical things about Debian. If you are an [ardent Debian user][13], stuff like this makes you feel more involved with the Debian project. What do you say?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/debian-project-leader-election/
+
+作者:[Shirish][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/shirish/
+[b]: https://github.com/lujun9972
+[1]: https://www.debian.org/vote/2019/vote_001
+[2]: https://itsfoss.com/wp-content/uploads/2019/04/Debian-Project-Leader-election-800x450.png
+[3]: https://www.debian.org/vote/2019/platforms/hartmans
+[4]: http://docs.mailman3.org/en/latest/
+[5]: https://en.wikipedia.org/wiki/Benevolent_dictator_for_life
+[6]: https://lists.debian.org/debian-devel/1998/09/msg00506.html
+[7]: https://www.debian.org/devel/constitution.1.0
+[8]: https://www.debian.org/vote/2017/platforms/lamby
+[9]: https://itsfoss.com/semicode-os-linux/
+[10]: https://itsfoss.com/wp-content/uploads/2019/04/leadership-800x450.jpg
+[11]: https://lists.debian.org/debian-vote/2019/03/msg00023.html
+[12]: https://itsfoss.com/bodhi-linux-5/
+[13]: https://itsfoss.com/reasons-why-i-love-debian/
diff --git a/sources/tech/20190426 NomadBSD, a BSD for the Road.md b/sources/tech/20190426 NomadBSD, a BSD for the Road.md
new file mode 100644
index 0000000000..d31f9b4a90
--- /dev/null
+++ b/sources/tech/20190426 NomadBSD, a BSD for the Road.md
@@ -0,0 +1,125 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (NomadBSD, a BSD for the Road)
+[#]: via: (https://itsfoss.com/nomadbsd/)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+NomadBSD, a BSD for the Road
+======
+
+As regular It’s FOSS readers should know, I like diving into the world of BSDs. Recently, I came across an interesting BSD that is designed to live on a thumb drive. Let’s take a look at NomadBSD.
+
+### What is NomadBSD?
+
+![Nomadbsd Desktop][1]
+
+[NomadBSD][2] is different from most other BSDs. It is a live system based on FreeBSD that comes with automatic hardware detection and an initial config tool. NomadBSD is designed to “be used as a desktop system that works out of the box, but can also be used for data recovery, for educational purposes, or to test FreeBSD’s hardware compatibility.”
+
+This German BSD comes with an [OpenBox][3]-based desktop with the Plank application dock. NomadBSD makes use of the [DSB project][4]. DSB stands for “Desktop Suite (for) (Free)BSD” and consists of a collection of programs designed to create a simple and working environment without needing a ton of dependencies to use one tool. DSB is created by [Marcel Kaiser][5] one of the lead devs of NomadBSD.
+
+Just like the original BSD projects, you can contact the NomadBSD developers via a [mailing list][6].
+
+#### Included Applications
+
+NomadBSD comes with the following software installed:
+
+ * Thunar file manager
+ * Asunder CD ripper
+ * Bash 5.0
+ * Filezilla FTP client
+ * Firefox web browser
+ * Fish Command line
+ * Gimp
+ * Qpdfview
+ * Git
+
+
+ * Hexchat IRC client
+ * Leafpad text editor
+ * Midnight Commander file manager
+ * PaleMoon web browser
+ * PCManFM file manager
+ * Pidgin messaging client
+ * Transmission BitTorrent client
+
+
+ * Redshift
+ * Sakura terminal emulator
+ * Slim login manager
+ * Thunderbird email client
+ * VLC media player
+ * Plank application dock
+ * Z Shell
+
+
+
+You can see a complete list of the pre-installed applications in the [MANIFEST file][8].
+
+![Nomadbsd Openbox Menu][9]
+
+#### Version 1.2 Released
+
+NomadBSD recently released version 1.2 on April 21, 2019. This means that NomadBSD is now based on FreeBSD 12.0-p3. TRIM is now enabled by default. One of the biggest changes is that the initial command-line setup was replaced with a Qt graphical interface. They also added a Qt5 tool to install NomadBSD to your hard drive. A number of fixes were included to improve graphics support. They also added support for creating 32-bit images.
+
+### Installing NomadBSD
+
+Since NomadBSD is designed to be a live system, we will need to add the BSD to a USB drive. First, you will need to [download it][11]. There are several options to choose from: 64-bit, 32-bit, or 64-bit Mac.
+
+You will need a USB drive with at least 4 GB of space. The system that you are running it on should have a 1.2 GHz processor and 1 GB of RAM to run NomadBSD comfortably. Both BIOS and UEFI are supported.
+
+All of the images available for download are compressed as a `.lzma` file. So, once you have downloaded the file, you will need to extract the `.img` file. On Linux, you can use either of these commands: `lzma -d nomadbsd-x.y.z.img.lzma` or `xzcat nomadbsd-x.y.z.img.lzma > nomadbsd-x.y.z.img` (note that `xzcat` writes to standard output, so the redirect is needed). Be sure to replace x.y.z with the version of the file you just downloaded.
+
+Before we proceed, we need to find out the id of your USB drive. (Hopefully, you have inserted it by now.) I use the `lsblk` command to find my USB drive, which in my case is `sdb`. To write the image file, use this command `sudo dd if=nomadbsd-x.y.z.img of=/dev/sdb bs=1M conv=sync`. (Again, don’t forget to correct the file name.) If you are uncomfortable using `dd`, you can use [Etcher][12]. If you have Windows, you will need to use [7-zip][13] to extract the image file and Etcher or [Rufus][14] to write the image to the USB drive.
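+
+Putting those steps together, a rough sketch of the whole write process looks like this (the version number and the /dev/sdX target device are placeholders; double-check the device name with lsblk before running dd, since writing to the wrong disk destroys its data):
+
+```
+$ xzcat nomadbsd-1.2.img.lzma > nomadbsd-1.2.img
+$ lsblk
+$ sudo dd if=nomadbsd-1.2.img of=/dev/sdX bs=1M conv=sync status=progress
+```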
+
+When you boot from the USB drive, you will encounter a simple config tool. Once you answer the required questions, you will be greeted with a simple Openbox desktop.
+
+### Thoughts on NomadBSD
+
+I first discovered NomadBSD back in January when they released 1.2-RC1. At the time, I had been unable to install [Project Trident][15] on my laptop and was very frustrated with BSDs. I downloaded NomadBSD and tried it out. I initially ran into issues reaching the desktop, but RC2 fixed that issue. However, I was unable to get on the internet, even though I had an Ethernet cable plugged in. Luckily, I found the wifi manager in the menu and was able to connect to my wifi.
+
+Overall, my experience with NomadBSD was pleasant. Once I figured out a few things, I was good to go. I hope that NomadBSD is the first of a new generation of BSDs that focus on mobility and ease of use. BSD has conquered the server world; it’s about time the BSDs figured out how to be more user-friendly on the desktop.
+
+Have you ever used NomadBSD? What is your BSD? Please let us know in the comments below.
+
+If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][16].
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/nomadbsd/
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/wp-content/uploads/2019/04/NomadBSD-desktop-800x500.jpg
+[2]: http://nomadbsd.org/
+[3]: http://openbox.org/wiki/Main_Page
+[4]: https://freeshell.de/%7Emk/projects/dsb.html
+[5]: https://github.com/mrclksr
+[6]: http://nomadbsd.org/contact.html
+[7]: https://itsfoss.com/netflix-freebsd-cdn/
+[8]: http://nomadbsd.org/download/nomadbsd-1.2.manifest
+[9]: https://itsfoss.com/wp-content/uploads/2019/04/NomadBSD-Openbox-menu-800x500.jpg
+[10]: https://itsfoss.com/why-use-bsd/
+[11]: http://nomadbsd.org/download.html
+[12]: https://www.balena.io/etcher/
+[13]: https://www.7-zip.org/
+[14]: https://rufus.ie/
+[15]: https://itsfoss.com/project-trident-interview/
+[16]: http://reddit.com/r/linuxusersgroup
diff --git a/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md b/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md
new file mode 100644
index 0000000000..89f942ce66
--- /dev/null
+++ b/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md
@@ -0,0 +1,166 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Monitoring CPU and GPU Temperatures on Linux)
+[#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/)
+[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
+
+Monitoring CPU and GPU Temperatures on Linux
+======
+
+_**Brief: This article discusses two simple ways of monitoring CPU and GPU temperatures from the Linux command line.**_
+
+Because of **[Steam][1]** (including _[Steam Play][2]_, aka _Proton_) and other developments, **GNU/Linux** is becoming the gaming platform of choice for more and more computer users every day. A good number of users are also going for **GNU/Linux** when it comes to other resource-consuming computing tasks such as [video editing][3] or graphic design (_Kdenlive_ and _[Blender][4]_ are good examples of programs for these).
+
+Whether you are one of those users or otherwise, you are bound to have wondered how hot your computer’s CPU and GPU can get (even more so if you do overclocking). If that is the case, keep reading. We will be looking at a couple of very simple commands to monitor CPU and GPU temps.
+
+My setup includes a [Slimbook Kymera][5] and two displays (a TV set and a PC monitor) which allows me to use one for playing games and the other to keep an eye on the temperatures. Also, since I use [Zorin OS][6] I will be focusing on **Ubuntu** and **Ubuntu** derivatives.
+
+To monitor the behaviour of both the CPU and the GPU, we will be making use of the handy `watch` command to refresh the readings every few seconds.
+
+![][7]
+
+### Monitoring CPU Temperature in Linux
+
+For CPU temps, we will combine `watch` with the `sensors` command. An interesting article about a [gui version of this tool has already been covered on It’s FOSS][8]. However, we will use the terminal version here:
+
+```
+watch -n 2 sensors
+```
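+
+If the `sensors` command is not available on your system, it is provided by the lm-sensors package on Ubuntu and its derivatives (verify the package name on other distributions); you may also want to run the hardware detection step once:
+
+```
+sudo apt install lm-sensors
+sudo sensors-detect
+```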
+
+`watch` ensures that the readings are updated every 2 seconds (and this value can, of course, be changed to whatever best fits your needs):
+
+```
+Every 2,0s: sensors
+
+iwlwifi-virtual-0
+Adapter: Virtual device
+temp1: +39.0°C
+
+acpitz-virtual-0
+Adapter: Virtual device
+temp1: +27.8°C (crit = +119.0°C)
+temp2: +29.8°C (crit = +119.0°C)
+
+coretemp-isa-0000
+Adapter: ISA adapter
+Package id 0: +37.0°C (high = +82.0°C, crit = +100.0°C)
+Core 0: +35.0°C (high = +82.0°C, crit = +100.0°C)
+Core 1: +35.0°C (high = +82.0°C, crit = +100.0°C)
+Core 2: +33.0°C (high = +82.0°C, crit = +100.0°C)
+Core 3: +36.0°C (high = +82.0°C, crit = +100.0°C)
+Core 4: +37.0°C (high = +82.0°C, crit = +100.0°C)
+Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C)
+```
+
+Amongst other things, we get the following information:
+
+ * We have six cores (Core 0 through Core 5) reporting at the moment (with the current highest temperature being 37.0ºC).
+ * Values higher than 82.0ºC are considered high.
+ * A value over 100.0ºC is deemed critical.
+
+
+
+The values above lead us to the conclusion that the computer’s workload is very light at the moment.
+
+### Monitoring GPU Temperature in Linux
+
+Let us turn to the graphics card now. I have never used an **AMD** dedicated graphics card, so I will be focusing on **Nvidia** ones. The first thing to do is download the appropriate, current driver through [additional drivers in Ubuntu][10].
+
+On **Ubuntu** (and its forks such as **Zorin** or **Linux Mint** ), going to _Software & Updates_ > _Additional Drivers_ and selecting the most recent one normally suffices. Additionally, you can add/enable the official _ppa_ for graphics cards (either through the command line or via _Software & Updates_ > _Other Software_ ). After installing the driver you will have at your disposal the _Nvidia X Server_ gui application along with the command line utility _nvidia-smi_ (Nvidia System Management Interface). So we will use `watch` and `nvidia-smi`:
+
+```
+watch -n 2 nvidia-smi
+```
+
+And — the same as for the CPU — we will get updated readings every two seconds:
+
+```
+Every 2,0s: nvidia-smi
+
+Fri Apr 19 20:45:30 2019
++-----------------------------------------------------------------------------+
+| Nvidia-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 |
+|-------------------------------+----------------------+----------------------+
+| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+|===============================+======================+======================|
+| 0 GeForce GTX 106... Off | 00000000:01:00.0 On | N/A |
+| 0% 54C P8 10W / 120W | 433MiB / 6077MiB | 4% Default |
++-------------------------------+----------------------+----------------------+
+
++-----------------------------------------------------------------------------+
+| Processes: GPU Memory |
+| GPU PID Type Process name Usage |
+|=============================================================================|
+| 0 1557 G /usr/lib/xorg/Xorg 190MiB |
+| 0 1820 G /usr/bin/gnome-shell 174MiB |
+| 0 7820 G ...equest-channel-token=303407235874180773 65MiB |
++-----------------------------------------------------------------------------+
+```
+
+The chart gives the following information about the graphics card:
+
+ * it is using the Nvidia driver version 418.56.
+ * the current temperature of the card is 54.0ºC — with the fan at 0% of its capacity.
+ * the power consumption is very low: only 10W.
+ * out of 6 GB of vram (video random access memory), it is only using 433 MB.
+ * the used vram is being taken by three processes whose IDs are — respectively — 1557, 1820 and 7820.
+
+
+
+Most of these facts/values clearly show that we are not playing any resource-consuming games or dealing with heavy workloads. Should we start playing a game, processing a video, or the like, the values would start to go up.
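+
+If you only care about the temperature reading itself, nvidia-smi can also be queried for specific fields (the query option below is supported by recent driver versions; see `nvidia-smi --help-query-gpu`):
+
+```
+watch -n 2 nvidia-smi --query-gpu=temperature.gpu --format=csv
+```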
+
+#### Conclusion
+
+Although there are GUI tools, I find these two commands very handy for checking on your hardware in real time.
+
+What do you make of them? You can learn more about the utilities involved by reading their man pages.
+
+Do you have other preferences? Share them with us in the comments. ;)
+
+Halof!!! (Have a lot of fun!!!).
+
+![avatar][12]
+
+### Alejandro Egea-Abellán
+
+It’s FOSS Community Contributor
+
+I developed a liking for electronics, linguistics, herpetology and computers (particularly GNU/Linux and FOSS). I am LPIC-2 certified and currently work as a technical consultant and Moodle administrator in the Department for Lifelong Learning at the Ministry of Education in Murcia, Spain. I am a firm believer in lifelong learning, the sharing of knowledge and computer-user freedom.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/
+
+作者:[It's FOSS Community][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/itsfoss/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/install-steam-ubuntu-linux/
+[2]: https://itsfoss.com/steam-play-proton/
+[3]: https://itsfoss.com/best-video-editing-software-linux/
+[4]: https://www.blender.org/
+[5]: https://slimbook.es/
+[6]: https://zorinos.com/
+[7]: https://itsfoss.com/wp-content/uploads/2019/04/monitor-cpu-gpu-temperature-linux-800x450.png
+[8]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/
+[9]: https://itsfoss.com/best-command-line-games-linux/
+[10]: https://itsfoss.com/install-additional-drivers-ubuntu/
+[11]: https://itsfoss.com/review-googler-linux/
+[12]: https://itsfoss.com/wp-content/uploads/2019/04/EGEA-ABELLAN-Alejandro.jpg
diff --git a/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md b/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md
new file mode 100644
index 0000000000..11659592fb
--- /dev/null
+++ b/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md
@@ -0,0 +1,116 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide])
+[#]: via: (https://itsfoss.com/install-budgie-ubuntu/)
+[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
+
+Installing Budgie Desktop on Ubuntu [Quick Guide]
+======
+
+_**Brief: Learn how to install Budgie desktop on Ubuntu in this step-by-step tutorial.**_
+
+Among all the [various Ubuntu versions][1], [Ubuntu Budgie][2] is the most underrated one. It looks elegant and it’s not heavy on resources.
+
+Read this [Ubuntu Budgie review][3] or simply watch this video to see what Ubuntu Budgie 18.04 looks like.
+
+[Subscribe to our YouTube channel for more Linux Videos][4]
+
+If you like [Budgie desktop][5] but you are using some other version of Ubuntu such as the default Ubuntu with GNOME desktop, I have good news for you. You can install Budgie on your current Ubuntu system and switch the desktop environments.
+
+In this post, I’m going to tell you exactly how to do that. But first, a little introduction to Budgie for those who are unaware about it.
+
+The Budgie desktop environment is developed mainly by the [Solus Linux team][6]. It is designed with a focus on elegance and modern usage. Budgie is available for all major Linux distributions for users to try and experience this new desktop environment. It is pretty mature by now and provides a great desktop experience.
+
+Warning
+
+Installing multiple desktops on the same system MAY result in conflicts, and you may see some issues like missing icons in the panel or multiple icons for the same program.
+
+You may not see any issues at all, either. It’s your call if you want to try a different desktop.
+
+### Install Budgie on Ubuntu
+
+This method is not tested on Linux Mint, so I recommend that you not follow this guide for Mint.
+
+For those on Ubuntu, Budgie is now a part of the Ubuntu repositories by default. Hence, we don’t need to add any PPAs in order to get Budgie.
+
+To install Budgie, simply run these commands in a terminal. We’ll first make sure that the system is fully updated.
+
+```
+sudo apt update && sudo apt upgrade
+sudo apt install ubuntu-budgie-desktop
+```
+
+When everything is done downloading, you will get a prompt to choose your display manager. Select ‘lightdm’ to get the full Budgie experience.
+
+![Select lightdm][7]
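+
+If you miss this prompt or change your mind later, you can bring the display manager selection back up at any time (this assumes a Debian/Ubuntu-based system):
+
+```
+sudo dpkg-reconfigure lightdm
+```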
+
+After the installation is complete, reboot your computer. You will then be greeted by the Budgie login screen. Enter your password to get to the home screen.
+
+![Budgie Desktop Home][8]
+
+### Switching to other desktop environments
+
+![Budgie login screen][9]
+
+You can click the Budgie icon next to your name to get options for login. From there you can select between the installed Desktop Environments (DEs). In my case, I see Budgie and the default Ubuntu (GNOME) DEs.
+
+![Select your DE][10]
+
+Hence whenever you feel like logging into GNOME, you can do so using this menu.
+
+### How to Remove Budgie
+
+If you don’t like Budgie or just want to go back to your regular old Ubuntu, you can switch back to your regular desktop as described in the above section.
+
+However, if you really want to remove Budgie and its components, you can run the following commands to get back to a clean slate.
+
+_**Switch to some other desktop environments before using these commands:**_
+
+```
+sudo apt remove ubuntu-budgie-desktop ubuntu-budgie* lightdm
+sudo apt autoremove
+sudo apt install --reinstall gdm3
+```
+
+After running all the commands successfully, reboot your computer.
+
+Now, you will be back to GNOME or whichever desktop environment you had.
+
+**What do you think of Budgie?**
+
+Budgie is one of the [best desktop environments for Linux][12]. Hope this short guide helped you install the awesome Budgie desktop on your Ubuntu system.
+
+If you did install Budgie, what do you like about it the most? Let us know in the comments below. And as usual, any questions or suggestions are always welcome.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-budgie-ubuntu/
+
+作者:[Atharva Lele][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/atharva/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/which-ubuntu-install/
+[2]: https://ubuntubudgie.org/
+[3]: https://itsfoss.com/ubuntu-budgie-18-review/
+[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[5]: https://github.com/solus-project/budgie-desktop
+[6]: https://getsol.us/home/
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_select_dm.png?fit=800%2C559&ssl=1
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_homescreen.jpg?fit=800%2C500&ssl=1
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen.png?fit=800%2C403&ssl=1
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen_select_de.png?fit=800%2C403&ssl=1
+[11]: https://itsfoss.com/snapd-error-ubuntu/
+[12]: https://itsfoss.com/best-linux-desktop-environments/
diff --git a/sources/tech/20190429 Awk utility in Fedora.md b/sources/tech/20190429 Awk utility in Fedora.md
new file mode 100644
index 0000000000..21e40641f7
--- /dev/null
+++ b/sources/tech/20190429 Awk utility in Fedora.md
@@ -0,0 +1,177 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Awk utility in Fedora)
+[#]: via: (https://fedoramagazine.org/awk-utility-in-fedora/)
+[#]: author: (Stephen Snow https://fedoramagazine.org/author/jakfrost/)
+
+Awk utility in Fedora
+======
+
+![][1]
+
+Fedora provides _awk_ as part of its default installation in all its editions, including the immutable ones like Silverblue. But you may be asking, what is _awk_ and why would you need it?
+
+_Awk_ is a data driven programming language that acts when it matches a pattern. On Fedora, and most other distributions, GNU _awk_ or _gawk_ is used. Read on for more about this language and how to use it.
+
+### A brief history of awk
+
+_Awk_ began at Bell Labs in 1977. Its name is an acronym from the initials of the designers: Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan.
+
+> The specification for _awk_ in the POSIX Command Language and Utilities standard further clarified the language. Both the _gawk_ designers and the original _awk_ designers at Bell Laboratories provided feedback for the POSIX specification.
+>
+> From [The GNU Awk User’s Guide][2]
+
+For a more in-depth look at how _awk/gawk_ ended up being as powerful and useful as it is, follow the link above. Numerous individuals have contributed to the current state of _gawk_. Among those are:
+
+ * Arnold Robbins and David Trueman, the creators of _gawk_
+ * Michael Brennan, the creator of _mawk_ , which later was merged with _gawk_
+ * Jurgen Kahrs, who added networking capabilities to _gawk_ in 1997
+ * John Hague, who rewrote the _gawk_ internals and added an _awk_ -level debugger in 2011
+
+
+
+### Using awk
+
+The following sections show various ways of using _awk_ in Fedora.
+
+#### At the command line
+
+The simplest way to invoke _awk_ is at the command line. You can search a text file for a particular pattern and, if it is found, print out the line(s) of the file that match the pattern anywhere. As an example, use _cat_ to take a look at the command history file in your home directory:
+
+```
+$ cat ~/.bash_history
+```
+
+There are probably many lines scrolling by right now.
+
+_Awk_ helps with this type of file quite easily. Instead of printing the entire file out to the terminal like _cat_ , you can use _awk_ to find something of specific interest. For this example, type the following at the command line if you’re running a standard Fedora edition:
+
+```
+$ awk '/dnf/' ~/.bash_history
+```
+
+If you’re running Silverblue, try this instead:
+
+```
+$ awk '/rpm-ostree/' ~/.bash_history
+```
+
+In both cases, more data likely appears than what you really want. That’s no problem for _awk_ since it can accept regular expressions. Using the previous example, you can change the pattern to more closely match the search requirement of wanting to know about installs only. Try changing the search pattern to one of these:
+
+```
+$ awk '/rpm-ostree install/' ~/.bash_history
+$ awk '/dnf install/' ~/.bash_history
+```
+
+All the entries of your bash command line history that have the specified pattern at any position along the line appear. Awk works on one line of a data file at a time. It matches a pattern, then performs an action, then moves to the next line until the end of file (EOF) is reached.
+
+#### From an _awk_ program
+
+Using awk at the command line as above is not much different than piping output to _grep_ , like this:
+
+```
+$ cat .bash_history | grep 'dnf install'
+```
+
+The end result of printing to standard output ( _stdout_ ) is the same with both methods.
+
+Awk is a programming language, and the command _awk_ is an interpreter of that language. The real power and flexibility of _awk_ is you can make programs with it, and combine them with shell scripts to create even more powerful programs. For more feature rich development with _awk_ , you can also incorporate C or C++ code using [Dynamic-Extensions][3].
+
+Next, to show the power of _awk_ , let’s print the header and draw five numbers for the first row of a bingo card. To do this we’ll create two awk program files.
+
+The first file prints out the header of the bingo card. For this example it is called _bingo-title.awk_. Use your favorite editor to save this text as that file name:
+
+```
+BEGIN {
+    print "B\tI\tN\tG\tO"
+}
+```
+
+Now the title program is ready. You could try it out with this command:
+
+```
+$ awk -f bingo-title.awk
+```
+
+The program prints the word BINGO, with a tab space ( _\t_ ) between the characters. For the number selection, let’s use one of awk’s built-in numeric functions called _rand()_ and use two of the control statements, _for_ and _switch_. (Except the editor changed my program, so no switch statement is used this time.)
+
+The title of the second awk program is _bingo-num.awk_. Enter the following into your favorite editor and save with that file name:
+
+```
+@include "bingo-title.awk"
+BEGIN {
+    for (i = 1; i <= 5; i++) {
+        b = int(rand() * 15) + (15 * (i - 1))
+        printf "%s\t", b
+    }
+    print
+}
+```
+
+The _@include_ statement in the file tells the interpreter to process the included file first. In this case the interpreter processes the _bingo-title.awk_ file so the title prints out first.
+
+#### Running the test program
+
+Now enter the command to pick a row of bingo numbers:
+
+```
+$ awk -f bingo-num.awk
+```
+
+Output appears similar to the following. Note that the _rand()_ function in _awk_ is not ideal for truly random numbers. It’s used here only for example purposes.
+
+```
+$ awk -f bingo-num.awk
+B I N G O
+13 23 34 53 71
+```
+
+In the example, we created two programs with only BEGIN sections that used actions to manipulate data generated from within the awk program. In order to satisfy the rules of Bingo, more work is needed to achieve the desired results. The reader is encouraged to fix the programs so they can reliably pick bingo numbers; a look at the awk function _srand()_ is a good place to start, as in the sketch below.
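+
+For instance, a minimal sketch of using _srand()_ to vary the draw between runs could look like the one-liner below. It is only an illustration: it seeds the generator from the current time, shifts each column into its 1-15, 16-30, ... range, and still does not print the title or enforce every bingo rule:
+
+```
+$ awk 'BEGIN { srand(); for (i = 1; i <= 5; i++) printf "%d\t", int(rand() * 15) + 1 + 15 * (i - 1); print "" }'
+```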
+
+### Final examples
+
+_Awk_ can be useful even for mundane daily search tasks that you encounter, like listing all _flatpaks_ on the _Flathub_ repository from _org.gnome_ (provided you have the Flathub repository set up). The command to do that would be:
+
+```
+$ flatpak remote-ls flathub --system | awk /org.gnome/
+```
+
+A listing appears that shows all output from _remote-ls_ that matches the _org.gnome_ pattern. To see flatpaks already installed from org.gnome, enter this command:
+
+```
+$ flatpak list --system | awk /org.gnome/
+```
+
+Awk is a powerful and flexible programming language that fills a niche with text file manipulation exceedingly well.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/awk-utility-in-fedora/
+
+作者:[Stephen Snow][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/jakfrost/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/awk-816x345.jpg
+[2]: https://www.gnu.org/software/gawk/manual/gawk.html#Foreword3
+[3]: https://www.gnu.org/software/gawk/manual/gawk.html#Dynamic-Extensions
diff --git a/sources/tech/20190429 How To Turn On And Shutdown The Raspberry Pi -Absolute Beginner Tip.md b/sources/tech/20190429 How To Turn On And Shutdown The Raspberry Pi -Absolute Beginner Tip.md
new file mode 100644
index 0000000000..ce667a1dff
--- /dev/null
+++ b/sources/tech/20190429 How To Turn On And Shutdown The Raspberry Pi -Absolute Beginner Tip.md
@@ -0,0 +1,111 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip])
+[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/)
+[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
+
+How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip]
+======
+
+_**Brief: This quick tip teaches you how to turn on Raspberry Pi and how to shut it down properly afterwards.**_
+
+The [Raspberry Pi][1] is one of the [most popular SBCs (single-board computers)][2]. If you are interested in this topic, I believe that you’ve finally got a Pi device. I also advise getting all the [additional Raspberry Pi accessories][3] to get started with your device.
+
+You’re ready to turn it on and start tinkering around with it. It has its own similarities and differences compared to traditional computers like desktops and laptops.
+
+Today, let’s go ahead and learn how to turn on and shut down a Raspberry Pi, as it doesn’t really feature a ‘power button’ of any sort.
+
+For this article I’m using a Raspberry Pi 3B+, but it’s the same for all the Raspberry Pi variants.
+
+### Turn on Raspberry Pi
+
+![Micro USB port for Power][7]
+
+The micro USB port powers the Raspberry Pi; you turn it on by plugging the power cable into the micro USB port. But before you do that, you should make sure that you have done the following things.
+
+ * Preparing the micro SD card with Raspbian according to the official [guide][8] and inserting it into the micro SD card slot.
+ * Plugging in the HDMI cable, a USB keyboard, and a mouse.
+ * Plugging in the Ethernet cable (optional).
+
+
+
+Once you have done the above, plug in the power cable. This turns on the Raspberry Pi, and the display will light up and load the operating system.
+
+### Shutting Down the Pi
+
+Shutting down the Pi is pretty straightforward: click the menu button and choose shutdown.
+
+![Turn off Raspberry Pi graphically][9]
+
+Alternatively, you can use the [shutdown command][10] in the terminal:
+
+```
+sudo shutdown now
+```
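+
+If you’d rather give yourself a grace period, the same [shutdown command][10] also accepts a delay; this is standard Linux behavior, nothing Pi-specific:
+
+```
+# power off in 5 minutes; cancel a pending shutdown with: sudo shutdown -c
+sudo shutdown -h +5
+```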
+
+Once the shutdown process has started, **wait** till it completely finishes and then you can cut the power to it. Once the Pi shuts down, there is no real way to turn the Pi back on without turning the power off and on again. You could use the GPIO pins to turn on the Pi from the shutdown state, but it’ll require additional modding.
+
+[][2]
+
+Suggested read 12 Single Board Computers: Alternative to Raspberry Pi
+
+_Note: Micro USB ports tend to be fragile, so turn the power off/on at the source instead of frequently unplugging and plugging into the micro USB port._
+
+Well, that’s about all you should know about turning on and shutting down the Pi. What do you plan to use it for? Let me know in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/turn-on-raspberry-pi/
+
+作者:[Chinmay][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/chinmay/
+[b]: https://github.com/lujun9972
+[1]: https://www.raspberrypi.org/
+[2]: https://itsfoss.com/raspberry-pi-alternatives/
+[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
+[4]: https://www.amazon.com/CanaKit-Raspberry-Starter-Premium-Black/dp/B07BCC8PK7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BCC8PK7&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) Starter Kit (32 GB EVO+ Edition, Premium Black Case))
+[5]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
+[6]: https://www.amazon.com/CanaKit-Raspberry-Premium-Clear-Supply/dp/B07BC7BMHY?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BC7BMHY&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) with Premium Clear Case and 2.5A Power Supply)
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1
+[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
+[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1
+[10]: https://linuxhandbook.com/linux-shutdown-command/
diff --git a/sources/tech/20190430 The Awesome Fedora 30 is Here- Check Out the New Features.md b/sources/tech/20190430 The Awesome Fedora 30 is Here- Check Out the New Features.md
new file mode 100644
index 0000000000..3d158c7031
--- /dev/null
+++ b/sources/tech/20190430 The Awesome Fedora 30 is Here- Check Out the New Features.md
@@ -0,0 +1,115 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Awesome Fedora 30 is Here! Check Out the New Features)
+[#]: via: (https://itsfoss.com/fedora-30/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+The Awesome Fedora 30 is Here! Check Out the New Features
+======
+
+The latest and greatest release of Fedora is here. Fedora 30 brings some visual as well as performance improvements.
+
+Fedora releases a new version every six months and each release is supported for thirteen months.
+
+Before you decide to download or upgrade Fedora, let’s first see what’s new in Fedora 30.
+
+### New Features in Fedora 30
+
+![Fedora 30 Release][1]
+
+Here’s what’s new in the latest release of Fedora.
+
+#### GNOME 3.32 gives a brand new look, features and performance improvement
+
+The latest release of GNOME brings a lot of visual improvements.
+
+GNOME 3.32 has refreshed icons and UI, and it almost looks like a brand new version of GNOME.
+
+![Gnome 3.32 icons | Image Credit][2]
+
+GNOME 3.32 also brings several other features like fractional scaling, per-application permission control, and granular control over Night Light intensity, among many other changes.
+
+GNOME 3.32 also brings some performance improvements. You’ll see faster file and app searches and smoother scrolling.
+
+#### Improved performance for DNF
+
+Fedora 30 will see a faster [DNF][3] (the default package manager for Fedora) thanks to the [zchunk][4] compression algorithm.
+
+The zchunk algorithm splits the file into independent chunks. This helps in dealing with ‘deltas’, or changes: when downloading a new version of a file, you only download the chunks that have changed.
+
+With zchunk, dnf will only download the difference between the metadata of the current version and the earlier versions.
+
+#### Fedora 30 brings two new desktop environments into the fold
+
+Fedora already offers several desktop environment choices. Fedora 30 extends the offering with [elementary OS][5]’s Pantheon desktop environment and Deepin Linux’s [DeepinDE][6].
+
+So now you can enjoy the looks and feel of elementary OS and Deepin Linux in Fedora. How cool is that!
+
+#### Linux Kernel 5
+
+Fedora 30 ships with Linux kernel 5.0.9, which has improved support for hardware and some performance improvements. You may check out the [features of Linux kernel 5.0 in this article][7].
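+
+If you want to confirm which kernel you end up with after installing or upgrading, a quick check in a terminal does it:
+
+```
+uname -r
+```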
+
+[][8]
+
+Suggested read The Featureful Release of Nextcloud 14 Has Two New Security Features
+
+#### Updated software
+
+You’ll also get newer versions of software. Some of the major ones are:
+
+ * GCC 9.0.1
+ * [Bash Shell 5.0][9]
+ * GNU C Library 2.29
+ * Ruby 2.6
+ * Golang 1.12
+ * Mesa 19.0.2
+ * Vagrant 2.2
+ * JDK12
+ * PHP 7.3
+ * Fish 3.0
+ * Erlang 21
+ * Python 3.7.3
+
+
+
+### Getting Fedora 30
+
+If you are already using Fedora 29 then you can upgrade to the latest release from your current install. You may follow this guide to learn [how to upgrade a Fedora version][10].
+
+Fedora 29 users will still get updates for seven more months, so if you don’t feel like upgrading, you may skip it for now. Fedora 28 users have no such choice because Fedora 28 reaches end of life next month, which means there will be no more security or maintenance updates. For them, upgrading to a newer version is no longer optional.
+
+You always have the option to download the ISO of Fedora 30 and install it afresh. You can download Fedora from its official website. It’s only available for 64-bit systems, and the ISO is 1.9 GB in size.
+
+[Download Fedora 30 Workstation][11]
+
+What do you think of Fedora 30? Are you planning to upgrade or at least try it out? Do share your thoughts in the comment section.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/fedora-30/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/wp-content/uploads/2019/04/fedora-30-release-800x450.png
+[2]: https://itsfoss.com/wp-content/uploads/2019/04/gnome-3-32-icons.png
+[3]: https://fedoraproject.org/wiki/DNF?rd=Dnf
+[4]: https://github.com/zchunk/zchunk
+[5]: https://itsfoss.com/elementary-os-juno-features/
+[6]: https://www.deepin.org/en/dde/
+[7]: https://itsfoss.com/linux-kernel-5/
+[8]: https://itsfoss.com/nextcloud-14-release/
+[9]: https://itsfoss.com/bash-5-release/
+[10]: https://itsfoss.com/upgrade-fedora-version/
+[11]: https://getfedora.org/en/workstation/
diff --git a/sources/tech/20190430 Upgrading Fedora 29 to Fedora 30.md b/sources/tech/20190430 Upgrading Fedora 29 to Fedora 30.md
new file mode 100644
index 0000000000..f6d819c754
--- /dev/null
+++ b/sources/tech/20190430 Upgrading Fedora 29 to Fedora 30.md
@@ -0,0 +1,96 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Upgrading Fedora 29 to Fedora 30)
+[#]: via: (https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/)
+[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/)
+
+Upgrading Fedora 29 to Fedora 30
+======
+
+![][1]
+
+Fedora 30 [is available now][2]. You’ll likely want to upgrade your system to the latest version of Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 29 to Fedora 30.
+
+### Upgrading Fedora 29 Workstation to Fedora 30
+
+Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the **GNOME Software** app. Or you can choose Software from GNOME Shell.
+
+Choose the _Updates_ tab in GNOME Software and you should see a screen informing you that Fedora 30 is Now Available.
+
+If you don’t see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.
+
+Choose _Download_ to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.
+
+### Using the command line
+
+If you’ve upgraded from past Fedora releases, you are likely familiar with the _dnf upgrade_ plugin. This method is the recommended and supported way to upgrade from Fedora 29 to Fedora 30. Using this plugin will make your upgrade to Fedora 30 simple and easy.
+
+##### 1\. Update software and back up your system
+
+Before you do anything, you will want to make sure you have the latest software for Fedora 29 before beginning the upgrade process. To update your software, use _GNOME Software_ or enter the following command in a terminal.
+
+```
+sudo dnf upgrade --refresh
+```
+
+Additionally, make sure you back up your system before proceeding. For help with taking a backup, see [the backup series][3] on the Fedora Magazine.
+
+##### 2\. Install the DNF plugin
+
+Next, open a terminal and type the following command to install the plugin:
+
+```
+sudo dnf install dnf-plugin-system-upgrade
+```
+
+##### 3\. Start the update with DNF
+
+Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:
+
+```
+sudo dnf system-upgrade download --releasever=30
+```
+
+This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the _\--allowerasing_ flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.
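+
+For example, with that flag added the command looks like this:
+
+```
+sudo dnf system-upgrade download --releasever=30 --allowerasing
+```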
+
+##### 4\. Reboot and upgrade
+
+Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:
+
+```
+sudo dnf system-upgrade reboot
+```
+
+Your system will restart after this. Many releases ago, the _fedup_ tool would create a new option on the kernel selection / boot screen. With the _dnf-plugin-system-upgrade_ package, your system reboots into the current kernel installed for Fedora 29; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.
+
+Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 30 system.
+
+![][4]
+
+### Resolving upgrade problems
+
+On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the [DNF system upgrade wiki page][5] for more information on troubleshooting in the event of a problem.
+
+If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.
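+
+As a rough sketch of what that can look like (the repository id below is a placeholder, and this assumes the _config-manager_ plugin from _dnf-plugins-core_ is available):
+
+```
+# find the repo id, then disable it for the duration of the upgrade
+sudo dnf repolist
+sudo dnf config-manager --set-disabled example-third-party-repo
+```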
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/
+
+作者:[Ryan Lerch][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/ryanlerch/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/29-30-816x345.jpg
+[2]: https://fedoramagazine.org/announcing-fedora-30/
+[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
+[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
+[5]: https://fedoraproject.org/wiki/DNF_system_upgrade#Resolving_post-upgrade_issues
diff --git a/sources/tech/20190501 3 apps to manage personal finances in Fedora.md b/sources/tech/20190501 3 apps to manage personal finances in Fedora.md
new file mode 100644
index 0000000000..afa5eb889f
--- /dev/null
+++ b/sources/tech/20190501 3 apps to manage personal finances in Fedora.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (3 apps to manage personal finances in Fedora)
+[#]: via: (https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/)
+[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
+
+3 apps to manage personal finances in Fedora
+======
+
+![][1]
+
+There are numerous services available on the web for managing your personal finances. Although they may be convenient, they also often mean leaving your most valuable personal data with a company you can’t monitor. Some people are comfortable with this level of trust.
+
+Whether you are or not, you might be interested in an app you can maintain on your own system. This means your data never has to leave your own computer if you don’t want it to. One of these three apps might be what you’re looking for.
+
+### HomeBank
+
+HomeBank is a fully featured way to manage multiple accounts. It’s easy to set up and keep updated. It has multiple ways to categorize and graph income and liabilities so you can see where your money goes. It’s available through the official Fedora repositories.
+
+![A simple account set up in HomeBank with a few transactions.][2]
+
+To install HomeBank, open the _Software_ app, search for _HomeBank_ , and select the app. Then click _Install_ to add it to your system. HomeBank is also available via a Flatpak.
+
+### KMyMoney
+
+The KMyMoney app is a mature app that has been around for a long while. It has a robust set of features to help you manage multiple accounts, including assets, liabilities, taxes, and more. KMyMoney includes a full set of tools for managing investments and making forecasts. It also sports a huge set of reports for seeing how your money is doing.
+
+![A subset of the many reports available in KMyMoney.][3]
+
+To install, use a software center app, or use the command line:
+
+```
+$ sudo dnf install kmymoney
+```
+
+### GnuCash
+
+One of the most venerable free GUI apps for personal finance is GnuCash. GnuCash is not just for personal finances. It also has functions for managing income, assets, and liabilities for a business. That doesn’t mean you can’t use it for managing just your own accounts. Check out [the online tutorial and guide][4] to get started.
+
+![Checking account records shown in GnuCash.][5]
+
+Open the _Software_ app, search for _GnuCash_ , and select the app. Then click _Install_ to add it to your system. Or use _dnf install_ as above to install the _gnucash_ package.
+
+It’s now available via Flathub, which makes installation easy. If you don’t have Flathub support, check out [this article on the Fedora Magazine][6] for how to use it. Then you can also use the _flatpak install GnuCash_ command in a terminal.
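+
+For reference, both installation routes look roughly like this in a terminal (the Flathub application ID below is an assumption; check Flathub for the exact name):
+
+```
+# from the Fedora repositories
+$ sudo dnf install gnucash
+
+# or from Flathub (application ID assumed)
+$ flatpak install flathub org.gnucash.GnuCash
+```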
+
+* * *
+
+Photo by [Fabian Blank][7] on [Unsplash][8].
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/
+
+作者:[Paul W. Frields][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/pfrields/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/personal-finance-3-apps-816x345.jpg
+[2]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-16-16-1024x637.png
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-27-10-1-1024x649.png
+[4]: https://www.gnucash.org/viewdoc.phtml?rev=3&lang=C&doc=guide
+[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-41-27-1024x631.png
+[6]: https://fedoramagazine.org/install-flathub-apps-fedora/
+[7]: https://unsplash.com/photos/pElSkGRA2NU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[8]: https://unsplash.com/search/photos/money?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/sources/tech/20190501 Looking into Linux modules.md b/sources/tech/20190501 Looking into Linux modules.md
new file mode 100644
index 0000000000..eb3125c19b
--- /dev/null
+++ b/sources/tech/20190501 Looking into Linux modules.md
@@ -0,0 +1,219 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Looking into Linux modules)
+[#]: via: (https://www.networkworld.com/article/3391362/looking-into-linux-modules.html#tk.rss_all)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+Looking into Linux modules
+======
+The lsmod command can tell you which kernel modules are currently loaded on your system, along with some interesting details about their use.
+![Rob Oo \(CC BY 2.0\)][1]
+
+### What are Linux modules?
+
+Kernel modules are chunks of code that are loaded into and unloaded from the kernel as needed, thus extending the functionality of the kernel without requiring a reboot. In fact, unless users inquire about modules using commands like **lsmod** , they won't likely know that anything has changed.
+
+One important thing to understand is that there are _lots_ of modules in use on your Linux system at all times, and that a lot of detail is available if you're tempted to dive in.
+
+One of the prime ways that lsmod is used is to examine modules when a system isn't working properly. However, most of the time, modules load as needed and users don't need to be aware of how they are working.
+
+**[ Also see:[Must-know Linux Commands][2] ]**
+
+### Listing modules
+
+The easiest way to list modules is with the **lsmod** command. While this command provides a lot of detail, this is the most user-friendly output.
+
+```
+$ lsmod
+Module Size Used by
+snd_hda_codec_realtek 114688 1
+snd_hda_codec_generic 77824 1 snd_hda_codec_realtek
+ledtrig_audio 16384 2 snd_hda_codec_generic,snd_hda_codec_realtek
+snd_hda_codec_hdmi 53248 1
+snd_hda_intel 40960 2
+snd_hda_codec 131072 4 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel
+ ,snd_hda_codec_realtek
+snd_hda_core 86016 5 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel
+ ,snd_hda_codec,snd_hda_codec_realtek
+snd_hwdep 20480 1 snd_hda_codec
+snd_pcm 102400 4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hda
+ _core
+snd_seq_midi 20480 0
+snd_seq_midi_event 16384 1 snd_seq_midi
+dcdbas 20480 0
+snd_rawmidi 36864 1 snd_seq_midi
+snd_seq 69632 2 snd_seq_midi,snd_seq_midi_event
+coretemp 20480 0
+snd_seq_device 16384 3 snd_seq,snd_seq_midi,snd_rawmidi
+snd_timer 36864 2 snd_seq,snd_pcm
+kvm_intel 241664 0
+kvm 626688 1 kvm_intel
+radeon 1454080 10
+irqbypass 16384 1 kvm
+joydev 24576 0
+input_leds 16384 0
+ttm 102400 1 radeon
+drm_kms_helper 180224 1 radeon
+drm 475136 13 drm_kms_helper,radeon,ttm
+snd 81920 15 snd_hda_codec_generic,snd_seq,snd_seq_device,snd_hda
+ _codec_hdmi,snd_hwdep,snd_hda_intel,snd_hda_codec,snd
+ _hda_codec_realtek,snd_timer,snd_pcm,snd_rawmidi
+i2c_algo_bit 16384 1 radeon
+fb_sys_fops 16384 1 drm_kms_helper
+syscopyarea 16384 1 drm_kms_helper
+serio_raw 20480 0
+sysfillrect 16384 1 drm_kms_helper
+sysimgblt 16384 1 drm_kms_helper
+soundcore 16384 1 snd
+mac_hid 16384 0
+sch_fq_codel 20480 2
+parport_pc 40960 0
+ppdev 24576 0
+lp 20480 0
+parport 53248 3 parport_pc,lp,ppdev
+ip_tables 28672 0
+x_tables 40960 1 ip_tables
+autofs4 45056 2
+raid10 57344 0
+raid456 155648 0
+async_raid6_recov 24576 1 raid456
+async_memcpy 20480 2 raid456,async_raid6_recov
+async_pq 24576 2 raid456,async_raid6_recov
+async_xor 20480 3 async_pq,raid456,async_raid6_recov
+async_tx 20480 5 async_pq,async_memcpy,async_xor,raid456,async_raid6_re
+ cov
+xor 24576 1 async_xor
+raid6_pq 114688 3 async_pq,raid456,async_raid6_recov
+libcrc32c 16384 1 raid456
+raid1 45056 0
+raid0 24576 0
+multipath 20480 0
+linear 20480 0
+hid_generic 16384 0
+psmouse 151552 0
+i2c_i801 32768 0
+pata_acpi 16384 0
+lpc_ich 24576 0
+usbhid 53248 0
+hid 126976 2 usbhid,hid_generic
+e1000e 245760 0
+floppy 81920 0
+```
+
+In the output above:
+
+ * "Module" shows the name of each module
+ * "Size" shows the module size (not how much memory it is using)
+ * "Used by" shows each module's usage count and the referring modules
+
+
+
+Clearly, that's a _lot_ of modules. The number of modules loaded will depend on your system and distribution and what's running. We can count them like this:
+
+```
+$ lsmod | wc -l
+67
+```
+
+To see the number of modules available on the system (not just running), try this command:
+
+```
+$ modprobe -c | wc -l
+41272
+```
+
+### Other commands for examining modules
+
+Linux provides several commands for listing, loading and unloading, examining, and checking the status of modules.
+
+ * depmod -- generates modules.dep and map files
+ * insmod -- a simple program to insert a module into the Linux Kernel
+ * lsmod -- show the status of modules in the Linux Kernel
+ * modinfo -- show information about a Linux Kernel module
+ * modprobe -- add and remove modules from the Linux Kernel
+ * rmmod -- a simple program to remove a module from the Linux Kernel
+
+
+
+### Listing modules that are built in
+
+As mentioned above, the **lsmod** command is the most convenient command for listing modules. There are, however, other ways to examine them. The modules.builtin file lists all modules that are built into the kernel and is used by modprobe when trying to load one of these modules. Note that **$(uname -r)** in the commands below provides the name of the kernel release.
+
+```
+$ more /lib/modules/$(uname -r)/modules.builtin | head -10
+kernel/arch/x86/crypto/crc32c-intel.ko
+kernel/arch/x86/events/intel/intel-uncore.ko
+kernel/arch/x86/platform/intel/iosf_mbi.ko
+kernel/mm/zpool.ko
+kernel/mm/zbud.ko
+kernel/mm/zsmalloc.ko
+kernel/fs/binfmt_script.ko
+kernel/fs/mbcache.ko
+kernel/fs/configfs/configfs.ko
+kernel/fs/crypto/fscrypto.ko
+```
+
+You can get some additional detail on a module by using the **modinfo** command, though nothing that qualifies as an easy explanation of what service the module provides. The omitted details from the output below include a lengthy signature.
+
+```
+$ modinfo floppy | head -16
+filename: /lib/modules/5.0.0-13-generic/kernel/drivers/block/floppy.ko
+alias: block-major-2-*
+license: GPL
+author: Alain L. Knaff
+srcversion: EBEAA26742DF61790588FD9
+alias: acpi*:PNP0700:*
+alias: pnp:dPNP0700*
+depends:
+retpoline: Y
+intree: Y
+name: floppy
+vermagic: 5.0.0-13-generic SMP mod_unload
+sig_id: PKCS#7
+signer:
+sig_key:
+sig_hashalgo: md4
+```
+
+You can load or unload a module using the **modprobe** command. Using a command like the one below, you can locate the kernel object associated with a particular module:
+
+```
+$ find /lib/modules/$(uname -r) -name floppy*
+/lib/modules/5.0.0-13-generic/kernel/drivers/block/floppy.ko
+```
+
+If you needed to load the module, you could use a command like this one:
+
+```
+$ sudo modprobe floppy
+```
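+
+And when a module is no longer needed, you can remove it again with **modprobe -r** (or **rmmod** ), provided nothing is still using it:
+
+```
+$ sudo modprobe -r floppy
+```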
+
+### Wrap-up
+
+Clearly the loading and unloading of modules is a big deal. It makes Linux systems considerably more flexible and efficient than if they ran with a one-size-fits-all kernel. It also means you can make significant changes — including adding hardware — without rebooting.
+
+**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][3] ]**
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3391362/looking-into-linux-modules.html#tk.rss_all
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/modules-100794941-large.jpg
+[2]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
+[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190502 Crowdsourcing license compliance with ClearlyDefined.md b/sources/tech/20190502 Crowdsourcing license compliance with ClearlyDefined.md
new file mode 100644
index 0000000000..fe36e37b9c
--- /dev/null
+++ b/sources/tech/20190502 Crowdsourcing license compliance with ClearlyDefined.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Crowdsourcing license compliance with ClearlyDefined)
+[#]: via: (https://opensource.com/article/19/5/license-compliance-clearlydefined)
+[#]: author: (Jeff McAffer https://opensource.com/users/jeffmcaffer)
+
+Crowdsourcing license compliance with ClearlyDefined
+======
+Licensing is what holds open source together, and ClearlyDefined takes
+the mystery out of projects' licenses, copyright, and source location.
+![][1]
+
+Open source use continues to skyrocket, not just in use cases and scenarios but also in volume. It is trivial for a developer to depend on 1,000 JavaScript packages from a single run of `npm install` or to have thousands of packages in a [Docker][2] image. At the same time, there is increased interest in ensuring license compliance.
+
+Without the right license, you may not be able to legally use a software component in the way you intend, or you may have obligations that run counter to your business model. For instance, a JavaScript package could be marked as [MIT license][3], which allows commercial reuse, while one of its dependencies has a [copyleft license][4] that requires you to give your software away under the same license. Complying means finding the applicable license(s) and assessing and adhering to the terms, which is not too bad for individual components but can be daunting for large initiatives.
+
+Fortunately, this open source challenge has an open source solution: [ClearlyDefined][5]. ClearlyDefined is a crowdsourced, open source, [Open Source Initiative][6] (OSI) effort to gather, curate, and upstream/normalize data about open source components, such as license, copyright, and source location. This data is the cornerstone of reducing the friction in open source license compliance.
+
+The premise behind ClearlyDefined is simple: we are all struggling to find and understand key information related to the open source we use—whether it is finding the license, knowing who to attribute, or identifying the source that goes with a particular package. Rather than struggling independently, ClearlyDefined allows us to collaborate and share the compliance effort. Moreover, the ClearlyDefined community seeks to upstream any corrections so future releases are more clearly defined and make conventions more explicit to improve community understanding of project intent.
+
+### How it works
+
+![ClearlyDefined's harvest, curate, upstream process][7]
+
+ClearlyDefined monitors the open source ecosystem and automatically harvests relevant data from open source components using a host of open source tools such as [ScanCode][8], [FOSSology][9], and [Licensee][10]. The results are summarized and aggregated to create a _definition_ , which is then surfaced to users via an API and a UI. Each definition includes:
+
+ * Declared license of the component
+ * Licenses and copyrights discovered across all files
+ * Exact source code location to the commit level
+ * Release date
+ * List of embedded components
+
+
+
+Coincidentally (well, not really), this is exactly the data you need to do license compliance.
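+
+As a rough illustration (the endpoint shape and the package coordinates below are assumptions about the public API rather than an excerpt from this article), a component's definition can be fetched as JSON over plain HTTP:
+
+```
+# coordinates follow a type/provider/namespace/name/revision pattern; "-" stands for "no namespace" (assumed example)
+$ curl -s https://api.clearlydefined.io/definitions/npm/npmjs/-/lodash/4.17.11
+```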
+
+### Curating
+
+Any given definition may have gaps or imperfections due to tool issues or the data being missing or incorrect at the origin. ClearlyDefined enables users to curate the results by refining the values and filling in the gaps. These contributions are reviewed and merged, as with any open source project. The result is an improved dataset for all to use.
+
+### Getting ahead
+
+To a certain degree, this process is still chasing the problem—analyzing and curating after the packages have already been published. To get ahead of the game, the ClearlyDefined community also feeds merged curations back to the originating projects as pull requests (e.g., adding a license file, clarifying a copyright). This increases the clarity of future releases and sets up a virtuous cycle.
+
+### Adapting, not mandating
+
+In doing the analysis, we've found quite a number of approaches to expressing license-related data. Different communities put LICENSE files in different places or have different practices around attribution. The ClearlyDefined philosophy is to discover these conventions and adapt to them rather than asking the communities to do something different. A side benefit of this is that implicit conventions can be made more explicit, improving clarity for all.
+
+Related to this, ClearlyDefined is careful to not look too hard for this interesting data. If we have to be too smart and infer too much to find the data, then there's a good chance the origin is not all that clear. Instead, we prefer to work with the community to better understand and clarify the conventions being used. From there, we can update the tools accordingly and make it easier to be "clearly defined."
+
+#### NOTICE files
+
+As an added bonus for users, we set up an API and UI for generating NOTICE files, making it trivial for you to comply with the attribution requirements found in most open source licenses. You can give ClearlyDefined a list of components (e.g., _drag and drop an npm package-lock.json file on the UI_ ) and get back a fully formed NOTICE file rendered by one of several renderers (e.g., text, HTML, Handlebars.js template). This is a snap, given that we already have all the compliance data. Big shout out to the [OSS Attribution Builder project][11] for making a simple and pluggable NOTICE renderer we could just pop into the ClearlyDefined service.
+
+### Getting involved
+
+You can get involved with ClearlyDefined in several ways:
+
+ * Become an active user, contributing to your compliance workflow
+ * Review other people's curations using the interface
+ * Get involved in [the code][12] (Node and React)
+ * Ask and answer questions on [our mailing list][13] or [Discord channel][14]
+ * Contribute money to the OSI targeted to ClearlyDefined. We'll use that to fund development and curation.
+
+
+
+We are excited to continue to grow our community of contributors so that licensing can continue to become an understandable part of any team's open source adoption. For more information, check out [https://clearlydefined.io][15].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/license-compliance-clearlydefined
+
+作者:[Jeff McAffer][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jeffmcaffer
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Crowdfunding_520x292_9597717_0612CM.png?itok=lxSKyFXU
+[2]: https://opensource.com/resources/what-docker
+[3]: /article/19/4/history-mit-license
+[4]: /resources/what-is-copyleft
+[5]: https://clearlydefined.io
+[6]: https://opensource.org
+[7]: https://opensource.com/sites/default/files/uploads/clearlydefined.png (ClearlyDefined's harvest, curate, upstream process)
+[8]: https://github.com/nexB/scancode-toolkit
+[9]: https://www.fossology.org/
+[10]: https://github.com/licensee/licensee
+[11]: https://github.com/amzn/oss-attribution-builder
+[12]: https://github.com/clearlydefined
+[13]: mailto:clearlydefined@googlegroups.com
+[14]: https://clearlydefined.io/discord
+[15]: https://clearlydefined.io/
diff --git a/sources/tech/20190502 Format Python however you like with Black.md b/sources/tech/20190502 Format Python however you like with Black.md
new file mode 100644
index 0000000000..7030bc795b
--- /dev/null
+++ b/sources/tech/20190502 Format Python however you like with Black.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Format Python however you like with Black)
+[#]: via: (https://opensource.com/article/19/5/python-black)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez/users/moshez)
+
+Format Python however you like with Black
+======
+Learn more about solving common Python problems in our series covering
+seven PyPI libraries.
+![OpenStack source code \(Python\) in VIM][1]
+
+Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.
+
+In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. In the first article, we learned about [Cython][4]; today, we'll examine the **[Black][5]** code formatter.
+
+### Black
+
+Sometimes creativity can be a wonderful thing. Sometimes it is just a pain. I enjoy solving hard problems creatively, but I want my Python formatted as consistently as possible. Nobody has ever been impressed by code that uses "interesting" indentation.
+
+But even worse than inconsistent formatting is a code review that consists of nothing but formatting nits. It is annoying to the reviewer—and even more annoying to the person whose code is reviewed. It's also infuriating when your linter tells you that your code is indented incorrectly, but gives no hint about the _correct_ amount of indentation.
+
+Enter Black. Instead of telling you _what_ to do, Black is a good, industrious robot: it will fix your code for you.
+
+To see how it works, feel free to write something beautifully inconsistent like:
+
+
+```
+def add(a, b): return a+b
+
+def mult(a, b):
+    return \
+        a * b
+```
+
+Does Black complain? Goodness no, it just fixes it for you!
+
+
+```
+$ black math
+reformatted math
+All done! ✨ 🍰 ✨
+1 file reformatted.
+$ cat math
+def add(a, b):
+    return a + b
+
+
+def mult(a, b):
+    return a * b
+```
+
+Black does offer the option of failing instead of fixing and even outputting a **diff** -style edit. These options are great in a continuous integration (CI) system that enforces running Black locally. In addition, if the **diff** output is logged to the CI output, you can directly paste it into **patch** in the rare case that you need to fix your output but cannot install Black locally.
+
+
+```
+$ black --check --diff math
+--- math 2019-04-09 17:24:22.747815 +0000
++++ math 2019-04-09 17:26:04.269451 +0000
+@@ -1,7 +1,7 @@
+-def add(a, b): return a + b
++def add(a, b):
++    return a + b
+
+
+ def mult(a, b):
+-    return \
+-        a * b
++    return a * b
+
+would reformat math
+All done! 💥 💔 💥
+1 file would be reformatted.
+$ echo $?
+1
+```
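+
+If you wire this into CI, a minimal sketch (my sketch, not from the article) only needs to install Black and run it in check mode so the build fails when formatting drifts:
+
+```
+# hypothetical CI step: exits non-zero and prints a diff if reformatting is needed
+$ pip install black
+$ black --check --diff .
+```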
+
+In the next article in this series, we'll look at **attrs** , a library that helps you write concise, correct code quickly.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/python-black
+
+作者:[Moshe Zadka ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez/users/moshez/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_1.jpg?itok=lHQK5zpm (OpenStack source code (Python) in VIM)
+[2]: https://opensource.com/article/18/5/numbers-python-community-trends
+[3]: https://pypi.org/
+[4]: https://opensource.com/article/19/4/7-python-problems-solved-cython
+[5]: https://pypi.org/project/black/
diff --git a/sources/tech/20190502 Get started with Libki to manage public user computer access.md b/sources/tech/20190502 Get started with Libki to manage public user computer access.md
new file mode 100644
index 0000000000..7c6f4b2746
--- /dev/null
+++ b/sources/tech/20190502 Get started with Libki to manage public user computer access.md
@@ -0,0 +1,61 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Get started with Libki to manage public user computer access)
+[#]: via: (https://opensource.com/article/19/5/libki-computer-access)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins/users/tony-thomas)
+
+Get started with Libki to manage public user computer access
+======
+Libki is a cross-platform, computer reservation and time management
+system.
+![][1]
+
+Libraries, schools, colleges, and other organizations that provide public computers need a good way to manage users' access—otherwise, there's no way to prevent some people from monopolizing the machines and ensure everyone has a fair amount of time. This is the problem that [Libki][2] was designed to solve.
+
+Libki is an open source, cross-platform, computer reservation and time management system for Windows and Linux PCs. It provides a web-based server and a web-based administration system that staff can use to manage computer access, including creating and deleting users, setting time limits on accounts, logging out and banning users, and setting access restrictions.
+
+According to lead developer [Kyle Hall][3], Libki is mainly used for PC time control as an open source alternative to Envisionware's proprietary computer access control software. When users log into a Libki-managed computer, they get a block of time to use the computer; once that time is up, they are logged off. The default setting is 45 minutes, but that can easily be adjusted using the web-based administration system. Some organizations offer 24 hours of access before logging users off, and others use it to track usage without setting time limits.
+
+Kyle is currently lead developer at [ByWater Solutions][4], which provides open source software solutions (including Libki) to libraries. He developed Libki early in his career when he was the IT tech at the [Meadville Public Library][5] in Pennsylvania. He was occasionally asked to cover the children's room during lunch breaks for other employees. The library used a paper sign-up sheet to manage access to the computers in the children's room, which meant constant supervision and checking to ensure equitable access for the people who came there.
+
+Kyle said, "I found this system to be cumbersome and awkward, and I wanted to find a solution. That solution needed to be both FOSS and cross-platform. In the end, no existing software package suited our particular needs, and that is why I developed Libki."
+
+Or, as Libki's website proclaims, "Libki was born of the need to avoid interacting with teenagers and now allows librarians to avoid interacting with teenagers around the world!"
+
+### Easy to set up and use
+
+I recently decided to try Libki in our local public library, where I frequently volunteer. I followed the [documentation][6] for the automatic installation, using Ubuntu 18.04 Server, and very quickly had it up and running.
+
+I am planning to support Libki in our local library, but I wondered about libraries that don't have someone with IT experience or the ability to build and deploy a server. Kyle says, "ByWater Solutions can cloud-host a Libki server, which makes maintenance and management much simpler for everyone."
+
+Kyle says ByWater is not planning to bundle Libki with its most popular offering, open source integrated library system (ILS) Koha, or any of the other [projects][7] it supports. "Libki and Koha are different [types of] software serving different needs, but they definitely work well together in a library setting. In fact, it was quite early on that I developed Libki's SIP2 integration so it could support single sign-on using Koha," he says.
+
+### How you can contribute
+
+Libki client is licensed under the GPLv3 and Libki server is licensed under the AGPLv3. Kyle says he would love Libki to have a more active and robust community, and the project is always looking for new people to join its [contributors][8]. If you would like to participate, visit [Libki's Community page][9] and join the mailing list.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/libki-computer-access
+
+作者:[Don Watkins ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins/users/tony-thomas
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6
+[2]: https://libki.org/
+[3]: https://www.linkedin.com/in/kylemhallinfo/
+[4]: https://opensource.com/article/19/4/software-libraries
+[5]: https://meadvillelibrary.org/
+[6]: https://manual.libki.org/master/libki-manual.html#_automatic_installation
+[7]: https://bywatersolutions.com/projects
+[8]: https://github.com/Libki/libki-server/graphs/contributors
+[9]: https://libki.org/community/
diff --git a/sources/tech/20190502 The making of the Breaking the Code electronic book.md b/sources/tech/20190502 The making of the Breaking the Code electronic book.md
new file mode 100644
index 0000000000..6786df8549
--- /dev/null
+++ b/sources/tech/20190502 The making of the Breaking the Code electronic book.md
@@ -0,0 +1,62 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The making of the Breaking the Code electronic book)
+[#]: via: (https://opensource.com/article/19/5/code-book)
+[#]: author: (Alicia Gibb https://opensource.com/users/aliciagibb/users/don-watkins)
+
+The making of the Breaking the Code electronic book
+======
+Offering a safe space for middle school girls to learn technology speaks
+volumes about who should be sitting around the tech table.
+![Open hardware electronic book][1]
+
+I like a good challenge. The [Open Source Stories team][2] came to me with a great one: Create a hardware project where students could create their own thing that would be put together as a larger thing. The students would be middle school girls. My job was to figure out the hardware and make this thing make sense.
+
+After days of sketching out concepts, I was wandering through my local public library, and it dawned on me that the perfect piece of hardware where everyone could design their own part to create something whole is a book! The idea of a book using paper electronics was exciting, simple enough to be taught in a day, and fit the criteria of needing no special equipment, like soldering irons.
+
+!["Breaking the Code" book cover][3]
+
+I designed two parts to the electronics within the book. Half the circuits were developed with copper tape, LEDs, and DIY buttons, and half were developed with LilyPad Arduino microcontrollers, sensors, LEDs, and DIY buttons. Using the electronics in the book, the girls could make pages light up, buzz, or play music using various inputs such as button presses, page turns, or tilting the book.
+
+!['Breaking the Code' interior pages][4]
+
+We worked with young adult author [Lauren Sabel][5] to come up with the story, which features two girls who get locked in the basement of their school and have to solve puzzles to get out. Setting the scene in the basement gave us lots of opportunities to use lights! Along with the story, we received illustrations that the girls enhanced with electronics. The girls got creative, for example, using lights as the skeleton's eyes, not just for the obvious light bulb in the room.
+
+Creating a curriculum that was flexible enough to empower each girl to build her own successfully functioning circuit was a vital piece of the user experience. We chose components so the circuit wouldn't need to be over-engineered. We also used breakout boards and LEDs with built-in resistors so that the circuits allowed flexibility and functioned with only basic knowledge of circuit design—without getting too muddled in the deep end.
+
+!['Breaking the Code' interior pages][6]
+
+The project curriculum gave girls the confidence and skills to understand electronics by building two circuits, in the process learning circuit layout, directional aspects, cause-and-effect through inputs and outputs, and how to identify various components. Controlling electrons by pushing them through a circuit feels a bit like you're controlling a tiny part of the universe. And seeing the girls' faces light up is like seeing a universe of opportunities open in front of them.
+
+!['Breaking the Code' interior pages][7]
+
+The girls were ecstatic to see their work as a completed book, taking pride in their pages and showing others what they had built.
+
+![About 'Breaking the Code'][8]
+
+Teaching them my little corner of the world for the day was a truly empowering experience for me. As a woman in tech, I think this is the right approach for companies trying to change the gender inequalities we see in tech. Offering a safe space to learn—with lots of people in the room who look like you as mentors—speaks volumes about who should be sitting around the tech table.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/code-book
+
+作者:[Alicia Gibb][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/aliciagibb/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_book_electronics_hardware.jpg?itok=zb-zaiwz (Open hardware electronic book)
+[2]: https://www.redhat.com/en/open-source-stories
+[3]: https://opensource.com/sites/default/files/uploads/codebook_cover.jpg ("Breaking the Code" book cover)
+[4]: https://opensource.com/sites/default/files/uploads/codebook_38-39.jpg ('Breaking the Code' interior pages)
+[5]: https://www.amazon.com/Lauren-Sabel/e/B01M0FW223
+[6]: https://opensource.com/sites/default/files/uploads/codebook_lightbulb.jpg ('Breaking the Code' interior pages)
+[7]: https://opensource.com/sites/default/files/uploads/codebook_10-11.jpg ('Breaking the Code' interior pages)
+[8]: https://opensource.com/sites/default/files/uploads/codebook_pg1.jpg (About 'Breaking the Code')
diff --git a/sources/tech/20190503 API evolution the right way.md b/sources/tech/20190503 API evolution the right way.md
new file mode 100644
index 0000000000..ada8bdce20
--- /dev/null
+++ b/sources/tech/20190503 API evolution the right way.md
@@ -0,0 +1,735 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (API evolution the right way)
+[#]: via: (https://opensource.com/article/19/5/api-evolution-right-way)
+[#]: author: (A. Jesse https://opensource.com/users/emptysquare)
+
+API evolution the right way
+======
+Ten covenants that responsible library authors keep with their users.
+![Browser of things][1]
+
+Imagine you are a creator deity, designing a body for a creature. In your benevolence, you wish for the creature to evolve over time: first, because it must respond to changes in its environment, and second, because your wisdom grows and you think of better designs for the beast. It shouldn't remain in the same body forever!
+
+![Serpents][2]
+
+The creature, however, might be relying on features of its present anatomy. You can't add wings or change its scales without warning. It needs an orderly process to adapt its lifestyle to its new body. How can you, as a responsible designer in charge of this creature's natural history, gently coax it toward ever greater improvements?
+
+It's the same for responsible library maintainers. We keep our promises to the people who depend on our code: we release bugfixes and useful new features. We sometimes delete features if that's beneficial for the library's future. We continue to innovate, but we don't break the code of people who use our library. How can we fulfill all those goals at once?
+
+### Add useful features
+
+Your library shouldn't stay the same for eternity: you should add features that make your library better for your users. For example, if you have a Reptile class and it would be useful to have wings for flying, go for it.
+
+
+```
+class Reptile:
+    @property
+    def teeth(self):
+        return 'sharp fangs'
+
+    # If wings are useful, add them!
+    @property
+    def wings(self):
+        return 'majestic wings'
+```
+
+But beware, features come with risk. Consider the following feature in the Python standard library, and see what went wrong with it.
+
+
+```
+bool(datetime.time(9, 30)) == True
+bool(datetime.time(0, 0)) == False
+```
+
+This is peculiar: converting any time object to a boolean yields True, except for midnight. (Worse, the rules for timezone-aware times are even stranger.)
+
+I've been writing Python for more than a decade but I didn't discover this rule until last week. What kind of bugs can this odd behavior cause in users' code?
+
+Consider a calendar application with a function that creates events. If an event has an end time, the function requires it to also have a start time.
+
+
+```
+def create_event(day,
+                 start_time=None,
+                 end_time=None):
+    if end_time and not start_time:
+        raise ValueError("Can't pass end_time without start_time")
+
+# The coven meets from midnight until 4am.
+create_event(datetime.date.today(),
+             datetime.time(0, 0),
+             datetime.time(4, 0))
+```
+
+Unfortunately for witches, an event starting at midnight fails this validation. A careful programmer who knows about the quirk at midnight can write this function correctly, of course.
+
+
+```
+def create_event(day,
+                 start_time=None,
+                 end_time=None):
+    if end_time is not None and start_time is None:
+        raise ValueError("Can't pass end_time without start_time")
+```
+
+But this subtlety is worrisome. If a library creator wanted to make an API that bites users, a "feature" like the boolean conversion of midnight works nicely.
+
+![Man being chased by an alligator][3]
+
+The responsible creator's goal, however, is to make your library easy to use correctly.
+
+This feature was written by Tim Peters when he first made the datetime module in 2002. Even founding Pythonistas like Tim make mistakes. [The quirk was removed][4], and all times are True now.
+
+
+```
+# Python 3.5 and later.
+
+bool(datetime.time(9, 30)) == True
+bool(datetime.time(0, 0)) == True
+```
+
+Programmers who didn't know about the oddity of midnight are saved from obscure bugs, but it makes me nervous to think about any code that relies on the weird old behavior and didn't notice the change. It would have been better if this bad feature were never implemented at all. This leads us to the first promise of any library maintainer:
+
+#### First covenant: Avoid bad features
+
+The most painful change to make is when you have to delete a feature. One way to avoid bad features is to add few features in general! Make no public method, class, function, or property without a good reason. Thus:
+
+#### Second covenant: Minimize features
+
+Features are like children: conceived in a moment of passion, they must be supported for years. Don't do anything silly just because you can. Don't add feathers to a snake!
+
+![Serpents with and without feathers][5]
+
+But of course, there are plenty of occasions when users need something from your library that it does not yet offer. How do you choose the right feature to give them? Here's another cautionary tale.
+
+### A cautionary tale from asyncio
+
+As you may know, when you call a coroutine function, it returns a coroutine object:
+
+
+```
+async def my_coroutine():
+    pass
+
+print(my_coroutine())
+# Prints something like: <coroutine object my_coroutine at 0x...>
+```
+
+Your code must "await" this object to run the coroutine. It's easy to forget this, so asyncio's developers wanted a "debug mode" that catches this mistake. Whenever a coroutine is destroyed without being awaited, the debug mode prints a warning with a traceback to the line where it was created.
+
+When Yury Selivanov implemented the debug mode, he added as its foundation a "coroutine wrapper" feature. The wrapper is a function that takes in a coroutine and returns anything at all. Yury used it to install the warning logic on each coroutine, but someone else could use it to turn coroutines into the string "hi!"
+
+
+```
+import sys
+
+def my_wrapper(coro):
+    return 'hi!'
+
+sys.set_coroutine_wrapper(my_wrapper)
+
+async def my_coroutine():
+    pass
+
+print(my_coroutine())
+# Prints: hi!
+```
+
+That is one hell of a customization. It changes the very meaning of "async." Calling set_coroutine_wrapper once will globally and permanently change all coroutine functions. It is, [as Nathaniel Smith wrote][6], "a problematic API" that is prone to misuse and had to be removed. The asyncio developers could have avoided the pain of deleting the feature if they'd better shaped it to its purpose. Responsible creators must keep this in mind:
+
+#### Third covenant: Keep features narrow
+
+Luckily, Yury had the good judgment to mark this feature provisional, so asyncio users knew not to rely on it. Nathaniel was free to replace **set_coroutine_wrapper** with a narrower feature that only customized the traceback depth.
+
+
+```
+import sys
+
+sys.set_coroutine_origin_tracking_depth(2)
+
+async def my_coroutine():
+    pass
+
+print(my_coroutine())
+```
+
+The output:
+
+```
+RuntimeWarning: coroutine 'my_coroutine' was never awaited
+Coroutine created at (most recent call last)
+  File "script.py", line 8, in <module>
+    print(my_coroutine())
+```
+
+This is much better. There's no more global setting that can change coroutines' type, so asyncio users need not code as defensively. Deities should all be as farsighted as Yury.
+
+#### Fourth covenant: Mark experimental features "provisional"
+
+If you have merely a hunch that your creature wants horns and a quadruple-forked tongue, introduce the features but mark them "provisional."
+
+![Serpent with horns][7]
+
+You might discover that the horns are extraneous but the quadruple-forked tongue is useful after all. In the next release of your library, you can delete the former and mark the latter official.
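+
+Python has no built-in marker for a provisional API, so one lightweight approach (a sketch, not code from the standard library) is to say so loudly in the docstring, alongside the changelog entry:
+
+```
+class Reptile:
+    @property
+    def horns(self):
+        """Horns for head-butting.
+
+        Provisional API: it may change or be removed in any future
+        release without a deprecation period.
+        """
+        return 'provisional horns'
+```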
+
+### Deleting features
+
+No matter how wisely we guide our creature's evolution, there may come a time when it's best to delete an official feature. For example, you might have created a lizard, and now you choose to delete its legs. Perhaps you want to transform this awkward creature into a sleek and modern python.
+
+![Lizard transformed to snake][8]
+
+There are two main reasons to delete features. First, you might discover a feature was a bad idea, through user feedback or your own growing wisdom. That was the case with the quirky behavior of midnight. Or, the feature might have been well-adapted to your library's environment at first, but the ecology changes. Perhaps another deity invents mammals. Your creature wants to squeeze into the mammals' little burrows and eat the tasty mammal filling, so it has to lose its legs.
+
+![A mouse][9]
+
+Similarly, the Python standard library deletes features in response to changes in the language itself. Consider asyncio's Lock. It has been awaitable ever since "await" was added as a keyword:
+
+
+```
+import asyncio
+
+lock = asyncio.Lock()
+
+async def critical_section():
+    await lock
+    try:
+        print('holding lock')
+    finally:
+        lock.release()
+```
+
+But now, we can do "async with lock."
+
+
+```
+import asyncio
+
+lock = asyncio.Lock()
+
+async def critical_section():
+    async with lock:
+        print('holding lock')
+```
+
+The new style is much better! It's short and less prone to mistakes in a big function with other try-except blocks. Since "there should be one and preferably only one obvious way to do it," [the old syntax is deprecated in Python 3.7][10] and it will be banned soon.
+
+It's inevitable that ecological change will have this effect on your code, too, so learn to delete features gently. Before you do so, weigh the cost of deleting the feature against the benefit. Responsible maintainers are reluctant to make their users change a large amount of their code or change their logic. (Remember how painful it was when Python 3 removed the "u" string prefix, before it was added back.) If the code changes are mechanical, however, like a simple search-and-replace, or if the feature is dangerous, it may be worth deleting.
+
+#### Whether to delete a feature
+
+![Balance scales][11]
+
+Con | Pro
+---|---
+Code must change | Change is mechanical
+Logic must change | Feature is dangerous
+
+In the case of our hungry lizard, we decide to delete its legs so it can slither into a mouse's hole and eat it. How do we go about this? We could just delete the **walk** method, changing code from this:
+
+
+```
+class Reptile:
+    def walk(self):
+        print('step step step')
+```
+
+to this:
+
+
+```
+class Reptile:
+    def slither(self):
+        print('slide slide slide')
+```
+
+That's not a good idea; the creature is accustomed to walking! Or, in terms of a library, your users have code that relies on the existing method. When they upgrade to the latest version of your library, their code will break.
+
+
+```
+# User's code. Oops!
+Reptile().walk()
+```
+
+Therefore, responsible creators make this promise:
+
+#### Fifth covenant: Delete features gently
+
+There are a few steps involved in deleting a feature gently. Starting with a lizard that walks with its legs, you first add the new method, "slither." Next, deprecate the old method.
+
+
+```
+import warnings
+
+class Reptile:
+    def walk(self):
+        warnings.warn(
+            "walk is deprecated, use slither",
+            DeprecationWarning, stacklevel=2)
+        print('step step step')
+
+    def slither(self):
+        print('slide slide slide')
+```
+
+The Python warnings module is quite powerful. By default it prints warnings to stderr, only once per code location, but you can silence warnings or turn them into exceptions, among other options.
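+
+For example, a user of your library might tune those filters like this (a minimal sketch, not code from the article):
+
+```
+import warnings
+
+# Silence deprecation warnings entirely...
+warnings.filterwarnings('ignore', category=DeprecationWarning)
+
+# ...or escalate them into exceptions so deprecated calls fail fast.
+warnings.filterwarnings('error', category=DeprecationWarning)
+```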
+
+As soon as you add this warning to your library, PyCharm and other IDEs render the deprecated method with a strikethrough. Users know right away that the method is due for deletion.
+
+`Reptile().walk()`
+
+What happens when they run their code with the upgraded library?
+
+
+```
+$ python3 script.py
+
+script.py:14: DeprecationWarning: walk is deprecated, use slither
+  Reptile().walk()
+step step step
+```
+
+By default, they see a warning on stderr, but the script succeeds and prints "step step step." The warning's traceback shows what line of the user's code must be fixed. (That's what the "stacklevel" argument does: it shows the call site that users need to change, not the line in your library where the warning is generated.) Notice that the warning message is instructive: it describes what a library user must do to migrate to the new version.
+
+Your users will want to test their code and prove they call no deprecated library methods. Warnings alone won't make unit tests fail, but exceptions will. Python has a command-line option to turn deprecation warnings into exceptions.
+
+
+```
+$ python3 -Werror::DeprecationWarning script.py
+
+Traceback (most recent call last):
+  File "script.py", line 14, in <module>
+    Reptile().walk()
+  File "script.py", line 8, in walk
+    DeprecationWarning, stacklevel=2)
+DeprecationWarning: walk is deprecated, use slither
+```
+
+Now, "step step step" is not printed, because the script terminates with an error.
+
+So, once you've released a version of your library that warns about the deprecated "walk" method, you can delete it safely in the next release. Right?
+
+Consider what your library's users might have in their projects' requirements.
+
+
+```
+# User's requirements.txt has a dependency on the reptile package.
+reptile
+```
+
+The next time they deploy their code, they'll install the latest version of your library. If they haven't yet handled all deprecations, then their code will break, because it still depends on "walk." You need to be gentler than this. There are three more promises you must keep to your users: maintain a changelog, choose a version scheme, and write an upgrade guide.
+
+#### Sixth covenant: Maintain a changelog
+
+Your library must have a changelog; its main purpose is to announce when a feature that your users rely on is deprecated or deleted.
+
+#### Changes in Version 1.1
+
+**New features**
+
+ * New function Reptile.slither()
+
+
+
+**Deprecations**
+
+ * Reptile.walk() is deprecated and will be removed in version 2.0, use slither()
+
+
+---
+
+Responsible creators use version numbers to express how a library has changed so users can make informed decisions about upgrading. A "version scheme" is a language for communicating the pace of change.
+
+#### Seventh covenant: Choose a version scheme
+
+There are two schemes in widespread use, [semantic versioning][12] and time-based versioning. I recommend semantic versioning for nearly any library. The Python flavor thereof is defined in [PEP 440][13], and tools like **pip** understand semantic version numbers.
+
+If you choose semantic versioning for your library, you can delete its legs gently with version numbers like:
+
+> 1.0: First "stable" release, with walk()
+> 1.1: Add slither(), deprecate walk()
+> 2.0: Delete walk()
+
+Your users should depend on a range of your library's versions, like so:
+
+
+```
+# User's requirements.txt.
+reptile>=1,<2
+```
+
+This allows them to upgrade automatically within a major release, receiving bugfixes and potentially raising some deprecation warnings, but not upgrading to the _next_ major release and risking a change that breaks their code.
+
+If you follow time-based versioning, your releases might be numbered thus:
+
+> 2017.06.0: A release in June 2017
+> 2018.11.0: Add slither(), deprecate walk()
+> 2019.04.0: Delete walk()
+
+And users can depend on your library like:
+
+
+```
+# User's requirements.txt for time-based version.
+reptile==2018.11.*
+```
+
+This is terrific, but how do your users know your versioning scheme and how to test their code for deprecations? You have to advise them how to upgrade.
+
+#### Eighth covenant: Write an upgrade guide
+
+Here's how a responsible library creator might guide users:
+
+#### Upgrading to 2.0
+
+**Migrate from Deprecated APIs**
+
+See the changelog for deprecated features.
+
+**Enable Deprecation Warnings**
+
+Upgrade to 1.1 and test your code with:
+
+`python -Werror::DeprecationWarning`
+
+Now it's safe to upgrade.
+
+---
+
+You must teach users how to handle deprecation warnings by showing them the command line options. Not all Python programmers know this—I certainly have to look up the syntax each time. And take note, you must _release_ a version that prints warnings from each deprecated API so users can test with that version before upgrading again. In this example, version 1.1 is the bridge release. It allows your users to rewrite their code incrementally, fixing each deprecation warning separately until they have entirely migrated to the latest API. They can test changes to their code and changes in your library, independently from each other, and isolate the cause of bugs.
+
+If you chose semantic versioning, this transitional period lasts until the next major release, from 1.x to 2.0, or from 2.x to 3.0, and so on. The gentle way to delete a creature's legs is to give it at least one version in which to adjust its lifestyle. Don't remove the legs all at once!
+
+![A skink][14]
+
+Version numbers, deprecation warnings, the changelog, and the upgrade guide work together to gently evolve your library without breaking the covenant with your users. The [Twisted project's Compatibility Policy][15] explains this beautifully:
+
+> "The First One's Always Free"
+>
+> Any application which runs without warnings may be upgraded one minor version of Twisted.
+>
+> In other words, any application which runs its tests without triggering any warnings from Twisted should be able to have its Twisted version upgraded at least once with no ill effects except the possible production of new warnings.
+
+Now, we creator deities have gained the wisdom and power to add features by adding methods and to delete them gently. We can also add features by adding parameters, but this brings a new level of difficulty. Are you ready?
+
+### Adding parameters
+
+Imagine that you just gave your snake-like creature a pair of wings. Now you must allow it the choice whether to move by slithering or flying. Currently its "move" function takes one parameter.
+
+
+```
+# Your library code.
+def move(direction):
+    print(f'slither {direction}')
+
+# A user's application.
+move('north')
+```
+
+You want to add a "mode" parameter, but this breaks your users' code if they upgrade, because they pass only one argument.
+
+
+```
+# Your library code.
+def move(direction, mode):
+    assert mode in ('slither', 'fly')
+    print(f'{mode} {direction}')
+
+# A user's application. Error!
+move('north')
+```
+
+A truly wise creator promises not to break users' code this way.
+
+#### Ninth covenant: Add parameters compatibly
+
+To keep this covenant, add each new parameter with a default value that preserves the original behavior.
+
+
+```
+# Your library code.
+def move(direction, mode='slither'):
+    assert mode in ('slither', 'fly')
+    print(f'{mode} {direction}')
+
+# A user's application.
+move('north')
+```
+
+Over time, parameters are the natural history of your function's evolution. They're listed oldest first, each with a default value. Library users can pass keyword arguments to opt into specific new behaviors and accept the defaults for all others.
+
+
+```
+# Your library code.
+def move(direction,
+         mode='slither',
+         turbo=False,
+         extra_sinuous=False,
+         hail_lyft=False):
+    # ...
+
+# A user's application.
+move('north', extra_sinuous=True)
+```
+
+There is a danger, however, that a user might write code like this:
+
+
+```
+# A user's application, poorly-written.
+move('north', 'slither', False, True)
+```
+
+What happens if, in the next major version of your library, you get rid of one of the parameters, like "turbo"?
+
+
+```
+# Your library code, next major version. "turbo" is deleted.
+def move(direction,
+         mode='slither',
+         extra_sinuous=False,
+         hail_lyft=False):
+    # ...
+
+# A user's application, poorly-written.
+move('north', 'slither', False, True)
+```
+
+The user's code still compiles, and this is a bad thing. The code stopped moving extra-sinuously and started hailing a Lyft, which was not the intention. I trust that you can predict what I'll say next: Deleting a parameter requires several steps. First, of course, deprecate the "turbo" parameter. I like a technique like this one, which detects whether any user's code relies on this parameter.
+
+
+```
+# Your library code.
+import warnings
+
+_turbo_default = object()
+
+def move(direction,
+         mode='slither',
+         turbo=_turbo_default,
+         extra_sinuous=False,
+         hail_lyft=False):
+    if turbo is not _turbo_default:
+        warnings.warn(
+            "'turbo' is deprecated",
+            DeprecationWarning,
+            stacklevel=2)
+    else:
+        # The old default.
+        turbo = False
+```
+
+But your users might not notice the warning. Warnings are not very loud: they can be suppressed or lost in log files. Users might heedlessly upgrade to the next major version of your library, the version that deletes "turbo." Their code will run without error and silently do the wrong thing! As the Zen of Python says, "Errors should never pass silently." Indeed, reptiles hear poorly, so you must correct them very loudly when they make mistakes.
+
+![Woman riding an alligator][16]
+
+The best way to protect your users is with Python 3's star syntax, which requires callers to pass keyword arguments.
+
+
+```
+# Your library code.
+# All arguments after "*" must be passed by keyword.
+def move(direction,
+         *,
+         mode='slither',
+         turbo=False,
+         extra_sinuous=False,
+         hail_lyft=False):
+    # ...
+
+# A user's application, poorly-written.
+# Error! Can't use positional args, keyword args required.
+move('north', 'slither', False, True)
+```
+
+With the star in place, this is the only syntax allowed:
+
+
+```
+# A user's application.
+move('north', extra_sinuous=True)
+```
+
+Now when you delete "turbo," you can be certain any user code that relies on it will fail loudly. If your library also supports Python 2, there's no shame in that; you can simulate the star syntax thus ([credit to Brett Slatkin][17]):
+
+
+```
+# Your library code, Python 2 compatible.
+def move(direction, **kwargs):
+    mode = kwargs.pop('mode', 'slither')
+    turbo = kwargs.pop('turbo', False)
+    sinuous = kwargs.pop('extra_sinuous', False)
+    lyft = kwargs.pop('hail_lyft', False)
+
+    if kwargs:
+        raise TypeError('Unexpected kwargs: %r'
+                        % kwargs)
+
+    # ...
+```
+
+Requiring keyword arguments is a wise choice, but it requires foresight. If you allow an argument to be passed positionally, you cannot convert it to keyword-only in a later release. So, add the star now. You can observe in the asyncio API that it uses the star pervasively in constructors, methods, and functions. Even though "Lock" only takes one optional parameter so far, the asyncio developers added the star right away. This is providential.
+
+
+```
+# In asyncio.
+class Lock:
+    def __init__(self, *, loop=None):
+        # ...
+```
+
+Now we've gained the wisdom to change methods and parameters while keeping our covenant with users. The time has come to try the most challenging kind of evolution: changing behavior without changing either methods or parameters.
+
+### Changing behavior
+
+Let's say your creature is a rattlesnake, and you want to teach it a new behavior.
+
+![Rattlesnake][18]
+
+Sidewinding! The creature's body will appear the same, but its behavior will change. How can we prepare it for this step of its evolution?
+
+![][19]
+
+Image by HCA [[CC BY-SA 4.0][20]], [via Wikimedia Commons][21], modified by Opensource.com
+
+A responsible creator can learn from the following example in the Python standard library, when behavior changed without a new function or parameters. Once upon a time, the os.stat function was introduced to get file statistics, like the creation time. At first, times were always integers.
+
+
+```
+>>> os.stat('file.txt').st_ctime
+1540817862
+```
+
+One day, the core developers decided to use floats for os.stat times to give sub-second precision. But they worried that existing user code wasn't ready for the change. They created a setting in Python 2.3, "stat_float_times," that was false by default. A user could set it to True to opt into floating-point timestamps.
+
+
+```
+>>> # Python 2.3.
+>>> os.stat_float_times(True)
+>>> os.stat('file.txt').st_ctime
+1540817862.598021
+```
+
+Starting in Python 2.5, float times became the default, so any new code written for 2.5 and later could ignore the setting and expect floats. Of course, you could set it to False to keep the old behavior or set it to True to ensure the new behavior in all Python versions, and prepare your code for the day when stat_float_times is deleted.
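+
+User code could opt in defensively, something like this sketch (the guard matters because the setting no longer exists in recent Python 3 releases):
+
+```
+import os
+
+# User code written for the transition period.
+if hasattr(os, 'stat_float_times'):
+    os.stat_float_times(True)  # Ensure the new behavior everywhere.
+
+ctime = os.stat('file.txt').st_ctime  # Now always a float.
+```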
+
+Ages passed. In Python 3.1, the setting was deprecated to prepare people for the distant future and finally, after its decades-long journey, [the setting was removed][22]. Float times are now the only option. It's a long road, but responsible deities are patient because we know this gradual process has a good chance of saving users from unexpected behavior changes.
+
+#### Tenth covenant: Change behavior gradually
+
+Here are the steps:
+
+ * Add a flag to opt into the new behavior, default False, warn if it's False
+ * Change default to True, deprecate flag entirely
+ * Remove the flag
+
+
+
+If you follow semantic versioning, the versions might be like so:
+
+Library version | Library API | User code
+---|---|---
+1.0 | No flag | Expect old behavior
+1.1 | Add flag, default False, warn if it's False | Set flag True, handle new behavior
+2.0 | Change default to True, deprecate flag entirely | Handle new behavior
+3.0 | Remove flag | Handle new behavior
+
+You need _two_ major releases to complete the maneuver. If you had gone straight from "Add flag, default False, warn if it's False" to "Remove flag" without the intervening release, your users' code would be unable to upgrade. User code written correctly for 1.1, which sets the flag to True and handles the new behavior, must be able to upgrade to the next release with no ill effect except new warnings, but if the flag were deleted in the next release, that code would break. A responsible deity never violates the Twisted policy: "The First One's Always Free."
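+
+As a sketch of what release 1.1 of such a library might look like (the function, the flag, and the message below are invented for illustration, not taken from the standard library):
+
+```
+import os
+import warnings
+
+_float_times = False  # The 1.1 default preserves the old behavior.
+
+def set_float_times(enabled):
+    """Opt into the new behavior before it becomes the 2.0 default."""
+    global _float_times
+    _float_times = enabled
+
+def file_ctime(path):
+    ctime = os.stat(path).st_ctime
+    if _float_times:
+        return ctime
+    warnings.warn(
+        "integer ctimes are deprecated; call set_float_times(True)",
+        DeprecationWarning, stacklevel=2)
+    return int(ctime)
+```
+
+In 2.0 the default flips to True and the flag itself is deprecated; in 3.0 the flag disappears and only the new behavior remains.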
+
+### The responsible creator
+
+![Demeter][23]
+
+Our 10 covenants belong loosely in three categories:
+
+**Evolve cautiously**
+
+ 1. Avoid bad features
+ 2. Minimize features
+ 3. Keep features narrow
+ 4. Mark experimental features "provisional"
+ 5. Delete features gently
+
+
+
+**Record history rigorously**
+
+ 1. Maintain a changelog
+ 2. Choose a version scheme
+ 3. Write an upgrade guide
+
+
+
+**Change slowly and loudly**
+
+ 1. Add parameters compatibly
+ 2. Change behavior gradually
+
+
+
+If you keep these covenants with your creature, you'll be a responsible creator deity. Your creature's body can evolve over time, forever improving and adapting to changes in its environment but without sudden changes the creature isn't prepared for. If you maintain a library, keep these promises to your users and you can innovate your library without breaking the code of the people who rely on you.
+
+* * *
+
+_This article originally appeared on[A. Jesse Jiryu Davis's blog][24] and is republished with permission._
+
+Illustration credits:
+
+ * [The World's Progress, The Delphian Society, 1913][25]
+ * [Essay Towards a Natural History of Serpents, Charles Owen, 1742][26]
+ * [On the batrachia and reptilia of Costa Rica: With notes on the herpetology and ichthyology of Nicaragua and Peru, Edward Drinker Cope, 1875][27]
+ * [Natural History, Richard Lydekker et al., 1897][28]
+ * [Mes Prisons, Silvio Pellico, 1843][29]
+ * [Tierfotoagentur / m.blue-shadow][30]
+ * [Los Angeles Public Library, 1930][31]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/api-evolution-right-way
+
+作者:[A. Jesse][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/emptysquare
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things)
+[2]: https://opensource.com/sites/default/files/uploads/praise-the-creator.jpg (Serpents)
+[3]: https://opensource.com/sites/default/files/uploads/bite.jpg (Man being chased by an alligator)
+[4]: https://bugs.python.org/issue13936
+[5]: https://opensource.com/sites/default/files/uploads/feathers.jpg (Serpents with and without feathers)
+[6]: https://bugs.python.org/issue32591
+[7]: https://opensource.com/sites/default/files/uploads/horns.jpg (Serpent with horns)
+[8]: https://opensource.com/sites/default/files/uploads/lizard-to-snake.jpg (Lizard transformed to snake)
+[9]: https://opensource.com/sites/default/files/uploads/mammal.jpg (A mouse)
+[10]: https://bugs.python.org/issue32253
+[11]: https://opensource.com/sites/default/files/uploads/scale.jpg (Balance scales)
+[12]: https://semver.org
+[13]: https://www.python.org/dev/peps/pep-0440/
+[14]: https://opensource.com/sites/default/files/uploads/skink.jpg (A skink)
+[15]: https://twistedmatrix.com/documents/current/core/development/policy/compatibility-policy.html
+[16]: https://opensource.com/sites/default/files/uploads/loudly.jpg (Woman riding an alligator)
+[17]: http://www.informit.com/articles/article.aspx?p=2314818
+[18]: https://opensource.com/sites/default/files/uploads/rattlesnake.jpg (Rattlesnake)
+[19]: https://opensource.com/sites/default/files/articles/neonate_sidewinder_sidewinding_with_tracks_unlabeled.png
+[20]: https://creativecommons.org/licenses/by-sa/4.0
+[21]: https://commons.wikimedia.org/wiki/File:Neonate_sidewinder_sidewinding_with_tracks_unlabeled.jpg
+[22]: https://bugs.python.org/issue31827
+[23]: https://opensource.com/sites/default/files/uploads/demeter.jpg (Demeter)
+[24]: https://emptysqua.re/blog/api-evolution-the-right-way/
+[25]: https://www.gutenberg.org/files/42224/42224-h/42224-h.htm
+[26]: https://publicdomainreview.org/product-att/artist/charles-owen/
+[27]: https://archive.org/details/onbatrachiarepti00cope/page/n3
+[28]: https://www.flickr.com/photos/internetarchivebookimages/20556001490
+[29]: https://www.oldbookillustrations.com/illustrations/stationery/
+[30]: https://www.alamy.com/mediacomp/ImageDetails.aspx?ref=D7Y61W
+[31]: https://www.vintag.es/2013/06/riding-alligator-c-1930s.html
diff --git a/sources/tech/20190503 Check your spelling at the command line with Ispell.md b/sources/tech/20190503 Check your spelling at the command line with Ispell.md
new file mode 100644
index 0000000000..5c26143241
--- /dev/null
+++ b/sources/tech/20190503 Check your spelling at the command line with Ispell.md
@@ -0,0 +1,81 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Check your spelling at the command line with Ispell)
+[#]: via: (https://opensource.com/article/19/5/spelling-command-line-ispell)
+[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
+
+Check your spelling at the command line with Ispell
+======
+Ispell helps you stamp out typos in plain text files written in more
+than 50 languages.
+![Command line prompt][1]
+
+Good spelling is a skill. A skill that takes time to learn and to master. That said, there are people who never quite pick that skill up—I know a couple or three outstanding writers who can't spell to save their lives.
+
+Even if you spell well, the occasional typo creeps in. That's especially true if you're quickly banging on your keyboard to meet a deadline. Regardless of your spelling chops, it's always a good idea to run what you've written through a spelling checker.
+
+I do most of my writing in [plain text][2] and often use a command line spelling checker called [Aspell][3] to do the deed. Aspell isn't the only game in town. You might also want to check out the venerable [Ispell][4].
+
+### Getting started
+
+Ispell's been around, in various forms, since 1971. Don't let its age fool you. Ispell is still a peppy application that you can use effectively in the 21st century.
+
+Before doing anything else, check whether or not Ispell is installed on your computer by cracking open a terminal window and typing **which ispell**. If it isn't installed, fire up your distribution's package manager and install Ispell from there.
+
+Don't forget to install dictionaries for the languages you work in, too. My only language is English, so I just need to worry about grabbing the US and British English dictionaries. You're not limited to my mother (and only) tongue. Ispell has [dictionaries for over 50 languages][5].
+
+![Installing Ispell dictionaries][6]
+
+### Using Ispell
+
+If you haven't guessed already, Ispell only works with text files. That includes ones marked up with HTML, LaTeX, and [nroff or troff][7]. More on this in a few moments.
+
+To get to work, open a terminal window and navigate to the directory containing the file where you want to run a spelling check. Type **ispell** followed by the file's name and then press Enter.
+
+![Checking spelling with Ispell][8]
+
+Ispell highlights the first word it doesn't recognize. If the word is misspelled, Ispell usually offers one or more alternatives. Press **R** and then the number beside the correct choice. In the screen capture above, I'd press **R** and **0** to fix the error.
+
+If, on the other hand, the word is correctly spelled, press **A** to move to the next misspelled word.
+
+Keep doing that until you reach the end of the file. Ispell saves your changes, creates a backup of the file you just checked (with the extension _.bak_ ), and shuts down.
+
+### A few other options
+
+This example illustrates basic Ispell usage. The program has a [number of options][9], some of which you _might_ use and others you _might never_ use. Let's take a quick peek at some of the ones I regularly use.
+
+A few paragraphs ago, I mentioned that Ispell works with certain markup languages. You need to tell it a file's format. When starting Ispell, add **-t** for a TeX or LaTeX file, **-H** for an HTML file, or **-n** for a groff or troff file. For example, if you enter **ispell -t myReport.tex** , Ispell ignores all markup.
+
+If you don't want the backup file that Ispell creates after checking a file, add **-x** to the command line—for example, **ispell -x myFile.txt**.
+
+What happens if Ispell runs into a word that's spelled correctly but isn't in its dictionary, like a proper name? You can add that word to a personal word list by pressing **I**. This saves the word to a file called _.ispell_default_ in your home directory.
+
+Those are the options I find most useful when working with Ispell, but check out [Ispell's man page][9] for descriptions of all its options.
+
+Is Ispell any better or faster than Aspell or any other command line spelling checker? I have to say it's no worse than any of them, nor is it any slower. Ispell's not for everyone. It might not be for you. But it is good to have options, isn't it?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/spelling-command-line-ispell
+
+作者:[Scott Nesbitt ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/scottnesbitt
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
+[2]: https://plaintextproject.online
+[3]: https://opensource.com/article/18/2/how-check-spelling-linux-command-line-aspell
+[4]: https://www.cs.hmc.edu/~geoff/ispell.html
+[5]: https://www.cs.hmc.edu/~geoff/ispell-dictionaries.html
+[6]: https://opensource.com/sites/default/files/uploads/ispell-install-dictionaries.png (Installing Ispell dictionaries)
+[7]: https://opensource.com/article/18/2/how-format-academic-papers-linux-groff-me
+[8]: https://opensource.com/sites/default/files/uploads/ispell-checking.png (Checking spelling with Ispell)
+[9]: https://www.cs.hmc.edu/~geoff/ispell-man.html
diff --git a/sources/tech/20190503 Mirror your System Drive using Software RAID.md b/sources/tech/20190503 Mirror your System Drive using Software RAID.md
new file mode 100644
index 0000000000..1b5936dfa0
--- /dev/null
+++ b/sources/tech/20190503 Mirror your System Drive using Software RAID.md
@@ -0,0 +1,306 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Mirror your System Drive using Software RAID)
+[#]: via: (https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/)
+[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
+
+Mirror your System Drive using Software RAID
+======
+
+![][1]
+
+Nothing lasts forever. When it comes to the hardware in your PC, most of it can easily be replaced. There is, however, one special-case hardware component in your PC that is not as easy to replace as the rest — your hard disk drive.
+
+### Drive Mirroring
+
+Your hard drive stores your personal data. Some of your data can be backed up automatically by scheduled backup jobs. But those jobs only scan the files to be backed up for changes, and trying to scan an entire drive that way would be very resource intensive. Also, anything that you’ve changed since your last backup will be lost if your drive fails. [Drive mirroring][2] is a better way to maintain a secondary copy of your entire hard drive. With drive mirroring, a secondary copy of _all the data_ on your hard drive is maintained _in real time_.
+
+An added benefit of live mirroring your hard drive to a secondary hard drive is that it can [increase your computer’s performance][3]. Because disk I/O is one of your computer’s main performance [bottlenecks][4], the performance improvement can be quite significant.
+
+Note that a mirror is not a backup. It only protects your data from being lost if one of your physical drives fails. Types of failures that drive mirroring, by itself, does not protect against include:
+
+ * [File System Corruption][5]
+ * [Bit Rot][6]
+ * Accidental File Deletion
+ * Simultaneous Failure of all Mirrored Drives (highly unlikely)
+
+
+
+Some of the above can be addressed by other file system features that can be used in conjunction with drive mirroring. File system features that address the above types of failures include:
+
+ * Using a [Journaling][7] or [Log-Structured][8] file system
+ * Using [Checksums][9] ([ZFS][10] , for example, does this automatically and transparently)
+ * Using [Snapshots][11]
+ * Using [BCVs][12]
+
+
+
+This guide will demonstrate one method of mirroring your system drive using the Multiple Disk and Device Administration (mdadm) toolset. Just for fun, this guide will show how to do the conversion without using any extra boot media (CDs, USB drives, etc). For more about the concepts and terminology related to the multiple device driver, you can skim the _md_ man page:
+
+```
+$ man md
+```
+
+### The Procedure
+
+ 1. **Use** [**sgdisk**][13] **to (re)partition the _extra_ drive that you have added to your computer** :
+
+```
+ $ sudo -i
+# MY_DISK_1=/dev/sdb
+# sgdisk --zap-all $MY_DISK_1
+# test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_1 $MY_DISK_1
+# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_1 $MY_DISK_1
+# sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_1 $MY_DISK_1
+# sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_1 $MY_DISK_1
+```
+
+– If the drive that you will be using for the second half of the mirror in step 12 is smaller than this drive, then you will need to adjust down the size of the last partition so that the total size of all the partitions is not greater than the size of your second drive.
+– A few of the commands in this guide are prefixed with a test for the existence of an _efivars_ directory. This is necessary because those commands are slightly different depending on whether your computer is BIOS-based or UEFI-based.
+
+ 2. **Use** [**mdadm**][14] **to create RAID devices that use the new partitions to store their data** :
+
+```
+ # mdadm --create /dev/md/boot --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/boot_1 missing
+# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/swap_1 missing
+# mdadm --create /dev/md/root --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/root_1 missing
+
+# cat << END > /etc/mdadm.conf
+MAILADDR root
+AUTO +all
+DEVICE partitions
+END
+
+# mdadm --detail --scan >> /etc/mdadm.conf
+```
+
+– The _missing_ parameter tells mdadm to create an array with a missing member. You will add the other half of the mirror in step 14.
+– You should configure [sendmail][15] so you will be notified if a drive fails.
+– You can configure [Evolution][16] to [monitor a local mail spool][17].
+
+ 3. **Use** [**dracut**][18] **to update the initramfs** :
+
+```
+# dracut -f --add mdraid --add-drivers xfs
+```
+
+– Dracut will include the /etc/mdadm.conf file you created in the previous section in your initramfs _unless_ you build your initramfs with the _hostonly_ option set to _no_. If you build your initramfs with the hostonly option set to no, then you should either manually include the /etc/mdadm.conf file, manually specify the UUID’s of the RAID arrays to assemble at boot time with the _rd.md.uuid_ kernel parameter, or specify the _rd.auto_ kernel parameter to have all RAID arrays automatically assembled and started at boot time. This guide will demonstrate the _rd.auto_ option since it is the most generic.
+
+ 4. **Format the RAID devices** :
+
+```
+ # mkfs -t vfat /dev/md/boot
+# mkswap /dev/md/swap
+# mkfs -t xfs /dev/md/root
+```
+
+– The new [Boot Loader Specification][19] states “if the OS is installed on a disk with GPT disk label, and no ESP partition exists yet, a new suitably sized (let’s say 500MB) ESP should be created and should be used as $BOOT” and “$BOOT must be a VFAT (16 or 32) file system”.
+
+ 5. **Reboot and set the _rd.auto_ , _rd.break_ and _single_ kernel parameters** :
+
+```
+# reboot
+```
+
+– You may need to [set your root password][20] before rebooting so that you can get into _single-user mode_ in step 7.
+– See “[Making Temporary Changes to a GRUB 2 Menu][21]” for directions on how to set kernel parameters on computers that use the GRUB 2 boot loader.
+
+ 6. **Use** [**the dracut shell**][18] **to copy the root file system** :
+
+```
+ # mkdir /newroot
+# mount /dev/md/root /newroot
+# shopt -s dotglob
+# cp -ax /sysroot/* /newroot
+# rm -rf /newroot/boot/*
+# umount /newroot
+# exit
+```
+
+– The _dotglob_ flag is set for this bash session so that the [wildcard character][22] will match hidden files.
+– Files are removed from the _boot_ directory because they will be copied to a separate partition in the next step.
+– This copy operation is being done from the dracut shell to ensure that no processes are accessing the files while they are being copied.
+
+ 7. **Use _single-user mode_ to copy the non-root file systems** :
+
+```
+ # mkdir /newroot
+# mount /dev/md/root /newroot
+# mount /dev/md/boot /newroot/boot
+# shopt -s dotglob
+# cp -Lr /boot/* /newroot/boot
+# test -d /newroot/boot/efi/EFI && mv /newroot/boot/efi/EFI/* /newroot/boot/efi && rmdir /newroot/boot/efi/EFI
+# test -d /sys/firmware/efi/efivars && ln -sfr /newroot/boot/efi/fedora/grub.cfg /newroot/etc/grub2-efi.cfg
+# cp -ax /home/* /newroot/home
+# exit
+```
+
+– It is OK to run these commands in the dracut shell shown in the previous section instead of doing it from single-user mode. I’ve demonstrated using single-user mode to avoid having to explain how to mount the non-root partitions from the dracut shell.
+– The parameters being passed to the _cp_ command for the _boot_ directory are a little different because the VFAT file system doesn’t support symbolic links or Unix-style file permissions.
+– In rare cases, the _rd.auto_ parameter is known to cause LVM to fail to assemble due to a [race condition][23]. If you see errors about your _swap_ or _home_ partition failing to mount when entering single-user mode, simply try again by repeating step 5 but omitting the _rd.break_ parameter so that you will go directly to single-user mode.
+
+ 8. **Update _fstab_ on the new drive** :
+
+```
+ # cat << END > /newroot/etc/fstab
+/dev/md/root / xfs defaults 0 0
+/dev/md/boot /boot vfat defaults 0 0
+/dev/md/swap swap swap defaults 0 0
+END
+```
+
+ 9. **Configure the boot loader on the new drive** :
+
+```
+ # NEW_GRUB_CMDLINE_LINUX=$(cat /etc/default/grub | sed -n 's/^GRUB_CMDLINE_LINUX="\(.*\)"/\1/ p')
+# NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//rd.lvm.*([^ ])}
+# NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//resume=*([^ ])}
+# NEW_GRUB_CMDLINE_LINUX+=" selinux=0 rd.auto"
+# sed -i "/^GRUB_CMDLINE_LINUX=/s/=.*/=\"$NEW_GRUB_CMDLINE_LINUX\"/" /newroot/etc/default/grub
+```
+
+– You can re-enable selinux after this procedure is complete. But you will have to [relabel your file system][24] first.
+
+ 10. **Install the boot loader on the new drive** :
+
+```
+ # sed -i '/^GRUB_DISABLE_OS_PROBER=.*/d' /newroot/etc/default/grub
+# echo "GRUB_DISABLE_OS_PROBER=true" >> /newroot/etc/default/grub
+# MY_DISK_1=$(mdadm --detail /dev/md/boot | grep active | grep -m 1 -o "/dev/sd.")
+# for i in dev dev/pts proc sys run; do mount -o bind /$i /newroot/$i; done
+# chroot /newroot env MY_DISK_1=$MY_DISK_1 bash --login
+# test -d /sys/firmware/efi/efivars || MY_GRUB_DIR=/boot/grub2
+# test -d /sys/firmware/efi/efivars && MY_GRUB_DIR=$(find /boot/efi -type d -name 'fedora' -print -quit)
+# test -e /usr/sbin/grub2-switch-to-blscfg && grub2-switch-to-blscfg --grub-directory=$MY_GRUB_DIR
+# grub2-mkconfig -o $MY_GRUB_DIR/grub.cfg \;
+# test -d /sys/firmware/efi/efivars && test /boot/grub2/grubenv -nt $MY_GRUB_DIR/grubenv && cp /boot/grub2/grubenv $MY_GRUB_DIR/grubenv
+# test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_1"
+# logout
+# for i in run sys proc dev/pts dev; do umount /newroot/$i; done
+# test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_1" -p 1 -l "$(find /newroot/boot -name shimx64.efi -printf '/%P\n' -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 1"
+```
+
+– The _grub2-switch-to-blscfg_ command is optional. It is only supported on Fedora 29+.
+– The _cp_ command above should not be necessary, but there appears to be a bug in the current version of grub which causes it to write to $BOOT/grub2/grubenv instead of $BOOT/efi/fedora/grubenv on UEFI systems.
+– You can use the following command to verify the contents of the _grub.cfg_ file right after running the _grub2-mkconfig_ command above:
+
+```
+# sed -n '/BEGIN .*10_linux/,/END .*10_linux/ p' $MY_GRUB_DIR/grub.cfg
+```
+
+– You should see references to _mdraid_ and _mduuid_ in the output from the above command if the RAID array was detected properly.
+
+ 11. **Boot off of the new drive** :
+
+```
+# reboot
+```
+
+– How to select the new drive is system-dependent. It usually requires pressing one of the **F12** , **F10** , **Esc** or **Del** keys when you hear the [System OK BIOS beep code][25].
+– On UEFI systems the boot loader on the new drive should be labeled “Fedora RAID Disk 1”.
+
+ 12. **Remove all the volume groups and partitions from your old drive** :
+
+```
+ # MY_DISK_2=/dev/sda
+# MY_VOLUMES=$(pvs | grep $MY_DISK_2 | awk '{print $2}' | tr "\n" " ")
+# test -n "$MY_VOLUMES" && vgremove $MY_VOLUMES
+# sgdisk --zap-all $MY_DISK_2
+```
+
+– **WARNING** : You want to make certain that everything is working properly on your new drive before you do this. A good way to verify that your old drive is no longer being used is to try booting your computer once without the old drive connected.
+– You can add another new drive to your computer instead of erasing your old one if you prefer.
+
+ 13. **Create new partitions on your old drive to match the ones on your new drive** :
+
+```
+ # test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_2 $MY_DISK_2
+# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_2 $MY_DISK_2
+# sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_2 $MY_DISK_2
+# sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_2 $MY_DISK_2
+```
+
+– It is important that the partitions match in size and type. I prefer to use the _parted_ command to display the partition table because it supports setting the display unit:
+
+```
+ # parted /dev/sda unit MiB print
+# parted /dev/sdb unit MiB print
+```
+
+ 14. **Use mdadm to add the new partitions to the RAID devices** :
+
+```
+ # mdadm --manage /dev/md/boot --add /dev/disk/by-partlabel/boot_2
+# mdadm --manage /dev/md/swap --add /dev/disk/by-partlabel/swap_2
+# mdadm --manage /dev/md/root --add /dev/disk/by-partlabel/root_2
+```
+
+ 15. **Install the boot loader on your old drive** :
+
+```
+ # test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_2"
+# test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_2" -p 1 -l "$(find /boot -name shimx64.efi -printf "/%P\n" -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 2"
+```
+
+ 16. **Use mdadm to test that email notifications are working** :
+
+```
+# mdadm --monitor --scan --oneshot --test
+```
+
+
+
+
+As soon as your drives have finished synchronizing, you should be able to select either drive when restarting your computer and you will receive the same live-mirrored operating system. If either drive fails, mdmonitor will send an email notification. Recovering from a drive failure is now simply a matter of swapping out the bad drive with a new one and running a few _sgdisk_ and _mdadm_ commands to re-create the mirrors (steps 13 through 15). You will no longer have to worry about losing any data if a drive fails!
+
+### Video Demonstrations
+
+Converting a UEFI PC to RAID1
+
+Converting a BIOS PC to RAID1
+
+ * TIP: Set the quality to 720p on the above videos for best viewing.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/
+
+作者:[Gregory Bartholomew][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/glb/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/raid_mirroring-816x345.jpg
+[2]: https://en.wikipedia.org/wiki/Disk_mirroring
+[3]: https://en.wikipedia.org/wiki/Disk_mirroring#Additional_benefits
+[4]: https://en.wikipedia.org/wiki/Bottleneck_(software)
+[5]: https://en.wikipedia.org/wiki/Data_corruption
+[6]: https://en.wikipedia.org/wiki/Data_degradation
+[7]: https://en.wikipedia.org/wiki/Journaling_file_system
+[8]: https://www.quora.com/What-is-the-difference-between-a-journaling-vs-a-log-structured-file-system
+[9]: https://en.wikipedia.org/wiki/File_verification
+[10]: https://en.wikipedia.org/wiki/ZFS#Summary_of_key_differentiating_features
+[11]: https://en.wikipedia.org/wiki/Snapshot_(computer_storage)#File_systems
+[12]: https://en.wikipedia.org/wiki/Business_continuance_volume
+[13]: https://fedoramagazine.org/managing-partitions-with-sgdisk/
+[14]: https://fedoramagazine.org/managing-raid-arrays-with-mdadm/
+[15]: https://fedoraproject.org/wiki/QA:Testcase_Sendmail
+[16]: https://en.wikipedia.org/wiki/Evolution_(software)
+[17]: https://dotancohen.com/howto/root_email.html
+[18]: https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/
+[19]: https://systemd.io/BOOT_LOADER_SPECIFICATION#technical-details
+[20]: https://docs.fedoraproject.org/en-US/Fedora/26/html/System_Administrators_Guide/sec-Changing_and_Resetting_the_Root_Password.html
+[21]: https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_the_GRUB_2_Boot_Loader/#sec-Making_Temporary_Changes_to_a_GRUB_2_Menu
+[22]: https://en.wikipedia.org/wiki/Wildcard_character#File_and_directory_patterns
+[23]: https://en.wikipedia.org/wiki/Race_condition
+[24]: https://wiki.centos.org/HowTos/SELinux#head-867ca18a09f3103705cdb04b7d2581b69cd74c55
+[25]: https://en.wikipedia.org/wiki/Power-on_self-test#Original_IBM_POST_beep_codes
diff --git a/sources/tech/20190503 Say goodbye to boilerplate in Python with attrs.md b/sources/tech/20190503 Say goodbye to boilerplate in Python with attrs.md
new file mode 100644
index 0000000000..42d9f86ca3
--- /dev/null
+++ b/sources/tech/20190503 Say goodbye to boilerplate in Python with attrs.md
@@ -0,0 +1,107 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Say goodbye to boilerplate in Python with attrs)
+[#]: via: (https://opensource.com/article/19/5/python-attrs)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez)
+
+Say goodbye to boilerplate in Python with attrs
+======
+Learn more about solving common Python problems in our series covering
+seven PyPI libraries.
+![Programming at a browser, orange hands][1]
+
+Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.
+
+In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. Today, we'll examine [**attrs**][4], a Python package that helps you write concise, correct code quickly.
+
+### attrs
+
+If you have been using Python for any length of time, you are probably used to writing code like:
+
+
+```
+class Book(object):
+
+    def __init__(self, isbn, name, author):
+        self.isbn = isbn
+        self.name = name
+        self.author = author
+```
+
+Then you write a **__repr__** function; otherwise, it would be hard to log instances of **Book** :
+
+
+```
+def __repr__(self):
+    return f"Book({self.isbn}, {self.name}, {self.author})"
+```
+
+Next, you write a nice docstring documenting the expected types. But you notice you forgot to add the **edition** and **published_year** attributes, so you have to modify them in five places.
+
+What if you didn't have to?
+
+
+```
+import attr
+
+@attr.s(auto_attribs=True)
+class Book(object):
+    isbn: str
+    name: str
+    author: str
+    published_year: int
+    edition: int
+```
+
+When you annotate the attributes with types using the new type annotation syntax, **attrs** detects the annotations and creates the class for you.
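+
+For instance, the generated class gives you a sensible **__init__** and **__repr__** for free (a quick usage sketch with made-up values):
+
+```
+book = Book(isbn="978-0-12345-678-9", name="Example", author="Jane Doe",
+            published_year=2019, edition=1)
+print(book)
+# Book(isbn='978-0-12345-678-9', name='Example', author='Jane Doe', published_year=2019, edition=1)
+```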
+
+ISBNs have a specific format. What if we want to enforce that format?
+
+
+```
+import re
+
+import attr
+
+@attr.s(auto_attribs=True)
+class Book(object):
+    isbn: str = attr.ib()
+
+    @isbn.validator
+    def pattern_match(self, attribute, value):
+        m = re.match(r"^(\d{3}-)\d{1,3}-\d{2,3}-\d{1,7}-\d$", value)
+        if not m:
+            raise ValueError("incorrect format for isbn", value)
+
+    name: str
+    author: str
+    published_year: int
+    edition: int
+```
+
+The **attrs** library also has great support for [immutability-style programming][5]. Changing the first line to **@attr.s(auto_attribs=True, frozen=True)** means that **Book** is now immutable: trying to modify an attribute will raise an exception. Instead, we can get a _new_ instance with modification using **attr.evolve(old_book, published_year=old_book.published_year+1)** , for example, if we need to push publication forward by a year.
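+
+Here is a short sketch of how that looks in practice (the values are invented):
+
+```
+import attr
+
+@attr.s(auto_attribs=True, frozen=True)
+class Book(object):
+    isbn: str
+    name: str
+    author: str
+    published_year: int
+    edition: int
+
+old_book = Book("978-0-12345-678-9", "Example", "Jane Doe", 2019, 1)
+
+try:
+    old_book.published_year = 2020  # Frozen: this raises.
+except attr.exceptions.FrozenInstanceError:
+    pass
+
+# Get a modified copy instead of mutating in place.
+new_book = attr.evolve(old_book, published_year=old_book.published_year + 1)
+print(new_book.published_year)  # 2020
+```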
+
+In the next article in this series, we'll look at **singledispatch** , a library that allows you to add methods to Python libraries retroactively.
+
+#### Review the previous articles in this series
+
+ * [Cython][6]
+ * [Black][7]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/python-attrs
+
+作者:[Moshe Zadka ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y (Programming at a browser, orange hands)
+[2]: https://opensource.com/article/18/5/numbers-python-community-trends
+[3]: https://pypi.org/
+[4]: https://pypi.org/project/attrs/
+[5]: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures
+[6]: https://opensource.com/article/19/4/7-python-problems-solved-cython
+[7]: https://opensource.com/article/19/4/python-problems-solved-black
diff --git a/sources/tech/20190503 SuiteCRM- An Open Source CRM Takes Aim At Salesforce.md b/sources/tech/20190503 SuiteCRM- An Open Source CRM Takes Aim At Salesforce.md
new file mode 100644
index 0000000000..63802d4976
--- /dev/null
+++ b/sources/tech/20190503 SuiteCRM- An Open Source CRM Takes Aim At Salesforce.md
@@ -0,0 +1,105 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (SuiteCRM: An Open Source CRM Takes Aim At Salesforce)
+[#]: via: (https://itsfoss.com/suitecrm-ondemand/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+SuiteCRM: An Open Source CRM Takes Aim At Salesforce
+======
+
+SuiteCRM is one of the most popular open source CRM (Customer Relationship Management) software available. With its unique-priced managed CRM hosting service, SuiteCRM is aiming to challenge enterprise CRMs like Salesforce.
+
+### SuiteCRM: An Open Source CRM Software
+
+CRM stands for Customer Relationship Management. It is used by businesses to manage their interactions with customers and to keep track of services, supplies, and other things that help them serve those customers.
+
+![][1]
+
+[SuiteCRM][2] came into existence after the hugely popular [SugarCRM][3] decided to stop developing its open source version. The open source version of SugarCRM was then forked into SuiteCRM by UK-based [SalesAgility][4] team.
+
+In just a couple of years, SuiteCRM became immensely popular and started to be considered the best open source CRM software out there. You can gauge its popularity from the fact that it’s nearing a million downloads and has over 100,000 community members. There are around 4 million SuiteCRM users worldwide (a CRM deployment usually has more than one user) and it is available in several languages. It’s even used by the National Health Service ([NHS][5]) in the UK.
+
+Since SuiteCRM is free and open source software, you are free to download it and deploy it on a cloud server such as [UpCloud][6] (we at It’s FOSS use it), [DigitalOcean][7], [AWS][8] or any Linux server of your own.
+
+But configuring, deploying and managing the software is a tiresome job that requires a certain skill level or the services of a sysadmin. This is why makers of business-oriented open source software often provide a hosted version of their software.
+
+This enables you to enjoy the open source software without the additional headache, and it gives the team behind the software a way to generate revenue and continue its development.
+
+### Suite:OnDemand – Cost effective managed hosting of SuiteCRM
+
+So, recently, [SalesAgility][4] – the creators/maintainers of SuiteCRM, decided to challenge [Salesforce][9] and other enterprise CRMs by introducing [Suite:OnDemand][10] , a hosted version of SuiteCRM.
+
+[][11]
+
+Suggested read Papyrus: An Open Source Note Manager
+
+Normally, you will see CRM pricing plans based on the number of users. But, with SuiteCRM’s OnDemand cloud hosting plans, they are trying to give businesses an affordable solution on a “per-server” basis instead of making you pay for every user you add.
+
+In other words, they want you to pay extra only for advanced features, not for more users.
+
+Here’s what SalesAgility mentioned in their [press release][12]:
+
+> Unlike Salesforce and other enterprise CRM vendors, the practice of pricing per user has been abandoned in favour of per-server hosting packages all of which will support unlimited users. In addition, there’s no increase in cost for access to advanced features. With Suite:OnDemand every feature and benefit is available with each hosting package.
+
+Of course, unlimited users does not mean that you will have to abuse the term. So, there’s a recommended number of users for every hosting plan you opt for.
+
+![Suitecrm Hosting][13]
+
+The CEO of SalesAgility also described their goals for this step:
+
+“ _We want SuiteCRM to be available to all businesses and to all users within a business,_ ” said **Dale Murray**, CEO of **SalesAgility**.
+
+In addition to that, they also mentioned that they want to revolutionize the way enterprise-class CRM is being currently offered in order to make it more accessible to businesses and organizations:
+
+> “Many organisations do not have the experience to run and support our product on-premise or it is not part of their technology strategy to do so. With Suite:OnDemand we are providing our customers with a quick and easy solution to access all the features of SuiteCRM without a per user cost. We’re also saying to Salesforce that enterprise-class CRM can be delivered, enhanced, maintained and supported without charging mouth-wateringly expensive monthly fees. Our aim is to transform the CRM market to enable users to make CRM pervasive within their organisations.”
+>
+> Dale Murray, CEO of SalesAgility
+
+### Why is this a big deal?
+
+This is a huge relief for small business owners and startups because other CRMs like Salesforce and SugarCRM charge $30-$40 per month per user. If you have 10 members in your team, this adds up to $300-$400 per month.
+
+This is also good news for the open source community: we now have an affordable alternative to Salesforce.
+
+In addition to this, SuiteCRM is fully open source, meaning there are no license fees or vendor lock-in – as they mention. You are always free to host and use it on your own.
+
+It is interesting to see the different strategies and solutions being applied by an open source CRM to take aim at Salesforce directly.
+
+What do you think? Let us know your thoughts in the comments below.
+
+_With inputs from Abhishek Prakash._
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/suitecrm-ondemand/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/wp-content/uploads/2019/05/suite-crm-800x450.png
+[2]: https://suitecrm.com/
+[3]: https://www.sugarcrm.com/
+[4]: https://salesagility.com/
+[5]: https://www.nhs.uk/
+[6]: https://www.upcloud.com/register/?promo=itsfoss
+[7]: https://m.do.co/c/d58840562553
+[8]: https://aws.amazon.com/
+[9]: https://www.salesforce.com
+[10]: https://suitecrm.com/suiteondemand/
+[11]: https://itsfoss.com/papyrus-open-source-note-manager/
+[12]: https://suitecrm.com/sod-pr/
+[13]: https://itsfoss.com/wp-content/uploads/2019/05/suitecrm-hosting-800x457.jpg
+[14]: https://itsfoss.com/winds-podcast-feedreader/
diff --git a/sources/tech/20190503 Tutanota Launches New Encrypted Tool to Support Press Freedom.md b/sources/tech/20190503 Tutanota Launches New Encrypted Tool to Support Press Freedom.md
new file mode 100644
index 0000000000..692b4ecba8
--- /dev/null
+++ b/sources/tech/20190503 Tutanota Launches New Encrypted Tool to Support Press Freedom.md
@@ -0,0 +1,85 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Tutanota Launches New Encrypted Tool to Support Press Freedom)
+[#]: via: (https://itsfoss.com/tutanota-secure-connect/)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+Tutanota Launches New Encrypted Tool to Support Press Freedom
+======
+
+A secure email provider has announced the release of a new product designed to help whistleblowers get their information to the media. The tool is free for journalists.
+
+### Tutanota helps you protect your privacy
+
+![][1]
+
+[Tutanota][2] is a Germany-based company that provides the “world’s most secure email service, easy to use and private by design.” They offer end-to-end encryption for their [secure email service][3]. Tutanota recently announced a [desktop app for their email service][4].
+
+They also support two-factor authentication and have [open sourced the code][5] that they use.
+
+Even though you can get an account for free, you don’t have to worry about your information being sold or about seeing ads. Tutanota makes money by charging for extra features and storage. They also offer solutions for non-profit organizations.
+
+Tutanota has launched a new service to further help journalists, social activists and whistleblowers in communicating securely.
+
+### Secure Connect: An encrypted form for websites
+
+![][7]
+
+Tutanota has released a new piece of software named Secure Connect. Secure Connect is “an open source encrypted contact form for news sites”. The goal of the project is to create a way so that “whistleblowers can get in touch with journalists securely”. Tutanota picked the right day to launch it: May 3rd is [World Press Freedom Day][8].
+
+According to Tutanota, Secure Connect is designed to be easily added to websites, but it can also work on any blog, ensuring access for smaller news agencies. A whistleblower would access the Secure Connect app on a news site, preferably using Tor, and type in any information that they want to bring to light. The whistleblower would also be able to upload files. Once they submit the information, Secure Connect will assign a random address and password, “which lets the whistleblower re-access his sent message at a later stage and check for replies from the news site.”
+
+![Secure Connect Encrypted Contact Form][9]
+
+While Tutanota will be offering Secure Connect to journalists for free, they know that someone will have to foot the bill. They plan to pay for further development of the project by selling it to businesses, such as “lawyers, financial institutions, medical institutions, educational institutions, and the authorities”. Non-journalists would have to pay €24 per month.
+
+You can see a demo of Secure Connect by clicking [here][10]. If you are a journalist interested in adding Secure Connect to your website or blog, you can contact them at [[email protected]][11]. Be sure to include a link to your website.
+
+### Final Thoughts on Secure Connect
+
+I have read repeatedly about whistleblowers whose identities were accidentally exposed, either by themselves or others. Tutanota’s project looks like it would remove that possibility by making it impossible for others to discover their identity. It also gives both parties an easy way to exchange information without having to worry about encryption or PGP keys.
+
+I understand that it’s not the same as [Firefox Send][13], Mozilla’s encrypted file sharing service. The only question I have is: whose servers will the whistleblowers’ information be sitting on?
+
+Do you think that Tutanota’s Secure Connect will be a boon for whistleblowers and activists? Please let us know in the comments below.
+
+If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][14].
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/tutanota-secure-connect/
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/wp-content/uploads/2018/02/tutanota-featured-800x450.png
+[2]: https://tutanota.com/
+[3]: https://itsfoss.com/tutanota-review/
+[4]: https://itsfoss.com/tutanota-desktop/
+[5]: https://tutanota.com/blog/posts/open-source-email
+[6]: https://itsfoss.com/librem-one/
+[7]: https://itsfoss.com/wp-content/uploads/2019/05/secure-communication.jpg
+[8]: https://en.wikipedia.org/wiki/World_Press_Freedom_Day
+[9]: https://itsfoss.com/wp-content/uploads/2019/05/secure-connect-encrypted-contact-form.png
+[10]: https://secureconnect.tutao.de/contactform/demo
+[11]: /cdn-cgi/l/email-protection
+[12]: https://itsfoss.com/privacy-search-engines/
+[13]: https://itsfoss.com/firefox-send/
+[14]: http://reddit.com/r/linuxusersgroup
diff --git a/sources/tech/20190504 Add methods retroactively in Python with singledispatch.md b/sources/tech/20190504 Add methods retroactively in Python with singledispatch.md
new file mode 100644
index 0000000000..022b06aa52
--- /dev/null
+++ b/sources/tech/20190504 Add methods retroactively in Python with singledispatch.md
@@ -0,0 +1,106 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Add methods retroactively in Python with singledispatch)
+[#]: via: (https://opensource.com/article/19/5/python-singledispatch)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
+
+Add methods retroactively in Python with singledispatch
+======
+Learn more about solving common Python problems in our series covering
+seven PyPI libraries.
+![][1]
+
+Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.
+
+In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. Today, we'll examine [**singledispatch**][4], a library that allows you to add methods to Python libraries retroactively.
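+
+If you want to follow along, a quick way to get the library from PyPI is with pip (a rough sketch, assuming pip points at the Python you are using; on Python 3.4 and later, the same decorator also ships in the standard library’s **functools** module, which is what the snippets below import):
+
+
+```
+# Install the PyPI backport of singledispatch
+# (not needed on Python 3.4+, where functools.singledispatch is built in)
+pip install singledispatch
+```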
+
+### singledispatch
+
+Imagine you have a "shapes" library with a **Circle** class, a **Square** class, etc.
+
+A **Circle** has a **radius** , a **Square** has a **side** , and a **Rectangle** has **height** and **width**. Our library already exists; we do not want to change it.
+
+However, we do want to add an **area** calculation to our library. If we didn't share this library with anyone else, we could just add an **area** method so we could call **shape.area()** and not worry about what the shape is.
+
+While it is possible to reach into a class and add a method, this is a bad idea: nobody expects their class to grow new methods, and things might break in weird ways.
+
+Instead, the **singledispatch** function in **functools** can come to our rescue.
+
+
+```
+from functools import singledispatch
+
+@singledispatch
+def get_area(shape):
+    raise NotImplementedError("cannot calculate area for unknown shape",
+                              shape)
+```
+
+The "base" implementation for the **get_area** function fails. This makes sure that if we get a new shape, we will fail cleanly instead of returning a nonsense result.
+
+
+```
+import math
+
+@get_area.register(Square)
+def _get_area_square(shape):
+    return shape.side ** 2
+
+@get_area.register(Circle)
+def _get_area_circle(shape):
+    return math.pi * (shape.radius ** 2)
+```
+
+One nice thing about doing things this way is that if someone writes a _new_ shape that is intended to play well with our code, they can implement **get_area** themselves.
+
+
+```
+import math
+
+import attr
+
+from area_calculator import get_area
+
+@attr.s(auto_attribs=True, frozen=True)
+class Ellipse:
+    horizontal_axis: float
+    vertical_axis: float
+
+@get_area.register(Ellipse)
+def _get_area_ellipse(shape):
+    return math.pi * shape.horizontal_axis * shape.vertical_axis
+```
+
+_Calling_ **get_area** is straightforward.
+
+
+```
+print(get_area(shape))
+```
+
+This means we can change a function that has a long **if isinstance()/elif isinstance()** chain to work this way, without changing the interface. The next time you are tempted to check **if isinstance** , try using **singledispatch**!
+
+In the next article in this series, we'll look at **tox** , a tool for automating tests on Python code.
+
+#### Review the previous articles in this series:
+
+ * [Cython][5]
+ * [Black][6]
+ * [attrs][7]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/python-singledispatch
+
+作者:[Moshe Zadka ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV
+[2]: https://opensource.com/article/18/5/numbers-python-community-trends
+[3]: https://pypi.org/
+[4]: https://pypi.org/project/singledispatch/
+[5]: https://opensource.com/article/19/4/7-python-problems-solved-cython
+[6]: https://opensource.com/article/19/4/python-problems-solved-black
+[7]: https://opensource.com/article/19/4/python-problems-solved-attrs
diff --git a/sources/tech/20190504 May the fourth be with you- How Star Wars (and Star Trek) inspired real life tech.md b/sources/tech/20190504 May the fourth be with you- How Star Wars (and Star Trek) inspired real life tech.md
new file mode 100644
index 0000000000..a05f9a6b4f
--- /dev/null
+++ b/sources/tech/20190504 May the fourth be with you- How Star Wars (and Star Trek) inspired real life tech.md
@@ -0,0 +1,93 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (May the fourth be with you: How Star Wars (and Star Trek) inspired real life tech)
+[#]: via: (https://opensource.com/article/19/5/may-the-fourth-star-wars-trek)
+[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas)
+
+May the fourth be with you: How Star Wars (and Star Trek) inspired real life tech
+======
+The technologies may have been fictional, but these two acclaimed sci-fi
+series have inspired open source tech.
+![Triangulum galaxy, NASA][1]
+
+Conventional wisdom says you can either be a fan of _Star Trek_ or of _Star Wars_ , but mixing the two is like mixing matter and anti-matter. I'm not sure that's true, but even if the laws of physics cannot be changed, these two acclaimed sci-fi series have influenced the open source universe and created their own open source multi-verses.
+
+For example, fans have used the original _Star Trek_ as "source code" to create fan-made films, cartoons, and games. One of the more notable fan creations was the web series _Star Trek Continues_ , which faithfully adapted Gene Roddenberry's universe and redistributed it to the world.
+
+"Eventually we realized that there is no more profound way in which people could express what _Star Trek_ has meant to them than by creating their own very personal _Star Trek_ things," [Roddenberry said][2]. However, due to copyright restrictions, this "open source" channel [has since been curtailed][3].
+
+_Star Wars_ has a different approach to open sourcing its universe. [Jess Paguaga writes][4] on FanSided: "With a variety [of] fan film awards dating back to 2002, the _Star Wars_ brand has always supported and encouraged the creation of short films that help expand the universe of a galaxy far, far away."
+
+But, _Star Wars_ is not without its own copyright prime directives. In one case, a Darth Vader film by a YouTuber called Star Wars Theory has drawn a copyright claim from Disney. The claim does not stop production of the film, but diverts monetary gains from it, [reports James Richards][5] on FanSided.
+
+This could be one of the [Ferengi Rules of Acquisition][6], perhaps.
+
+But if you can't watch your favorite fan film, you can still get your [_Star Wars_ fix right in the Linux terminal][7] by entering:
+
+
+```
+telnet towel.blinkenlights.nl
+```
+
+And _Star Trek_ fans can also interact with the Federation via the original text-based video game from 1971. While still a high-school senior, Mike Mayfield ported the game from punch cards to HP BASIC. If you’d like to go old school and battle Klingons, the source code is available at the [Code Project][8].
+
+### Real-life star tech
+
+Both _Star Wars_ and _Star Trek_ have inspired real-life technologies. Although the technologies on screen were fictional, many of them inspired the practical, open technology we use today. Some of them inspired technologies that are still in development now.
+
+In the early 1970s, Motorola engineer Martin Cooper was trying to beat AT&T at the car-phone game. He says he was watching Captain Kirk use a "communicator" on an episode of _Star Trek_ and had a eureka moment. His team went on to create the first portable cellular 800MHz phone prototype in 90 days.
+
+In _Star Wars_ , scout stormtroopers of the Galactic Empire rode the Aratech 74-Z Speeder Bike, and a real-life counterpart is the [Aero-X][9] being developed by California's Aerofex.
+
+Perhaps the most visible _Star Wars_ tech to enter our lives is droids. We first encountered R2-D2 back in the 1970s, but now we have droids vacuuming our carpets and mowing our lawns, from Roombas to the [Worx Landroid][10] lawnmower.
+
+And, in _Star Wars_ , Princess Leia appeared to Obi-Wan Kenobi as a hologram, and in _Star Trek: Voyager_ , the ship’s chief medical officer was an interactive hologram that could diagnose and treat patients. The technology to bring characters like these to “life” is still a ways off, but there are some interesting open source developments that hint of things to come. [OpenHolo][11], “an open source library containing algorithms and software implementations for holograms in various fields,” is one such project.
+
+### Where's the beef?
+
+> "She handled… real meat… touched it, and cut it?" —Keiko O'Brien, Star Trek: The Next Generation
+
+In the _Star Trek_ universe, crew members get their meals by simply ordering a replicator to produce whatever food they desire. That could one day become a reality thanks to a concept created by two German students for an open source "meat-printer" they call the [Cultivator][12]. It would use bio-printing to produce something that appears to be meat; the user could even select its mineral and fat content. Perhaps with more collaboration and development, the Cultivator could become the replicator in tomorrow's kitchen!
+
+### The 501st
+
+Cosplayers, people from all walks of life who dress as their favorite characters, are the “open source embodiment” of their favorite universes. The [501st Legion][13] is an all-volunteer _Star Wars_ fan organization “formed for the express purpose of bringing together costume enthusiasts under a collective identity within which to operate,” according to its charter.
+
+Jon Stallard, a member of Garrison Tyranus, the Central Virginia chapter of the 501st Legion, says, “Everybody wanted to be something else when they were a kid, right? Whether it was Neil Armstrong, Batman, or the Six Million Dollar Man. Every backyard playdate was some kind of make-believe. The 501st lets us participate in our fan communities while contributing to the community at large.”
+
+Are cosplayers really "open source characters"? Well, that depends. The copyright laws around cosplay and using unique props, costumes, and more are very complex, [writes Meredith Filak Rose][14] for _Public Knowledge_. "We're lucky to be living in a time where fandom generally enjoys a positive relationship with the creators whose work it admires," Rose concludes.
+
+So, it is safe to say that stormtroopers, Ferengi, Vulcans, and Yoda are all here to stay for a long, long time, near, and far, far away.
+
+Live long and prosper, you shall.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/may-the-fourth-star-wars-trek
+
+作者:[Jeff Macharyas ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jeffmacharyas
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/triangulum_galaxy_nasa_stars.jpg?itok=NdS19A7m
+[2]: https://fanlore.org/wiki/Gene_Roddenberry#His_Views_Regarding_Fanworks
+[3]: https://trekmovie.com/2016/06/23/cbs-and-paramount-release-fan-film-guidelines/
+[4]: https://dorksideoftheforce.com/2019/01/17/star-wars-fan-films/
+[5]: https://dorksideoftheforce.com/2019/01/16/disney-claims-copyright-star-wars-theory/
+[6]: https://en.wikipedia.org/wiki/Rules_of_Acquisition
+[7]: https://itsfoss.com/star-wars-linux/
+[8]: https://www.codeproject.com/Articles/28228/Star-Trek-1971-Text-Game
+[9]: https://www.livescience.com/58943-real-life-star-wars-technology.html
+[10]: https://www.digitaltrends.com/cool-tech/best-robot-lawnmowers/
+[11]: http://openholo.org/
+[12]: https://www.pastemagazine.com/articles/2016/05/the-future-is-vegan-according-to-star-trek.html
+[13]: https://www.501st.com/
+[14]: https://www.publicknowledge.org/news-blog/blogs/copyright-and-cosplay-working-with-an-awkward-fit
diff --git a/sources/tech/20190504 Using the force at the Linux command line.md b/sources/tech/20190504 Using the force at the Linux command line.md
new file mode 100644
index 0000000000..48e802e183
--- /dev/null
+++ b/sources/tech/20190504 Using the force at the Linux command line.md
@@ -0,0 +1,240 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Using the force at the Linux command line)
+[#]: via: (https://opensource.com/article/19/5/may-the-force-linux)
+[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
+
+Using the force at the Linux command line
+======
+Like the Jedi Force, -f is powerful, potentially destructive, and very
+helpful when you know how to use it.
+![Fireworks][1]
+
+Sometime in recent history, sci-fi nerds began an annual celebration of everything [_Star Wars_ on May the 4th][2], a pun on the Jedi blessing, "May the Force be with you." Although most Linux users are probably not Jedi, they still have ways to use the force. Of course, the movie might not have been quite as exciting if Yoda simply told Luke to type **man X-Wing fighter** or **man force**. Or if he'd said, "RTFM" (Read the Force Manual, of course).
+
+Many Linux commands have an **-f** option, which stands for, you guessed it, force! Sometimes when you execute a command, it fails or prompts you for additional input. This may be an effort to protect the files you are trying to change or inform the user that a device is busy or a file already exists.
+
+If you don't want to be bothered by prompts or don't care about errors, use the force!
+
+Be aware that using a command's force option to override these protections is, generally, destructive. Therefore, the user needs to pay close attention and be sure that they know what they are doing. Using the force can have consequences!
+
+Following are four Linux commands with a force option and a brief description of how and why you might want to use it.
+
+### cp
+
+The **cp** command is short for copy—it's used to copy (or duplicate) a file or directory. The [man page][3] describes the force option for **cp** as:
+
+
+```
+-f, --force
+       if an existing destination file cannot be opened, remove it
+       and try again
+```
+
+This example is for when you are working with read-only files:
+
+
+```
+[alan@workstation ~]$ ls -l
+total 8
+-rw-rw---- 1 alan alan 13 May 1 12:24 Hoth
+-r--r----- 1 alan alan 14 May 1 12:23 Naboo
+[alan@workstation ~]$ cat Hoth Naboo
+Icy Planet
+
+Green Planet
+```
+
+If you want to copy a file called _Hoth_ to _Naboo_ , the **cp** command will not allow it since _Naboo_ is read-only:
+
+
+```
+[alan@workstation ~]$ cp Hoth Naboo
+cp: cannot create regular file 'Naboo': Permission denied
+```
+
+But by using the force option, **cp** removes the read-only destination and tries again. The contents and permissions of _Hoth_ will immediately be copied to _Naboo_ :
+
+
+```
+[alan@workstation ~]$ cp -f Hoth Naboo
+[alan@workstation ~]$ cat Hoth Naboo
+Icy Planet
+
+Icy Planet
+
+[alan@workstation ~]$ ls -l
+total 8
+-rw-rw---- 1 alan alan 12 May 1 12:32 Hoth
+-rw-rw---- 1 alan alan 12 May 1 12:38 Naboo
+```
+
+Oh no! I hope they have winter gear on Naboo.
+
+### ln
+
+The **ln** command is used to make links between files. The [man page][4] describes the force option for **ln** as:
+
+
+```
+-f, --force
+       remove existing destination files
+```
+
+Suppose Princess Leia is maintaining a Java application server and she has a directory where all Java versions are stored. Here is an example:
+
+
+```
+leia@workstation:/usr/lib/java$ ls -lt
+total 28
+lrwxrwxrwx 1 leia leia 12 Mar 5 2018 jdk -> jdk1.8.0_162
+drwxr-xr-x 8 leia leia 4096 Mar 5 2018 jdk1.8.0_162
+drwxr-xr-x 8 leia leia 4096 Aug 28 2017 jdk1.8.0_144
+```
+
+As you can see, there are several versions of the Java Development Kit (JDK) and a symbolic link pointing to the latest one. She uses a script with the following commands to install new JDK versions. However, it won't work without a force option or unless the root user runs it:
+
+
+```
+tar xvzmf jdk1.8.0_181.tar.gz -C jdk1.8.0_181/
+ln -vs jdk1.8.0_181 jdk
+```
+
+The **tar** command will extract the .gz file to the specified directory, but the **ln** command will fail to upgrade the link because one already exists. The result will be that the link no longer points to the latest JDK:
+
+
+```
+leia@workstation:/usr/lib/java$ ln -vs jdk1.8.0_181 jdk
+ln: failed to create symbolic link 'jdk/jdk1.8.0_181': File exists
+leia@workstation:/usr/lib/java$ ls -lt
+total 28
+drwxr-x--- 2 leia leia 4096 May 1 15:44 jdk1.8.0_181
+lrwxrwxrwx 1 leia leia 12 Mar 5 2018 jdk -> jdk1.8.0_162
+drwxr-xr-x 8 leia leia 4096 Mar 5 2018 jdk1.8.0_162
+drwxr-xr-x 8 leia leia 4096 Aug 28 2017 jdk1.8.0_144
+```
+
+She can force **ln** to update the link correctly by passing the force option and one other, **-n**. The **-n** is needed because the link points to a directory. Now, the link again points to the latest JDK:
+
+
+```
+leia@workstation:/usr/lib/java$ ln -vsnf jdk1.8.0_181 jdk
+'jdk' -> 'jdk1.8.0_181'
+leia@workstation:/usr/lib/java$ ls -lt
+total 28
+lrwxrwxrwx 1 leia leia 12 May 1 16:13 jdk -> jdk1.8.0_181
+drwxr-x--- 2 leia leia 4096 May 1 15:44 jdk1.8.0_181
+drwxr-xr-x 8 leia leia 4096 Mar 5 2018 jdk1.8.0_162
+drwxr-xr-x 8 leia leia 4096 Aug 28 2017 jdk1.8.0_144
+```
+
+A Java application can be configured to find the JDK with the path **/usr/lib/java/jdk** instead of having to change it every time Java is updated.
+
+### rm
+
+The **rm** command is short for "remove" (which we often call delete, since some other operating systems have a **del** command for this action). The [man page][5] describes the force option for **rm** as:
+
+
+```
+-f, --force
+       ignore nonexistent files and arguments, never prompt
+```
+
+If you try to delete a read-only file, you will be prompted by **rm** :
+
+
+```
+[alan@workstation ~]$ ls -l
+total 4
+-r--r----- 1 alan alan 16 May 1 11:38 B-wing
+[alan@workstation ~]$ rm B-wing
+rm: remove write-protected regular file 'B-wing'?
+```
+
+You must type either **y** or **n** to answer the prompt and allow the **rm** command to proceed. If you use the force option, **rm** will not prompt you and will immediately delete the file:
+
+
+```
+[alan@workstation ~]$ rm -f B-wing
+[alan@workstation ~]$ ls -l
+total 0
+[alan@workstation ~]$
+```
+
+The most common use of force with **rm** is to delete a directory. The **-r** (recursive) option tells **rm** to remove a directory. When combined with the force option, it will remove the directory and all its contents without prompting.
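+
+For example, wiping out a single directory in one shot might look like the sketch below (the directory name here is only an illustration):
+
+
+```
+# Recursively and forcefully delete the hypothetical directory "old-builds"
+# and everything inside it -- no prompt, no second chance
+rm -rf old-builds/
+```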
+
+The **rm** command with certain options can be disastrous. Over the years, online forums have filled with jokes and horror stories of users completely wiping their systems. The most notorious usage is **rm -rf \***, which will immediately delete all files and directories in whatever location it is run, without any prompt.
+
+### userdel
+
+The **userdel** command is short for user delete, which will delete a user. The [man page][6] describes the force option for **userdel** as:
+
+
+```
+-f, --force
+       This option forces the removal of the user account, even if the
+       user is still logged in. It also forces userdel to remove the
+       user's home directory and mail spool, even if another user uses
+       the same home directory or if the mail spool is not owned by the
+       specified user. If USERGROUPS_ENAB is defined to yes in
+       /etc/login.defs and if a group exists with the same name as the
+       deleted user, then this group will be removed, even if it is
+       still the primary group of another user.
+
+       Note: This option is dangerous and may leave your system in an
+       inconsistent state.
+```
+
+When Obi-Wan reached the castle on Mustafar, he knew what had to be done. He had to delete Darth's user account—but Darth was still logged in.
+
+
+```
+[root@workstation ~]# ps -fu darth
+UID PID PPID C STIME TTY TIME CMD
+darth 7663 7655 0 13:28 pts/3 00:00:00 -bash
+[root@workstation ~]# userdel darth
+userdel: user darth is currently used by process 7663
+```
+
+Since Darth is currently logged in, Obi-Wan has to use the force option with **userdel**. This will delete the user account even though Darth is still logged in.
+
+
+```
+[root@workstation ~]# userdel -f darth
+userdel: user darth is currently used by process 7663
+[root@workstation ~]# finger darth
+finger: darth: no such user.
+[root@workstation ~]# ps -fu darth
+error: user name does not exist
+```
+
+As you can see, the **finger** and **ps** commands confirm the user Darth has been deleted.
+
+### Using force in shell scripts
+
+Many other commands have a force option. One place force is very useful is in shell scripts. Since we use scripts in cron jobs and other automated operations, avoiding any prompts is crucial, or else these automated processes will not complete.
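+
+As a rough illustration (the paths here are invented for the example, not taken from the article), a nightly cleanup job run from cron might lean on **-f** so it never stalls waiting for input:
+
+
+```
+#!/bin/bash
+# Hypothetical nightly cleanup job: it runs unattended from cron, so every
+# command uses -f to avoid interactive prompts that would hang the job.
+
+# Remove old temporary files even if some of them are write-protected
+rm -f /tmp/myapp/*.tmp
+
+# Point the "current" symlink at the latest release directory, replacing
+# the existing link without prompting
+ln -snf /opt/myapp/releases/latest /opt/myapp/current
+```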
+
+I hope the four examples I shared above help you understand how certain circumstances may require the use of force. You should have a strong understanding of the force option when using it at the command line or in automation scripts. Its misuse can have devastating effects—sometimes across your infrastructure, and not only on a single machine.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/may-the-force-linux
+
+作者:[Alan Formy-Duval ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/alanfdoss
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fireworks_light_art_design.jpg?itok=hfx9i4By (Fireworks)
+[2]: https://www.starwars.com/star-wars-day
+[3]: http://man7.org/linux/man-pages/man1/cp.1.html
+[4]: http://man7.org/linux/man-pages/man1/ln.1.html
+[5]: http://man7.org/linux/man-pages/man1/rm.1.html
+[6]: http://man7.org/linux/man-pages/man8/userdel.8.html
diff --git a/sources/tech/20190505 Blockchain 2.0 - An Introduction To Hyperledger Project (HLP) -Part 8.md b/sources/tech/20190505 Blockchain 2.0 - An Introduction To Hyperledger Project (HLP) -Part 8.md
new file mode 100644
index 0000000000..bb1d187ea4
--- /dev/null
+++ b/sources/tech/20190505 Blockchain 2.0 - An Introduction To Hyperledger Project (HLP) -Part 8.md
@@ -0,0 +1,88 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Blockchain 2.0 – An Introduction To Hyperledger Project (HLP) [Part 8])
+[#]: via: (https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/)
+[#]: author: (editor https://www.ostechnix.com/author/editor/)
+
+Blockchain 2.0 – An Introduction To Hyperledger Project (HLP) [Part 8]
+======
+
+![Introduction To Hyperledger Project][1]
+
+Once a new technology platform reaches a threshold level of popularity in terms of active development and commercial interest, major global companies and smaller start-ups alike rush to catch a slice of the pie. **Linux** was one such platform back in the day. Once the ubiquity of its applications was realized, individuals, firms, and institutions started showing interest in it, and by 2000 the **Linux Foundation** was formed.
+
+The Linux Foundation aims to standardize and develop Linux as a platform by sponsoring its development team. It is a non-profit organization supported by software and IT behemoths such as Microsoft, Oracle, Samsung, Cisco, IBM and Intel, among others[1] – and that is not counting the hundreds of individual developers who offer their services for the betterment of the platform. Over the years the Linux Foundation has taken many projects under its roof. The **Hyperledger Project** is its fastest growing one to date.
+
+Such consortium-led development has a lot of advantages when it comes to turning technology into usable, useful forms. Developing the standards, libraries and all the back-end protocols for large-scale projects is expensive and resource-intensive, without generating a shred of income. Hence, it makes sense for companies to pool their resources to develop the common “boring” parts by supporting such organizations, and later, once work on these standard parts is complete, to simply plug them in and customize their own products. Apart from the economics of the model, such collaborative efforts also yield standards that allow for easier use and integration into aspiring products and services.
+
+Other major innovations that were once developed, or are currently being developed, following this consortium model include standards for Wi-Fi (the Wi-Fi Alliance), mobile telephony, and so on.
+
+### Introduction to Hyperledger Project (HLP)
+
+The Hyperledger Project was launched in December 2015 by the Linux Foundation and is currently among the fastest growing projects it has incubated. It’s an umbrella organization for collaborative efforts to develop and advance tools and standards for [**blockchain**][2]-based distributed ledger technologies (DLT). Major industry players supporting the project include **IBM** , **Intel** and **SAP Ariba** among [**others**][3]. The HLP aims to create frameworks that individuals and companies can use to create shared as well as closed blockchains, as required to meet their own needs. The design principles include a strong tilt toward developing a globally deployable, scalable, robust platform with a focus on privacy and future auditability[2]. It is also worth noting that most of the blockchains proposed under the project, and the frameworks that implement them, are permissioned (non-public) by design.
+
+### Development goals and structure: Making it plug & play
+
+Although enterprise-facing platforms exist from the likes of the Enterprise Ethereum Alliance, HLP is business-facing by definition and is supported by industry behemoths who contribute to and further the development of the many modules that come under the HLP banner. HLP incubates projects in development after their induction into the cause and, after work on them is finished and the kinks are ironed out, rolls them out for the public. Members of the Hyperledger Project contribute their own work; for instance, IBM contributed its Fabric platform for collaborative development. The codebase is absorbed and developed in-house by the project group and rolled out to all members equally for their use.
+
+Such processes make the modules in HLP highly flexible plug-in frameworks which will support rapid development and roll-outs in enterprise settings. Furthermore, other comparable platforms are open **permission-less blockchains** , or **public chains** , by default, and even though it is possible to adapt them to specific applications, HLP modules support permissioned operation natively.
+
+The differences between, and use cases of, public and private blockchains are covered in more detail in [**this comparative primer**][4] on the subject.
+
+The Hyperledger project’s mission is four-fold according to **Brian Behlendorf** , the executive director of the project.
+
+They are:
+
+ 1. To create an enterprise grade DLT framework and standards which anyone can port to suit their specific industrial or personal needs.
+ 2. To give rise to a robust open source community to aid the ecosystem.
+ 3. To promote and further participation of industry members of the said ecosystem such as member firms.
+ 4. To host a neutral unbiased infrastructure for the HLP community to gather and share updates and developments regarding the same.
+
+
+
+The original document can be accessed [**here**][5].
+
+### Structure of the HLP
+
+The **HLP consists of 12 projects** that are classified as independent modules, each usually structured and working independently to develop its module. These are first studied for their capabilities and viability before being incubated. Proposals for additions can be made by any member of the organization. After a project is incubated, active development ensues, after which it is rolled out. The interoperability between these modules is given a high priority, hence regular communication between these groups is maintained by the community. Currently, 4 of these projects are categorized as active. The active tag implies these are ready for use but not ready for a major release yet. These 4 are arguably the most significant, or rather fundamental, modules for furthering the blockchain revolution. We’ll look at the individual modules and their functionalities in detail at a later time. However, a brief description of the Hyperledger Fabric platform, arguably the most popular among them, follows.
+
+### Hyperledger Fabric
+
+**Hyperledger Fabric** [2] is a fully open-source, permissioned (non-public), blockchain-based DLT platform that is designed with enterprise use in mind. The platform provides features and is structured to fit the enterprise environment. It is highly modular, allowing its developers to choose from different consensus protocols, **chain code protocols ([smart contracts][6])** , identity management systems, and so on as they go along. **It is a permissioned blockchain-based platform** that makes use of an identity management system, meaning participants will be aware of each other’s identities, which is required in an enterprise setting. Fabric allows for smart contract ( _ **“chaincode” is the term that the Hyperledger team uses**_ ) development in a variety of mainstream programming languages including **Java** , **Javascript** and **Go**. This allows institutions and enterprises to make use of their existing talent in the area without hiring or re-training developers to write their own smart contracts. Fabric also uses an execute-order-validate system to handle smart contracts for better reliability compared to the standard order-validate system used by other platforms providing smart contract functionality. Pluggable performance, identity management systems, DBMSs, consensus platforms, etc. are other features of Fabric that keep it miles ahead of its competition.
+
+### Conclusion
+
+Projects such as the Hyperledger Fabric platform enable a faster rate of adoption of blockchain technology in mainstream use cases. The Hyperledger community structure itself supports open governance principles, and since all the projects are led as open source platforms, this improves the security and accountability that the teams exhibit in pushing out their commitments.
+
+Since major applications of such projects involve working with enterprises to further the development of platforms and standards, the Hyperledger Project is currently in a great position with respect to comparable projects by others.
+
+**References:**
+
+ * **[1][Samsung takes a seat with Intel and IBM at the Linux Foundation | TheINQUIRER][7]**
+ * **[2] E. Androulaki et al., “Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains,” 2018.**
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
+
+作者:[editor][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/editor/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Introduction-To-Hyperledger-Project-720x340.png
+[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
+[3]: https://www.hyperledger.org/members
+[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
+[5]: http://www.hitachi.com/rev/archive/2017/r2017_01/expert/index.html
+[6]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
+[7]: https://www.theinquirer.net/inquirer/news/2182438/samsung-takes-seat-intel-ibm-linux-foundation
diff --git a/sources/tech/20190505 Blockchain 2.0 - Public Vs Private Blockchain Comparison -Part 7.md b/sources/tech/20190505 Blockchain 2.0 - Public Vs Private Blockchain Comparison -Part 7.md
new file mode 100644
index 0000000000..a954e8514e
--- /dev/null
+++ b/sources/tech/20190505 Blockchain 2.0 - Public Vs Private Blockchain Comparison -Part 7.md
@@ -0,0 +1,106 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7])
+[#]: via: (https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/)
+[#]: author: (editor https://www.ostechnix.com/author/editor/)
+
+Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7]
+======
+
+![Public vs Private blockchain][1]
+
+The previous part of the [**Blockchain 2.0**][2] series explored [**the state of smart contracts**][3] today. This post intends to throw some light on the different types of blockchains that can be created. Each of these is used for vastly different applications, and depending on the use case, the protocol followed by each differs. Now let us go ahead and learn about the **public vs. private blockchain comparison** with open source and proprietary technology.
+
+The fundamental three-layer structure of a blockchain based distributed ledger as we know is as follows:
+
+![][4]
+
+Figure 1 – Fundamental structure of Blockchain-based ledgers
+
+The differences between the types mentioned here are attributable primarily to the protocol that rests on the underlying blockchain. The protocol dictates the rules for the participants and the behavior of the blockchain in response to that participation.
+
+Remember to keep the following things in mind while reading through this article:
+
+ * Platforms such as these are always created to solve a use-case requirement. There is no one direction that the technology should take that is best. Blockchains for instance have tremendous applications and some of these might require dropping features that seem significant in other settings. **Decentralized storage** is a major example in this regard.
+ * Blockchains are basically database systems keeping track of information by timestamping and organizing data in the form of blocks. Creators of such blockchains can choose who has the right to make these blocks and perform alterations.
+ * Blockchains can be “centralized” as well, and participation in varying extents can be limited to those who this “central authority” deems eligible.
+
+
+
+Most blockchains are either **public** or **private**. Broadly speaking, public blockchains can be considered as being the equivalent of open source software and most private blockchains can be seen as proprietary platforms deriving from the public ones. The figure below should make the basic difference obvious to most of you.
+
+![][5]
+
+Figure 2 – Public vs Private blockchain comparison with Open source and Proprietary Technology
+
+This is not to say that all private blockchains are derived from open public ones. The most popular ones, however, usually are.
+
+### Public Blockchains
+
+A public blockchain can be considered as a **permission-less platform** or **network**. Anyone with the knowhow and computing resources can participate in it. This will have the following implications:
+
+ * Anyone can join and participate in a public blockchain network. All the “participant” needs is a stable internet connection along with computing resources.
+ * Participation will include reading, writing, verifying, and providing consensus during transactions. An example for participating individuals would be **Bitcoin miners**. In exchange for participating in the network the miners are paid back in Bitcoins in this case.
+ * The platform is decentralized completely and fully redundant.
+ * Because of the decentralized nature, no one entity has complete control over the data recorded in the ledger. To validate a block all (or most) participants need to vet the data.
+ * This means that once information is verified and recorded, it cannot be altered easily. Even if it is altered, it’s impossible not to leave traces.
+ * The identity of participants remains anonymous by design in platforms such as **BITCOIN** and **LITECOIN**. These platforms by design aim for protecting and securing user identities. This is primarily a feature provided by the overlying protocol stack.
+ * Examples for public blockchain networks are **BITCOIN** , **LITECOIN** , **ETHEREUM** etc.
+ * Extensive decentralization means that gaining consensus on transactions might take a while compared to what is typically possible over permissioned ledger networks, and throughput can be a challenge for large enterprises aiming to push a very high number of transactions every instant.
+ * The open participation and often the high number of such participants in open chains such as bitcoin add up to considerable initial investments in computing equipment and energy costs.
+
+
+
+### Private Blockchain
+
+In contrast, a private blockchain is a **permissioned blockchain**. Meaning:
+
+ * Permission to participate in the network is restricted and is presided over by the owner or institution overseeing the network. Meaning even though an individual will be able to store data and transact (send and receive payments for example), the validation and storage of these transactions will be done only by select participants.
+ * Participation even once permission is given by the central authority will be limited by terms. For instance, in case of a private blockchain network run by a financial institution, not every customer will have access to the entire blockchain ledger, and even among those with the permission, not everyone will be able to access everything. Permissions to access select services will be given by the central figure in this case. This is often referred to as **“channeling”**.
+ * Such systems have significantly larger throughput capabilities and also showcase much faster transaction speeds compared to their public counterparts because a block of information only needs to be validated by a select few.
+ * Security by design is something the public blockchains are renowned for. They achieve this by:
+ * Anonymizing participants,
+ * Distributed & redundant but encrypted storage on multiple nodes,
+ * Mass consensus required for creating and altering data.
+
+
+
+Private blockchains usually don’t feature any of these in their protocol. This makes the system only as secure as most cloud-based database systems currently in use.
+
+### A note for the wise
+
+An important point to note is this: the fact that they’re named public or private (or open or closed) has nothing to do with the underlying code base. The code, or the literal foundations on which these platforms are built, may or may not be publicly available and/or developed in either of these cases. **R3** is a **DLT** ( **D** istributed **L** edger **T** echnology) company that leads a public consortium of over 200 multinational institutions. Their aim is to further the development of blockchain and related distributed ledger technology in the domain of finance and commerce. **Corda** is the product of this joint effort. R3 defines Corda as a blockchain platform that is built specially for businesses. The codebase for it is open source and developers all over the world are encouraged to contribute to the project. However, given its business-facing nature and the needs it is meant to address, Corda would be categorized as a permissioned, closed blockchain platform. This means businesses can choose the participants of the network once it is deployed and choose the kind of information these participants can access through the use of natively available smart contract tools.
+
+While it is a reality that public platforms like Bitcoin and Ethereum are responsible for the widespread awareness and development going on in the space, it can still be argued that private blockchains designed for specific use cases in enterprise or business settings are what will lead monetary investments in the short run. These are the platforms most of us will see implemented in practical ways in the near future.
+
+Read the next guide about Hyperledger project in this series.
+
+ * [**Blockchain 2.0 – An Introduction To Hyperledger Project (HLP)**][6]
+
+
+
+We are working on many interesting topics on Blockchain technology. Stay tuned!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
+
+作者:[editor][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/editor/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Public-Vs-Private-Blockchain-720x340.png
+[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
+[3]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
+[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/blockchain-architecture.png
+[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/Public-vs-Private-blockchain-comparison.png
+[6]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
diff --git a/sources/tech/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md b/sources/tech/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md
new file mode 100644
index 0000000000..a4669a2eb0
--- /dev/null
+++ b/sources/tech/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Blockchain 2.0 – What Is Ethereum [Part 9])
+[#]: via: (https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/)
+[#]: author: (editor https://www.ostechnix.com/author/editor/)
+
+Blockchain 2.0 – What Is Ethereum [Part 9]
+======
+
+![Ethereum][1]
+
+In the previous guide of this series, we discussed the [**Hyperledger Project (HLP)**][2], one of the fastest growing projects developed by the **Linux Foundation**. In this guide, we are going to discuss **Ethereum** and its features in detail. Many researchers opine that the future of the internet will be based on principles of decentralized computing. Decentralized computing was in fact one of the broader objectives of having the internet in the first place. However, the internet took another turn owing to differences in the computing capabilities available. While modern server capabilities make the case for server-side processing and execution, the lack of decent mobile networks in large parts of the world makes the case for doing the same on the client side. Modern smartphones now have **SoCs** (systems on a chip) capable of handling many such operations on the client side itself; however, limitations around retrieving and storing data securely still push developers toward server-side computing and data management. Hence, a bottleneck in data transfer capabilities is currently observed.
+
+All of that might soon change because of advancements in distributed data storage and program execution platforms. [**The blockchain**][3], for the first time in the history of the internet, basically allows for secure data management and program execution on a distributed network of users as opposed to central servers.
+
+**Ethereum** is one such blockchain platform that gives developers access to frameworks and tools used to build and run applications on such a decentralized network. Though more popularly known for its cryptocurrency, Ethereum is more than just **ether** (the cryptocurrency). It features a full **Turing-complete programming language** that is designed to develop and deploy **DApps** , or **Distributed Applications** [1]. We’ll look at DApps in more detail in one of the upcoming posts.
+
+Ethereum is open-source, supports a public (non-permissioned) blockchain by default, and features an extensive smart contract platform ( **Solidity** ) underneath. Ethereum provides a virtual computing environment called the **Ethereum Virtual Machine** to run applications and [**smart contracts**][4] as well[2]. The Ethereum Virtual Machine runs on thousands of participating nodes all over the world, meaning the application data, while being secure, is almost impossible to tamper with or lose.
+
+### Getting behind Ethereum: What sets it apart
+
+In 2017, a group of 30-plus members of the who’s who of the tech and financial world got together to leverage the Ethereum blockchain’s capabilities. Thus, the **Enterprise Ethereum Alliance (EEA)** was formed by a long list of supporting members including _Microsoft_ , _JP Morgan_ , _Cisco Systems_ , _Deloitte_ , and _Accenture_. JP Morgan already has **Quorum** , a decentralized computing platform for financial services based on Ethereum, currently in operation, while Microsoft has Ethereum-based cloud services it markets through its Azure cloud business[3].
+
+### What is ether and how is it related to Ethereum
+
+Ethereum creator **Vitalik Buterin** understood the true value of a decentralized processing platform and of the underlying blockchain tech that powered Bitcoin. However, he failed to gain majority agreement for his proposal that Bitcoin should be developed to support running distributed applications (DApps) and programs (now referred to as smart contracts).
+
+Hence in 2013, he proposed the idea of Ethereum in a white paper he published. The original white paper is still maintained and available for readers **[here][5]**. The idea was to develop a blockchain based platform to run smart contracts and applications designed to run on nodes and user devices instead of servers.
+
+The Ethereum system is often mistaken to mean just the cryptocurrency ether; however, it has to be reiterated that Ethereum is a full-stack platform for developing applications as well as executing them, and has been so since inception, whereas Bitcoin isn’t. **Ether is currently the second biggest cryptocurrency** by market capitalization and traded at an average of $170 per ether at the time of writing this article[4].
+
+### Features and technicalities of the platform[5]
+
+ * As we’ve already mentioned, the cryptocurrency called ether is simply one of the things the platform features. The purpose of the system goes beyond handling financial transactions. In fact, the key difference between the Ethereum platform and Bitcoin is in their scripting capabilities. Ethereum supports a Turing-complete programming language, which means it has scripting and application capabilities similar to other major programming languages. Developers require this feature to create DApps and complex smart contracts on the platform, a feature that Bitcoin lacks.
+ * The “mining” process for ether is more stringent and complex. While specialized ASICs may be used to mine Bitcoin, the basic hashing algorithm used by Ethereum ( **Ethash** ) reduces the advantage that ASICs have in this regard.
+ * The transaction fee itself, paid as an incentive to miners and node operators for running the network, is calculated using a computational token called **Gas**. Gas improves the system’s resilience and resistance to external hacks and attacks by requiring the initiator of a transaction to pay ethers proportionate to the computational resources required to carry out that transaction. This is in contrast to other platforms such as Bitcoin, where the transaction fee is measured in tandem with the transaction size. As such, the average transaction cost in Ethereum is radically lower than in Bitcoin. This also implies that applications running on the Ethereum virtual machine will require a fee depending directly on the computational problems that the application is meant to solve. Basically, the more complex an execution, the higher the fee.
+ * The block time for Ethereum is estimated to be around _**10-15 seconds**_. The block time is the average time that is required to timestamp and create a block on the blockchain network. Compared to the 10+ minutes the same transaction will take on the bitcoin network, it becomes apparent that _**Ethereum is much faster**_ with respect to transactions and verification of blocks.
+ * _It is also interesting to note that there is no hard cap on the amount of ether that can be mined or the rate at which ether can be mined leading to less radical system design than bitcoin._
+
+
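+A rough illustration of the Gas model mentioned above: the fee is the gas consumed multiplied by the gas price the sender offers. The numbers below are only an example (21,000 gas is the well-known cost of a plain ether transfer; the gas price is arbitrary):
+
+```
+$ awk 'BEGIN { gas = 21000; price_gwei = 10; printf "%.6f ether\n", gas * price_gwei * 1e-9 }'
+0.000210 ether
+```
+
+Note that the fee depends on the computation performed, not on the amount of ether being moved.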
+
+### Conclusion
+
+While Ethereum is comparable to, and in many respects far outpaces, similar platforms, it lacked a definite path for development until the Ethereum Enterprise Alliance started pushing it. While the definite push for enterprise development comes from the alliance, it has to be noted that Ethereum also caters to small-time developers and individuals. Catering to both end users and enterprises at the same time leaves a lot of specific functionality out of the loop for Ethereum. Also, the blockchain model proposed and developed by the Ethereum foundation is a public one, whereas the one proposed by projects such as the Hyperledger project is private and permissioned.
+
+While only time can tell which platform among the ones put forward by Ethereum, Hyperledger, and R3 Corda, among others, will find the most fans in real-world use cases, such systems do prove the validity of the claim of a blockchain-powered future.
+
+**References:**
+
+ * [1] [**Gabriel Nicholas, “Ethereum Is Coding’s New Wild West | WIRED,” Wired , 2017**][6].
+ * [2] [**What is Ethereum? — Ethereum Homestead 0.1 documentation**][7].
+ * [3] [**Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoin’s – The New York Times**][8].
+ * [4] [**Cryptocurrency Market Capitalizations | CoinMarketCap**][9].
+ * [5] [**Introduction — Ethereum Homestead 0.1 documentation**][10].
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
+
+作者:[editor][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/editor/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Ethereum-720x340.png
+[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
+[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
+[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
+[5]: https://github.com/ethereum/wiki/wiki/White-Paper
+[6]: https://www.wired.com/story/ethereum-is-codings-new-wild-west/
+[7]: http://www.ethdocs.org/en/latest/introduction/what-is-ethereum.html#ethereum-virtual-machine
+[8]: https://www.nytimes.com/2016/03/28/business/dealbook/ethereum-a-virtual-currency-enables-transactions-that-rival-bitcoins.html
+[9]: https://coinmarketcap.com/
+[10]: http://www.ethdocs.org/en/latest/introduction/index.html
diff --git a/sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md b/sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md
new file mode 100644
index 0000000000..edba21d327
--- /dev/null
+++ b/sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md
@@ -0,0 +1,261 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Duc – A Collection Of Tools To Inspect And Visualize Disk Usage)
+[#]: via: (https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+Duc – A Collection Of Tools To Inspect And Visualize Disk Usage
+======
+
+![Duc - A Collection Of Tools To Inspect And Visualize Disk Usage][1]
+
+**Duc** is a collection of tools that can be used to index, inspect and visualize disk usage on Unix-like operating systems. Don’t think of it as a simple CLI tool that merely displays a fancy graph of your disk usage. It is built to scale quite well on huge filesystems. Duc has been tested on systems that consisted of more than 500 million files and several petabytes of storage without any problems.
+
+Duc is a quite fast and versatile tool. It stores your disk usage in an optimized database, so you can quickly find where your bytes are as soon as the index is completed. In addition, it comes with various user interfaces and back-ends to access the database and draw the graphs.
+
+Here is the list of currently supported user interfaces (UI):
+
+ 1. Command line interface (duc ls),
+ 2. Ncurses console interface (duc ui),
+ 3. X11 GUI (duc gui),
+ 4. OpenGL GUI (duc gui).
+
+
+
+List of supported database back-ends:
+
+ * Tokyocabinet,
+ * Leveldb,
+ * Sqlite3.
+
+
+
+Duc uses **Tokyocabinet** as the default database backend.
+
+### Install Duc
+
+Duc is available in the default repositories of Debian and its derivatives such as Ubuntu. So installing Duc on DEB-based systems is a piece of cake.
+
+```
+$ sudo apt-get install duc
+```
+
+On other Linux distributions, you may need to manually compile and install Duc from source as shown below.
+
+Download the latest duc source .tgz file from the [**releases**][2] page on GitHub. As of writing this guide, the latest version was **1.4.4**.
+
+```
+$ wget https://github.com/zevv/duc/releases/download/1.4.4/duc-1.4.4.tar.gz
+```
+
+Then run the following commands one by one to install DUC.
+
+```
+$ tar -xzf duc-1.4.4.tar.gz
+$ cd duc-1.4.4
+$ ./configure
+$ make
+$ sudo make install
+```
+
+### Duc Usage
+
+The typical usage of duc is:
+
+```
+$ duc <subcommand> [options]
+```
+
+You can view the list of general options and sub-commands by running the following command:
+
+```
+$ duc help
+```
+
+You can also view the usage of a specific subcommand as shown below.
+
+```
+$ duc help <subcommand>
+```
+
+To view the extensive list of all commands and their options, simply run:
+
+```
+$ duc help --all
+```
+
+Let us now see some practical use cases of the duc utility.
+
+### Create Index (database)
+
+First of all, you need to create an index file (database) of your filesystem. To create an index file, use “duc index” command.
+
+For example, to create an index of your **/home** directory, simply run:
+
+```
+$ duc index /home
+```
+
+The above command will create an index of your /home/ directory and save it in the **$HOME/.duc.db** file. If you add new files/directories in the /home directory in the future, just re-run the above command at any time to rebuild the index.
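+
+Because the database is just a snapshot, the index goes stale as files change. One simple way to keep it fresh is to re-run the indexer from cron; the entry below is only an example schedule (adjust the path and timing to your setup, and use the full path to the duc binary if cron cannot find it):
+
+```
+# crontab -e  (example: rebuild the /home index every night at 01:00)
+0 1 * * * duc index /home
+```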
+
+### Query Index
+
+Duc has various sub-commands to query and explore the index.
+
+To view the list of available indexes, run:
+
+```
+$ duc info
+```
+
+**Sample output:**
+
+```
+Date Time Files Dirs Size Path
+2019-04-09 15:45:55 3.5K 305 654.6M /home
+```
+
+As you see in the above output, I have already indexed the /home directory.
+
+To list all files and directories in the current working directory, you can do:
+
+```
+$ duc ls
+```
+
+To list files/directories in a specific directory, for example **/home/sk/Downloads**, just pass the path as an argument like below.
+
+```
+$ duc ls /home/sk/Downloads
+```
+
+Similarly, run the **“duc ui”** command to open an **ncurses** based console user interface for exploring the file system usage, and run **“duc gui”** to start a **graphical (X11)** interface to explore the file system.
+
+To know more about a sub-command’s usage, simply refer to the help section.
+
+```
+$ duc help ls
+```
+
+The above command will display the help section of “ls” subcommand.
+
+### Visualize Disk Usage
+
+In the previous section, we have seen how to list files and directories using duc subcommands. In addition, you can even show the file sizes in a fancy graph.
+
+To show the graph of a given path, use “ls” subcommand like below.
+
+```
+$ duc ls -Fg /home/sk
+```
+
+Sample output:
+
+![][3]
+
+Visualize disk usage using “duc ls” command
+
+As you see in the above output, the “ls” subcommand queries the duc database and lists the inclusive size of all files and directories of the given path, i.e. **/home/sk/** in this case.
+
+Here, the **“-F”** option is used to append file type indicator (one of */) to entries and the **“-g”** option is used to draw graph with relative size for each entry.
+
+Please note that if no path is given, the current working directory is explored.
+
+You can use **-R** option to view the disk usage result in [**tree**][4] structure.
+
+```
+$ duc ls -R /home/sk
+```
+
+![][5]
+
+Visualize disk usage in tree structure
+
+To query the duc database and open a **ncurses** based console user interface for exploring the disk usage of given path, use **“ui”** subcommand like below.
+
+```
+$ duc ui /home/sk
+```
+
+![][6]
+
+Similarly, we use **“gui”** subcommand to query the duc database and start a **graphical (X11)** interface to explore the disk usage of the given path:
+
+```
+$ duc gui /home/sk
+```
+
+![][7]
+
+As I already mentioned earlier, we can learn more about a subcommand’s usage like below.
+
+```
+$ duc help <subcommand>
+```
+
+I covered the basic usage only. Refer to the man pages for more details about the “duc” tool.
+
+```
+$ man duc
+```
+
+* * *
+
+**Related read:**
+
+ * [**Filelight – Visualize Disk Usage On Your Linux System**][8]
+ * [**Some Good Alternatives To ‘du’ Command**][9]
+ * [**How To Check Disk Space Usage In Linux Using Ncdu**][10]
+ * [**Agedu – Find Out Wasted Disk Space In Linux**][11]
+ * [**How To Find The Size Of A Directory In Linux**][12]
+ * [**The df Command Tutorial With Examples For Beginners**][13]
+
+
+
+* * *
+
+### Conclusion
+
+Duc is a simple yet useful disk usage viewer. If you want to quickly and easily know which files/directories are eating up your disk space, Duc might be a good choice. What are you waiting for? Go get this tool, scan your filesystem and get rid of unused files/directories.
+
+And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
+
+Cheers!
+
+**Resource:**
+
+ * [**Duc website**][14]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/duc-720x340.png
+[2]: https://github.com/zevv/duc/releases
+[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-1-1.png
+[4]: https://www.ostechnix.com/view-directory-tree-structure-linux/
+[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-2.png
+[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-3.png
+[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-4.png
+[8]: https://www.ostechnix.com/filelight-visualize-disk-usage-on-your-linux-system/
+[9]: https://www.ostechnix.com/some-good-alternatives-to-du-command/
+[10]: https://www.ostechnix.com/check-disk-space-usage-linux-using-ncdu/
+[11]: https://www.ostechnix.com/agedu-find-out-wasted-disk-space-in-linux/
+[12]: https://www.ostechnix.com/find-size-directory-linux/
+[13]: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/
+[14]: https://duc.zevv.nl/
diff --git a/sources/tech/20190505 Five Methods To Check Your Current Runlevel In Linux.md b/sources/tech/20190505 Five Methods To Check Your Current Runlevel In Linux.md
new file mode 100644
index 0000000000..2169f04e51
--- /dev/null
+++ b/sources/tech/20190505 Five Methods To Check Your Current Runlevel In Linux.md
@@ -0,0 +1,183 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Five Methods To Check Your Current Runlevel In Linux?)
+[#]: via: (https://www.2daygeek.com/check-current-runlevel-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Five Methods To Check Your Current Runlevel In Linux?
+======
+
+A runlevel is an operating system state on a Linux system.
+
+There are seven runlevels, numbered from zero to six.
+
+A system can be booted into any of these runlevels, and each runlevel is identified by its number.
+
+Each runlevel designates a different system configuration and allows access to a different combination of processes.
+
+By default, Linux boots either to runlevel 3 or to runlevel 5.
+
+Only one runlevel is active at any given time on startup; the system does not step through them one after another.
+
+The default runlevel for a system is specified in the /etc/inittab file on SysVinit systems.
+
+systemd systems don’t read this file; they use `/etc/systemd/system/default.target` to determine the default target (runlevel).
+
+We can check the current runlevel of a Linux system using the five methods below.
+
+ * **`runlevel Command:`** runlevel prints the previous and current runlevel of the system.
+ * **`who Command:`** Print information about users who are currently logged in. It will print the runlevel information with the “-r” option.
+ * **`systemctl Command:`** It controls the systemd system and service manager.
+ * **`Using /etc/inittab File:`** The default runlevel for a system is specified in the /etc/inittab file for SysVinit System.
+ * **`Using /etc/systemd/system/default.target File:`** The default runlevel for a system is specified in the /etc/systemd/system/default.target file for systemd System.
+
+
+
+Detailed runlevels information is described in the below table.
+
+**Runlevel** | **SysVinit System** | **systemd System**
+---|---|---
+0 | Shutdown or halt the system | poweroff.target
+1 | Single user mode | rescue.target
+2 | Multiuser, without NFS | multi-user.target
+3 | Full multiuser mode | multi-user.target
+4 | Unused | multi-user.target
+5 | X11 (Graphical User Interface) | graphical.target
+6 | Reboot the system | reboot.target
+
+The system will execute the programs/service based on the runlevel.
+
+For a SysVinit system, the scripts are executed from the following locations.
+
+ * Run level 0 – /etc/rc.d/rc0.d/
+ * Run level 1 – /etc/rc.d/rc1.d/
+ * Run level 2 – /etc/rc.d/rc2.d/
+ * Run level 3 – /etc/rc.d/rc3.d/
+ * Run level 4 – /etc/rc.d/rc4.d/
+ * Run level 5 – /etc/rc.d/rc5.d/
+ * Run level 6 – /etc/rc.d/rc6.d/
+
+
+
+For a systemd system, the targets map to the following locations.
+
+ * runlevel1.target – /etc/systemd/system/rescue.target
+ * runlevel2.target – /etc/systemd/system/multi-user.target.wants
+ * runlevel3.target – /etc/systemd/system/multi-user.target.wants
+ * runlevel4.target – /etc/systemd/system/multi-user.target.wants
+ * runlevel5.target – /etc/systemd/system/graphical.target.wants
+
+
+
+### 1) How To Check Your Current Runlevel In Linux Using runlevel Command?
+
+runlevel prints the previous and current runlevel of the system.
+
+```
+$ runlevel
+N 5
+```
+
+ * **`N:`** “N” indicates that there is no previous runlevel, i.e. the runlevel has not been changed since the system was booted.
+ * **`5:`** “5” indicates the current runlevel of the system.
+
+
+
+### 2) How To Check Your Current Runlevel In Linux Using who Command?
+
+Print information about users who are currently logged in. It will print the runlevel information with the `-r` option.
+
+```
+$ who -r
+ run-level 5 2019-04-22 09:32
+```
+
+### 3) How To Check Your Current Runlevel In Linux Using systemctl Command?
+
+systemctl is used to control the systemd system and service manager. systemd is a system and service manager for Unix-like operating systems.
+
+It can work as a drop-in replacement for the SysVinit system. systemd is the first process started by the kernel and holds PID 1.
+
+systemd uses `.service` unit files instead of the bash init scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups; you can see the hierarchy by exploring the systemd cgroup tree (typically under `/sys/fs/cgroup/systemd/`).
+
+```
+$ systemctl get-default
+graphical.target
+```
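+
+A related tip: the same tool can also change the default target. For example, to make the system boot into the non-graphical multi-user target (roughly runlevel 3), you could run the command below; systemd then updates the /etc/systemd/system/default.target symlink accordingly.
+
+```
+$ sudo systemctl set-default multi-user.target
+```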
+
+### 4) How To Check Your Current Runlevel In Linux Using /etc/inittab File?
+
+The default runlevel for a system is specified in the /etc/inittab file on a SysVinit system, but systemd doesn’t read this file.
+
+So, this method works only on SysVinit systems and not on systemd systems.
+
+```
+$ cat /etc/inittab
+# inittab is only used by upstart for the default runlevel.
+#
+# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
+#
+# System initialization is started by /etc/init/rcS.conf
+#
+# Individual runlevels are started by /etc/init/rc.conf
+#
+# Ctrl-Alt-Delete is handled by /etc/init/control-alt-delete.conf
+#
+# Terminal gettys are handled by /etc/init/tty.conf and /etc/init/serial.conf,
+# with configuration in /etc/sysconfig/init.
+#
+# For information on how to write upstart event handlers, or how
+# upstart works, see init(5), init(8), and initctl(8).
+#
+# Default runlevel. The runlevels used are:
+# 0 - halt (Do NOT set initdefault to this)
+# 1 - Single user mode
+# 2 - Multiuser, without NFS (The same as 3, if you do not have networking)
+# 3 - Full multiuser mode
+# 4 - unused
+# 5 - X11
+# 6 - reboot (Do NOT set initdefault to this)
+#
+id:5:initdefault:
+```
+
+### 5) How To Check Your Current Runlevel In Linux Using /etc/systemd/system/default.target File?
+
+The default runlevel for a system is specified in the /etc/systemd/system/default.target file on a systemd system.
+
+It doesn’t work on a SysVinit system.
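+
+On most installations this file is just a symbolic link to the real target unit, so you can also resolve it directly; the command below is a quick sanity check (the resolved path varies by distribution):
+
+```
+$ readlink -f /etc/systemd/system/default.target
+```
+
+Or simply display its contents, as shown below.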
+
+```
+$ cat /etc/systemd/system/default.target
+# This file is part of systemd.
+#
+# systemd is free software; you can redistribute it and/or modify it
+# under the terms of the GNU Lesser General Public License as published by
+# the Free Software Foundation; either version 2.1 of the License, or
+# (at your option) any later version.
+
+[Unit]
+Description=Graphical Interface
+Documentation=man:systemd.special(7)
+Requires=multi-user.target
+Wants=display-manager.service
+Conflicts=rescue.service rescue.target
+After=multi-user.target rescue.service rescue.target display-manager.service
+AllowIsolate=yes
+```
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/check-current-runlevel-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190505 How To Create SSH Alias In Linux.md b/sources/tech/20190505 How To Create SSH Alias In Linux.md
new file mode 100644
index 0000000000..3ea1a77b7a
--- /dev/null
+++ b/sources/tech/20190505 How To Create SSH Alias In Linux.md
@@ -0,0 +1,209 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Create SSH Alias In Linux)
+[#]: via: (https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+How To Create SSH Alias In Linux
+======
+
+![How To Create SSH Alias In Linux][1]
+
+If you frequently access a lot of different remote systems via SSH, this trick will save you some time. You can create SSH aliases for the systems you access frequently, so you don’t need to remember all the different usernames, hostnames, SSH port numbers and IP addresses. Additionally, it avoids the need to repetitively type the same username/hostname, IP address and port number whenever you SSH into a Linux server.
+
+### Create SSH Alias In Linux
+
+Before I knew this trick, I usually connected to a remote system over SSH using one of the following ways.
+
+Using IP address:
+
+```
+$ ssh 192.168.225.22
+```
+
+Or using port number, username and IP address:
+
+```
+$ ssh -p 22 sk@192.168.225.22
+```
+
+Or using port number, username and hostname:
+
+```
+$ ssh -p 22 sk@server.example.com
+```
+
+Here,
+
+ * **22** is the port number,
+ * **sk** is the username of the remote system,
+ * **192.168.225.22** is the IP of my remote system,
+ * **server.example.com** is the hostname of remote system.
+
+
+
+I believe most newbie Linux users and/or admins would SSH into a remote system this way. However, if you SSH into multiple different systems, remembering all the hostnames/IP addresses and usernames is a bit difficult unless you write them down on paper or save them in a text file. No worries! This can be easily solved by creating an alias (or shortcut) for SSH connections.
+
+We can create an alias for SSH commands in two methods.
+
+##### Method 1 – Using SSH Config File
+
+This is my preferred way of creating aliases.
+
+We can use SSH default configuration file to create SSH alias. To do so, edit **~/.ssh/config** file (If this file doesn’t exist, just create one):
+
+```
+$ vi ~/.ssh/config
+```
+
+Add all of your remote hosts details like below:
+
+```
+Host webserver
+ HostName 192.168.225.22
+ User sk
+
+Host dns
+ HostName server.example.com
+ User root
+
+Host dhcp
+ HostName 192.168.225.25
+ User ostechnix
+ Port 2233
+```
+
+![][2]
+
+Create SSH Alias In Linux Using SSH Config File
+
+Replace the values of **Host**, **HostName**, **User** and **Port** with your own. Once you have added the details of all remote hosts, save and exit the file.
+
+Now you can SSH into the systems with commands:
+
+```
+$ ssh webserver
+
+$ ssh dns
+
+$ ssh dhcp
+```
+
+It is as simple as that.
+
+Have a look at the following screenshot.
+
+![][3]
+
+Access remote system using SSH alias
+
+See? I only used the alias name (i.e **webserver** ) to access my remote system that has IP address **192.168.225.22**.
+
+Please note that this applies to the current user only. If you want to make the aliases available for all users (system wide), add the above lines to the **/etc/ssh/ssh_config** file.
+
+You can also add plenty of other things in the SSH config file. For example, if you have [**configured SSH Key-based authentication**][4], mention the SSH keyfile location as below.
+
+```
+Host ubuntu
+ HostName 192.168.225.50
+ User senthil
+ IdentityFile ~/.ssh/id_rsa_remotesystem
+```
+
+Make sure you have replaced the hostname, username and SSH keyfile path with your own.
+
+Now connect to the remote server with command:
+
+```
+$ ssh ubuntu
+```
+
+This way you can add as many remote hosts as you want to access over SSH and quickly access them using their alias names. A couple of optional extras are sketched below.
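+
+You can also set defaults that apply to every host by adding a `Host *` block at the end of the same file. The snippet below is only a sketch of commonly used options (the values are examples; adjust them to your needs):
+
+```
+Host *
+    User sk
+    ServerAliveInterval 60
+    ServerAliveCountMax 3
+```
+
+Per-host entries defined above the wildcard block still take precedence for the options they set, because ssh uses the first value it obtains for each option.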
+
+##### Method 2 – Using Bash aliases
+
+This is a quick and dirty way to create SSH aliases for faster access. You can use the [**alias command**][5] to make this task much easier.
+
+Open **~/.bashrc** or **~/.bash_profile** file:
+
+Add aliases for each SSH connections one by one like below.
+
+```
+alias webserver='ssh sk@192.168.225.22'
+alias dns='ssh root@server.example.com'
+alias dhcp='ssh ostechnix@192.168.225.25 -p 2233'
+alias ubuntu='ssh senthil@192.168.225.50 -i ~/.ssh/id_rsa_remotesystem'
+```
+
+Again, make sure you have replaced the usernames, hostnames, port numbers and IP addresses with your own. Save the file and exit.
+
+Then, apply the changes using command:
+
+```
+$ source ~/.bashrc
+```
+
+Or,
+
+```
+$ source ~/.bash_profile
+```
+
+In this method, you don’t even need to type “ssh alias-name”. Instead, just use the alias name alone, like below.
+
+```
+$ webserver
+$ dns
+$ dhcp
+$ ubuntu
+```
+
+![][6]
+
+These two methods are very simple, yet useful and much more convenient for those who often SSH into multiple different systems. Use whichever of the aforementioned methods suits you to quickly access your remote Linux systems over SSH.
+
+* * *
+
+**Suggested read:**
+
+ * [**Allow Or Deny SSH Access To A Particular User Or Group In Linux**][7]
+ * [**How To SSH Into A Particular Directory On Linux**][8]
+ * [**How To Stop SSH Session From Disconnecting In Linux**][9]
+ * [**4 Ways To Keep A Command Running After You Log Out Of The SSH Session**][10]
+ * [**SSLH – Share A Same Port For HTTPS And SSH**][11]
+
+
+
+* * *
+
+And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
+
+Cheers!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/ssh-alias-720x340.png
+[2]: http://www.ostechnix.com/wp-content/uploads/2019/04/Create-SSH-Alias-In-Linux.png
+[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias.png
+[4]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
+[5]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
+[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias-1.png
+[7]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
+[8]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
+[9]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
+[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
+[11]: https://www.ostechnix.com/sslh-share-port-https-ssh/
diff --git a/sources/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md b/sources/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md
new file mode 100644
index 0000000000..5b42159f08
--- /dev/null
+++ b/sources/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md
@@ -0,0 +1,338 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Install/Uninstall Listed Packages From A File In Linux?)
+[#]: via: (https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-file-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+How To Install/Uninstall Listed Packages From A File In Linux?
+======
+
+In some cases you may want to install the same list of packages on more than one server.
+
+For example, you have installed 15 packages on ServerA, and the same packages need to be installed on ServerB, ServerC, and so on.
+
+We could install all the packages manually, but that is a time-consuming process.
+
+It can be done by hand for one or two servers, but think about what happens when you have around 10 servers.
+
+In that case the manual approach doesn’t help you, so what is the solution?
+
+Don’t worry, we are here to help you out in this situation.
+
+We have added four methods in this article to overcome this situation.
+
+I hope this will help you to fix your issue. I have tested these commands on CentOS 7 and Ubuntu 18.04 systems.
+
+I hope this will work with other distributions too. Just replace the commands with your distribution’s official package manager command.
+
+Navigate to the following article if you want to **[check list of installed packages in Linux system][1]**.
+
+For example, if you would like to create a package list from a RHEL based system, use the following steps. Do the same for other distributions as well (a Debian/Ubuntu sketch follows the output below).
+
+```
+# rpm -qa --last | head -15 | awk '{print $1}' > /tmp/pack1.txt
+
+# cat /tmp/pack1.txt
+mariadb-server-5.5.60-1.el7_5.x86_64
+perl-DBI-1.627-4.el7.x86_64
+perl-DBD-MySQL-4.023-6.el7.x86_64
+perl-PlRPC-0.2020-14.el7.noarch
+perl-Net-Daemon-0.48-5.el7.noarch
+perl-IO-Compress-2.061-2.el7.noarch
+perl-Compress-Raw-Zlib-2.061-4.el7.x86_64
+mariadb-5.5.60-1.el7_5.x86_64
+perl-Data-Dumper-2.145-3.el7.x86_64
+perl-Compress-Raw-Bzip2-2.061-3.el7.x86_64
+httpd-2.4.6-88.el7.centos.x86_64
+mailcap-2.1.41-2.el7.noarch
+httpd-tools-2.4.6-88.el7.centos.x86_64
+apr-util-1.5.2-6.el7.x86_64
+apr-1.4.8-3.el7_4.1.x86_64
+```
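+
+On a Debian/Ubuntu system, a rough equivalent for building such a list could look like this (shown only as an illustration; dpkg does not expose install order the way `rpm -qa --last` does, so this simply dumps the installed package names):
+
+```
+# dpkg-query -W -f='${binary:Package}\n' > /tmp/pack1.txt
+
+# head -5 /tmp/pack1.txt
+```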
+
+### Method-1 : How To Install Listed Packages From A File In Linux With Help Of cat Command?
+
+To achieve this, I would like to go with this first method, as it is very simple and straightforward.
+
+To do so, just create a file and add the list of packages that you want to install to it.
+
+For testing purpose, we are going to add only the below three packages into the following file.
+
+```
+# cat /tmp/pack1.txt
+
+apache2
+mariadb-server
+nano
+```
+
+Simply run the following **[apt command][2]** to install all the packages in a single shot from a file in Ubuntu/Debian systems.
+
+```
+# apt -y install $(cat /tmp/pack1.txt)
+
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+The following packages were automatically installed and are no longer required:
+ libopts25 sntp
+Use 'sudo apt autoremove' to remove them.
+Suggested packages:
+ apache2-doc apache2-suexec-pristine | apache2-suexec-custom spell
+The following NEW packages will be installed:
+ apache2 mariadb-server nano
+0 upgraded, 3 newly installed, 0 to remove and 24 not upgraded.
+Need to get 339 kB of archives.
+After this operation, 1,377 kB of additional disk space will be used.
+Get:1 http://in.archive.ubuntu.com/ubuntu bionic-updates/main amd64 apache2 amd64 2.4.29-1ubuntu4.6 [95.1 kB]
+Get:2 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 nano amd64 2.9.3-2 [231 kB]
+Get:3 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 mariadb-server all 1:10.1.38-0ubuntu0.18.04.1 [12.9 kB]
+Fetched 339 kB in 19s (18.0 kB/s)
+Selecting previously unselected package apache2.
+(Reading database ... 290926 files and directories currently installed.)
+Preparing to unpack .../apache2_2.4.29-1ubuntu4.6_amd64.deb ...
+Unpacking apache2 (2.4.29-1ubuntu4.6) ...
+Selecting previously unselected package nano.
+Preparing to unpack .../nano_2.9.3-2_amd64.deb ...
+Unpacking nano (2.9.3-2) ...
+Selecting previously unselected package mariadb-server.
+Preparing to unpack .../mariadb-server_1%3a10.1.38-0ubuntu0.18.04.1_all.deb ...
+Unpacking mariadb-server (1:10.1.38-0ubuntu0.18.04.1) ...
+Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
+Setting up apache2 (2.4.29-1ubuntu4.6) ...
+Processing triggers for ureadahead (0.100.0-20) ...
+Processing triggers for install-info (6.5.0.dfsg.1-2) ...
+Setting up nano (2.9.3-2) ...
+update-alternatives: using /bin/nano to provide /usr/bin/editor (editor) in auto mode
+update-alternatives: using /bin/nano to provide /usr/bin/pico (pico) in auto mode
+Processing triggers for systemd (237-3ubuntu10.20) ...
+Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
+Setting up mariadb-server (1:10.1.38-0ubuntu0.18.04.1) ...
+```
+
+For removal, use the same format with appropriate option.
+
+```
+# apt -y remove $(cat /tmp/pack1.txt)
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+The following packages were automatically installed and are no longer required:
+ apache2-bin apache2-data apache2-utils galera-3 libaio1 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig-inifiles-perl libdbd-mysql-perl libdbi-perl libjemalloc1 liblua5.2-0
+ libmysqlclient20 libopts25 libterm-readkey-perl mariadb-client-10.1 mariadb-client-core-10.1 mariadb-common mariadb-server-10.1 mariadb-server-core-10.1 mysql-common sntp socat
+Use 'apt autoremove' to remove them.
+The following packages will be REMOVED:
+ apache2 mariadb-server nano
+0 upgraded, 0 newly installed, 3 to remove and 24 not upgraded.
+After this operation, 1,377 kB disk space will be freed.
+(Reading database ... 291046 files and directories currently installed.)
+Removing apache2 (2.4.29-1ubuntu4.6) ...
+Removing mariadb-server (1:10.1.38-0ubuntu0.18.04.1) ...
+Removing nano (2.9.3-2) ...
+update-alternatives: using /usr/bin/vim.tiny to provide /usr/bin/editor (editor) in auto mode
+Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
+Processing triggers for install-info (6.5.0.dfsg.1-2) ...
+Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
+```
+
+Use the following **[yum command][3]** to install listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
+
+```
+# yum -y install $(cat /tmp/pack1.txt)
+```
+
+Use the following format to uninstall listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
+
+```
+# yum -y remove $(cat /tmp/pack1.txt)
+```
+
+Use the following **[dnf command][4]** to install listed packages from a file on Fedora system.
+
+```
+# dnf -y install $(cat /tmp/pack1.txt)
+```
+
+Use the following format to uninstall listed packages from a file on Fedora system.
+
+```
+# dnf -y remove $(cat /tmp/pack1.txt)
+```
+
+Use the following **[zypper command][5]** to install listed packages from a file on openSUSE system.
+
+```
+# zypper --non-interactive install $(cat /tmp/pack1.txt)
+```
+
+Use the following format to uninstall listed packages from a file on openSUSE system.
+
+```
+# zypper --non-interactive remove $(cat /tmp/pack1.txt)
+```
+
+Use the following **[pacman command][6]** to install listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
+
+```
+# pacman -S $(cat /tmp/pack1.txt)
+```
+
+Use the following format to uninstall listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
+
+```
+# pacman -Rs $(cat /tmp/pack1.txt)
+```
+
+### Method-2 : How To Install Listed Packages From A File In Linux With Help Of cat And xargs Command?
+
+I even prefer to go with this method, because it is very simple and straightforward.
+
+Use the following apt command to install listed packages from a file on Debian based systems such as Debian, Ubuntu and Linux Mint.
+
+```
+# cat /tmp/pack1.txt | xargs apt -y install
+```
+
+Use the following apt command to uninstall listed packages from a file on Debian based systems such as Debian, Ubuntu and Linux Mint.
+
+```
+# cat /tmp/pack1.txt | xargs apt -y remove
+```
+
+Use the following yum command to install listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
+
+```
+# cat /tmp/pack1.txt | xargs yum -y install
+```
+
+Use the following format to uninstall listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
+
+```
+# cat /tmp/pack1.txt | xargs yum -y remove
+```
+
+Use the following dnf command to install listed packages from a file on Fedora system.
+
+```
+# cat /tmp/pack1.txt | xargs dnf -y install
+```
+
+Use the following format to uninstall listed packages from a file on Fedora system.
+
+```
+# cat /tmp/pack1.txt | xargs dnf -y remove
+```
+
+Use the following zypper command to install listed packages from a file on openSUSE system.
+
+```
+# cat /tmp/pack1.txt | xargs zypper --non-interactive install
+```
+
+Use the following format to uninstall listed packages from a file on openSUSE system.
+
+```
+# cat /tmp/pack1.txt | xargs zypper --non-interactive remove
+```
+
+Use the following pacman command to install listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
+
+```
+# cat /tmp/pack1.txt | xargs pacman -S
+```
+
+Use the following format to uninstall listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
+
+```
+# cat /tmp/pack1.txt | xargs pacman -Rs
+```
+
+### Method-3 : How To Install Listed Packages From A File In Linux With Help Of For Loop Command?
+
+Alternatively we can use the “For Loop” command to achieve this.
+
+To install packages in bulk, use the below format to run a “For Loop” as a single line.
+
+```
+# for pack in `cat /tmp/pack1.txt` ; do apt -y install $pack; done
+```
+
+To install packages in bulk with a shell script, use the following “For Loop”.
+
+```
+# vi /opt/scripts/bulk-package-install.sh
+
+#!/bin/bash
+for pack in `cat /tmp/pack1.txt`
+do apt -y install $pack
+done
+```
+
+Set an executable permission to `bulk-package-install.sh` file.
+
+```
+# chmod +x /opt/scripts/bulk-package-install.sh
+```
+
+Finally run the script to achieve this.
+
+```
+# sh /opt/scripts/bulk-package-install.sh
+```
+
+### Method-4 : How To Install Listed Packages From A File In Linux With Help Of While Loop Command?
+
+Alternatively we can use the “While Loop” command to achieve this.
+
+To install packages in bulk, use the below format to run a “While Loop” as a single line.
+
+```
+# file="/tmp/pack1.txt"; while read -r pack; do apt -y install $pack; done < "$file"
+```
+
+To install packages in bulk with a shell script, use the following “While Loop”.
+
+```
+# vi /opt/scripts/bulk-package-install.sh
+
+#!/bin/bash
+file="/tmp/pack1.txt"
+while read -r pack
+do apt -y install $pack
+done < "$file"
+```
+
+Set an executable permission to `bulk-package-install.sh` file.
+
+```
+# chmod +x /opt/scripts/bulk-package-install.sh
+```
+
+Finally run the script to achieve this.
+
+```
+# sh /opt/scripts/bulk-package-install.sh
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-file-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/check-installed-packages-in-rhel-centos-fedora-debian-ubuntu-opensuse-arch-linux/
+[2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[5]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[6]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
diff --git a/sources/tech/20190505 How To Navigate Directories Faster In Linux.md b/sources/tech/20190505 How To Navigate Directories Faster In Linux.md
new file mode 100644
index 0000000000..e0979b3915
--- /dev/null
+++ b/sources/tech/20190505 How To Navigate Directories Faster In Linux.md
@@ -0,0 +1,350 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Navigate Directories Faster In Linux)
+[#]: via: (https://www.ostechnix.com/navigate-directories-faster-linux/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+How To Navigate Directories Faster In Linux
+======
+
+![Navigate Directories Faster In Linux][1]
+
+Today we are going to learn some command line productivity hacks. As you already know, we use the “cd” command to move between a stack of directories in Unix-like operating systems. In this guide I am going to teach you how to navigate directories faster without having to use the “cd” command often. There could be many ways, but I only know the following five methods right now! I will keep updating this guide whenever I come across any new methods or utilities to achieve this task in the days to come.
+
+### Five Different Methods To Navigate Directories Faster In Linux
+
+##### Method 1: Using “Pushd”, “Popd” And “Dirs” Commands
+
+This is the most frequent method that I use every day to navigate between a stack of directories. The “Pushd”, “Popd”, and “Dirs” commands come pre-installed in most Linux distributions, so don’t bother with installation. This trio of commands is quite useful when you’re working in a deep directory structure or in scripts. For more details, check our guide in the link given below.
+
+ * **[How To Use Pushd, Popd And Dirs Commands For Faster CLI Navigation][2]**
+
+
+
+##### Method 2: Using “bd” utility
+
+The “bd” utility also helps you quickly go back to a specific parent directory without having to repeatedly type “cd ../../..” in your Bash session.
+
+Bd is also available in the [**Debian extra**][3] and [**Ubuntu universe**][4] repositories. So, you can install it using “apt-get” package manager in Debian, Ubuntu and other DEB based systems as shown below:
+
+```
+$ sudo apt-get update
+
+$ sudo apt-get install bd
+```
+
+For other distributions, you can install as shown below.
+
+```
+$ sudo wget --no-check-certificate -O /usr/local/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
+
+$ sudo chmod +rx /usr/local/bin/bd
+
+$ echo 'alias bd=". bd -si"' >> ~/.bashrc
+
+$ source ~/.bashrc
+```
+
+To enable auto completion, run:
+
+```
+$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
+
+$ source /etc/bash_completion.d/bd
+```
+
+The Bd utility has now been installed. Let us see few examples to understand how to quickly move through stack of directories using this tool.
+
+Create some directories.
+
+```
+$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10
+```
+
+The above command will create a hierarchy of directories. Let us check [**directory structure**][5] using command:
+
+```
+$ tree dir1/
+dir1/
+└── dir2
+ └── dir3
+ └── dir4
+ └── dir5
+ └── dir6
+ └── dir7
+ └── dir8
+ └── dir9
+ └── dir10
+
+9 directories, 0 files
+```
+
+Alright, we now have 10 directories. Let us say you’re currently in the 7th directory, i.e. dir7.
+
+```
+$ pwd
+/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7
+```
+
+You want to move to dir3. Normally you would type:
+
+```
+$ cd /home/sk/dir1/dir2/dir3
+```
+
+Right? Yes! But it is not necessary! To go back to dir3, just type:
+
+```
+$ bd dir3
+```
+
+Now you will be in dir3.
+
+![][6]
+
+Navigate Directories Faster In Linux Using “bd” Utility
+
+Easy, isn’t it? It supports auto complete, so you can just type the partial name of a directory and hit the tab key to auto complete the full path.
+
+To check the contents of a specific parent directory, you don’t need to be inside that particular directory. Instead, just type:
+
+```
+$ ls `bd dir1`
+```
+
+The above command will display the contents of dir1 from your current working directory.
+
+For more details, check out the following GitHub page.
+
+ * [**bd GitHub repository**][7]
+
+
+
+##### Method 3: Using “Up” Shell script
+
+“Up” is a shell script that allows you to move quickly to a parent directory. It works well on many popular shells such as Bash, Fish, and Zsh. Installation is absolutely easy too!
+
+To install “Up” on **Bash**, run the following commands one by one:
+
+```
+$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
+
+$ echo 'source ~/.config/up/up.sh' >> ~/.bashrc
+```
+
+The up script registers the “up” function and some completion functions via your “.bashrc” file.
+
+Update the changes using command:
+
+```
+$ source ~/.bashrc
+```
+
+On **zsh** :
+
+```
+$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
+
+$ echo 'source ~/.config/up/up.sh' >> ~/.zshrc
+```
+
+The up script registers the “up” function and some completion functions via your “.zshrc” file.
+
+Update the changes using command:
+
+```
+$ source ~/.zshrc
+```
+
+On **fish** :
+
+```
+$ curl --create-dirs -o ~/.config/up/up.fish https://raw.githubusercontent.com/shannonmoeller/up/master/up.fish
+
+$ source ~/.config/up/up.fish
+```
+
+The up script registers the “up” function and some completion functions via “funcsave”.
+
+Now it is time to see some examples.
+
+Let us create some directories.
+
+```
+$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10
+```
+
+Let us say you’re in the 7th directory, i.e. dir7.
+
+```
+$ pwd
+/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7
+```
+
+You want to move to dir3. Using “cd” command, we can do this by typing the following command:
+
+```
+$ cd /home/sk/dir1/dir2/dir3
+```
+
+But it is really easy to go back to dir3 using “up” script:
+
+```
+$ up dir3
+```
+
+That’s it. Now you will be in dir3. To go one directory up, just type:
+
+```
+$ up 1
+```
+
+To go back two directories, type:
+
+```
+$ up 2
+```
+
+It’s that simple. Did I type the full path? Nope. Also it supports tab completion. So just type the partial directory name and hit the tab to complete the full path.
+
+For more details, check out the GitHub page.
+
+ * [**Up GitHub Repository**][8]
+
+
+
+Please be mindful that the “bd” and “up” tools can only help you go backward, i.e. to a parent directory of the current working directory. You can’t move forward. If you want to switch to dir10 from dir5, you can’t! Instead, you need to use the “cd” command to switch to dir10. These two utilities are meant for quickly moving you up to a parent directory! A minimal do-it-yourself alternative is sketched below.
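+
+If all you need is the numeric “go N levels up” behaviour and you would rather not install anything, a tiny hand-rolled function is enough. This is only a sketch to illustrate the idea (no tab completion, unlike the real “up” script):
+
+```
+# Add to ~/.bashrc: jump N levels up the directory tree (default 1).
+up() {
+    local levels=${1:-1} path=""
+    for ((i = 0; i < levels; i++)); do
+        path="../$path"
+    done
+    cd "$path" || return
+}
+```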
+
+##### Method 4: Using “Shortcut” tool
+
+This is yet another handy method to switch between different directories quickly and easily. It is somewhat similar to the [**alias**][9] command. In this method, we create shortcuts to frequently used directories and use the shortcut name to go to the respective directory without having to type the path. If you’re working in a deep directory structure and a stack of directories, this method will save you quite some time. You can learn how it works in the guide given below.
+
+ * [**Create Shortcuts To The Frequently Used Directories In Your Shell**][10]
+
+
+
+##### Method 5: Using “CDPATH” Environment variable
+
+This method doesn’t require any installation. **CDPATH** is an environment variable. It is somewhat similar to **PATH** variable which contains many different paths concatenated using **‘:’** (colon). The main difference between PATH and CDPATH variables is the PATH variable is usable with all commands whereas CDPATH works only for **cd** command.
+
+I have the following directory structure.
+
+![][11]
+
+Directory structure
+
+As you see, there are four child directories under a parent directory named “ostechnix”.
+
+Now add this parent directory to CDPATH using command:
+
+```
+$ export CDPATH=~/ostechnix
+```
+
+You now can instantly cd to the sub-directories of the parent directory (i.e **~/ostechnix** in our case) from anywhere in the filesystem.
+
+For instance, currently I am in **/var/mail/** location.
+
+![][12]
+
+To cd into **~/ostechnix/Linux/** directory, we don’t have to use the full path of the directory as shown below:
+
+```
+$ cd ~/ostechnix/Linux
+```
+
+Instead, just mention the name of the sub-directory you want to switch to:
+
+```
+$ cd Linux
+```
+
+It will automatically cd to **~/ostechnix/Linux** directory instantly.
+
+![][13]
+
+As you can see in the above output, I didn’t use the full path of the directory with “cd”. Instead, I just used the “cd Linux” command.
+
+Please note that CDPATH only allows you to quickly navigate to the immediate child directories of the parent directories set in the CDPATH variable. It doesn’t help much for navigating a whole stack of directories (directories inside sub-directories, of course).
+
+To find the values of CDPATH variable, run:
+
+```
+$ echo $CDPATH
+```
+
+Sample output would be:
+
+```
+/home/sk/ostechnix
+```
+
+**Set multiple values to CDPATH**
+
+Similar to PATH variable, we can also set multiple values (more than one directory) to CDPATH separated by colon (:).
+
+```
+$ export CDPATH=.:~/ostechnix:/etc:/var:/opt
+```
+
+**Make the changes persistent**
+
+As you already know, the above command (export) will only keep the values of CDPATH until next reboot. To permanently set the values of CDPATH, just add them to your **~/.bashrc** or **~/.bash_profile** files.
+
+```
+$ vi ~/.bash_profile
+```
+
+Add the values:
+
+```
+export CDPATH=.:~/ostechnix:/etc:/var:/opt
+```
+
+Hit **ESC** key and type **:wq** to save and exit.
+
+Apply the changes using command:
+
+```
+$ source ~/.bash_profile
+```
+
+**Clear CDPATH**
+
+To clear the values of CDPATH, use **export CDPATH=””**. Or, simply delete the entire line from **~/.bashrc** or **~/.bash_profile** files.
+
+In this article, you have learned different ways to navigate a directory stack faster and more easily in Linux. As you can see, it’s not that difficult to browse a pile of directories faster. Now stop typing “cd ../../..” endlessly and use these tools instead. If you know any other tool or method worth trying to navigate directories faster, feel free to let us know in the comment section below. I will review and add them to this guide.
+
+And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned!
+
+Cheers!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/navigate-directories-faster-linux/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2017/12/Navigate-Directories-Faster-In-Linux-720x340.png
+[2]: https://www.ostechnix.com/use-pushd-popd-dirs-commands-faster-cli-navigation/
+[3]: https://tracker.debian.org/pkg/bd
+[4]: https://launchpad.net/ubuntu/+source/bd
+[5]: https://www.ostechnix.com/view-directory-tree-structure-linux/
+[6]: http://www.ostechnix.com/wp-content/uploads/2017/12/Navigate-Directories-Faster-1.png
+[7]: https://github.com/vigneshwaranr/bd
+[8]: https://github.com/shannonmoeller/up
+[9]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
+[10]: https://www.ostechnix.com/create-shortcuts-frequently-used-directories-shell/
+[11]: http://www.ostechnix.com/wp-content/uploads/2018/12/tree-command-output.png
+[12]: http://www.ostechnix.com/wp-content/uploads/2018/12/pwd-command.png
+[13]: http://www.ostechnix.com/wp-content/uploads/2018/12/cdpath.png
diff --git a/sources/tech/20190505 Kindd - A Graphical Frontend To dd Command.md b/sources/tech/20190505 Kindd - A Graphical Frontend To dd Command.md
new file mode 100644
index 0000000000..59dcfd2ffa
--- /dev/null
+++ b/sources/tech/20190505 Kindd - A Graphical Frontend To dd Command.md
@@ -0,0 +1,156 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Kindd – A Graphical Frontend To dd Command)
+[#]: via: (https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+Kindd – A Graphical Frontend To dd Command
+======
+
+![Kindd - A Graphical Frontend To dd Command][1]
+
+A while ago we learned how to [**create bootable ISO using dd command**][2] in Unix-like systems. Please keep in mind that the dd command is one of the most dangerous and destructive commands. If you’re not sure what you are actually doing, you might accidentally wipe your hard drive in minutes. The dd command just takes bytes from **if** and writes them to **of**. It won’t care what it’s overwriting, it won’t care if there’s a partition table in the way, or a boot sector, or a home folder, or anything important. It will simply do what it is told to do. If you’re a beginner, try to avoid using the dd command for such tasks. Thankfully, there is a simple GUI utility for the dd command. Say hello to **“Kindd”**, a graphical frontend to the dd command. It is a free, open source tool written in **Qt Quick**. This tool can be very helpful for beginners and for those who are not comfortable with the command line in general.
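+
+For context, this is roughly the kind of command Kindd wraps for you. Run by hand, a single wrong `of=` value is enough to destroy a disk; the ISO name and device below are placeholders, so double-check the target device with `lsblk` before running anything like this:
+
+```
+$ sudo dd if=ubuntu.iso of=/dev/sdX bs=4M status=progress && sync
+```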
+
+The developer created this tool mainly to provide,
+
+ 1. a modern, simple and safe graphical user interface for dd command,
+ 2. a graphical way to easily create bootable device without having to use Terminal.
+
+
+
+### Installing Kindd
+
+Kindd is available in [**AUR**][3]. So if you’re a Arch user, install it using any AUR helper tools, for example [**Yay**][4].
+
+To install Git version, run:
+
+```
+$ yay -S kindd-git
+```
+
+To install release version, run:
+
+```
+$ yay -S kindd
+```
+
+After installing, launch Kindd from the Menu or Application launcher.
+
+For other distributions, you need to manually compile and install it from source as shown below.
+
+Make sure you have installed the following prerequisites.
+
+ * git
+ * coreutils
+ * polkit
+ * qt5-base
+ * qt5-quickcontrols
+ * qt5-quickcontrols2
+ * qt5-graphicaleffects
+
+
+
+Once all prerequisites installed, git clone the Kindd repository:
+
+```
+git clone https://github.com/LinArcX/Kindd/
+```
+
+Go to the directory where you just cloned Kindd and compile and install it:
+
+```
+cd Kindd
+
+qmake
+
+make
+```
+
+Finally run the following command to launch Kindd application:
+
+```
+./kindd
+```
+
+Kindd uses **pkexec** internally. The pkexec agent is installed by default in most Desktop environments. But if you use **i3** (or maybe some other DE), you should install **polkit-gnome** first, and then paste the following line into your i3 config file:
+
+```
+exec /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1 &
+```
+
+### Create bootable ISO using Kindd
+
+To create a bootable USB from an ISO, plug in the USB drive. Then, launch Kindd either from the Menu or Terminal.
+
+This is how Kindd default interface looks like:
+
+![][5]
+
+Kindd interface
+
+As you can see, the Kindd interface is very simple and self-explanatory. There are just two sections, namely **List Devices**, which displays the list of available devices (hard disks and USB drives) on your system, and **Create Bootable .iso**. You will be in the “Create Bootable .iso” section by default.
+
+Enter the block size in the first column, select the path of the ISO file in the second column and choose the correct device (USB drive path) in third column. Click **Convert/Copy** button to start creating bootable ISO.
+
+![][6]
+
+Once the process is completed, you will see a success message.
+
+![][7]
+
+Now, unplug the USB drive and boot your system with USB to check if it really works.
+
+If you don’t know the actual device name (target path), just click on the List devices and check the USB drive name.
+
+![][8]
+
+* * *
+
+**Related read:**
+
+ * [**Etcher – A Beautiful App To Create Bootable SD Cards Or USB Drives**][9]
+ * [**Bootiso Lets You Safely Create Bootable USB Drive**][10]
+
+
+
+* * *
+
+Kindd is in its early development stage, so there may be bugs. If you find any, please report them on the project’s GitHub page given at the end of this guide.
+
+And, that’s all. Hope this was useful. More good stuffs to come. Stay tuned!
+
+Cheers!
+
+**Resource:**
+
+ * [**Kindd GitHub Repository**][11]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/kindd-720x340.png
+[2]: https://www.ostechnix.com/how-to-create-bootable-usb-drive-using-dd-command/
+[3]: https://aur.archlinux.org/packages/kindd-git/
+[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-interface.png
+[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-1.png
+[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-2.png
+[8]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-3.png
+[9]: https://www.ostechnix.com/etcher-beauitiful-app-create-bootable-sd-cards-usb-drives/
+[10]: https://www.ostechnix.com/bootiso-lets-you-safely-create-bootable-usb-drive/
+[11]: https://github.com/LinArcX/Kindd
diff --git a/sources/tech/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md b/sources/tech/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md
new file mode 100644
index 0000000000..38a1d6419b
--- /dev/null
+++ b/sources/tech/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md
@@ -0,0 +1,202 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux Shell Script To Monitor Disk Space Usage And Send Email)
+[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and-send-email/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Linux Shell Script To Monitor Disk Space Usage And Send Email
+======
+
+There are numerous monitoring tools available on the market to monitor Linux systems, and they can send an email when a system reaches a given threshold.
+
+They monitor everything, such as CPU utilization, memory utilization, swap utilization, disk space utilization and much more.
+
+However, such tools are suitable for both small and large environments.
+
+But think about it: if you have only a few systems, what would be the best approach?
+
+Yup, we want to write a **[shell script][1]** to achieve this.
+
+In this tutorial we are going to write a shell script to monitor disk space usage on a system.
+
+When the system reaches the given threshold, it will trigger an email to the corresponding address.
+
+We have added a total of four shell scripts in this article, and each is used for a different purpose.
+
+Later, we will come up with other shell scripts to monitor CPU, memory and swap utilization.
+
+Before stepping into that, I would like to clarify one thing I noticed regarding the disk space usage shell script.
+
+Many users have commented on multiple blogs saying that they were getting the following error message when running the disk space usage script.
+
+```
+# sh /opt/script/disk-usage-alert-old.sh
+
+/dev/mapper/vg_2g-lv_root
+test-script.sh: line 7: [: /dev/mapper/vg_2g-lv_root: integer expression expected
+/ 9.8G
+```
+
+Yes, that’s right. I faced the same issue the first time I ran the script. Later, I found the root cause.
+
+When you use “df -h” or “df -H” in a shell script for disk space alerts on RHEL 5 and RHEL 6 based systems, you will end up with the above error message because the output is not in the proper single-line format; see the output below.
+
+To overcome this issue, we need to use “df -Ph” (POSIX output format), whereas “df -h” works fine by default on RHEL 7 based systems.
+
+```
+# df -h
+
+Filesystem Size Used Avail Use% Mounted on
+/dev/mapper/vg_2g-lv_root
+ 10G 6.7G 3.4G 67% /
+tmpfs 7.8G 0 7.8G 0% /dev/shm
+/dev/sda1 976M 95M 830M 11% /boot
+/dev/mapper/vg_2g-lv_home
+ 5.0G 4.3G 784M 85% /home
+/dev/mapper/vg_2g-lv_tmp
+ 4.8G 14M 4.6G 1% /tmp
+```
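+
+For comparison, here is the same root filesystem as reported by `df -Ph`. The device name and its numbers stay on a single line, which is the format the scripts below rely on (output reconstructed from the example above, for illustration only):
+
+```
+# df -Ph /
+
+Filesystem                 Size  Used Avail Use% Mounted on
+/dev/mapper/vg_2g-lv_root   10G  6.7G  3.4G  67% /
+```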
+
+### Method-1 : Linux Shell Script To Monitor Disk Space Usage And Send Email
+
+You can use the following shell script to monitor disk space usage on Linux system.
+
+It will send an email when the system reaches the given threshold. In this example, we set the threshold at 60% for testing purposes; you can change this limit as per your requirements.
+
+It will send multiple emails if more than one file system crosses the given threshold, because the script uses a loop.
+
+Also, replace our email address with yours to receive these alerts.
+
+```
+# vi /opt/script/disk-usage-alert.sh
+
+#!/bin/sh
+df -Ph | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output;
+do
+ echo $output
+ used=$(echo $output | awk '{print $1}' | sed s/%//g)
+ partition=$(echo $output | awk '{print $2}')
+ if [ $used -ge 60 ]; then
+ echo "The partition \"$partition\" on $(hostname) has used $used% at $(date)" | mail -s "Disk Space Alert: $used% Used On $(hostname)" [email protected]
+ fi
+done
+```
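+
+Before scheduling it, you can run the script once by hand to confirm that mail is configured and that the parsing works. Because of the `echo $output` line, the script prints each filesystem as it checks it, so the run should look roughly like this (values taken from the example `df` output above):
+
+```
+# sh /opt/script/disk-usage-alert.sh
+67% /dev/mapper/vg_2g-lv_root
+11% /dev/sda1
+85% /dev/mapper/vg_2g-lv_home
+1% /dev/mapper/vg_2g-lv_tmp
+```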
+
+**Output:** I got the following two email alerts.
+
+```
+The partition "/dev/mapper/vg_2g-lv_home" on 2g.CentOS7 has used 85% at Mon Apr 29 06:16:14 IST 2019
+
+The partition "/dev/mapper/vg_2g-lv_root" on 2g.CentOS7 has used 67% at Mon Apr 29 06:16:14 IST 2019
+```
+
+Finally, add a **[cronjob][2]** to automate this. It will run every 10 minutes.
+
+```
+# crontab -e
+*/10 * * * * /bin/bash /opt/script/disk-usage-alert.sh
+```
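+
+To confirm that the entry was saved, list the current crontab:
+
+```
+# crontab -l
+*/10 * * * * /bin/bash /opt/script/disk-usage-alert.sh
+```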
+
+### Method-2 : Linux Shell Script To Monitor Disk Space Usage And Send Email
+
+Alternatively, you can use the following shell script. We have made a few changes to it compared with the script above.
+
+```
+# vi /opt/script/disk-usage-alert-1.sh
+
+#!/bin/sh
+df -Ph | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output;
+do
+ max=60%
+ echo $output
+ used=$(echo $output | awk '{print $1}')
+ partition=$(echo $output | awk '{print $2}')
+ if [ ${used%?} -ge ${max%?} ]; then
+ echo "The partition \"$partition\" on $(hostname) has used $used at $(date)" | mail -s "Disk Space Alert: $used Used On $(hostname)" [email protected]
+ fi
+done
+```
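+
+The main change is the `${used%?}` and `${max%?}` parameter expansions, which strip the trailing `%` sign so that the two values can be compared as numbers. A quick way to see what `%?` does (it removes the last character of a variable's value):
+
+```
+$ used=85%; max=60%
+$ echo "${used%?} -ge ${max%?}"
+85 -ge 60
+```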
+
+**Output:** I got the following two email alerts.
+
+```
+The partition "/dev/mapper/vg_2g-lv_home" on 2g.CentOS7 has used 85% at Mon Apr 29 06:16:14 IST 2019
+
+The partition "/dev/mapper/vg_2g-lv_root" on 2g.CentOS7 has used 67% at Mon Apr 29 06:16:14 IST 2019
+```
+
+Finally, add a **[cronjob][2]** to automate this. It will run every 10 minutes.
+
+```
+# crontab -e
+*/10 * * * * /bin/bash /opt/script/disk-usage-alert-1.sh
+```
+
+### Method-3 : Linux Shell Script To Monitor Disk Space Usage And Send Email
+
+I would like to go with this method, since it works like a charm and you get a single email covering everything.
+
+This is very simple and straightforward. Note that the `%` character is special inside a crontab entry, so it has to be escaped with a backslash, as in the entry below.
+
+```
+*/10 * * * * df -Ph | sed s/\%//g | awk '{ if($5 > 60) print $0;}' | mail -s "Disk Space Alert On $(hostname)" [email protected]
+```
+
+**Output:** I got a single mail for all alerts.
+
+```
+Filesystem Size Used Avail Use Mounted on
+/dev/mapper/vg_2g-lv_root 10G 6.7G 3.4G 67 /
+/dev/mapper/vg_2g-lv_home 5.0G 4.3G 784M 85 /home
+```
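+
+One drawback of this one-liner is that it mails the header row as well, and it may send an email even when nothing is over the threshold (depending on how your `mail` command handles an empty body). If that bothers you, a slightly longer variant (a sketch, not from the original article) skips the header and only sends mail when there is something to report:
+
+```
+# crontab -e
+*/10 * * * * out=$(df -Ph | awk 'NR>1 { sub(/\%/,"",$5); if ($5+0 > 60) print }'); [ -n "$out" ] && echo "$out" | mail -s "Disk Space Alert On $(hostname)" [email protected]
+```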
+
+### Method-4 : Linux Shell Script To Monitor Disk Space Usage Of Particular Partition And Send Email
+
+If you want to monitor a particular partition, you can use the following shell script. Simply replace our filesystem name with yours.
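+
+If you are not sure of the exact filesystem name to put in the `grep` pattern, you can list the devices alongside their mount points first and pick the one you care about (output shown is from the example system above):
+
+```
+# df -Ph | awk 'NR>1 {print $1, $6}'
+/dev/mapper/vg_2g-lv_root /
+tmpfs /dev/shm
+/dev/sda1 /boot
+/dev/mapper/vg_2g-lv_home /home
+/dev/mapper/vg_2g-lv_tmp /tmp
+```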
+
+```
+# vi /opt/script/disk-usage-alert-2.sh
+
+#!/bin/bash
+used=$(df -Ph | grep '/dev/mapper/vg_2g-lv_dbs' | awk {'print $5'})
+max=80%
+if [ ${used%?} -ge ${max%?} ]; then
+echo "The Mount Point \"/DB\" on $(hostname) has used $used at $(date)" | mail -s "Disk space alert on $(hostname): $used used" [email protected]
+fi
+```
+
+**Output:** I got the following email alerts.
+
+```
+The partition /dev/mapper/vg_2g-lv_dbs on 2g.CentOS6 has used 82% at Mon Apr 29 06:16:14 IST 2019
+```
+
+Finally, add a **[cronjob][2]** to automate this. It will run every 10 minutes.
+
+```
+# crontab -e
+*/10 * * * * /bin/bash /opt/script/disk-usage-alert-2.sh
+```
+
+**Note:** You will get the email alert up to 10 minutes after the threshold is crossed, since the script is scheduled to run every 10 minutes (the exact delay depends on the timing).
+
+For example, if your system reaches the limit at 8:25, you will get the email alert within the next 5 minutes, at the 8:30 run. Hope it’s clear now.
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and-send-email/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/category/shell-script/
+[2]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
diff --git a/sources/tech/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md b/sources/tech/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md
new file mode 100644
index 0000000000..ba177fc480
--- /dev/null
+++ b/sources/tech/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md
@@ -0,0 +1,89 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Ping Multiple Servers And Show The Output In Top-like Text UI)
+[#]: via: (https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+Ping Multiple Servers And Show The Output In Top-like Text UI
+======
+
+![Ping Multiple Servers And Show The Output In Top-like Text UI][1]
+
+A while ago, we wrote about the [**“Fping”**][2] utility, which enables us to ping multiple hosts at once. Unlike the traditional **“Ping”** utility, Fping doesn’t wait for one host to time out. It uses a round-robin method, meaning it sends an ICMP echo request to one host, then moves on to the next, and finally displays which hosts are up or down. Today, we are going to discuss a similar utility named **“Pingtop”**. As the name says, it pings multiple servers at a time and shows the results in a top-like terminal UI. It is a free and open source command line program written in **Python**.
+
+### Install Pingtop
+
+Pingtop can be installed using Pip, a package manager for installing programs written in Python. Make sure you have installed Python 3.7.x and Pip on your Linux box.
+
+To install Pip on Linux, refer to the following link.
+
+ * [**How To Manage Python Packages Using Pip**][3]
+
+
+
+Once Pip is installed, run the following command to install Pingtop:
+
+```
+$ pip install pingtop
+```
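+
+On distributions where the `pip` command still points to Python 2, you may need to call the Python 3 tooling explicitly (a common workaround, not something specific to Pingtop):
+
+```
+$ python3 -m pip install --user pingtop
+```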
+
+Now let us go ahead and ping multiple systems using Pingtop.
+
+### Ping Multiple Servers And Show The Output In Top-like Terminal UI
+
+To ping multiple hosts/systems, run:
+
+```
+$ pingtop ostechnix.com google.com facebook.com twitter.com
+```
+
+You will now see the result in a nice top-like Terminal UI as shown in the following output.
+
+![][4]
+
+Ping multiple servers using Pingtop
+
+* * *
+
+**Suggested read:**
+
+ * [**Some Alternatives To ‘top’ Command line Utility You Might Want To Know**][5]
+
+
+
+* * *
+
+I personally couldn’t find a use case for the Pingtop utility at the moment, but I like the idea of showing the ping command’s output in a text user interface. Give it a try and see if it helps.
+
+And, that’s all for now. More good stuff to come. Stay tuned!
+
+Cheers!
+
+**Resource:**
+
+ * [**Pingtop GitHub Repository**][6]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-720x340.png
+[2]: https://www.ostechnix.com/ping-multiple-hosts-linux/
+[3]: https://www.ostechnix.com/manage-python-packages-using-pip/
+[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-1.gif
+[5]: https://www.ostechnix.com/some-alternatives-to-top-command-line-utility-you-might-want-to-know/
+[6]: https://github.com/laixintao/pingtop
diff --git a/sources/tech/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md b/sources/tech/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md
new file mode 100644
index 0000000000..ab8efd7599
--- /dev/null
+++ b/sources/tech/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md
@@ -0,0 +1,121 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (apt-clone : Backup Installed Packages And Restore Those On Fresh Ubuntu System)
+[#]: via: (https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+apt-clone : Backup Installed Packages And Restore Those On Fresh Ubuntu System
+======
+
+Package installation becomes much easier on Ubuntu/Debian based systems when you use the apt-clone utility.
+
+apt-clone will work for you if you want to build several systems with the same set of packages.
+
+Building and installing the necessary packages manually on each system is a time-consuming process.
+
+It can be achieved in many ways, and there are many utilities available for this in Linux.
+
+We have already written an article about **[Aptik][1]** in the past.
+
+It’s one of the utilities that allow Ubuntu users to back up and restore system settings and data.
+
+### What Is apt-clone?
+
+[apt-clone][2] lets you create a backup of all the packages installed on your Debian/Ubuntu system, which can be restored on freshly installed systems (or containers) or into a directory.
+
+This backup can be restored on multiple systems with the same operating system version and architecture.
+
+### How To Install apt-clone?
+
+The apt-clone package is available in the official Ubuntu/Debian repositories, so use the **[apt Package Manager][3]** or the **[apt-get Package Manager][4]** to install it.
+
+Install the apt-clone package using the apt package manager:
+
+```
+$ sudo apt install apt-clone
+```
+
+Install the apt-clone package using the apt-get package manager:
+
+```
+$ sudo apt-get install apt-clone
+```
+
+### How To Backup Installed Packages Using apt-clone?
+
+Once you have successfully installed the apt-clone package, simply give the location where you want to save the backup file.
+
+We are going to save the installed packages backup under the `/backup` directory.
+
+The apt-clone utility will save the list of installed packages into the `apt-clone-state-Ubuntu18.2daygeek.com.tar.gz` file.
+
+```
+$ sudo apt-clone clone /backup
+```
+
+We can verify this by running the ls command:
+
+```
+$ ls -lh /backup/
+total 32K
+-rw-r--r-- 1 root root 29K Apr 20 19:06 apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
+```
+
+Run the following command to view the details of the backup file.
+
+```
+$ apt-clone info /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
+Hostname: Ubuntu18.2daygeek.com
+Arch: amd64
+Distro: bionic
+Meta: libunity-scopes-json-def-desktop, ubuntu-desktop
+Installed: 1792 pkgs (194 automatic)
+Date: Sat Apr 20 19:06:43 2019
+```
+
+As per the above output, we have a total of 1792 packages in the backup file.
+
+### How To Restore The Backup Which Was Taken Using apt-clone?
+
+You can use any remote copy utility to copy the backup file to the remote server.
+
+```
+$ scp /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz Destination-Server:/opt
+```
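+
+Any remote copy tool will do the job; for instance, `rsync` can be used in place of `scp` (the destination host below is a placeholder):
+
+```
+$ rsync -av /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz Destination-Server:/opt/
+```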
+
+Once you have copied the file, perform the restore using the apt-clone utility.
+
+Run the following command to restore it:
+
+```
+$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
+```
+
+Make a note: the restore will override your existing `/etc/apt/sources.list` and will install/remove packages, so be careful.
+
+If you want to restore the packages into a folder instead of performing an actual restore, you can do so with the following command:
+
+```
+$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz --destination /opt/oldubuntu
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/aptik-backup-restore-ppas-installed-apps-users-data/
+[2]: https://github.com/mvo5/apt-clone
+[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
diff --git a/translated/tech/20180130 An introduction to the DomTerm terminal emulator for Linux.md b/translated/tech/20180130 An introduction to the DomTerm terminal emulator for Linux.md
new file mode 100644
index 0000000000..59510f430b
--- /dev/null
+++ b/translated/tech/20180130 An introduction to the DomTerm terminal emulator for Linux.md
@@ -0,0 +1,125 @@
+DomTerm 一款为 Linux 打造的终端模拟器
+======
+
+
+[DomTerm][1] 是一款现代化的终端模拟器,它使用浏览器引擎作为 “GUI 工具包”。这就使得一些简洁实用的特性成为可能,例如可嵌入的图像和链接、HTML 富文本以及可折叠(显示/隐藏)的命令。除此以外,它看起来感觉就像一个功能强大的独特终端模拟器,有着优秀的 xterm 兼容性(包括鼠标处理和 24 位色)和恰当的 “chrome”(菜单)。另外它同样内置支持会话管理和子窗口(如同 `tmux` 和 `GNU Screen`)、基本输入编辑(如同 `readline`)以及分页(如同 `less`)。
+
+
+图 1:DomTerm 终端模拟器。查看大图
+
+在以下部分我们将看一看这些特性。我们将假设你已经安装好了 `domterm`(如果你需要获取并搭建 DomTerm,请跳到本文最后)。开始之前先让我们概览一下这项技术。
+
+### 前端 vs. 后端
+
+DomTerm 大部分是用 JavaScript 写的,它运行在一个浏览器引擎中。这个引擎可以是一个桌面浏览器,例如 Chrome 或者 Firefox(见图三),也可以是一个内嵌的浏览器。使用一个通用的网页浏览器没有问题,但是用户体验却不够好(因为菜单是为通用的网页浏览而不是为了终端模拟器所打造),并且安全模型也会妨碍使用。因此使用内嵌的浏览器更好一些。
+
+目前以下这些是支持的:
+
+ * `qdomterm`,使用了 Qt 工具包 和 `QtWebEngine`
+ * 一个内嵌的 `[Electron][2]`(见图一)
+ * `atom-domterm` 以 [Atom 文本编辑器][3](同样基于 Electron)包的形式运行 DomTerm,并和 Atom 面板系统集成在一起(见图二)
+ * 一个为 JavaFX 的 `WebEngine` 包装器,这对 Java 编程十分有用(见图四)
+ * 之前前端使用 [Firefox-XUL][4] 作为首选,但是 Mozilla 已经终止了 XUL
+
+
+
+![在 Atom 编辑器中的 DomTerm 终端面板][6]
+
+图二:在 Atom 编辑器中的 DomTerm 终端面板。[查看大图][7]
+
+目前,Electron 前端可能是最佳选择,紧随其后的是 Qt 前端。如果你使用 Atom,`atom-domterm` 也工作得相当不错。
+
+后端服务器是用 C 写的。它管理着伪终端(PTY)和会话,同时也是一个为前端提供 JavaScript 和其他文件的 HTTP 服务器。如果没有服务器在运行,`domterm` 命令会自己启动一个。前端与服务器之间的通讯通常是用 WebSockets(在服务器端是 [libwebsockets][8])完成的。然而,JavaFX 嵌入时既不用 WebSockets 也不用 DomTerm 服务器,相反 Java 应用直接通过 Java-JavaScript 桥接进行通讯。
+
+### 一个稳健的可兼容 xterm 的终端模拟器
+
+DomTerm 看上去感觉就像一个现代的终端模拟器。它能处理鼠标事件、24 位色、Unicode、倍宽字符(CJK)以及输入法。DomTerm 在 [vttest 测试套件][9] 上表现得十分出色。
+
+不同寻常的特性包括:
+
+**展示/隐藏按钮(“折叠”):** 小三角(如上图二)是隐藏/展示相应输出的按钮。仅需在[提示文字][11]中添加特定的[转义字符][10]就可以创建按钮。
+
+**对于 `readline` 和类似输入编辑器的鼠标点击支持:** 如果你点击(黄色)输入区域,DomTerm 会向应用发送正确的方向键按键序列。(提示窗口中的转义字符使能这一特性,你也可以通过 Alt+Click 强制使用。)
+
+**用CSS样式化终端:** 这通常是在 `~/.domterm/settings.ini` 里完成的,保存时会自动重载。例如在图二中,终端专用的背景色被设置。
+
+### 一个更好的 REPL 控制台
+
+一个经典的终端模拟器基于长方形的字符单元格工作。这在 REPL(命令行)上没问题,但是并不理想。下面这些 DomTerm 特性对 REPL 很有用,而在通常的终端模拟器中并不常见:
+
+**一个能“打印”图片、图表、数学公式或者一组可点击链接的命令:** 应用可以发送包含几乎任何 HTML 的转义序列。(HTML 会被清理,以移除 JavaScript 和其它危险特性。)
+
+图三展示了一段 [`gnuplot`][12] 会话的片段。Gnuplot(2.1 或更高版本)支持把 `domterm` 作为终端类型。图像输出被转换成 [SVG 图][13],然后被打印到终端。我的博客文章[在 DomTerm 上的 Gnuplot 展示][14] 在这方面提供了更多信息。
+
+
+图三: Gnuplot 截图。查看大图
+
+[Kawa][15] 语言有一个创建并转换[几何图像值][16]的库。如果你将这样的图片值打印到 DomTerm 终端,图片就会被转换成 SVG 形式并嵌入进输出中。
+
+
+图四: Kawa 中可计算的几何形状。查看大图
+
+**输出中的富文本:** 带有 HTML 样式的帮助信息更加便于阅读,看上去也更漂亮。图一的下半部分面板展示了 `domterm help` 的输出。(如果不是在 DomTerm 下运行,输出的则是普通文本。)注意内置分页器中的 `PAUSED` 消息。
+
+**包含可点击链接的错误消息:** DomTerm 能识别 `filename:line:column` 这种语法,并将其转化成一个链接,点击后可在可配置的文本编辑器中打开该文件并定位到对应行。(如果你使用 `PROMPT_COMMAND` 或类似机制跟踪目录,这对相对文件名同样适用。)
+
+编译器可以侦测到自己运行在 DomTerm 下,并直接用转义序列发出文件链接。这比依赖 DomTerm 的模式匹配更稳健,因为它可以处理空格和其他字符,并且无需依赖目录追踪。在图四中,你可以看到来自 [Kawa 编译器][15] 的错误消息。悬停在文件位置上会使其显示下划线,`file:` URL 会出现在 `atom-domterm` 的消息栏(窗口底部)中。(不使用 `atom-domterm` 时,这样的消息会显示在一个覆盖框中,如图一中的 `PAUSED` 消息所示。)
+
+点击链接时的动作是可以配置的。默认对于带有 `#position` 后缀的 `file:` 链接的动作是在文本编辑器中打开那个文件。
+
+**内建的 Lisp 风格优美打印:** 你可以在输出中包含优美打印指令(比如分组),这样换行符会随着窗口大小的调整而重新计算。查看我的文章 [DomTerm 中的动态优美打印][17] 以更深入地探讨。
+
+**带有历史记录的基本内建行编辑**(类似 `GNU readline`):它使用浏览器自带的编辑器,因此有着优秀的鼠标和选择处理机制。你可以在正常字符模式(大多数输入的字符被直接发送给进程)和行模式(控制字符触发编辑动作,回车键把编辑好的行发送给进程,普通字符则被插入)之间切换。默认是自动模式,DomTerm 会根据 PTY 处于原始模式还是规范模式,在字符模式与行模式之间切换。
+
+**自带的分页器**(类似简化版的 `less`):用键盘快捷键控制滚动。在“页模式”中,输出在每满一屏(如果你逐行前进,则是每一行)后暂停;页模式对用户输入有简单的智能处理,因此(如果你愿意)可以在不妨碍交互式程序的情况下一直开着它。
+
+### 多路传输和会话
+
+**标签和平铺:** 你不仅可以创建多个终端标签,还可以平铺它们。你可以使用鼠标或键盘快捷键来创建或切换面板和标签,它们还可以用鼠标重新排列并调整大小。这是通过 [GoldenLayout][18] JavaScript 库实现的。[图一][19]展示了一个有两个面板的窗口:上面的面板有两个标签,其中一个在运行 [Midnight Commander][20];下面的面板以 HTML 形式展示了 `domterm help` 的输出。而在 Atom 中,我们则使用其自带的可拖拽面板和标签,你可以在图二中看到这一点。
+
+**分离或重新连接会话:** 与 `tmux` 和 GNU `screen` 类似,DomTerm 支持会话管理。你甚至可以为同一个会话连接多个窗口或面板。这支持多用户会话共享和远程连接。(出于安全考虑,同一服务器的所有会话都需要能够读取一个 Unix 域套接字和一个包含随机密钥的本地文件。等有了良好、安全的远程连接方案后,这个限制会有所放宽。)
+
+**`domterm` 命令** 与 `tmux` 和 GNU `screen` 的相似之处还在于,它有多个用于控制或打开单个及多个会话服务器的选项。主要的差别在于,如果 `domterm` 不是在 DomTerm 下运行,它会创建一个新的顶层窗口,而不是在现有的终端中运行。
+
+与 `tmux` 和 `git` 类似,`domterm` 命令有许多子命令。一些子命令用于创建窗口或会话,另一些(例如“打印”一张图片)仅在现有的 DomTerm 会话中起作用。
+
+命令 `domterm browse` 打开一个窗口或者面板以浏览一个指定的 URL,例如浏览文档的时候。
+
+### 获取并安装 DomTerm
+
+DomTerm 可以从其 [GitHub 仓库][21]获取。目前没有预构建好的软件包,但是有[详细的构建说明][22]。所有的依赖在 Fedora 27 上都可以获取,这使其在该系统上特别容易搭建。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/introduction-domterm-terminal-emulator
+
+作者:[Per Bothner][a]
+译者:[tomjlw](https://github.com/tomjlw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/perbothner
+[1]:http://domterm.org/
+[2]:https://electronjs.org/
+[3]:https://atom.io/
+[4]:https://en.wikipedia.org/wiki/XUL
+[5]:/file/385346
+[6]:https://opensource.com/sites/default/files/images/dt-atom1.png (DomTerm terminal panes in Atom editor)
+[7]:https://opensource.com/sites/default/files/images/dt-atom1.png
+[8]:https://libwebsockets.org/
+[9]:http://invisible-island.net/vttest/
+[10]:http://domterm.org/Wire-byte-protocol.html
+[11]:http://domterm.org/Shell-prompts.html
+[12]:http://www.gnuplot.info/
+[13]:https://developer.mozilla.org/en-US/docs/Web/SVG
+[14]:http://per.bothner.com/blog/2016/gnuplot-in-domterm/
+[15]:https://www.gnu.org/software/kawa/
+[16]:https://www.gnu.org/software/kawa/Composable-pictures.html
+[17]:http://per.bothner.com/blog/2017/dynamic-prettyprinting/
+[18]:https://golden-layout.com/
+[19]:https://opensource.com/sites/default/files/u128651/domterm1.png
+[20]:https://midnight-commander.org/
+[21]:https://github.com/PerBothner/DomTerm
+[22]:http://domterm.org/Downloading-and-building.html
+
diff --git a/translated/tech/20180605 How to use autofs to mount NFS shares.md b/translated/tech/20180605 How to use autofs to mount NFS shares.md
new file mode 100644
index 0000000000..b402ee2ba2
--- /dev/null
+++ b/translated/tech/20180605 How to use autofs to mount NFS shares.md
@@ -0,0 +1,146 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to use autofs to mount NFS shares)
+[#]: via: (https://opensource.com/article/18/6/using-autofs-mount-nfs-shares)
+[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
+
+如何使用 autofs 挂载 NFS 共享
+======
+
+
+
+大多数 Linux 文件系统在引导时挂载,并在系统运行时保持挂载状态。对于已在 `fstab` 中配置的任何远程文件系统也是如此。但是,有时你可能希望仅按需挂载远程文件系统 - 例如,通过减少网络带宽使用来提高性能,或出于安全原因隐藏或混淆某些目录。[autofs][1] 软件包提供此功能。在本文中,我将介绍如何配置基本的自动挂载。
+
+首先做点假设:假设有台 NFS 服务器 `tree.mydatacenter.net` 已经启动并运行。另外假设一个名为 `ourfiles` 的数据目录还有供 Carl 和 Sarah 使用的用户目录,它们都由服务器共享。
+
+一些最佳实践可以使工作更顺利:服务器上的用户和任何客户端工作站上的帐号应有相同的用户 ID。此外,你的工作站和服务器应有相同的域名。可以检查相关配置文件来确认这一点。
+
+```
+alan@workstation1:~$ sudo getent passwd carl sarah
+
+[sudo] password for alan:
+
+carl:x:1020:1020:Carl,,,:/home/carl:/bin/bash
+
+sarah:x:1021:1021:Sarah,,,:/home/sarah:/bin/bash
+
+
+
+alan@workstation1:~$ sudo getent hosts
+
+127.0.0.1 localhost
+
+127.0.1.1 workstation1.mydatacenter.net workstation1
+
+10.10.1.5 tree.mydatacenter.net tree
+
+```
+
+如你所见,客户端工作站和 NFS 服务器都在 `hosts` 中配置。我假设一个基本的家庭甚至小型办公室网络,可能缺乏适合的内部域名服务(即 DNS)。
+
+### 安装软件包
+
+你只需要安装两个软件包:用于 NFS 客户端的 `nfs-common` 和提供自动挂载的 `autofs`。
+```
+alan@workstation1:~$ sudo apt-get install nfs-common autofs
+
+```
+
+你可以验证 autofs 是否已放在 `etc` 目录中:
+```
+alan@workstation1:~$ cd /etc; ll auto*
+
+-rw-r--r-- 1 root root 12596 Nov 19 2015 autofs.conf
+
+-rw-r--r-- 1 root root 857 Mar 10 2017 auto.master
+
+-rw-r--r-- 1 root root 708 Jul 6 2017 auto.misc
+
+-rwxr-xr-x 1 root root 1039 Nov 19 2015 auto.net*
+
+-rwxr-xr-x 1 root root 2191 Nov 19 2015 auto.smb*
+
+alan@workstation1:/etc$
+
+```
+
+### 配置 autofs
+
+现在你需要编辑其中几个文件并添加 `auto.home` 文件。首先,将以下两行添加到文件 `auto.master` 中:
+```
+/mnt/tree /etc/auto.misc
+
+/home/tree /etc/auto.home
+
+```
+
+每行以挂载 NFS 共享的目录开头。继续创建这些目录:
+```
+alan@workstation1:/etc$ sudo mkdir /mnt/tree /home/tree
+
+```
+
+接下来,将以下行添加到文件 `auto.misc`:
+```
+ourfiles -fstype=nfs tree:/share/ourfiles
+
+```
+
+该行表示 autofs 将挂载 `auto.master` 文件中匹配 `auto.misc` 的 `ourfiles` 共享。如上所示,这些文件将在 `/mnt/tree/ourfiles` 目录中。
+
+第三步,使用以下行创建文件 `auto.home`:
+```
+* -fstype=nfs tree:/home/&
+
+```
+
+该行表示 autofs 将挂载 `auto.master` 文件中匹配 `auto.home` 的用户共享。在这种情况下,Carl 和 Sarah 的文件将分别在目录 `/home/tree/carl` 或 `/home/tree/sarah`中。星号(称为通配符)使每个用户的共享可以在登录时自动挂载。& 符号也可以作为表示服务器端用户目录的通配符。它们的主目录会相应地根据 `passwd` 文件映射。如果你更喜欢本地主目录,则无需执行此操作。相反,用户可以将其用作特定文件的简单远程存储。
+
+最后,重启 `autofs` 守护进程,以便识别并加载这些配置的更改。
+```
+alan@workstation1:/etc$ sudo service autofs restart
+
+```
+
+### 测试 autofs
+
+如果你切换(`cd`)到 `auto.master` 文件中列出的某个目录并运行 `ls` 命令,不会立即看到任何内容。例如,`cd` 到目录 `/mnt/tree`。一开始,`ls` 的输出不会显示任何内容,但在运行 `cd ourfiles` 之后,`ourfiles` 共享目录就会被自动挂载,`cd` 命令也会执行,你将进入新挂载的目录中。
+```
+carl@workstation1:~$ cd /mnt/tree
+
+carl@workstation1:/mnt/tree$ ls
+
+carl@workstation1:/mnt/tree$ cd ourfiles
+
+carl@workstation1:/mnt/tree/ourfiles$
+
+```
+
+为了进一步确认正常工作,`mount` 命令会显示已挂载共享的细节
+```
+carl@workstation1:~$ mount
+
+tree:/mnt/share/ourfiles on /mnt/tree/ourfiles type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.22,local_lock=none,addr=10.10.1.5)
+
+```
+
+对于Carl和Sarah,`/home/tree` 目录工作方式相同。
+
+我发现在我的文件管理器中添加这些目录的书签很有用,可以用来快速访问。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
+
+作者:[Alan Formy-Duval][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/alanfdoss
+[1]:https://wiki.archlinux.org/index.php/autofs
diff --git a/sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md b/translated/tech/20190409 How To Install And Configure Chrony As NTP Client.md
similarity index 55%
rename from sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md
rename to translated/tech/20190409 How To Install And Configure Chrony As NTP Client.md
index 3988cda330..bd9ddaf2ef 100644
--- a/sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md
+++ b/translated/tech/20190409 How To Install And Configure Chrony As NTP Client.md
@@ -7,84 +7,84 @@
[#]: via: (https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-How To Install And Configure Chrony As NTP Client?
+如何正确安装和配置Chrony作为NTP客户端?
======
-The NTP server and NTP client allow us to sync the clock across the network.
+NTP服务器和NTP客户端允许我们通过网络来同步时钟。
-We had written an article about **[NTP server and NTP client installation and configuration][1]** in the past.
+在过去,我们已经撰写了一篇关于 **[NTP服务器和NTP客户端的安装与配置][1]** 的文章。
-If you would like to check these, navigate to the above URL.
+如果你想看这些内容,点击上述的URL访问。
-### What Is Chrony Client?
+### 什么是Chrony客户端?
-Chrony is replacement of NTP client.
+Chrony是NTP客户端的替代品。
-It can synchronize the system clock faster with better time accuracy and it can be particularly useful for the systems which are not online all the time.
+它能以更精确的时间和更快的速度同步时钟,并且它对于那些不是全天候在线的系统非常有用。
-chronyd is smaller, it uses less memory and it wakes up the CPU only when necessary, which is better for power saving.
+chronyd更小、更省电,它占用更少的内存且仅当需要时它才唤醒CPU。
-It can perform well even when the network is congested for longer periods of time.
+即使网络拥塞较长时间,它也能很好地运行。
-It supports hardware timestamping on Linux, which allows extremely accurate synchronization on local networks.
+它支持Linux上的硬件时间戳,允许在本地网络进行极其准确的同步。
-It offers following two services.
+它提供下列两个服务。
- * **`chronyc:`** Command line interface for chrony.
- * **`chronyd:`** Chrony daemon service.
+ * **`chronyc:`** Chrony的命令行接口。
+ * **`chronyd:`** Chrony守护进程服务。
-### How To Install And Configure Chrony In Linux?
+### 如何在Linux上安装和配置Chrony?
-Since the package is available in most of the distributions official repository. So, use the package manager to install it.
+由于安装包在大多数发行版的官方仓库中可用,因此直接使用包管理器去安装它。
-For **`Fedora`** system, use **[DNF Command][2]** to install chrony.
+对于 **`Fedora`** 系统, 使用 **[DNF 命令][2]** 去安装chrony.
```
$ sudo dnf install chrony
```
-For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install chrony.
+对于 **`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][3]** 或者 **[APT 命令][4]** 去安装chrony.
```
$ sudo apt install chrony
```
-For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install chrony.
+对基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][5]** 去安装chrony.
```
$ sudo pacman -S chrony
```
-For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install chrony.
+对于 **`RHEL/CentOS`** 系统, 使用 **[YUM 命令][6]** 去安装chrony.
```
$ sudo yum install chrony
```
-For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install chrony.
+对于**`openSUSE Leap`** 系统, 使用 **[Zypper 命令][7]** 去安装chrony.
```
$ sudo zypper install chrony
```
-In this article, we are going to use the following setup to test this.
+在这篇文章中,我们将使用下列设置去测试。
- * **`NTP Server:`** HostName: CentOS7.2daygeek.com, IP:192.168.1.5, OS:CentOS 7
- * **`Chrony Client:`** HostName: Ubuntu18.2daygeek.com, IP:192.168.1.3, OS:Ubuntu 18.04
+ * **`NTP服务器:`** 主机名: CentOS7.2daygeek.com, IP:192.168.1.5, OS:CentOS 7
+ * **`Chrony客户端:`** 主机名: Ubuntu18.2daygeek.com, IP:192.168.1.3, OS:Ubuntu 18.04
+导航到 **[在Linux上安装和配置NTP服务器][1]** 的URL。
-Navigate to the following URL for **[NTP server installation and configuration in Linux][1]**.
-I have installed and configured the NTP server on `CentOS7.2daygeek.com` so, append the same into all the client machines. Also, include the other required information on it.
+我已经在`CentOS7.2daygeek.com`这台主机上安装和配置了NTP服务器,因此,把它添加到所有客户端机器的配置中。此外,配置中还包含了其他所需的信息。
-The `chrony.conf` file will be placed in the different locations based on your distribution.
+`chrony.conf`文件的位置根据你的发行版不同而不同。
-For RHEL based systems, it’s located at `/etc/chrony.conf`.
+对基于RHEL的系统,它位于`/etc/chrony.conf`。
-For Debian based systems, it’s located at `/etc/chrony/chrony.conf`.
+对基于Debian的系统,它位于`/etc/chrony/chrony.conf`。
```
# vi /etc/chrony/chrony.conf
@@ -98,27 +98,28 @@ makestep 1 3
cmdallow 192.168.1.0/24
```
-Bounce the Chrony service once you update the configuration.
+更新配置后需要重启Chrony服务。
-For sysvinit systems. For RHEL based system we need to run `chronyd` instead of chrony.
+对于sysvinit系统。基于RHEL的系统需要去运行`chronyd`而不是chrony。
```
-# service chrony restart
+# service chronyd restart
-# chkconfig chrony on
+# chkconfig chronyd on
```
-For systemctl systems. For RHEL based system we need to run `chronyd` instead of chrony.
+对于systemctl系统。 基于RHEL的系统需要去运行`chronyd`而不是chrony。
```
-# systemctl restart chrony
+# systemctl restart chronyd
-# systemctl enable chrony
+# systemctl enable chronyd
```
-Use the following commands like tacking, sources and sourcestats to check chrony synchronization details.
+使用像 tracking、sources 和 sourcestats 这样的命令去检查 chrony 的同步细节。
+
+要检查 chrony 的跟踪状态:
-To check chrony tracking status.
```
# chronyc tracking
@@ -137,7 +138,7 @@ Update interval : 2.0 seconds
Leap status : Normal
```
-Run the sources command to displays information about the current time sources.
+运行sources命令去显示当前时间源的信息。
```
# chronyc sources
@@ -147,7 +148,7 @@ MS Name/IP address Stratum Poll Reach LastRx Last sample
^* CentOS7.2daygeek.com 2 6 17 62 +36us[+1230us] +/- 1111ms
```
-The sourcestats command displays information about the drift rate and offset estimation process for each of the sources currently being examined by chronyd.
+sourcestats命令显示有关chronyd当前正在检查的每个源的漂移率和偏移估计过程的信息。
```
# chronyc sourcestats
@@ -157,7 +158,7 @@ Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
CentOS7.2daygeek.com 5 3 71 -97.314 78.754 -469us 441us
```
-When chronyd is configured as an NTP client or peer, you can have the transmit and receive timestamping modes and the interleaved mode reported for each NTP source by the chronyc ntpdata command.
+当chronyd配置为NTP客户端或对等端时,你可以通过 chronyc ntpdata 命令获取每个NTP源的发送和接收时间戳模式以及交错模式的报告。
```
# chronyc ntpdata
@@ -190,13 +191,14 @@ Total RX : 46
Total valid RX : 46
```
-Finally run the `date` command.
+最后运行`date`命令。
```
# date
Thu Mar 28 03:08:11 CDT 2019
```
+要立即步进系统时钟(绕过任何正在进行的渐变调整),请以root身份执行以下命令(手动调整系统时钟):
To step the system clock immediately, bypassing any adjustments in progress by slewing, issue the following command as root (To adjust the system clock manually).
```
@@ -209,7 +211,7 @@ via: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[arrowfeng](https://github.com/arrowfeng)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md b/translated/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md
deleted file mode 100644
index 1c43f50626..0000000000
--- a/translated/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md
+++ /dev/null
@@ -1,262 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (arrowfeng)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How To Install And Configure NTP Server And NTP Client In Linux?)
-[#]: via: (https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-如何在Linux上安装、配置NTP服务和NTP客户端?
-======
-
-你也许听说过这个词很多次或者你可能已经在使用它了。
-但是,在这篇文章中我将会清晰的告诉你NTP服务和NTP客户端的安装。
-
-之后我们将会了解 **[Chrony NTP 客户端的安装][1]**。
-
-
-### 什么是NTP服务?
-
-NTP 表示为网络时间协议。
-
-它是通过网络在电脑系统之间进行时钟同步的网络协议。
-
-另一方面,我可以说,它可以让那些通过NTP或者Chrony客户端连接到NTP服务的系统保持时间上的一致(它能保持一个精确的时间)。
-
-
-NTP在公共互联网上通常能够保持时间延迟在几十毫秒以内的精度,并在理想条件下,它能在局域网下达到优于一毫秒的延迟精度。
-
-它使用用户数据报协议(UDP)在端口123上发送和接受时间戳。它是C/S架构的应用程序。
-
-
-
-### 什么是NTP客户端?
-
-NTP客户端将其时钟与网络时间服务器同步。
-
-### 什么是Chrony客户端?
-Chrony是NTP客户端的替代品。它能以更精确的时间更快的同步系统时钟,并且它对于那些不总是在线的系统很有用。
-
-### 为什么我们需要NTP服务?
-
-为了使你组织中的所有服务器与基于时间的作业保持精确的时间同步。
-
-为了说明这点,我将告诉你一个场景。比如说,我们有两个服务器(服务器1和服务器2)。服务器1通常在10:55完成离线作业,然后服务器2在11:00需要基于服务器1完成的作业报告去运行其他作业。
-
-如果两个服务器正在使用不同的时间(如果服务器2时间比服务器1提前,服务器1的时间就落后于服务器2),然后我们就不能去执行这个作业。为了达到时间一致,我们应该安装NTP。
-希望上述能清除你对于NTP的疑惑。
-
-
-在这篇文章中,我们将使用下列设置去测试。
-
- * **`NTP Server:`** HostName: CentOS7.2daygeek.com, IP:192.168.1.8, OS:CentOS 7
- * **`NTP Client:`** HostName: Ubuntu18.2daygeek.com, IP:192.168.1.5, OS:Ubuntu 18.04
-
-
-
-### NTP服务端: 如何在Linux上安装NTP?
-
-因为它是c/s架构,所以NTP服务端和客户端的安装包没有什么不同。在发行版的官方仓库中都有NTP安装包,因此可以使用发行版的包管理器安装它。
-
-对于 **`Fedora`** 系统, 使用 **[DNF 命令][2]** 去安装ntp.
-
-```
-$ sudo dnf install ntp
-```
-
-对于 **`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][3]** 或者 **[APT 命令][4]** 去安装 ntp.
-
-```
-$
-```
-
-对基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][5]** 去安装 ntp.
-
-```
-$ sudo pacman -S ntp
-```
-
-对 **`RHEL/CentOS`** 系统, 使用 **[YUM 命令][6]** 去安装 ntp.
-
-```
-$ sudo yum install ntp
-```
-
-对于 **`openSUSE Leap`** 系统, 使用 **[Zypper 命令][7]** 去安装 ntp.
-
-```
-$ sudo zypper install ntp
-```
-
-### 如何在Linux上配置NTP服务?
-
-安装NTP软件包后,请确保在服务器端的`/etc/ntp.conf`文件中,必须取消以下配置的注释。
-
-默认情况下,NTP服务器配置依赖于`X.distribution_name.pool.ntp.org`。 如果有必要,可以使用默认配置,也可以访问站点,根据你所在的位置(特定国家/地区)进行更改。
-
-比如说如果你在印度,然后你的NTP服务器将是`0.in.pool.ntp.org`,并且这个地址适用于大多数国家。
-
-```
-# vi /etc/ntp.conf
-
-restrict default kod nomodify notrap nopeer noquery
-restrict -6 default kod nomodify notrap nopeer noquery
-restrict 127.0.0.1
-restrict -6 ::1
-server 0.asia.pool.ntp.org
-server 1.asia.pool.ntp.org
-server 2.asia.pool.ntp.org
-server 3.asia.pool.ntp.org
-restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
-driftfile /var/lib/ntp/drift
-keys /etc/ntp/keys
-```
-
-我们仅允许`192.168.1.0/24`子网的客户端访问NTP服务器。
-
-由于默认情况下基于RHEL7的发行版的防火墙是打开的,因此允许ntp服务通过。
-
-```
-# firewall-cmd --add-service=ntp --permanent
-# firewall-cmd --reload
-```
-
-更新配置后重启服务。
-
-对于基于Debian的sysvinit系统,我们需要去运行`ntp`而不是`ntpd`。
-
-```
-# service ntpd restart
-
-# chkconfig ntpd on
-```
-对于基于Debian的systemctl系统,我们需要去运行`ntp`和`ntpd`。
-
-```
-# systemctl restart ntpd
-
-# systemctl enable ntpd
-```
-
-### NTP客户端:如何在Linux上安装NTP客户端?
-
-正如我在这篇文章中前面所说的。NTP服务端和客户端的安装包没有什么不同。因此在客户端上也安装同样的软件包。
-
-对于 **`Fedora`** 系统, 使用 **[DNF 命令][2]** 去安装ntp.
-
-```
-$ sudo dnf install ntp
-```
-
-对于 **`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][3]** 或者 **[APT 命令][4]** 去安装 ntp.
-
-```
-$
-```
-
-对基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][5]** 去安装 ntp.
-
-```
-$ sudo pacman -S ntp
-```
-
-对 **`RHEL/CentOS`** 系统, 使用 **[YUM 命令][6]** 去安装 ntp.
-
-```
-$ sudo yum install ntp
-```
-
-对于 **`openSUSE Leap`** 系统, 使用 **[Zypper 命令][7]** 去安装 ntp.
-
-```
-$ sudo zypper install ntp
-```
-
-我已经在`CentOS7.2daygeek.com`这台主机上安装和配置了NTP服务器,因此将其附加到所有的客户端机器上。
-
-```
-# vi /etc/ntp.conf
-
-restrict default kod nomodify notrap nopeer noquery
-restrict -6 default kod nomodify notrap nopeer noquery
-restrict 127.0.0.1
-restrict -6 ::1
-server CentOS7.2daygeek.com prefer iburst
-driftfile /var/lib/ntp/drift
-keys /etc/ntp/keys
-```
-
-更新配置后重启服务。
-
-对于基于Debian的sysvinit系统,我们需要去运行`ntp`而不是`ntpd`。
-
-```
-# service ntpd restart
-
-# chkconfig ntpd on
-```
-对于基于Debian的systemctl系统,我们需要去运行`ntp`和`ntpd`。
-
-```
-# systemctl restart ntpd
-
-# systemctl enable ntpd
-```
-
-重新启动NTP服务后等待几分钟以便从NTP服务器获取同步的时间。
-
-在Linux上运行下列命令去验证NTP服务的同步状态。
-
-```
-# ntpq –p
-或
-# ntpq -pn
-
- remote refid st t when poll reach delay offset jitter
-==============================================================================
-*CentOS7.2daygee 133.243.238.163 2 u 14 64 37 0.686 0.151 16.432
-```
-
-运行下列命令去得到ntpd的当前状态。
-
-```
-# ntpstat
-synchronised to NTP server (192.168.1.8) at stratum 3
- time correct to within 508 ms
- polling server every 64 s
-```
-
-最后运行`date`命令。
-
-```
-# date
-Tue Mar 26 23:17:05 CDT 2019
-```
-
-如果你观察到NTP中输出的偏移很大。运行下列命令从NTP服务器手动同步时钟。当你执行下列命令的时候,确保你的NTP客户端应该为未激活状态。
-
-```
-# ntpdate –uv CentOS7.2daygeek.com
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[arrowfeng](https://github.com/arrowfeng)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
-[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
-[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
-[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
-[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
-[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
-[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
diff --git a/translated/tech/20190419 Building scalable social media sentiment analysis services in Python.md b/translated/tech/20190419 Building scalable social media sentiment analysis services in Python.md
new file mode 100644
index 0000000000..a216ce8495
--- /dev/null
+++ b/translated/tech/20190419 Building scalable social media sentiment analysis services in Python.md
@@ -0,0 +1,290 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building scalable social media sentiment analysis services in Python)
+[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable)
+[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)
+
+使用 Python 构建可扩展的社交媒体情感分析服务
+======
+学习如何使用 spaCy、vaderSentiment、Flask 和 Python 来为你的工作添加情感分析能力。
+![Tall building with windows][1]
+
+本系列的[第一部分][2]提供了情感分析工作原理的一些背景知识,现在让我们研究如何将这些功能添加到你的设计中。
+
+### 探索 Python 库 spaCy 和 vaderSentiment
+
+#### 前提条件
+
+ * 一个终端 shell
+ * shell 中的 Python 语言二进制文件(3.4+ 版本)
+ * 用于安装 Python 包的 **pip** 命令
+ * (可选)一个 [Python 虚拟环境][3]使你的工作与系统隔离开来
+
+#### 配置环境
+
+在开始编写代码之前,你需要安装 [spaCy][4] 和 [vaderSentiment][5] 包来设置 Python 环境,同时下载一个语言模型来帮助你分析。幸运的是,大部分操作都容易在命令行中完成。
+
+在 shell 中,输入以下命令来安装 spaCy 和 vaderSentiment 包:
+
+```
+pip install spacy vaderSentiment
+```
+
+软件包安装完成后,再安装 spaCy 用于文本分析的语言模型。以下命令将使用 spaCy 模块下载并安装英语[模型][6]:
+
+```
+python -m spacy download en_core_web_sm
+```
+
+安装了这些库和模型之后,就可以开始编码了。
+
+#### 一个简单的文本分析
+
+使用 [Python 解释器交互模式][7] 编写一些代码来分析单个文本片段。首先启动 Python 环境:
+
+```
+$ python
+Python 3.6.8 (default, Jan 31 2019, 09:38:34)
+[GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>>
+```
+
+_(你的 Python 解释器版本打印可能与此不同。)_
+
+ 1. 导入所需模块:
+ ```
+ >>> import spacy
+ >>> from vaderSentiment import vaderSentiment
+ ```
+2. 从 spaCy 加载英语语言模型:
+ ```
+ >>> english = spacy.load("en_core_web_sm")
+ ```
+ 3. 处理一段文本。本例展示了一个非常简单的句子,我们希望它能给我们带来些许积极的情感:
+ ```
+ >>> result = english("I like to eat applesauce with sugar and cinnamon.")
+ ```
+4. 从处理后的结果中收集句子。SpaCy 已识别并处理短语中的实体,这一步为每个句子生成情感(即使在本例中只有一个句子):
+ ```
+ >>> sentences = [str(s) for s in result.sents]
+ ```
+ 5. 使用 vaderSentiments 创建一个分析器:
+ ```
+ >>> analyzer = vaderSentiment.SentimentIntensityAnalyzer()
+ ```
+ 6. 对句子进行情感分析:
+ ```
+ >>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
+ ```
+
+`sentiment` 变量现在包含例句的极性分数。打印出这个值,看看它是如何分析这个句子的。
+
+```
+>>> print(sentiment)
+[{'neg': 0.0, 'neu': 0.737, 'pos': 0.263, 'compound': 0.3612}]
+```
+
+这个结构是什么意思?
+
+表面上,这是一个只有一个字典对象的数组。如果有多个句子,那么每个句子都会对应一个字典对象。字典中有四个键对应不同类型的情感。**neg** 键表示负面情感,因为在本例中没有报告任何负面情感,**0.0** 值证明了这一点。**neu** 键表示中性情感,它的得分相当高,为 **0.737**(最高为 **1.0**)。**pos** 键代表积极情感,得分适中,为 **0.263**。最后,**compound** 键代表文本的总体得分,它可以从负数到正数,**0.3612** 表示积极方面的情感多一点。
+
+要查看这些值会如何变化,你可以使用刚才输入的代码做一个小实验。以下代码块展示了对一个类似句子的情感评分是如何评估的。
+
+```
+>>> result = english("I love applesauce!")
+>>> sentences = [str(s) for s in result.sents]
+>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
+>>> print(sentiment)
+[{'neg': 0.0, 'neu': 0.182, 'pos': 0.818, 'compound': 0.6696}]
+```
+
+你可以看到,通过将例句改为非常积极的句子,`sentiment` 的值发生了巨大变化。
+
+### 建立一个情感分析服务
+
+现在你已经为情感分析组装了基本的代码块,让我们将这些东西转化为一个简单的服务。
+
+在这个演示中,你将使用 Python [Flask 包][9] 创建一个 [RESTful][8] HTTP 服务器。此服务将接受英文文本数据并返回情感分析结果。请注意,此示例服务是用于学习所涉及的技术,而不是用于投入生产的东西。
+
+#### 前提条件
+
+ * 一个终端 shell
+ * shell 中的 Python 语言二进制文件(3.4+版本)
+ * 安装 Python 包的 **pip** 命令
+ * **curl** 命令
+ * 一个文本编辑器
+ * (可选) 一个 [Python 虚拟环境][3]使你的工作与系统隔离开来
+
+#### 配置环境
+
+这个环境几乎与上一节中的环境相同,唯一的区别是在 Python 环境中添加了 Flask 包。
+
+ 1. 安装所需依赖项:
+ ```
+ pip install spacy vaderSentiment flask
+ ```
+2. 安装 spaCy 的英语语言模型:
+ ```
+ python -m spacy download en_core_web_sm
+ ```
+
+
+#### 创建应用程序文件
+
+打开编辑器,创建一个名为 **app.py** 的文件。添加以下内容 _(不用担心,我们将解释每一行)_ :
+
+
+```
+import flask
+import spacy
+import vaderSentiment.vaderSentiment as vader
+
+app = flask.Flask(__name__)
+analyzer = vader.SentimentIntensityAnalyzer()
+english = spacy.load("en_core_web_sm")
+
+def get_sentiments(text):
+ result = english(text)
+ sentences = [str(sent) for sent in result.sents]
+ sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
+ return sentiments
+
+@app.route("/", methods=["POST", "GET"])
+def index():
+ if flask.request.method == "GET":
+ return "To access this service send a POST request to this URL with" \
+ " the text you want analyzed in the body."
+ body = flask.request.data.decode("utf-8")
+ sentiments = get_sentiments(body)
+ return flask.json.dumps(sentiments)
+```
+
+虽然这个源文件不是很大,但它非常密集。让我们来看看这个应用程序的各个部分,并解释它们在做什么。
+
+```
+import flask
+import spacy
+import vaderSentiment.vaderSentiment as vader
+```
+
+前三行引入了执行语言分析和 HTTP 框架所需的包。
+
+```
+app = flask.Flask(__name__)
+analyzer = vader.SentimentIntensityAnalyzer()
+english = spacy.load("en_core_web_sm")
+```
+
+接下来的三行代码创建了一些全局变量。第一个变量 **app**,它是 Flask 用于创建 HTTP 路由的主要入口点。第二个变量 **analyzer** 与上一个示例中使用的类型相同,它将用于生成情感分数。最后一个变量 **english** 也与上一个示例中使用的类型相同,它将用于注释和标记初始文本输入。
+
+你可能想知道为什么全局声明这些变量。对于 **app** 变量,这是许多 Flask 应用程序的标准过程。但是,对于 **analyzer** 和 **english** 变量,将它们设置为全局变量的决定是基于与所涉及的类关联的加载时间。虽然加载时间可能看起来很短,但是当它在 HTTP 服务器的上下文中运行时,这些延迟会对性能产生负面影响。
+
+
+```
+def get_sentiments(text):
+ result = english(text)
+ sentences = [str(sent) for sent in result.sents]
+ sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
+ return sentiments
+```
+
+这部分是服务的核心 -- 一个用于从一串文本生成情感值的函数。你可以看到此函数中的操作对应于你之前在 Python 解释器中运行的命令。这里它们被封装在一个函数定义中,**text** 源作为文本变量传入,最后 **sentiments** 变量返回给调用者。
+
+
+```
+@app.route("/", methods=["POST", "GET"])
+def index():
+ if flask.request.method == "GET":
+ return "To access this service send a POST request to this URL with" \
+ " the text you want analyzed in the body."
+ body = flask.request.data.decode("utf-8")
+ sentiments = get_sentiments(body)
+ return flask.json.dumps(sentiments)
+```
+
+源文件的最后一个函数包含了指导 Flask 如何为服务配置 HTTP 服务器的逻辑。它从一行开始,该行将 HTTP 路由 **/** 与请求方法 **POST** 和 **GET** 相关联。
+
+在函数定义行之后,**if** 子句将检测请求方法是否为 **GET**。如果用户向服务发送此请求,那么下面的行将返回一条指示如何访问服务器的文本消息。这主要是为了方便最终用户。
+
+下一行使用 **flask.request** 对象来获取请求的主体,该主体应包含要处理的文本字符串。**decode** 函数将字节数组转换为可用的格式化字符串。经过解码的文本消息被传递给 **get_sentiments** 函数以生成情感分数。最后,分数通过 HTTP 框架返回给用户。
+
+你现在应该保存文件,如果尚未保存,那么返回 shell。
+
+#### 运行情感服务
+
+一切就绪后,使用 Flask 的内置调试服务器运行服务非常简单。要启动该服务,请从与源文件相同的目录中输入以下命令:
+
+```
+FLASK_APP=app.py flask run
+```
+
+现在,你将在 shell 中看到来自服务器的一些输出,并且服务器将处于运行状态。要测试服务器是否正在运行,你需要打开第二个 shell 并使用 **curl** 命令。
+
+首先,输入以下命令检查是否打印了指令信息:
+
+```
+curl http://localhost:5000
+```
+
+你应该看到说明消息:
+
+```
+To access this service send a POST request to this URI with the text you want analyzed in the body.
+```
+
+接下来,运行以下命令发送测试消息,查看情感分析:
+
+```
+curl http://localhost:5000 --header "Content-Type: application/json" --data "I love applesauce!"
+```
+
+你从服务器获得的响应应类似于以下内容:
+
+```
+[{"compound": 0.6696, "neg": 0.0, "neu": 0.182, "pos": 0.818}]
+```
+
+恭喜!你现在已经实现了一个 RESTful HTTP 情感分析服务。你可以在 [GitHub 上找到此服务的参考实现和本文中的所有代码][10]。
+
+### 继续探索
+
+现在你已经了解了自然语言处理和情感分析背后的原理和机制,下面是进一步探索这一主题的一些方法。
+
+#### 在 OpenShift 上创建流式情感分析器
+
+虽然创建本地应用程序来研究情感分析很方便,但是接下来需要能够部署应用程序以实现更广泛的用途。按照 [Radanalytics.io][11] 提供的指导和代码进行操作,你将学习如何创建一个可以容器化并部署到 Kubernetes 平台的情感分析器。你还将了解如何将 Apache Kafka 用作事件驱动消息传递的框架,以及如何将 Apache Spark 用作情感分析的分布式计算平台。
+
+#### 使用 Twitter API 发现实时数据
+
+虽然 [Radanalytics.io][12] 实验室可以生成合成推文流,但你可以不受限于合成数据。事实上,拥有 Twitter 账户的任何人都可以使用 [Tweepy Python][13] 包访问 Twitter 流媒体 API 对推文进行情感分析。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable
+
+作者:[Michael McCune ][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/elmiko/users/jschlessman
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
+[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-1
+[3]: https://virtualenv.pypa.io/en/stable/
+[4]: https://pypi.org/project/spacy/
+[5]: https://pypi.org/project/vaderSentiment/
+[6]: https://spacy.io/models
+[7]: https://docs.python.org/3.6/tutorial/interpreter.html
+[8]: https://en.wikipedia.org/wiki/Representational_state_transfer
+[9]: http://flask.pocoo.org/
+[10]: https://github.com/elmiko/social-moments-service
+[11]: https://github.com/radanalyticsio/streaming-lab
+[12]: http://Radanalytics.io
+[13]: https://github.com/tweepy/tweepy
diff --git a/translated/tech/20190422 8 environment-friendly open software projects you should know.md b/translated/tech/20190422 8 environment-friendly open software projects you should know.md
deleted file mode 100644
index 9d5378ddc2..0000000000
--- a/translated/tech/20190422 8 environment-friendly open software projects you should know.md
+++ /dev/null
@@ -1,65 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (8 environment-friendly open software projects you should know)
-[#]: via: (https://opensource.com/article/19/4/environment-projects)
-[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)
-
-8 个你应该了解的环保开源项目
-======
-通过给这些致力于提升环境的项目做贡献来庆祝地球日。
-![][1]
-
-在过去的几年里,我一直在帮助 [Greenpeace][2] 建立其第一个完全开源的项目,Planet 4. [Planet 4][3] 是一个全球参与平台,Greenpeace 的支持者和活动家可以互动并参与组织。它的目标是让人们代表我们的星球采取行动。我们希望邀请参与并利用人力来应对气候变化和塑料污染等全球性问题。它们正在寻找开发者、设计师、作者、贡献者和其他通过开源支持环保主义的人都非常欢迎[参与进来][4]!
-
-Planet 4 远非唯一关注环境的开源项目。对于地球日,我会分享其他七个关注我们星球的开源项目。
-
-**[Eco Hacker Farm][5]** 致力于支持可持续社区。它建议并支持将黑客空间/黑客基地和永续农业生活结合在一起的项目。该组织还有在线项目。访问其 [wiki][6] 或 [Twitter][7] 了解有关 Eco Hacker Farm 正在做的更多信息。
-
-**[Public Lab][8]** 是一个开放社区和非营利组织,它致力于将科学掌握在公民手中。它于 2010 年在 BP 石油灾难后形成,Public Lab 与开源合作,协助环境勘探和调查。它是一个多元化的社区,有很多方法可以做[贡献][9]。
-
-不久前,Opensource.com 的管理 Don Watkins 撰写了一篇 **[Open Climate Workbench][10]** 的文章,该项目来自 Apache 基金会。 [OCW][11] 提供了进行气候建模和评估的软件,可用于各种应用。
-
-**[Open Source Ecology][12]** 是一个旨在改善经济运作方式的项目。该项目着眼于环境再生和社会公正,它旨在重新定义我们的一些肮脏的生产和分配技术,以创造一个更可持续的文明。
-
-促进开源和大数据工具之间的合作,以实现海洋、大气、土地和气候的研究,“ **[Pangeo][13]** 是第一个推广开放、可重复和可扩展科学的社区。”大数据可以改变世界!
-
-**[Leaflet][14]** 是一个著名的开源 JavaScript 库。它可以做各种各样的事情,包括环保项目,如 [Arctic Web Map][15],它能让科学家准确地可视化和分析北极地区,这是气候研究的关键能力。
-
-当然,没有我在 Mozilla 的朋友就没有这个列表(不是这个完整的列表!)。**[Mozilla Science Lab][16]** 社区就像所有 Mozilla 项目一样,非常开放,它致力于将开源原则带给科学界。它的项目和社区使科学家能够进行我们世界所需的各种研究,以解决一些最普遍的环境问题。
-
-### 如何贡献
-
-在这个地球日,做为期六个月的承诺,将一些时间贡献给一个有助于应对气候变化的开源项目,或以其他方式鼓励人们保护地球母亲。肯定还有许多关注环境的开源项目,所以请在评论中留言!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/4/environment-projects
-
-作者:[Laura Hilliger][a]
-选题:[lujun9972][b]
-译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/laurahilliger
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_hands_diversity.png?itok=zm4EDxgE
-[2]: http://www.greenpeace.org
-[3]: http://medium.com/planet4
-[4]: https://planet4.greenpeace.org/community/#partners-open-sourcers
-[5]: https://wiki.ecohackerfarm.org/start
-[6]: https://wiki.ecohackerfarm.org/
-[7]: https://twitter.com/EcoHackerFarm
-[8]: https://publiclab.org/
-[9]: https://publiclab.org/contribute
-[10]: https://opensource.com/article/17/1/apache-open-climate-workbench
-[11]: https://climate.apache.org/
-[12]: https://wiki.opensourceecology.org/wiki/Project_needs
-[13]: http://pangeo.io/
-[14]: https://leafletjs.com/
-[15]: https://webmap.arcticconnect.ca/#ac_3573/2/20.8/-65.5
-[16]: https://science.mozilla.org/