wol d
+```
+
+The processor that responds to the MagicPacket may be part of the network interface or it may be the [Baseboard Management Controller][4] (BMC).
+
+#### Intel Management Engine, Platform Controller Hub, and Minix
+
+The BMC is not the only microcontroller (MCU) that may be listening when the system is nominally off. x86_64 systems also include the Intel Management Engine (IME) software suite for remote management of systems. A wide variety of devices, from servers to laptops, includes this technology, [which enables functionality][5] such as KVM Remote Control and Intel Capability Licensing Service. The [IME has unpatched vulnerabilities][6], according to [Intel's own detection tool][7]. The bad news is, it's difficult to disable the IME. Trammell Hudson has created an [me_cleaner project][8] that wipes some of the more egregious IME components, like the embedded web server, but could also brick the system on which it is run.
+
+The IME firmware and the System Management Mode (SMM) software that follows it at boot are [based on the Minix operating system][9] and run on the separate Platform Controller Hub processor, not the main system CPU. The SMM then launches the Universal Extensible Firmware Interface (UEFI) software, about which much has [already been written][10], on the main processor. The Coreboot group at Google has started a breathtakingly ambitious [Non-Extensible Reduced Firmware][11] (NERF) project that aims to replace not only UEFI but early Linux userspace components such as systemd. While we await the outcome of these new efforts, Linux users may now purchase laptops from Purism, System76, or Dell [with IME disabled][12], plus we can hope for laptops [with ARM 64-bit processors][13].
+
+#### Bootloaders
+
+Besides starting buggy spyware, what function does early boot firmware serve? The job of a bootloader is to make available to a newly powered processor the resources it needs to run a general-purpose operating system like Linux. At power-on, there not only is no virtual memory, but no DRAM until its controller is brought up. A bootloader then turns on power supplies and scans buses and interfaces in order to locate the kernel image and the root filesystem. Popular bootloaders like U-Boot and GRUB have support for familiar interfaces like USB, PCI, and NFS, as well as more embedded-specific devices like NOR- and NAND-flash. Bootloaders also interact with hardware security devices like [Trusted Platform Modules][14] (TPMs) to establish a chain of trust from earliest boot.
+
+![Running the U-boot bootloader][16]
+
+Running the U-boot bootloader in the sandbox on the build host.
+
+The open source, widely used [U-Boot][17] bootloader is supported on systems ranging from Raspberry Pi to Nintendo devices to automotive boards to Chromebooks. There is no syslog, and when things go sideways, often not even any console output. To facilitate debugging, the U-Boot team offers a sandbox in which patches can be tested on the build host, or even in a nightly Continuous Integration system. Playing with U-Boot's sandbox is relatively simple on a system where common development tools like Git and the GNU Compiler Collection (GCC) are installed:
+```
+$# git clone git://git.denx.de/u-boot; cd u-boot
+$# make ARCH=sandbox defconfig
+$# make; ./u-boot
+=> printenv
+=> help
+```
+
+That's it: you're running U-Boot on x86_64 and can test tricky features like [mock storage device][2] repartitioning, TPM-based secret-key manipulation, and hotplug of USB devices. The U-Boot sandbox can even be single-stepped under the GDB debugger. Development using the sandbox is 10x faster than testing by reflashing the bootloader onto a board, and a "bricked" sandbox can be recovered with Ctrl+C.
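+
+For example, a minimal sketch of such a GDB session might look like the following; the breakpoint is illustrative, and any symbol present in the sandbox binary will do:
+```
+$# gdb ./u-boot
+(gdb) break board_init_f
+(gdb) run
+(gdb) next
+```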
+
+### Starting up the kernel
+
+#### Provisioning a booting kernel
+
+Upon completion of its tasks, the bootloader jumps to kernel code that it has loaded into main memory and begins executing it, passing along any command-line options that the user has specified. What kind of program is the kernel? `file /boot/vmlinuz` indicates that it is a bzImage, meaning a big compressed one. The Linux source tree contains an [extract-vmlinux tool][18] that can be used to uncompress the file:
+```
+$# scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > vmlinux
+$# file vmlinux
+vmlinux: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically
+linked, stripped
+```
+
+The kernel is an [Executable and Linking Format][19] (ELF) binary, like Linux userspace programs. That means we can use commands from the `binutils` package like `readelf` to inspect it. Compare the output of, for example:
+```
+$# readelf -S /bin/date
+$# readelf -S vmlinux
+```
+
+The list of sections in the binaries is largely the same.
+
+So the kernel must start up something like other Linux ELF binaries ... but how do userspace programs actually start? In the `main()` function, right? Not precisely.
+
+Before the `main()` function can run, programs need an execution context that includes heap and stack memory plus file descriptors for `stdin`, `stdout`, and `stderr`. Userspace programs obtain these resources from the standard library, which is `glibc` on most Linux systems. Consider the following:
+```
+$# file /bin/date
+/bin/date: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically
+linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32,
+BuildID[sha1]=14e8563676febeb06d701dbee35d225c5a8e565a,
+stripped
+```
+
+ELF binaries have an interpreter, just as Bash and Python scripts do, but the interpreter need not be specified with `#!` as in scripts, as ELF is Linux's native format. The ELF interpreter [provisions a binary][20] with the needed resources by calling `_start()`, a function available from the `glibc` source package that can be [inspected via GDB][21]. The kernel obviously has no interpreter and must provision itself, but how?
+
+Inspecting the kernel's startup with GDB gives the answer. First install the debug package for the kernel that contains an unstripped version of `vmlinux`, for example `apt-get install linux-image-amd64-dbg`, or compile and install your own kernel from source, for example, by following instructions in the excellent [Debian Kernel Handbook][22]. `gdb vmlinux` followed by `info files` shows the ELF section `init.text`. List the start of program execution in `init.text` with `l *(address)`, where `address` is the hexadecimal start of `init.text`. GDB will indicate that the x86_64 kernel starts up in the kernel's file [arch/x86/kernel/head_64.S][23], where we find the assembly function `start_cpu0()` and code that explicitly creates a stack and decompresses the zImage before calling the x86_64 `start_kernel()` function. ARM 32-bit kernels have the similar [arch/arm/kernel/head.S][24]. `start_kernel()` is not architecture-specific, so the function lives in the kernel's [init/main.c][25]. `start_kernel()` is arguably Linux's true `main()` function.
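+
+As a sketch, the GDB session just described looks something like the following; the address is illustrative and should be replaced with whatever `info files` reports as the start of `init.text` for your kernel:
+```
+$# gdb vmlinux
+(gdb) info files
+(gdb) l *(0xffffffff82000000)
+```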
+
+### From start_kernel() to PID 1
+
+#### The kernel's hardware manifest: the device-tree and ACPI tables
+
+At boot, the kernel needs information about the hardware beyond the processor type for which it has been compiled. The instructions in the code are augmented by configuration data that is stored separately. There are two main methods of storing this data: [device-trees][26] and [ACPI tables][27]. The kernel learns what hardware it must manage at each boot by reading these files.
+
+For embedded devices, the device-tree is a manifest of installed hardware. The device-tree is simply a file that is compiled at the same time as kernel source and is typically located in `/boot` alongside `vmlinux`. To see what's in the binary device-tree on an ARM device, just use the `strings` command from the `binutils` package on a file whose name matches `/boot/*.dtb`, as `dtb` refers to a device-tree binary. Clearly the device-tree can be modified simply by editing the JSON-like files that compose it and rerunning the special `dtc` compiler that is provided with the kernel source. While the device-tree is a static file whose file path is typically passed to the kernel by the bootloader on the command line, a [device-tree overlay][28] facility has been added in recent years, where the kernel can dynamically load additional fragments in response to hotplug events after boot.
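+
+As an illustration, round-tripping a device-tree blob through the `dtc` compiler looks roughly like the following; the filenames are placeholders for whichever `.dtb` your board actually uses:
+```
+$# dtc -I dtb -O dts -o board.dts /boot/board.dtb    # decompile the blob to source
+$# $EDITOR board.dts                                 # edit the hardware description
+$# dtc -I dts -O dtb -o board-new.dtb board.dts      # recompile to a new blob
+```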
+
+x86-family and many enterprise-grade ARM64 devices make use of the alternative Advanced Configuration and Power Interface ([ACPI][27]) mechanism. In contrast to the device-tree, the ACPI information is stored in the `/sys/firmware/acpi/tables` virtual filesystem that is created by the kernel at boot by accessing onboard ROM. The easy way to read the ACPI tables is with the `acpidump` command from the `acpica-tools` package. Here's an example:
+
+![ACPI tables on Lenovo laptops][30]
+
+
+ACPI tables on Lenovo laptops are all set for Windows 2001.
+
+Yes, your Linux system is ready for Windows 2001, should you care to install it. ACPI has both methods and data, unlike the device-tree, which is more of a hardware-description language. ACPI methods continue to be active post-boot. For example, starting the command `acpi_listen` (from the package `acpid`) and opening and closing the laptop lid will show that ACPI functionality is running all the time. While temporarily and dynamically [overwriting the ACPI tables][31] is possible, permanently changing them involves interacting with the BIOS menu at boot or reflashing the ROM. If you're going to that much trouble, perhaps you should just [install coreboot][32], the open source firmware replacement.
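+
+For reference, a minimal sketch of poking at ACPI from userspace, assuming the `acpica-tools` and `acpid` packages are installed:
+```
+$# acpidump > acpi-tables.txt    # dump all tables in hex/ASCII form
+$# acpidump -b                   # or write each table to its own binary file
+$# acpi_listen                   # now open and close the laptop lid and watch events appear
+```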
+
+#### From start_kernel() to userspace
+
+The code in [init/main.c][25] is surprisingly readable and, amusingly, still carries Linus Torvalds' original copyright from 1991-1992. The lines found in `dmesg | head` on a newly booted system originate mostly from this source file. The first CPU is registered with the system, global data structures are initialized, and the scheduler, interrupt handlers (IRQs), timers, and console are brought online one by one, in strict order. Until the function `timekeeping_init()` runs, all timestamps are zero. This part of the kernel initialization is synchronous, meaning that execution occurs in precisely one thread, and no function is executed until the last one completes and returns. As a result, the `dmesg` output will be completely reproducible, even between two systems, as long as they have the same device-tree or ACPI tables. Here Linux behaves like one of the RTOSes (real-time operating systems) that run on MCUs, for example QNX or VxWorks. The situation persists into the function `rest_init()`, which is called by `start_kernel()` at its termination.
+
+![Summary of early kernel boot process.][34]
+
+Summary of early kernel boot process.
+
+The rather humbly named `rest_init()` spawns a new thread that runs `kernel_init()`, which invokes `do_initcalls()`. Users can spy on `initcalls` in action by appending `initcall_debug` to the kernel command line, resulting in `dmesg` entries every time an `initcall` function runs. `initcalls` pass through a sequence of levels: early, core, postcore, arch, subsys, fs, device, and late. The most user-visible part of the `initcalls` is the probing and setup of the processor's peripherals: buses, network, storage, displays, etc., accompanied by the loading of their kernel modules. `rest_init()` also spawns a second thread on the boot processor that begins by running `cpu_idle()` while it waits for the scheduler to assign it work.
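+
+For example, after booting with `initcall_debug` appended to the kernel command line, something like the following lists the earliest initcalls as they ran:
+```
+$# cat /proc/cmdline             # confirm that initcall_debug was passed
+$# dmesg | grep initcall | head
+```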
+
+`kernel_init()` also [sets up symmetric multiprocessing][35] (SMP). With more recent kernels, find this point in `dmesg` output by looking for "Bringing up secondary CPUs..." SMP proceeds by "hotplugging" CPUs, meaning that it manages their lifecycle with a state machine that is notionally similar to that of devices like hotplugged USB sticks. The kernel's power-management system frequently takes individual cores offline, then wakes them as needed, so that the same CPU hotplug code is called over and over on a machine that is not busy. Observe the power-management system's invocation of CPU hotplug with the [BCC tool][36] called `offcputime.py`.
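+
+A sketch of that observation, assuming the BCC tools are installed; the path varies by distribution, and in a git checkout of BCC the script is `tools/offcputime.py`:
+```
+$# /usr/share/bcc/tools/offcputime 10 > offcpu-stacks.txt    # sample off-CPU stacks for 10 seconds
+```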
+
+Note that the code in `init/main.c` is nearly finished executing when `smp_init()` runs: The boot processor has completed most of the one-time initialization that the other cores need not repeat. Nonetheless, the per-CPU threads must be spawned for each core to manage interrupts (IRQs), workqueues, timers, and power events on each. For example, see the per-CPU threads that service softirqs and workqueues in action via the `ps -o psr` command.
+```
+$# ps -o pid,psr,comm $(pgrep ksoftirqd)
+  PID PSR COMMAND
+    7   0 ksoftirqd/0
+   16   1 ksoftirqd/1
+   22   2 ksoftirqd/2
+   28   3 ksoftirqd/3
+
+$# ps -o pid,psr,comm $(pgrep kworker)
+  PID PSR COMMAND
+    4   0 kworker/0:0H
+   18   1 kworker/1:0H
+   24   2 kworker/2:0H
+   30   3 kworker/3:0H
+[ . . . ]
+```
+
+where the PSR field stands for "processor." Each core must also host its own timers and `cpuhp` hotplug handlers.
+
+How is it, finally, that userspace starts? Near its end, `kernel_init()` looks for an `initrd` that can execute the `init` process on its behalf. If it finds none, the kernel directly executes `init` itself. Why then might one want an `initrd`?
+
+#### Early userspace: who ordered the initrd?
+
+Besides the device-tree, another file path that is optionally provided to the kernel at boot is that of the `initrd`. The `initrd` often lives in `/boot` alongside the bzImage file vmlinuz on x86, or alongside the similar uImage and device-tree for ARM. List the contents of the `initrd` with the `lsinitramfs` tool that is part of the `initramfs-tools-core` package. Distro `initrd` schemes contain minimal `/bin`, `/sbin`, and `/etc` directories along with kernel modules, plus some files in `/scripts`. All of these should look pretty familiar, as the `initrd` for the most part is simply a minimal Linux root filesystem. The apparent similarity is a bit deceptive, as nearly all the executables in `/bin` and `/sbin` inside the ramdisk are symlinks to the [BusyBox binary][37], resulting in `/bin` and `/sbin` directories that are 10x smaller than glibc's.
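+
+For example, on a Debian-style system, something like this shows the first few entries:
+```
+$# lsinitramfs /boot/initrd.img-$(uname -r) | head
+```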
+
+Why bother to create an `initrd` if all it does is load some modules and then start `init` on the regular root filesystem? Consider an encrypted root filesystem. The decryption may rely on loading a kernel module that is stored in `/lib/modules` on the root filesystem ... and, unsurprisingly, in the `initrd` as well. The crypto module could be statically compiled into the kernel instead of loaded from a file, but there are various reasons for not wanting to do so. For example, statically compiling the kernel with modules could make it too large to fit on the available storage, or static compilation may violate the terms of a software license. Unsurprisingly, storage, network, and human input device (HID) drivers may also be present in the `initrd`--basically any code that is not part of the kernel proper that is needed to mount the root filesystem. The `initrd` is also a place where users can stash their own [custom ACPI][38] table code.
+
+![Rescue shell and a custom initrd.][40]
+
+Having some fun with the rescue shell and a custom `initrd`.
+
+`initrd`s are also great for testing filesystems and data-storage devices themselves. Stash these test tools in the `initrd` and run your tests from memory rather than from the object under test.
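+
+A sketch of that workflow on a Debian-style system, using the `unmkinitramfs` tool from the same `initramfs-tools-core` package to unpack the image for inspection or modification:
+```
+$# unmkinitramfs /boot/initrd.img-$(uname -r) /tmp/initrd
+$# ls /tmp/initrd
+```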
+
+At last, when `init` runs, the system is up! Since the secondary processors are now running, the machine has become the asynchronous, preemptible, unpredictable, high-performance creature we know and love. Indeed, `ps -o pid,psr,comm -p 1` is liable to show that userspace's `init` process is no longer running on the boot processor.
+
+### Summary
+
+The Linux boot process sounds forbidding, considering the number of different pieces of software that participate even on simple embedded devices. Looked at differently, the boot process is rather simple, since the bewildering complexity caused by features like preemption, RCU, and race conditions is absent during boot. Focusing on just the kernel and PID 1 overlooks the large amount of work that bootloaders and subsidiary processors may do in preparing the platform for the kernel to run. While the kernel is certainly unique among Linux programs, some insight into its structure can be gleaned by applying to it some of the same tools used to inspect other ELF binaries. Studying the boot process while it's working well arms system maintainers for failures when they come.
+
+To learn more, attend Alison Chaiken's talk, [Linux: The first second][41], at [linux.conf.au][42], which will be held January 22-26 in Sydney.
+
+Thanks to [Akkana Peck][43] for originally suggesting this topic and for many corrections.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/analyzing-linux-boot-process
+
+作者:[Alison Chaiken][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/don-watkins
+[1]:https://en.wikipedia.org/wiki/Initial_ramdisk
+[2]:https://github.com/chaiken/LCA2018-Demo-Code
+[3]:https://en.wikipedia.org/wiki/Wake-on-LAN
+[4]:https://lwn.net/Articles/630778/
+[5]:https://www.youtube.com/watch?v=iffTJ1vPCSo&index=65&list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
+[6]:https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00086&languageid=en-fr
+[7]:https://www.intel.com/content/www/us/en/support/articles/000025619/software.html
+[8]:https://github.com/corna/me_cleaner
+[9]:https://lwn.net/Articles/738649/
+[10]:https://lwn.net/Articles/699551/
+[11]:https://trmm.net/NERF
+[12]:https://www.extremetech.com/computing/259879-dell-now-shipping-laptops-intels-management-engine-disabled
+[13]:https://lwn.net/Articles/733837/
+[14]:https://linuxplumbersconf.org/2017/ocw/events/LPC2017/tracks/639
+[15]:/file/383501
+[16]:https://opensource.com/sites/default/files/u128651/linuxboot_1.png (Running the U-boot bootloader)
+[17]:http://www.denx.de/wiki/DULG/Manual
+[18]:https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux
+[19]:http://man7.org/linux/man-pages/man5/elf.5.html
+[20]:https://0xax.gitbooks.io/linux-insides/content/Misc/program_startup.html
+[21]:https://github.com/chaiken/LCA2018-Demo-Code/commit/e543d9812058f2dd65f6aed45b09dda886c5fd4e
+[22]:http://kernel-handbook.alioth.debian.org/
+[23]:https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/head_64.S
+[24]:https://github.com/torvalds/linux/blob/master/arch/arm/boot/compressed/head.S
+[25]:https://github.com/torvalds/linux/blob/master/init/main.c
+[26]:https://www.youtube.com/watch?v=m_NyYEBxfn8
+[27]:http://events.linuxfoundation.org/sites/events/files/slides/x86-platform.pdf
+[28]:http://lwn.net/Articles/616859/
+[29]:/file/383506
+[30]:https://opensource.com/sites/default/files/u128651/linuxboot_2.png (ACPI tables on Lenovo laptops)
+[31]:https://www.mjmwired.net/kernel/Documentation/acpi/method-customizing.txt
+[32]:https://www.coreboot.org/Supported_Motherboards
+[33]:/file/383511
+[34]:https://opensource.com/sites/default/files/u128651/linuxboot_3.png (Summary of early kernel boot process.)
+[35]:http://free-electrons.com/pub/conferences/2014/elc/clement-smp-bring-up-on-arm-soc
+[36]:http://www.brendangregg.com/ebpf.html
+[37]:https://www.busybox.net/
+[38]:https://www.mjmwired.net/kernel/Documentation/acpi/initrd_table_override.txt
+[39]:/file/383516
+[40]:https://opensource.com/sites/default/files/u128651/linuxboot_4.png (Rescue shell and a custom initrd.)
+[41]:https://rego.linux.conf.au/schedule/presentation/16/
+[42]:https://linux.conf.au/index.html
+[43]:http://shallowsky.com/
diff --git a/sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md b/sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md
new file mode 100644
index 0000000000..4ab7fea3f6
--- /dev/null
+++ b/sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md
@@ -0,0 +1,315 @@
+How To Create A Bootable Zorin OS USB Drive
+======
+![Zorin OS][17]
+
+### Introduction
+
+In this guide I will show you how to create a bootable Zorin OS USB Drive.
+
+To be able to follow this guide you will need the following:
+
+ * A blank USB drive
+ * An internet connection
+
+
+
+### What Is Zorin OS?
+
+Zorin OS is a Linux based operating system.
+
+If you are a Windows user you might wonder why you would bother with Zorin OS. If you are a Linux user then you might also wonder why you would use Zorin OS over other distributions such as Linux Mint or Ubuntu.
+
+If you are using an older version of Windows and you can't afford to upgrade to Windows 10, or your computer doesn't have the right specifications for running Windows 10, then Zorin OS provides a free (or cheap, depending on how much you choose to donate) upgrade path, allowing you to continue to use your computer in a much more secure environment.
+
+If your current operating system is Windows XP or Windows Vista then you might consider using Zorin OS Lite as opposed to Zorin OS Core.
+
+The features of Zorin OS Lite are generally the same as those of Zorin OS Core, but the applications installed and the desktop environment used for displaying menus, icons, and other Windows-style features take up much less memory and processing power.
+
+If you are running Windows 7 then your operating system is coming towards the end of its life. You could probably upgrade to Windows 10 but at a hefty price.
+
+Not everybody has the finances to pay for a new Windows license and not everybody has the money to buy a brand new computer.
+
+Zorin OS will help you extend the life of your computer, and you will still feel you are using a premium product, because you will be. The product with the highest price doesn't always provide the best value.
+
+Whilst we are talking about value for money, Zorin OS allows you to install the best free and open source software available and comes with a good selection of packages pre-installed.
+
+For the home user, using Zorin OS doesn't have to feel any different to running Windows. You can browse the web using the browser of your choice, you can listen to music and watch videos. There are mail clients and other productivity tools.
+
+Speaking of productivity, there is LibreOffice. LibreOffice has everything the average home user requires from an office suite, with a word processor, spreadsheet, and presentation package.
+
+If you want to run Windows software then you can use the pre-installed PlayOnLinux and WINE packages to install and run all manner of packages including Microsoft Office.
+
+By running Zorin OS you will get the extra security benefits of running a Linux based operating system.
+
+Are you fed up with Windows updates stalling your productivity? When Windows wants to install updates it requires a reboot and then a long wait whilst it proceeds to install update after update. Sometimes it even forces a reboot whilst you are busy working.
+
+Zorin OS is different. Updates download and install themselves whilst you are using the computer. You won't even need to know it is happening.
+
+Why Zorin over Mint or Ubuntu? Zorin is the happy stepping stone between Windows and Linux. It is Linux but you don't need to care that it is Linux. If you decide later on to move to something different then so be it but there really is no need.
+
+### The Zorin OS Website
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinwebsite1-678x381.png)
+
+The Zorin OS website is at [www.zorinos.com][18].
+
+The homepage of the Zorin OS website tells you everything you need to know.
+
+"Zorin OS is an alternative to Windows and macOX, designed to make your computer faster, more powerful and secure".
+
+There is nothing that tells you that Zorin OS is based on Linux. There is no need for Zorin to tell you that because even though Windows used to be heavily based on DOS you didn't need to know DOS commands to use it. Likewise you don't necessarily need to know Linux commands to use Zorin.
+
+If you scroll down the page you will see a slide show highlighting the way the desktop looks and feels under Zorin.
+
+The good thing is that you can customise the user interface so that if you prefer a Windows layout you can use a Windows style layout but if you prefer a Mac style layout you can go for that as well.
+
+Zorin OS is based on Ubuntu Linux and the website uses this fact to highlight that underneath it has a stable base and it highlights the security benefits provided by Linux.
+
+If you want to see which applications are available for Zorin, there is a link for that. Zorin also never sells your data and protects your privacy.
+
+### What Are The Different Versions Of Zorin OS
+
+#### Zorin OS Ultimate
+
+The ultimate edition takes the core edition and adds other features such as different layouts, more applications pre-installed and extra games.
+
+The ultimate edition comes at a price of 19 euros which is a bargain compared to other operating systems.
+
+#### Zorin OS Core
+
+The core version is the standard edition and comes with everything the average person could need from the outset.
+
+This is the version I will show you how to download and install in this guide.
+
+#### Zorin OS Lite
+
+Zorin OS Lite is also available in Ultimate and Core editions. Zorin OS Lite is perfect for older computers, and the main difference is the desktop environment used to display menus and handle screen elements such as icons and panels.
+
+Zorin OS Lite is less memory intensive than Zorin OS.
+
+#### Zorin OS Business
+
+Zorin OS Business comes with business applications installed as standard such as finance applications and office applications.
+
+### How To Get Zorin OS
+
+To download Zorin OS, visit the Zorin OS website ([www.zorinos.com][18]).
+
+To get the core version scroll past the Zorin Ultimate section until you get to the Zorin Core section.
+
+You will see a small pay panel which allows you to choose how much you wish to pay for Zorin Core with a purchase now button underneath.
+
+#### How To Pay For Zorin OS
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinwebsite1-678x381.png)
+
+You can choose from the three preset amounts or enter an amount of your choice in the "Custom" box.
+
+When you click "Purchase Zorin OS Core" the following window will appear:
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/payforzorin.png)
+
+You can now enter your email and credit card information.
+
+When you click the "pay" button a window will appear with a download link.
+
+#### How To Get Zorin OS For Free
+
+If you don't wish to pay anything at all you can enter zero (0) into the custom box. The button will change and will show the words "Download Zorin OS Core".
+
+#### How To Download Zorin OS
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/downloadzorin.png)
+
+Whether you have bought Zorin or have chosen to download for free, a window will appear with the option to download a 64 bit or 32 bit version of Zorin.
+
+Most modern computers are capable of running 64 bit operating systems but in order to check within Windows click the "start" button and type "system information".
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/systeminfo.png)
+
+Click on the "System Information" desktop app and halfway down the right panel you will see the words "system type". If you see the words "x64 based PC" then the system is capable of running 64-bit operating systems.
+
+If your computer is capable of running 64-bit operating systems click on the "Download 64 bit" button otherwise click on "Download 32 bit".
+
+The ISO image file for Zorin will now start to download to your computer.
+
+### How To Verify If The Zorin OS Download Is Valid
+
+It is important to check whether the download is valid for many reasons.
+
+If the file has only partially downloaded or there were interruptions whilst downloading and you had to resume then the image might not be perfect and it should be downloaded again.
+
+More importantly you should check the validity to make sure the version you downloaded is genuine and wasn't uploaded by a hacker.
+
+In order to check the validity of the ISO image you should download a piece of software called QuickHash for Windows from the QuickHash website.
+
+Click the "download" link and when the file has downloaded double click on it.
+
+Click on the relevant application file within the zip file. If you have a 32-bit system click "Quickhash-v2.8.4-32bit" or for a 64-bit system click "Quickhash-v2.8.4-64bit".
+
+Click on the "Run" button.
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinhash.png)
+
+Click the SHA256 radio button on the left side of the screen and then click on the file tab.
+
+Click "Select File" and navigate to the downloads folder.
+
+Choose the Zorin ISO image downloaded previously.
+
+A progress bar will now work out the hash value for the ISO image.
+
+To compare this with the valid checksums available for Zorin, visit the Zorin OS website and scroll down until you see the list of checksums, as follows:
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinhashcodes.png)
+
+Select the long list of scrambled characters next to the version of Zorin OS that you downloaded and press CTRL and C to copy.
+
+Go back to the Quickhash screen and paste the value into the "Expected hash value" box by pressing CTRL and V.
+
+You should see the words "Expected hash matches the computed file hash, OK".
+
+If the values do not match you will see the words "Expected hash DOES NOT match the computed file hash" and you should download the ISO image again.
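+
+Incidentally, if you happen to be checking the download from an existing Linux machine rather than from Windows, a one-line alternative to QuickHash is the sha256sum tool; adjust the path and filename to match the ISO you actually downloaded:
+```
+sha256sum ~/Downloads/Zorin-OS-*.iso
+```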
+
+### How To Create A Bootable Zorin OS USB Drive
+
+In order to be able to install Zorin you will need to install a piece of software called Etcher. You will also need a blank USB drive.
+
+You can download Etcher from the Etcher website.
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/downloadetcher.png)
+
+If you are using a 64 bit computer click on the "Download for Windows x64" link otherwise click on the little arrow and choose "Etcher for Windows x86 (32-bit) (Installer)".
+
+Insert the USB drive into your computer and double click on the "Etcher" setup executable file.
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/etcherlicense.png)
+
+When the license screen appears click "I Agree".
+
+Etcher should start automatically after the installation completes but if it doesn't you can press the Windows key or click the start button and search for "Etcher".
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/etcherscreen.png)
+
+Click on "Select Image" and select the "Zorin" ISO image downloaded previously.
+
+Click "Flash".
+
+Windows will ask for your permission to continue. Click "Yes" to accept.
+
+After a while a window will appear with the words "Flash Complete".
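+
+As an aside, if you are creating the drive from an existing Linux system instead of Windows, a rough equivalent of Etcher is the dd command. The /dev/sdX name below is a placeholder, and writing to the wrong device will destroy its contents, so double-check the device name first (for example with lsblk):
+```
+sudo dd if=~/Downloads/Zorin-OS-*.iso of=/dev/sdX bs=4M status=progress && sync
+```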
+
+### How To Buy A Zorin OS USB Drive
+
+If the above instructions seem too much like hard work then you can order a Zorin USB Drive by clicking one of the following links:
+
+* [Zorin OS Core – 32-bit DVD][1]
+
+* [Zorin OS Core – 64-bit DVD][2]
+
+* [Zorin OS Core – 16 gigabyte USB drive (32-bit)][3]
+
+* [Zorin OS Core – 32 gigabyte USB drive (32-bit)][4]
+
+* [Zorin OS Core – 64 gigabyte USB drive (32-bit)][5]
+
+* [Zorin OS Core – 16 gigabyte USB drive (64-bit)][6]
+
+* [Zorin OS Core – 32 gigabyte USB drive (64-bit)][7]
+
+* [Zorin OS Core – 64 gigabyte USB drive (64-bit)][8]
+
+* [Zorin OS Lite – 32-bit DVD][9]
+
+* [Zorin OS Lite – 64-bit DVD][10]
+
+* [Zorin OS Lite – 16 gigabyte USB drive (32-bit)][11]
+
+* [Zorin OS Lite – 32 gigabyte USB drive (32-bit)][12]
+
+* [Zorin OS Lite – 64 gigabyte USB drive (32-bit)][13]
+
+* [Zorin OS Lite – 16 gigabyte USB drive (64-bit)][14]
+
+* [Zorin OS Lite – 32 gigabyte USB drive (64-bit)][15]
+
+* [Zorin OS Lite – 64 gigabyte USB drive (64-bit)][16]
+
+
+### How To Boot Into Zorin OS Live
+
+On older computers simply insert the USB drive and restart the computer. The boot menu for Zorin should appear straight away.
+
+On modern computers insert the USB drive, restart the computer and before Windows loads press the appropriate function key to bring up the boot menu.
+
+The following list shows the key or keys you can press for the most popular computer manufacturers.
+
+ * Acer - Escape, F12, F9
+ * Asus - Escape, F8
+ * Compaq - Escape, F9
+ * Dell - F12
+ * Emachines - F12
+ * HP - Escape, F9
+ * Intel - F10
+ * Lenovo - F8, F10, F12
+ * Packard Bell - F8
+ * Samsung - Escape, F12
+ * Sony - F10, F11
+ * Toshiba - F12
+
+
+
+Check the manufacturer's website to find the key for your computer if it isn't listed or keep trying different function keys or the escape key.
+
+A screen will appear with the following three options:
+
+ 1. Try Zorin OS without Installing
+ 2. Install Zorin OS
+ 3. Check disc for defects
+
+
+
+Choose "Try Zorin OS without Installing" by pressing enter with that option selected.
+
+### Summary
+
+You can now try Zorin OS without damaging your current operating system.
+
+To get back to your original operating system reboot and remove the USB drive.
+
+### How To Remove Zorin OS From The USB Drive
+
+If you have decided that Zorin OS is not for you and you want to get the USB drive back into its pre-Zorin state follow this guide:
+
+[How To Fix A USB Drive After Linux Has Been Installed On It][19]
+
+--------------------------------------------------------------------------------
+
+via: http://dailylinuxuser.com/2018/01/how-to-create-a-bootable-zorin-os-usb-drive.html
+
+作者:[admin][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[1]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-install-live-dvd-32bit.html?affiliate=everydaylinuxuser
+[2]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-install-live-dvd-64bit.html?affiliate=everydaylinuxuser
+[3]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-16gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
+[4]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-32gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
+[5]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-64gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
+[6]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-16gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
+[7]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-32gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
+[8]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-64gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
+[9]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-install-live-dvd-32bit.html?affiliate=everydaylinuxuser
+[10]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-install-live-dvd-64bit.html?affiliate=everydaylinuxuser
+[11]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-16gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
+[12]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-32gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
+[13]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-64gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
+[14]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-16gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
+[15]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-32gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
+[16]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-64gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
+[17]:http://dailylinuxuser.com/wp-content/uploads/2018/01/zorindesktop-678x381.png (Zorin OS)
+[18]:http://www.zorinos.com
+[19]:http://dailylinuxuser.com/2016/04/how-to-fix-usb-drive-after-linux-has.html
diff --git a/sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md b/sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md
new file mode 100644
index 0000000000..eba7ce9c54
--- /dev/null
+++ b/sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md
@@ -0,0 +1,267 @@
+How to Install and Optimize Apache on Ubuntu
+======
+
+This is the beginning of our LAMP tutorial series: how to install the Apache web server on Ubuntu.
+
+These instructions should work on any Ubuntu-based distro, including Ubuntu 14.04, Ubuntu 16.04, [Ubuntu 18.04][1], and even non-LTS Ubuntu releases like 17.10. They were tested and written for Ubuntu 16.04.
+
+Apache (aka httpd) is the most popular and most widely used web server, so this should be useful for everyone.
+
+### Before we begin installing Apache
+
+Some requirements and notes before we begin:
+
+ * Apache may already be installed on your server, so check if it is first. You can do so with the "apachectl -V" command that outputs the Apache version you're using and some other information.
+ * You'll need an Ubuntu server. You can buy one from [Vultr][2], they're one of the [best and cheapest cloud hosting providers][3]. Their servers start from $2.5 per month.
+ * You'll need the root user or a user with sudo access. All commands below are executed as the root user, so we don't prepend 'sudo' to each command.
+ * You'll need [SSH enabled][4] if you use Ubuntu or an SSH client like [MobaXterm][5] if you use Windows.
+
+
+
+That's most of it. Let's move onto the installation.
+
+
+
+
+
+### Install Apache on Ubuntu
+
+The first thing you always need to do is update Ubuntu before you do anything else. You can do so by running:
+```
+apt-get update && apt-get upgrade
+```
+
+Next, to install Apache, run the following command:
+```
+apt-get install apache2
+```
+
+If you want to, you can also install the Apache documentation and some Apache utilities. You'll need the Apache utilities for some of the modules we'll install later.
+```
+apt-get install apache2-doc apache2-utils
+```
+
+**And that's it. You've successfully installed Apache.**
+
+You'll still need to configure it.
+
+### Configure and Optimize Apache on Ubuntu
+
+There are various configs you can do on Apache, but the main and most common ones are explained below.
+
+#### Check if Apache is running
+
+By default, Apache is configured to start automatically on boot, so you don't have to enable it. You can check if it's running and other relevant information with the following command:
+```
+systemctl status apache2
+```
+
+[![check if apache is running][6]][6]
+
+And you can check what version you're using with
+```
+apachectl -V
+```
+
+A simpler way of checking this is by visiting your server's IP address. If you get the default Apache page, then everything's working fine.
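+
+You can also perform this check from the command line; replace your_server_ip below with your server's actual IP address:
+```
+curl -I http://your_server_ip
+```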
+
+#### Update your firewall
+
+If you use a firewall (which you should), you'll probably need to update your firewall rules and allow access to the default ports. The most common firewall used on Ubuntu is UFW, so the instructions below are for UFW.
+
+To allow traffic through both the 80 (http) and 443 (https) ports, run the following command:
+```
+ufw allow 'Apache Full'
+```
+
+#### Install common Apache modules
+
+Some modules are frequently recommended and you should install them. We'll include instructions for the most common ones:
+
+##### Speed up your website with the PageSpeed module
+
+The PageSpeed module will optimize and speed up your Apache server automatically.
+
+First, go to the [PageSpeed download page][7] and choose the file you need. We're using a 64-bit Ubuntu server and we'll install the latest stable version. Download it using wget:
+```
+wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
+```
+
+Then, install it with the following commands:
+```
+dpkg -i mod-pagespeed-stable_current_amd64.deb
+apt-get -f install
+```
+
+Restart Apache for the changes to take effect:
+```
+systemctl restart apache2
+```
+
+##### Enable rewrites/redirects using the mod_rewrite module
+
+This module is used for rewrites (redirects), as the name suggests. You'll need it if you use WordPress or any other CMS for that matter. To install it, just run:
+```
+a2enmod rewrite
+```
+
+And restart Apache again. You may need some extra configurations depending on what CMS you're using, if any. Google it for specific instructions for your setup.
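+
+For instance, if your CMS ships its rewrite rules in an .htaccess file, you will typically also need to allow overrides for its document root. A minimal sketch, assuming the default /var/www/html document root (adjust the path, and place the block in your virtual host or Apache configuration):
+```
+<Directory /var/www/html>
+    AllowOverride All
+    Require all granted
+</Directory>
+```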
+
+##### Secure your Apache with the ModSecurity module
+
+ModSecurity is a module used for security, again, as the name suggests. It basically acts as a firewall, and it monitors your traffic. To install it, run the following command:
+```
+apt-get install libapache2-modsecurity
+```
+
+And restart Apache again:
+```
+systemctl restart apache2
+```
+
+ModSecurity comes with a default setup that's enough by itself, but if you want to extend it, you can use the [OWASP rule set][8].
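+
+Note that on Ubuntu the module starts out in detection-only mode. If you want it to actually block suspicious requests, a commonly used sketch, assuming the default package layout, is:
+```
+cp /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf
+sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/modsecurity/modsecurity.conf
+systemctl restart apache2
+```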
+
+##### Block DDoS attacks using the mod_evasive module
+
+You can use the mod_evasive module to block and prevent DDoS attacks on your server, though it's debatable how useful it is in preventing attacks. To install it, use the following command:
+```
+apt-get install libapache2-mod-evasive
+```
+
+By default, mod_evasive is disabled, to enable it, edit the following file:
+```
+nano /etc/apache2/mods-enabled/evasive.conf
+```
+
+And uncomment all the lines (remove #) and configure it per your requirements. You can leave everything as-is if you don't know what to edit.
+
+[![mod_evasive][9]][9]
+
+And create a log file:
+```
+mkdir /var/log/mod_evasive
+chown -R www-data:www-data /var/log/mod_evasive
+```
+
+That's it. Now restart Apache for the changes to take effect:
+```
+systemctl restart apache2
+```
+
+There are [additional modules][10] you can install and configure, but it's all up to you and the software you're using. They're usually not required. Even the 4 modules we included are not required. If a module is required for a specific application, then they'll probably note that.
+
+#### Optimize Apache with the Apache2Buddy script
+
+Apache2Buddy is a script that will automatically fine-tune your Apache configuration. The only thing you need to do is run the following command and the script does the rest automatically:
+```
+curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
+```
+
+You may need to install curl if you don't have it already installed. Use the following command to install curl:
+```
+apt-get install curl
+```
+
+#### Additional configurations
+
+There's some extra stuff you can do with Apache, but we'll leave them for another tutorial. Stuff like enabling http/2 support, turning off (or on) KeepAlive, tuning your Apache even more. You don't have to do any of this, but you can find tutorials online and do it if you can't wait for our tutorials.
+
+### Create your first website with Apache
+
+Now that we're done with all the tuning, let's move onto creating an actual website. Follow our instructions to create a simple HTML page and a virtual host that's going to run on Apache.
+
+The first thing you need to do is create a new directory for your website. Run the following command to do so:
+```
+mkdir -p /var/www/example.com/public_html
+```
+
+Of course, replace example.com with your desired domain. You can get a cheap domain name from [Namecheap][11].
+
+Don't forget to replace example.com in all of the commands below.
+
+Next, create a simple, static web page. Create the HTML file:
+```
+nano /var/www/example.com/public_html/index.html
+```
+
+And paste this:
+```
+<html>
+  <head>
+    <title>Simple Page</title>
+  </head>
+  <body>
+    If you're seeing this in your browser then everything works.
+  </body>
+</html>
+```
+
+Save and close the file.
+
+Configure the permissions of the directory:
+```
+chown -R www-data:www-data /var/www/example.com
+chmod -R og-r /var/www/example.com
+```
+
+Create a new virtual host for your site:
+```
+nano /etc/apache2/sites-available/example.com.conf
+```
+
+And paste the following:
+```
+<VirtualHost *:80>
+    ServerAdmin admin@example.com
+    ServerName example.com
+    ServerAlias www.example.com
+
+    DocumentRoot /var/www/example.com/public_html
+
+    ErrorLog ${APACHE_LOG_DIR}/error.log
+    CustomLog ${APACHE_LOG_DIR}/access.log combined
+</VirtualHost>
+```
+
+This is a basic virtual host. You may need a more advanced .conf file depending on your setup.
+
+Save and close the file after updating everything accordingly.
+
+Now, enable the virtual host with the following command:
+```
+a2ensite example.com.conf
+```
+
+And finally, restart Apache for the changes to take effect:
+```
+systemctl restart apache2
+```
+
+That's it. You're done. Now you can visit example.com and view your page.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://thishosting.rocks/how-to-install-optimize-apache-ubuntu/
+
+作者:[ThisHosting][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://thishosting.rocks
+[1]:https://thishosting.rocks/ubuntu-18-04-new-features-release-date/
+[2]:https://thishosting.rocks/go/vultr/
+[3]:https://thishosting.rocks/cheap-cloud-hosting-providers-comparison/
+[4]:https://thishosting.rocks/how-to-enable-ssh-on-ubuntu/
+[5]:https://mobaxterm.mobatek.net/
+[6]:https://thishosting.rocks/wp-content/uploads/2018/01/apache-running.jpg
+[7]:https://www.modpagespeed.com/doc/download
+[8]:https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project
+[9]:https://thishosting.rocks/wp-content/uploads/2018/01/mod_evasive.jpg
+[10]:https://httpd.apache.org/docs/2.4/mod/
+[11]:https://thishosting.rocks/neamcheap-review-cheap-domains-cool-names
+[12]:https://thishosting.rocks/wp-content/plugins/patron-button-and-widgets-by-codebard/images/become_a_patron_button.png
+[13]:https://www.patreon.com/thishostingrocks
diff --git a/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md b/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md
new file mode 100644
index 0000000000..7ddb17eb68
--- /dev/null
+++ b/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md
@@ -0,0 +1,225 @@
+How to Install and Use iostat on Ubuntu 16.04 LTS
+======
+
+iostat, also known as input/output statistics, is a popular Linux system monitoring tool that can be used to collect statistics from input and output devices. It allows users to identify performance issues of local disks, remote disks, and the system in general. iostat creates three types of reports: the CPU Utilization report, the Device Utilization report, and the Network Filesystem report.
+
+In this tutorial, we will learn how to install iostat on Ubuntu 16.04 and how to use it.
+
+### Prerequisite
+
+ * Ubuntu 16.04 desktop installed on your system.
+ * Non-root user with sudo privileges setup on your system
+
+
+
+### Install iostat
+
+iostat is provided by the sysstat package in Ubuntu 16.04. You can easily install it by running the following command:
+
+```
+sudo apt-get install sysstat -y
+```
+
+Once sysstat is installed, you can proceed to the next step.
+
+### iostat Basic Example
+
+Let's start by running the iostat command without any arguments. This will display information about the CPU usage and I/O statistics of your system:
+
+```
+iostat
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 22.67 0.52 6.99 1.88 0.00 67.94
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 15.15 449.15 119.01 771022 204292
+
+```
+
+In the above output, the first line displays the Linux kernel version and hostname. The next two lines display CPU statistics: the percentage of time spent in user and system code, the percentage of time the CPU waited for I/O, the percentage of time stolen from the virtual CPU, and the percentage of time the CPU was idle. The last two lines display the device utilization report: the number of transfers per second, the kilobytes read and written per second, and the total kilobytes read and written.
+
+By default, iostat stamps the report with the current date. If you want it to display the current time as well, run the following command:
+
+```
+iostat -t
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+Saturday 16 December 2017 09:44:55 IST
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.37 0.31 6.93 1.28 0.00 70.12
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 9.48 267.80 79.69 771022 229424
+
+```
+
+To check the version of the iostat, run the following command:
+
+```
+iostat -V
+```
+
+Output:
+```
+sysstat version 10.2.0
+(C) Sebastien Godard (sysstat orange.fr)
+
+```
+
+You can list all the options available with the iostat command by using the following command:
+
+```
+iostat --help
+```
+
+Output:
+```
+Usage: iostat [ options ] [ <interval> [ <count> ] ]
+Options are:
+[ -c ] [ -d ] [ -h ] [ -k | -m ] [ -N ] [ -t ] [ -V ] [ -x ] [ -y ] [ -z ]
+[ -j { ID | LABEL | PATH | UUID | ... } ]
+[ [ -T ] -g <group_name> ] [ -p [ <device> [,...] | ALL ] ]
+[ <device> [...] | ALL ]
+
+```
+
+### iostat Advanced Usage Examples
+
+If you want to view the device report just once, run the following command:
+
+```
+iostat -d
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 12.18 353.66 102.44 771022 223320
+
+```
+
+To view the device report continuously, every 5 seconds, 3 times:
+
+```
+iostat -d 5 3
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 11.77 340.71 98.95 771022 223928
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 2.00 0.00 8.00 0 40
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 0.60 0.00 3.20 0 16
+
+```
+
+If you want to view the statistics of specific devices, run the following command:
+
+```
+iostat -p sda
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.69 0.36 6.98 1.44 0.00 69.53
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 11.00 316.91 92.38 771022 224744
+sda1 0.07 0.27 0.00 664 0
+sda2 0.01 0.05 0.00 128 0
+sda3 0.07 0.27 0.00 648 0
+sda4 10.56 315.21 92.35 766877 224692
+sda5 0.12 0.48 0.02 1165 52
+sda6 0.07 0.32 0.00 776 0
+
+```
+
+You can also view the statistics of multiple devices with the following command:
+
+```
+iostat -p sda,sdb,sdc
+```
+
+If you want to display the device I/O statistics in MB/second, run the following command:
+
+```
+iostat -m
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.39 0.31 6.94 1.30 0.00 70.06
+
+Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
+sda 9.67 0.27 0.08 752 223
+
+```
+
+If you want to view the extended information for a specific partition (sda4), run the following command:
+
+```
+iostat -x sda4
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.26 0.28 6.87 1.19 0.00 70.39
+
+Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
+sda4 0.79 4.65 5.71 2.68 242.76 73.28 75.32 0.35 41.80 43.66 37.84 4.55 3.82
+
+```
+
+If you want to display only the CPU usage statistics, run the following command:
+
+```
+iostat -c
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.45 0.33 6.96 1.34 0.00 69.91
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-install-and-use-iostat-on-ubuntu-1604/
+
+作者:[Hitesh Jethva][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
diff --git a/sources/tech/20180116 Monitor your Kubernetes Cluster.md b/sources/tech/20180116 Monitor your Kubernetes Cluster.md
new file mode 100644
index 0000000000..f0ac585f6f
--- /dev/null
+++ b/sources/tech/20180116 Monitor your Kubernetes Cluster.md
@@ -0,0 +1,264 @@
+Monitor your Kubernetes Cluster
+======
+This article originally appeared on [Kevin Monroe's blog][1]
+
+Keeping an eye on logs and metrics is a necessary evil for cluster admins. The benefits are clear: metrics help you set reasonable performance goals, while log analysis can uncover issues that impact your workloads. The hard part, however, is getting a slew of applications to work together in a useful monitoring solution.
+
+In this post, I'll cover monitoring a Kubernetes cluster with [Graylog][2] (for logging) and [Prometheus][3] (for metrics). Of course that's not just wiring 3 things together. In fact, it'll end up looking like this:
+
+![][4]
+
+As you know, Kubernetes isn't just one thing -- it's a system of masters, workers, networking bits, etc(d). Similarly, Graylog comes with a supporting cast (apache2, mongodb, etc), as does Prometheus (telegraf, grafana, etc). Connecting the dots in a deployment like this may seem daunting, but the right tools can make all the difference.
+
+I'll walk through this using [conjure-up][5] and the [Canonical Distribution of Kubernetes][6] (CDK). I find the conjure-up interface really helpful for deploying big software, but I know some of you hate GUIs and TUIs and probably other UIs too. For those folks, I'll do the same deployment again from the command line.
+
+Before we jump in, note that Graylog and Prometheus will be deployed alongside Kubernetes and not in the cluster itself. Things like the Kubernetes Dashboard and Heapster are excellent sources of information from within a running cluster, but my objective is to provide a mechanism for log/metric analysis whether the cluster is running or not.
+
+### The Walk Through
+
+First things first, install conjure-up if you don't already have it. On Linux, that's simply:
+```
+sudo snap install conjure-up --classic
+```
+
+There's also a brew package for macOS users:
+```
+brew install conjure-up
+```
+
+You'll need at least version 2.5.2 to take advantage of the recent CDK spell additions, so be sure to `sudo snap refresh conjure-up` or `brew update && brew upgrade conjure-up` if you have an older version installed.
+
+Once installed, run it:
+```
+conjure-up
+```
+
+![][7]
+
+You'll be presented with a list of various spells. Select CDK and press `Enter`.
+
+![][8]
+
+At this point, you'll see additional components that are available for the CDK spell. We're interested in Graylog and Prometheus, so check both of those and hit `Continue`.
+
+You'll be guided through various cloud choices to determine where you want your cluster to live. After that, you'll see options for post-deployment steps, followed by a review screen that lets you see what is about to be deployed:
+
+![][9]
+
+In addition to the typical K8s-related applications (etcd, flannel, load-balancer, master, and workers), you'll see additional applications related to our logging and metric selections.
+
+The Graylog stack includes the following:
+
+ * apache2: reverse proxy for the graylog web interface
+ * elasticsearch: document database for the logs
+ * filebeat: forwards logs from K8s master/workers to graylog
+ * graylog: provides an api for log collection and an interface for analysis
+ * mongodb: database for graylog metadata
+
+
+
+The Prometheus stack includes the following:
+
+ * grafana: web interface for metric-related dashboards
+ * prometheus: metric collector and time series database
+ * telegraf: sends host metrics to prometheus
+
+
+
+You can fine-tune the deployment from this review screen, but the defaults will suit our needs. Click `Deploy all Remaining Applications` to get things going.
+
+The deployment will take a few minutes to settle as machines are brought online and applications are configured in your cloud. Once complete, conjure-up will show a summary screen that includes links to various interesting endpoints for you to browse:
+
+![][10]
+
+#### Exploring Logs
+
+Now that Graylog has been deployed and configured, let's take a look at some of the data we're gathering. By default, the filebeat application will send both syslog and container log events to graylog (that's `/var/log/*.log` and `/var/log/containers/*.log` from the kubernetes master and workers).
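+
+If you'd like to confirm (or later change) exactly which paths are being shipped, you can query the filebeat charm configuration. This assumes your juju client is pointed at the model that conjure-up created:
+```
+juju config filebeat logpath
+```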
+
+Grab the apache2 address and graylog admin password as follows:
+```
+juju status --format yaml apache2/0 | grep public-address
+ public-address:
+juju run-action --wait graylog/0 show-admin-password
+ admin-password:
+```
+
+Browse to the apache2 public address and log in with `admin` as the username and the graylog admin password you retrieved above. **Note:** if the interface is not immediately available, please wait as the reverse proxy configuration may take up to 5 minutes to complete.
+
+Once logged in, head to the `Sources` tab to get an overview of the logs collected from our K8s master and workers:
+
+![][11]
+
+Drill into those logs by clicking the `System / Inputs` tab and selecting `Show received messages` for the filebeat input:
+
+![][12]
+
+From here, you may want to play around with various filters or setup Graylog dashboards to help identify the events that are most important to you. Check out the [Graylog Dashboard][13] docs for details on customizing your view.
+
+#### Exploring Metrics
+
+Our deployment exposes two types of metrics through our grafana dashboards: system metrics, which cover things like cpu/memory/disk utilization for the K8s master and worker machines, and cluster metrics, which cover container-level data scraped from the K8s cAdvisor endpoints.
+
+Grab the grafana address and admin password as follows:
+```
+juju status --format yaml grafana/0 | grep public-address
+ public-address:
+juju run-action --wait grafana/0 get-admin-password
+ password:
+```
+
+Browse to port 3000 on the grafana public address and log in with `admin` as the username and the password you retrieved above. Once logged in, check out the cluster metric dashboard by clicking the `Home` drop-down box and selecting `Kubernetes Metrics (via Prometheus)`:
+
+![][14]
+
+We can also check out the system metrics of our K8s host machines by switching the drop-down box to `Node Metrics (via Telegraf)`:
+
+![][15]
+
+
+### The Other Way
+
+As alluded to in the intro, I prefer the wizard-y feel of conjure-up to guide me through complex software deployments like Kubernetes. Now that we've seen the conjure-up way, some of you may want to see a command line approach to achieve the same results. Still others may have deployed CDK previously and want to extend it with the Graylog/Prometheus components described above. Regardless of why you've read this far, I've got you covered.
+
+The tool that underpins conjure-up is [Juju][16]. Everything that the CDK spell did behind the scenes can be done on the command line with Juju. Let's step through how that works.
+
+**Starting From Scratch**
+
+If you're on Linux, install Juju like this:
+```
+sudo snap install juju --classic
+```
+
+For macOS, Juju is available from brew:
+```
+brew install juju
+```
+
+Now set up a controller for your preferred cloud. You may be prompted for any required cloud credentials:
+```
+juju bootstrap
+```
+
+We then need to deploy the base CDK bundle:
+```
+juju deploy canonical-kubernetes
+```
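+
+The bundle takes a while to settle; you can keep an eye on progress with:
+```
+juju status
+```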
+
+**Starting From CDK**
+
+With our Kubernetes cluster deployed, we need to add all the applications required for Graylog and Prometheus:
+```
+## deploy graylog-related applications
+juju deploy xenial/apache2
+juju deploy xenial/elasticsearch
+juju deploy xenial/filebeat
+juju deploy xenial/graylog
+juju deploy xenial/mongodb
+```
+```
+## deploy prometheus-related applications
+juju deploy xenial/grafana
+juju deploy xenial/prometheus
+juju deploy xenial/telegraf
+```
+
+Now that the software is deployed, connect them together so they can communicate:
+```
+## relate graylog applications
+juju relate apache2:reverseproxy graylog:website
+juju relate graylog:elasticsearch elasticsearch:client
+juju relate graylog:mongodb mongodb:database
+juju relate filebeat:beats-host kubernetes-master:juju-info
+juju relate filebeat:beats-host kubernetes-worker:juju-info
+```
+```
+## relate prometheus applications
+juju relate prometheus:grafana-source grafana:grafana-source
+juju relate telegraf:prometheus-client prometheus:target
+juju relate kubernetes-master:juju-info telegraf:juju-info
+juju relate kubernetes-worker:juju-info telegraf:juju-info
+```
+
+At this point, all the applications can communicate with each other, but we have a bit more configuration to do (e.g., setting up the apache2 reverse proxy, telling prometheus how to scrape k8s, importing our grafana dashboards, etc):
+```
+## configure graylog applications
+juju config apache2 enable_modules="headers proxy_html proxy_http"
+juju config apache2 vhost_http_template="$(base64 )"
+juju config elasticsearch firewall_enabled="false"
+juju config filebeat \
+ logpath="/var/log/*.log /var/log/containers/*.log"
+juju config filebeat logstash_hosts=":5044"
+juju config graylog elasticsearch_cluster_name=""
+```
+```
+## configure prometheus applications
+juju config prometheus scrape-jobs=""
+juju run-action --wait grafana/0 import-dashboard \
+ dashboard="$(base64 )"
+```
+
+Some of the above steps need values specific to your deployment. You can get these in the same way that conjure-up does (an example follows the list):
+
+ * fetch our sample [template][17] from github
+ * `juju run --unit graylog/0 'unit-get private-address'`
+ * `juju config elasticsearch cluster-name`
+ * fetch our sample [scraper][18] from github; [substitute][19] appropriate values for `[K8S_PASSWORD][20]` and `[K8S_API_ENDPOINT][21]`
+ * fetch our [host][22] and [k8s][23] dashboards from github
+
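+For example, to fill in the apache2 vhost template and point filebeat at graylog, something like the following works (a sketch: the template file name comes from the link above, and the graylog address will be specific to your deployment):
+```
+wget https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/graylog/steps/01_install-graylog/graylog-vhost.tmpl
+juju config apache2 vhost_http_template="$(base64 graylog-vhost.tmpl)"
+GRAYLOG_IP=$(juju run --unit graylog/0 'unit-get private-address')
+juju config filebeat logstash_hosts="${GRAYLOG_IP}:5044"
+```
+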
+Finally, you'll want to expose the apache2 and grafana applications to make their web interfaces accessible:
+```
+## expose relevant endpoints
+juju expose apache2
+juju expose grafana
+```
+
+Now that we have everything deployed, related, configured, and exposed, you can login and poke around using the same steps from the **Exploring Logs** and **Exploring Metrics** sections above.
+
+### The Wrap Up
+
+My goal here was to show you how to deploy a Kubernetes cluster with rich monitoring capabilities for logs and metrics. Whether you prefer a guided approach or command line steps, I hope it's clear that monitoring complex deployments doesn't have to be a pipe dream. The trick is to figure out how all the moving parts work, make them work together repeatably, and then break/fix/repeat for a while until everyone can use it.
+
+This is where tools like conjure-up and Juju really shine. Leveraging the expertise of contributors to this ecosystem makes it easy to manage big software. Start with a solid set of apps, customize as needed, and get back to work!
+
+Give these bits a try and let me know how it goes. You can find enthusiasts like me on Freenode IRC in **#conjure-up** and **#juju**. Thanks for reading!
+
+### About the author
+
+Kevin joined Canonical in 2014 with his focus set on modeling complex software. He found his niche on the Juju Big Software team where his mission is to capture operational knowledge of Big Data and Machine Learning applications into repeatable (and reliable!) solutions.
+
+--------------------------------------------------------------------------------
+
+via: https://insights.ubuntu.com/2018/01/16/monitor-your-kubernetes-cluster/
+
+作者:[Kevin Monroe][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://insights.ubuntu.com/author/kwmonroe/
+[1]:https://medium.com/@kwmonroe/monitor-your-kubernetes-cluster-a856d2603ec3
+[2]:https://www.graylog.org/
+[3]:https://prometheus.io/
+[4]:https://insights.ubuntu.com/wp-content/uploads/706b/1_TAA57DGVDpe9KHIzOirrBA.png
+[5]:https://conjure-up.io/
+[6]:https://jujucharms.com/canonical-kubernetes
+[7]:https://insights.ubuntu.com/wp-content/uploads/98fd/1_o0UmYzYkFiHIs2sBgj7G9A.png
+[8]:https://insights.ubuntu.com/wp-content/uploads/0351/1_pgVaO_ZlalrjvYd5pOMJMA.png
+[9]:https://insights.ubuntu.com/wp-content/uploads/9977/1_WXKxMlml2DWA5Kj6wW9oXQ.png
+[10]:https://insights.ubuntu.com/wp-content/uploads/8588/1_NWq7u6g6UAzyFxtbM-ipqg.png
+[11]:https://insights.ubuntu.com/wp-content/uploads/a1c3/1_hHK5mSrRJQi6A6u0yPSGOA.png
+[12]:https://insights.ubuntu.com/wp-content/uploads/937f/1_cP36lpmSwlsPXJyDUpFluQ.png
+[13]:http://docs.graylog.org/en/2.3/pages/dashboards.html
+[14]:https://insights.ubuntu.com/wp-content/uploads/9256/1_kskust3AOImIh18QxQPgRw.png
+[15]:https://insights.ubuntu.com/wp-content/uploads/2037/1_qJpjPOTGMQbjFY5-cZsYrQ.png
+[16]:https://jujucharms.com/
+[17]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/graylog/steps/01_install-graylog/graylog-vhost.tmpl
+[18]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/prometheus-scrape-k8s.yaml
+[19]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L25
+[20]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L10
+[21]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L11
+[22]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-telegraf.json
+[23]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-k8s.json
diff --git a/sources/tech/20180116 Why building a community is worth the extra effort.md b/sources/tech/20180116 Why building a community is worth the extra effort.md
new file mode 100644
index 0000000000..ec971e84eb
--- /dev/null
+++ b/sources/tech/20180116 Why building a community is worth the extra effort.md
@@ -0,0 +1,66 @@
+Why building a community is worth the extra effort
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_brandbalance.png?itok=XSQ1OU16)
+
+When we launched [Nethesis][1] in 2003, we were just system integrators. We only used existing open source projects. Our business model was clear: Add multiple forms of value to those projects: know-how, documentation for the Italian market, extra modules, professional support, and training courses. We gave back to upstream projects as well, through upstream code contributions and by participating in their communities.
+
+Times were different then. We couldn't use the term "open source" too loudly. People associated it with words like: "nerdy," "no value" and, worst of all, "free." Not too good for a business.
+
+On a Saturday in 2010, with pastries and espresso in hand, the Nethesis staff were discussing how to move things forward (hey, we like to eat and drink while we innovate!). In spite of the momentum working against us, we decided not to change course. In fact, we decided to push harder--to make open source, and an open way of working, a successful model for running a business.
+
+Over the years, we've proven that model's potential. And one thing has been key to our success: community.
+
+In this three-part series, I'll explain the important role community plays in an open organization's existence. I'll explore why an organization would want to build a community, and discuss how to build one--because I really do believe it's the best way to generate new innovations today.
+
+### The crazy idea
+
+Together with the Nethesis guys, we decided to build our own open source project: our own operating system, built on top of CentOS (because we didn't want to reinvent the wheel). We assumed that we had the experience, know-how, and workforce to achieve it. We felt brave.
+
+And we very much wanted to build an operating system called [NethServer][2] with one mission: making a sysadmin's life easier with open source. We knew we could create a Linux distribution for a server that would be more accessible, easier to adopt, and simpler to understand than anything currently offered.
+
+Above all, though, we decided to create a real, 100% open project with three primary rules:
+
+ * completely free to download,
+ * openly developed, and
+ * community-driven
+
+
+
+That last one is important. We were a company; we could have developed everything by ourselves. We would have been more effective (and made quicker decisions) if we'd done the work in-house. It would have been so simple, like any other company in Italy.
+
+But we were so deeply immersed in open source culture that we chose a different path.
+
+We really wanted as many people as possible around us, around the product, and around the company. We wanted as many perspectives on the work as possible. We realized: Alone, you can go fast--but if you want to go far, you need to go together.
+
+So we decided to build a community instead.
+
+### What next?
+
+We realized that creating a community has so many benefits. For example, if the people who use your product are really involved in the project, they will provide feedback and use cases, write documentation, catch bugs, compare with other products, suggest features, and contribute to development. All of this generates innovations, attracts contributors and customers, and expands your product's user base.
+
+But quickly the question arose: How could we build a community? We didn't know how to achieve that. We'd participated in many communities, but we'd never built one.
+
+We were good with code--not with people. And we were a company, an organization with very specific priorities. So how were we going to build a community and foster good relationships between the company and the community itself?
+
+We did the first thing you have to do: study. We learned from experts, blogs, and lots of books. We experimented. We failed many times, collected data from the outcomes, and tested again.
+
+Eventually we learned the golden rule of community management: There is no golden rule of community management.
+
+People are too complex and communities are too different to have one rule "to rule them all."
+
+One thing I can say, however, is that a healthy relationship between a community and a company is always a process of give and take. In my next article, I'll discuss what your organization should expect to give if it wants a flourishing and innovative community.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/18/1/why-build-community-1
+
+作者:[Alessio Fattorini][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/alefattorini
+[1]:http://www.nethesis.it/
+[2]:http://www.nethserver.org/
diff --git a/sources/tech/20180117 Avoiding Server Disaster.md b/sources/tech/20180117 Avoiding Server Disaster.md
new file mode 100644
index 0000000000..cb88fe20d9
--- /dev/null
+++ b/sources/tech/20180117 Avoiding Server Disaster.md
@@ -0,0 +1,125 @@
+Avoiding Server Disaster
+======
+
+Worried that your server will go down? You should be. Here are some disaster-planning tips for server owners.
+
+If you own a car or a house, you almost certainly have insurance. Insurance seems like a huge waste of money. You pay it every year and make sure that you get the best possible price for the best possible coverage, and then you hope you never need to use the insurance. Insurance seems like a really bad deal—until you have a disaster and realize that had it not been for the insurance, you might have been in financial ruin.
+
+Unfortunately, disasters and mishaps are a fact of life in the computer industry. And so, just as you pay insurance and hope never to have to use it, you also need to take time to ensure the safety and reliability of your systems—not because you want disasters to happen, or even expect them to occur, but rather because you have to.
+
+If your website is an online brochure for your company and then goes down for a few hours or even days, it'll be embarrassing and annoying, but not financially painful. But, if your website is your business, when your site goes down, you're losing money. If that's the case, it's crucial to ensure that your server and software are not only unlikely to go down, but also easily recoverable if and when that happens.
+
+Why am I writing about this subject? Well, let's just say that this particular problem hit close to home for me, just before I started to write this article. After years of helping clients around the world to ensure the reliability of their systems, I made the mistake of not being as thorough with my own. ("The shoemaker's children go barefoot", as the saying goes.) This means that just after launching my new online product for Python developers, a seemingly trivial upgrade turned into a disaster. The precautions I put in place, it turns out, weren't quite enough—and as I write this, I'm still putting my web server together. I'll survive, as will my server and business, but this has been a painful and important lesson—one that I'll do almost anything to avoid repeating in the future.
+
+So in this article, I describe a number of techniques I've used to keep servers safe and sound through the years, and to reduce the chances of a complete meltdown. You can think of these techniques as insurance for your server, so that even if something does go wrong, you'll be able to recover fairly quickly.
+
+I should note that most of the advice here assumes no redundancy in your architecture—that is, a single web server and (at most) a single database server. If you can afford to have a bunch of servers of each type, these sorts of problems tend to be much less frequent. However, that doesn't mean they go away entirely. Besides, although people like to talk about heavy-duty web applications that require massive iron in order to run, the fact is that many businesses run on small, one- and two-computer servers. Moreover, those businesses don't need more than that; the ROI (return on investment) they'll get from additional servers cannot be justified. However, the ROI from a good backup and recovery plan is huge, and thus worth the investment.
+
+### The Parts of a Web Application
+
+Before I can talk about disaster preparation and recovery, it's important to consider the different parts of a web application and what those various parts mean for your planning.
+
+For many years, my website was trivially small and simple. Even if it contained some simple programs, those generally were used for sending email or for dynamically displaying different assets to visitors. The entire site consisted of some static HTML, images, JavaScript and CSS. No database or other excitement was necessary.
+
+At the other end of the spectrum, many people have full-blown web applications, sitting on multiple servers, with one or more databases and caches, as well as HTTP servers with extensively edited configuration files.
+
+But even when considering those two extremes, you can see that a web application consists of only a few parts:
+
+* The application software itself.
+
+* Static assets for that application.
+
+* Configuration file(s) for the HTTP server(s).
+
+* Database configuration files.
+
+* Database schema and contents.
+
+Assuming that you're using a high-level language, such as Python, Ruby or JavaScript, everything in this list either is a file or can be turned into one. (All databases make it possible to "dump" their contents onto disk, into a format that then can be loaded back into the database server.)
+
+Consider a site containing only application software, static assets and configuration files. (In other words, no database is involved.) In many cases, such a site can be backed up reliably in Git. Indeed, I prefer to keep my sites in Git, backed up on a commercial hosting service, such as GitHub or Bitbucket, and then deployed using a system like Capistrano.
+
+In other words, you develop the site on your own development machine. Whenever you are happy with a change that you've made, you commit the change to Git (on your local machine) and then do a git push to your central repository. In order to deploy your application, you then use Capistrano to do a cap deploy, which reads the data from the central repository, puts it into the appropriate place on the server's filesystem, and you're good to go.
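+
+As a concrete sketch of that loop (assuming Capistrano is already configured for the project and your branch is master):
+
+```
+# on your development machine
+git add -A
+git commit -m "Describe the change"
+git push origin master
+
+# deploy the pushed code to the server
+cap deploy
+```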
+
+This system keeps you safe in a few different ways. The code itself is located in at least three locations: your development machine, the server and the repository. And those central repositories tend to be fairly reliable, if only because it's in the financial interest of the hosting company to ensure that things are reliable.
+
+I should add that in such a case, you also should include the HTTP server's configuration files in your Git repository. Those files aren't likely to change very often, but I can tell you from experience, if you're recovering from a crisis, the last thing you want to think about is how your Apache configuration files should look. Copying those files into your Git repository will work just fine.
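+
+One way to do that (the paths here are illustrative; adjust them for your distribution and site):
+
+```
+mkdir -p config/apache
+cp /etc/apache2/apache2.conf /etc/apache2/sites-available/mysite.conf config/apache/
+git add config/apache
+git commit -m "Track the Apache configuration alongside the code"
+```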
+
+### Backing Up Databases
+
+You could argue that the difference between a "website" and a "web application" is a database. Databases have long powered the back ends of many web applications and for good reason—they allow you to store and retrieve data reliably and flexibly. The power that modern open-source databases provide was unthinkable just a decade or two ago, and there's no reason to think that they'll be any less reliable in the future.
+
+And yet, just because your database is pretty reliable doesn't mean that it won't have problems. This means you're going to want to keep a snapshot ("dump") of the database's contents around, in case the database server corrupts information, and you need to roll back to a previous version.
+
+My favorite solution for such a problem is to dump the database on a regular basis, preferably hourly. Here's a shell script I've used, in one form or another, for creating such regular database dumps:
+
+```
+
+#!/bin/sh
+
+BACKUP_ROOT="/home/database-backups/"
+YEAR=`/bin/date +'%Y'`
+MONTH=`/bin/date +'%m'`
+DAY=`/bin/date +'%d'`
+
+DIRECTORY="$BACKUP_ROOT/$YEAR/$MONTH/$DAY"
+USERNAME=dbuser
+DATABASE=dbname
+HOST=localhost
+PORT=3306
+
+/bin/mkdir -p $DIRECTORY
+
+/usr/bin/mysqldump -h $HOST --databases $DATABASE -u $USERNAME \
+    | /bin/gzip --best --verbose \
+    > $DIRECTORY/$DATABASE-dump.gz
+
+```
+
+The above shell script starts off by defining a bunch of variables, from the directory in which I want to store the backups, to the parts of the date (stored in $YEAR, $MONTH and $DAY). This is so I can have a separate directory for each day of the month. I could, of course, go further and have separate directories for each hour, but I've found that I rarely need more than one backup from a day.
+
+Once I have defined those variables, I then use the mkdir command to create a new directory. The -p option tells mkdir that if necessary, it should create all of the directories it needs such that the entire path will exist.
+
+Finally, I then run the database's "dump" command. In this particular case, I'm using MySQL, so I'm using the mysqldump command. The output from this command is a stream of SQL that can be used to re-create the database. I thus take the output from mysqldump and pipe it into gzip, which compresses the output file. Finally, the resulting dumpfile is placed, in compressed form, inside the daily backup directory.
+
+Depending on the size of your database and the amount of disk space you have on hand, you'll have to decide just how often you want to run dumps and how often you want to clean out old ones. I know from experience that dumping every hour can cause some load problems. On one virtual machine I've used, the overall administration team was unhappy that I was dumping and compressing every hour, which they saw as an unnecessary use of system resources.
+
+If you're worried your system will run out of disk space, you might well want to run a space-checking program that'll alert you when the filesystem is low on free space. In addition, you can run a cron job that uses find to erase all dumpfiles from before a certain cutoff date. I'm always a bit nervous about programs that automatically erase backups, so I generally prefer not to do this. Rather, I run a program that warns me if the disk usage is going above 85% (which is usually low enough to ensure that I can fix the problem in time, even if I'm on a long flight). Then I can go in and remove the problematic files by hand.
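+
+A minimal version of such a warning script might look like this (the threshold, path and mail address are illustrative, and it assumes a working local mail command):
+
+```
+#!/bin/sh
+
+# Warn by email when the filesystem holding the backups crosses 85% usage
+THRESHOLD=85
+USAGE=`df -P /home/database-backups | tail -1 | awk '{print $5}' | tr -d '%'`
+
+if [ "$USAGE" -gt "$THRESHOLD" ]; then
+    echo "Backup filesystem is at ${USAGE}% capacity" | mail -s "Disk space warning" you@example.com
+fi
+```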
+
+When you back up your database, you should be sure to back up the configuration for that database as well. The database schema and data, which are part of the dumpfile, are certainly important. However, if you find yourself having to re-create your server from scratch, you'll want to know precisely how you configured the database server, with a particular emphasis on the filesystem configuration and memory allocations. I tend to use PostgreSQL for most of my work, and although postgresql.conf is simple to understand and configure, I still like to keep it around with my dumpfiles.
+
+Another crucial thing to do is to check your database dumps occasionally to be sure that they are working the way you want. It turns out that the backups I thought I was making weren't actually happening, in no small part because I had modified the shell script and hadn't double-checked that it was creating useful backups. Occasionally pulling out one of your dumpfiles and restoring it to a separate (and offline!) database to check its integrity is a good practice, both to ensure that the dump is working and that you remember how to restore it in the case of an emergency.
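+
+For example, with MySQL, a periodic spot check might look like the following. (The paths, database and table names are illustrative; note that a dump created with --databases recreates the database under its original name, so run this against a separate, offline MySQL instance.)
+
+```
+# load the most recent dump into a scratch MySQL instance
+zcat /home/database-backups/2018/01/17/dbname-dump.gz | mysql -u dbuser -p
+
+# then sanity-check that the data is really there
+mysql -u dbuser -p -e 'SELECT COUNT(*) FROM dbname.some_table;'
+```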
+
+### Storing Backups
+
+But wait. It might be great to have these backups, but what if the server goes down entirely? In the case of the code, I already described keeping it on more than one machine to ensure its integrity. By contrast, your database dumps live only on the server, so if the server fails, your database dumps will be inaccessible.
+
+This means you'll want to have your database dumps stored elsewhere, preferably automatically. How can you do that?
+
+There are a few relatively easy and inexpensive solutions to this problem. If you have two servers—ideally in separate physical locations—you can use rsync to copy the files from one to the other. Don't rsync the database's actual files, since those might get corrupted in transfer and aren't designed to be copied when the server is running. By contrast, the dumpfiles that you have created are more than able to go elsewhere. Setting up a remote server, with a user specifically for handling these backup transfers, shouldn't be too hard and will go a long way toward ensuring the safety of your data.
+
+I should note that using rsync in this way basically requires that you set up passwordless SSH, so that you can transfer without having to be physically present to enter the password.
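+
+A sketch of that setup (the host name, user and paths are examples, and the key is created without a passphrase so that cron can use it):
+
+```
+# one-time setup on the primary server
+ssh-keygen -t ed25519 -N '' -f ~/.ssh/backup_key
+ssh-copy-id -i ~/.ssh/backup_key.pub backup@offsite.example.com
+
+# then, from cron, mirror the dump directory to the other machine
+rsync -az -e "ssh -i ~/.ssh/backup_key" /home/database-backups/ \
+     backup@offsite.example.com:/srv/database-backups/
+```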
+
+Another possible solution is Amazon's Simple Storage Service (S3), which offers astonishing amounts of disk space at very low prices. I know that many companies use S3 as a simple (albeit slow) backup system. You can set up a cron job to run a program that copies the contents of a particular database dumpfile directory up to S3. The assumption here is that you're not ever going to use these backups, meaning that S3's slow searching and access will not be an issue once you're working on the server.
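+
+If you go the S3 route, the nightly copy itself can be as simple as this (the bucket name is illustrative, and it assumes the AWS CLI is installed and configured with credentials):
+
+```
+aws s3 sync /home/database-backups/ s3://example-db-backups/ \
+     --exclude "*" --include "*-dump.gz"
+```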
+
+Similarly, you might consider using Dropbox. Dropbox is best known for its desktop client, but it has a "headless", text-based client that can be used on Linux servers without a GUI connected. One nice advantage of Dropbox is that you can share a folder with any number of people, which means you can have Dropbox distribute your backup databases everywhere automatically, including to a number of people on your team. The backups arrive in their Dropbox folders, giving you several additional off-site copies without any extra work.
+
+Finally, if you're running a WordPress site, you might want to consider VaultPress, a for-pay backup system. I must admit that in the weeks before I took my server down with a database backup error, I kept seeing ads in WordPress for VaultPress. "Who would buy that?", I asked myself, thinking that I'm smart enough to do backups myself. Of course, after disaster occurred and my database was ruined, I realized that $30/year to back up all of my data is cheap, and I should have done it before.
+
+### Conclusion
+
+When it comes to your servers, think less like an optimistic programmer and more like an insurance agent. Perhaps disaster won't strike, but if it does, will you be able to recover? Making sure that even if your server is completely unavailable, you'll be able to bring up your program and any associated database is crucial.
+
+My preferred solution involves combining a Git repository for code and configuration files, distributed across several machines and services. For the databases, however, it's not enough to dump your database; you'll need to get that dump onto a separate machine, and preferably test the backup file on a regular basis. That way, even if things go wrong, you'll be able to get back up in no time.
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxjournal.com/content/avoiding-server-disaster
+
+作者:[Reuven M.Lerner][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxjournal.com/user/1000891
diff --git a/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md b/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md
new file mode 100644
index 0000000000..e1be9e3da2
--- /dev/null
+++ b/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md
@@ -0,0 +1,130 @@
+Linux tee Command Explained for Beginners (6 Examples)
+======
+
+There are times when you want to manually track output of a command and also simultaneously make sure the output is being written to a file so that you can refer to it later. If you are looking for a Linux tool which can do this for you, you'll be glad to know there exists a command **tee** that's built for this purpose.
+
+In this tutorial, we will discuss the basics of the tee command using some easy to understand examples. But before we do that, it's worth mentioning that all examples used in this article have been tested on Ubuntu 16.04 LTS.
+
+### Linux tee command
+
+The tee command basically reads from the standard input and writes to standard output and files. Following is the syntax of the command:
+
+```
+tee [OPTION]... [FILE]...
+```
+
+And here's how the man page explains it:
+```
+Copy standard input to each FILE, and also to standard output.
+```
+
+The following Q&A-styled examples should give you a better idea on how the command works.
+
+### Q1. How to use tee command in Linux?
+
+Suppose you are using the ping command for some reason.
+
+```
+ping google.com
+```
+
+[![How to use tee command in Linux][1]][2]
+
+And what you want, is that the output should also get written to a file in parallel. Then here's where you can use the tee command.
+
+```
+ping google.com | tee output.txt
+```
+
+The following screenshot shows the output was written to the 'output.txt' file along with being written on stdout.
+
+[![tee command output][3]][4]
+
+So that should clear the basic usage of tee.
+
+### Q2. How to make sure tee appends information in files?
+
+By default, the tee command overwrites information in a file when used again. However, if you want, you can change this behavior by using the -a command line option.
+
+```
+[command] | tee -a [file]
+```
+
+So basically, the -a option forces tee to append information to the file.
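+
+For example, to keep adding new ping output to the same file instead of overwriting it:
+
+```
+ping google.com | tee -a output.txt
+```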
+
+### Q3. How to make tee write to multiple files?
+
+That's pretty easy. You just have to mention their names.
+
+```
+[command] | tee [file1] [file2] [file3]
+```
+
+For example:
+
+```
+ping google.com | tee output1.txt output2.txt output3.txt
+```
+
+[![How to make tee write to multiple files][5]][6]
+
+### Q4. How to make tee redirect output of one command to another?
+
+You can not only use tee to simultaneously write output to files, but also to pass on the output as input to other commands. For example, the following command will not only store the filenames in 'output.txt' but also let you know - through wc - the number of entries in the output.txt file.
+
+```
+ls file* | tee output.txt | wc -l
+```
+
+[![How to make tee redirect output of one command to another][7]][8]
+
+### Q5. How to write to a file with elevated privileges using tee?
+
+Suppose you opened a file in the [Vim editor][9], made a lot of changes, and then when you tried saving those changes, you got an error that made you realize that it's a root-owned file, meaning you need to have sudo privileges to save these changes.
+
+[![How to write to a file with elevated privileges using tee][10]][11]
+
+In scenarios like these, you can use tee to elevate privileges on the go.
+
+```
+:w !sudo tee %
+```
+
+The aforementioned command will ask you for root password, and then let you save the changes.
+
+### Q6. How to make tee ignore interrupt?
+
+The -i command line option enables tee to ignore the interrupt signal (`SIGINT`), which is usually issued when you press the ctrl+c key combination.
+
+```
+[command] | tee -i [file]
+```
+
+This is useful when you want to kill the command with ctrl+c but want tee to exit gracefully.
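+
+For example:
+
+```
+ping google.com | tee -i output.txt
+```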
+
+### Conclusion
+
+You'll likely agree now that tee is an extremely useful command. We've discussed its basic usage as well as the majority of its command line options here. The tool doesn't have a steep learning curve, so just practice all these examples, and you should be good to go. For more information, head to the tool's [man page][12].
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-tee-command/
+
+作者:[Himanshu Arora][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/command-tutorial/ping-example.png
+[2]:https://www.howtoforge.com/images/command-tutorial/big/ping-example.png
+[3]:https://www.howtoforge.com/images/command-tutorial/ping-with-tee.png
+[4]:https://www.howtoforge.com/images/command-tutorial/big/ping-with-tee.png
+[5]:https://www.howtoforge.com/images/command-tutorial/tee-mult-files1.png
+[6]:https://www.howtoforge.com/images/command-tutorial/big/tee-mult-files1.png
+[7]:https://www.howtoforge.com/images/command-tutorial/tee-redirect-output.png
+[8]:https://www.howtoforge.com/images/command-tutorial/big/tee-redirect-output.png
+[9]:https://www.howtoforge.com/vim-basics
+[10]:https://www.howtoforge.com/images/command-tutorial/vim-write-error.png
+[11]:https://www.howtoforge.com/images/command-tutorial/big/vim-write-error.png
+[12]:https://linux.die.net/man/1/tee
diff --git a/translated/talk/20180111 AI and machine learning bias has dangerous implications.md b/translated/talk/20180111 AI and machine learning bias has dangerous implications.md
new file mode 100644
index 0000000000..3484b21163
--- /dev/null
+++ b/translated/talk/20180111 AI and machine learning bias has dangerous implications.md
@@ -0,0 +1,81 @@
+AI 和机器学习中暗含的算法偏见是怎样形成的,我们又能通过开源社区做些什么
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_goodbadugly.png?itok=ZxaimUWU)
+
+图片来源:opensource.com
+
+在我们的世界里,算法无处不在,偏见也是一样。从社会媒体新闻的提供到流式媒体服务的推荐到线上购物,计算机算法,尤其是机器学习算法,已经渗透到我们日常生活的每一个角落。至于偏见,我们只需要参考 2016 年美国大选就可以知道,偏见是怎样在明处与暗处影响着我们的社会。
+
+很难想像,我们经常忽略的一点是这二者的交集:计算机算法中存在的偏见。
+
+与我们大多数人所认为的相反,科技并不是客观的。 AI 算法和它们的决策程序是由它们的研发者塑造的,他们写入的代码,使用的“[训练][1]”数据还有他们对算法进行[应力测试][2] 的过程,都会影响这些算法今后的选择。这意味着研发者的价值观,偏见和人类缺陷都会反映在软件上。如果我只给实验室中的人脸识别算法提供白人的照片,当遇到不是白人照片时,它[不会认为照片中的是人类][3] 。这结论并不意味着 AI 是“愚蠢的”或是“天真的”,它显示的是训练数据的分布偏差:缺乏多种的脸部照片。这会引来非常严重的后果。
+
+这样的例子并不少。全美范围内的[州法院系统][4] 都使用“黑箱子”对罪犯进行宣判。由于训练数据的问题,[这些算法对黑人有偏见][5] ,他们对黑人罪犯会选择更长的服刑期,因此监狱中的种族差异会一直存在。而这些都发生在科技的客观性伪装下,这是“科学的”选择。
+
+美国联邦政府使用机器学习算法来计算福利性支出和各类政府补贴。[但这些算法中的信息][6],例如它们的创造者和训练信息,都很难找到。这增加了政府工作人员进行不平等补助金分发操作的几率。
+
+算法偏见情况还不止这些。从 Facebook 的新闻算法到医疗系统再到警方使用的相机,我们作为社会的一部分极有可能对这些算法输入各式各样的偏见,性别歧视,仇外思想,社会经济地位歧视,确认偏误等等。这些被输入了偏见的机器会大量生产分配,将种种社会偏见潜藏于科技客观性的面纱之下。
+
+这种状况绝对不能再继续下去了。
+
+在我们对人工智能进行不断开发研究的同时,需要降低它的开发速度,小心仔细地开发。算法偏见的危害已经足够大了。
+
+## 我们能怎样减少算法偏见?
+
+最好的方式是从算法训练的数据开始审查,根据 [Microsoft 的研究者][2] 所说,这方法很有效。
+
+数据分布本身就带有一定的偏见性。编程者手中的美国公民数据分布并不均衡,本地居民的数据多于移民者,富人的数据多于穷人,这是极有可能出现的情况。这种数据的不平均会使 AI 对我们的社会组成得出错误的结论。例如机器学习算法仅仅通过统计分析,就得出“大多数美国人都是富有的白人”这个结论。
+
+即使男性和女性的样本在训练数据中等量分布,也可能出现偏见的结果。如果训练数据中所有男性的职业都是 CEO,而所有女性的职业都是秘书(即使现实中男性 CEO 的数量要多于女性),AI 也可能得出女性天生不适合做 CEO 的结论。
+
+同样的,大量研究表明,用于执法部门的 AI 在检测新闻中出现的罪犯照片时,结果会 [惊人地偏向][7] 黑人及拉丁美洲裔居民。
+
+在训练数据中存在的偏见还有很多其他形式,不幸的是比这里提到的要多得多。但是训练数据只是审查方式的一种,通过“应力测验”找出人类存在的偏见也同样重要。
+
+如果提供一张印度人的照片,我们自己的相机能够识别吗?在两名同样水平的应聘者中,我们的 AI 是否会倾向于推荐住在市区的应聘者呢?对于情报中本地白人恐怖分子和伊拉克籍恐怖分子,反恐算法会怎样选择呢?急诊室的相机可以调出儿童的病历吗?
+
+这些对于 AI 来说是十分复杂的数据,但我们可以通过多项测试对它们进行定义和传达。
+
+## 为什么开源很适合这项任务?
+
+开源方法和开源技术都有着极大的潜力改变算法偏见。
+
+现代人工智能已经被开源软件占领,TensorFlow、IBM Watson 还有 [scikit-learn][8] 这类的程序包都是开源软件。开源社区已经证明它能够开发出强健的,经得住严酷测试的机器学习工具。同样的,我相信,开源社区也能开发出消除偏见的测试程序,并将其应用于这些软件中。
+
+调试工具如哥伦比亚大学和理海大学推出的 [DeepXplore][9],增强了 AI 应力测试的强度,同时提高了其操控性。还有 [麻省理工学院的计算机科学和人工智能实验室][10]完成的项目,它开发出敏捷快速的样机研究软件,这些应该会被开源社区采纳。
+
+开源技术也已经证明了其在审查和分类大组数据方面的能力。最明显的体现在开源工具在数据分析市场的占有率上(Weka , Rapid Miner 等等)。应当由开源社区来设计识别数据偏见的工具,已经在网上发布的大量训练数据组比如 [Kaggle][11]也应当使用这种技术进行识别筛选。
+
+开源方法本身十分适合消除偏见程序的设计。内部谈话,私人软件开发及非民主的决策制定引起了很多问题。开源社区能够进行软件公开的谈话,进行大众化,维持好与大众的关系,这对于处理以上问题是十分重要的。如果线上社团,组织和院校能够接受这些开源特质,那么由开源社区进行消除算法偏见的机器设计也会顺利很多。
+
+## 我们怎样才能够参与其中?
+
+教育是一个很重要的环节。我们身边有很多还没意识到算法偏见的人,但算法偏见在立法,社会公正,政策及更多领域产生的影响与他们息息相关。让这些人知道算法偏见是怎样形成的和它们带来的重要影响是很重要的,因为想要改变目前的局面,从我们自身做起是唯一的方法。
+
+对于我们中间那些与人工智能一起工作的人来说,这种沟通尤其重要。不论是人工智能的研发者,警方或是科研人员,当他们为今后设计人工智能时,应当格外意识到现今这种偏见存在的危险性,很明显,想要消除人工智能中存在的偏见,就要从意识到偏见的存在开始。
+
+最后,我们需要围绕 AI 伦理化建立并加强开源社区。不论是需要建立应力实验训练模型,软件工具,或是从千兆字节的训练数据中筛选,现在已经到了我们利用开源方法来应对数字化时代最大的威胁的时间了。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/how-open-source-can-fight-algorithmic-bias
+
+作者:[Justin Sherman][a]
+译者:[Valoniakim](https://github.com/Valoniakim)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/justinsherman
+[1]:https://www.crowdflower.com/what-is-training-data/
+[2]:https://medium.com/microsoft-design/how-to-recognize-exclusion-in-ai-ec2d6d89f850
+[3]:https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
+[4]:https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/
+[5]:https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
+[6]:https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012499
+[7]:https://www.hivlawandpolicy.org/sites/default/files/Race%20and%20Punishment-%20Racial%20Perceptions%20of%20Crime%20and%20Support%20for%20Punitive%20Policies%20%282014%29.pdf
+[8]:http://scikit-learn.org/stable/
+[9]:https://arxiv.org/pdf/1705.06640.pdf
+[10]:https://www.csail.mit.edu/research/understandable-deep-networks
+[11]:https://www.kaggle.com/datasets
diff --git a/translated/tech/20140210 Three steps to learning GDB.md b/translated/tech/20140210 Three steps to learning GDB.md
new file mode 100644
index 0000000000..5139321ac2
--- /dev/null
+++ b/translated/tech/20140210 Three steps to learning GDB.md
@@ -0,0 +1,108 @@
+# 三步上手GDB
+
+调试C程序,曾让我很困扰。然而当我之前在写我的[操作系统][2]时,我有很多的BUG需要调试。我很幸运的使用上了qemu模拟器,它允许我将调试器附加到我的操作系统。这个调试器就是`gdb`。
+
+我得解释一下,你可以先用`gdb`做一些小事情,因为我发现初学它的时候真的很容易混乱。我们接下来会在一个小程序中设置断点,查看内存。
+
+### 1. 设断点
+
+如果你曾经使用过调试器,那你可能已经会设置断点了。
+
+下面是一个我们要调试的程序(虽然没有任何Bug):
+
+```
+#include <stdio.h>
+void do_thing() {
+ printf("Hi!\n");
+}
+int main() {
+ do_thing();
+}
+
+```
+
+另存为 `hello.c`。我们可以使用 gdb 调试它,像这样:
+
+```
+bork@kiwi ~> gcc -g hello.c -o hello
+bork@kiwi ~> gdb ./hello
+```
+
+以上是带调试信息编译 `hello.c`(为了gdb可以更好工作),并且它会给我们醒目的提示符,就像这样:
+`(gdb)`
+
+我们可以使用`break`命令设置断点,然后使用`run`开始调试程序。
+
+```
+(gdb) break do_thing
+Breakpoint 1 at 0x4004f8
+(gdb) run
+Starting program: /home/bork/hello
+
+Breakpoint 1, 0x00000000004004f8 in do_thing ()
+```
+程序暂停在了`do_thing`开始的地方。
+
+我们可以通过`where`查看我们所在的调用栈。
+```
+(gdb) where
+#0 do_thing () at hello.c:3
+#1 0x08050cdb in main () at hello.c:6
+(gdb)
+```
+
+### 2. 阅读汇编代码
+
+使用`disassemble`命令,我们可以看到这个函数的汇编代码。棒极了。这是 x86 汇编代码。虽然我不是很懂它,但是`callq`这一行是`printf`函数调用。
+
+```
+(gdb) disassemble do_thing
+Dump of assembler code for function do_thing:
+ 0x00000000004004f4 <+0>: push %rbp
+ 0x00000000004004f5 <+1>: mov %rsp,%rbp
+=> 0x00000000004004f8 <+4>: mov $0x40060c,%edi
+ 0x00000000004004fd <+9>: callq 0x4003f0 <printf@plt>
+ 0x0000000000400502 <+14>: pop %rbp
+ 0x0000000000400503 <+15>: retq
+```
+
+你也可以使用`disassemble`的缩写`disas`。
+
+### 3. 查看内存
+
+当调试我的内核时,我使用`gdb`的主要原因是,以确保内存布局是如我所想的那样。检查内存的命令是`examine`,或者使用缩写`x`。我们将使用`x`。
+
+通过阅读上面的汇编代码,似乎`0x40060c`可能是我们所要打印的字符串地址。我们来试一下。
+
+```
+(gdb) x/s 0x40060c
+0x40060c: "Hi!"
+```
+
+的确是这样。`x/s`中`/s`部分,意思是“把它作为字符串展示”。我也可以“展示10个字符”,像这样:
+
+```
+(gdb) x/10c 0x40060c
+0x40060c: 72 'H' 105 'i' 33 '!' 0 '\000' 1 '\001' 27 '\033' 3 '\003' 59 ';'
+0x400614: 52 '4' 0 '\000'
+```
+
+你可以看到前四个字符是'H','i','!',和'\0',并且它们之后的是一些不相关的东西。
+
+我知道 gdb 还有很多其他的功能,但是我仍然不是很了解它,其中`x`和`break`已经让我受益很多。你还可以阅读 [documentation for examining memory][4]。
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/
+
+作者:[Julia Evans ][a]
+译者:[Torival](https://github.com/Torival)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://jvns.ca/categories/spytools
+[2]:http://jvns.ca/blog/categories/kernel
+[3]:https://twitter.com/mgedmin
+[4]:https://ftp.gnu.org/old-gnu/Manuals/gdb-5.1.1/html_chapter/gdb_9.html#SEC56
diff --git a/translated/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md b/translated/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md
new file mode 100644
index 0000000000..a22a94bae0
--- /dev/null
+++ b/translated/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md
@@ -0,0 +1,234 @@
+让我们做个简单的解释器(2)
+======
+
+在一本叫做 《高效思考的 5 要素》 的书中,作者 Burger 和 Starbird 讲述了一个关于他们如何研究 Tony Plog 的故事,他是一位举世闻名的小号演奏名家,为一些有才华的演奏者开创了一个大师班。这些学生一开始演奏复杂的乐曲,他们演奏的非常好。然后他们被要求演奏非常基础简单的乐曲。当他们演奏这些乐曲时,与之前所演奏的相比,听起来非常幼稚。在他们结束演奏后,老师也演奏了同样的乐曲,但是听上去非常娴熟。差别令人震惊。Tony 解释道,精通简单音符可以让人更好地掌握复杂的部分。这个例子很清晰 - 要成为真正的名家,必须要掌握简单基础的思想。
+
+故事中的例子明显不仅仅适用于音乐,而且适用于软件开发。这个故事告诉我们不要忽视繁琐工作中简单基础的概念的重要性,哪怕有时候这让人感觉是一种倒退。尽管熟练掌握一门工具或者框架非常重要,了解它们背后的原理也是极其重要的。正如 Ralph Waldo Emerson 所说:
+
+> “如果你只学习方法,你就会被方法束缚。但如果你知道原理,就可以发明自己的方法。”
+
+有鉴于此,让我们再次深入了解解释器和编译器。
+
+今天我会向你们展示一个全新的计算器,与 [第一部分][1] 相比,它可以做到:
+
+ 1. 处理输入字符串任意位置的空白符
+ 2. 识别输入字符串中的多位整数
+ 3. 做两个整数之间的减法(目前它仅能加减整数)
+
+
+新版本计算器的源代码在这里,它可以做到上述的所有事情:
+```
+# 标记类型
+# EOF (end-of-file 文件末尾) 标记是用来表示所有输入都解析完成
+INTEGER, PLUS, MINUS, EOF = 'INTEGER', 'PLUS', 'MINUS', 'EOF'
+
+
+class Token(object):
+ def __init__(self, type, value):
+ # token 类型: INTEGER, PLUS, MINUS, or EOF
+ self.type = type
+ # token 值: 非负整数值, '+', '-', 或无
+ self.value = value
+
+ def __str__(self):
+ """String representation of the class instance.
+
+ Examples:
+ Token(INTEGER, 3)
+ Token(PLUS '+')
+ """
+ return 'Token({type}, {value})'.format(
+ type=self.type,
+ value=repr(self.value)
+ )
+
+ def __repr__(self):
+ return self.__str__()
+
+
+class Interpreter(object):
+ def __init__(self, text):
+ # 客户端字符输入, 例如. "3 + 5", "12 - 5",
+ self.text = text
+ # self.pos 是 self.text 的索引
+ self.pos = 0
+ # 当前标记实例
+ self.current_token = None
+ self.current_char = self.text[self.pos]
+
+ def error(self):
+ raise Exception('Error parsing input')
+
+ def advance(self):
+ """Advance the 'pos' pointer and set the 'current_char' variable."""
+ self.pos += 1
+ if self.pos > len(self.text) - 1:
+ self.current_char = None # Indicates end of input
+ else:
+ self.current_char = self.text[self.pos]
+
+ def skip_whitespace(self):
+ while self.current_char is not None and self.current_char.isspace():
+ self.advance()
+
+ def integer(self):
+ """Return a (multidigit) integer consumed from the input."""
+ result = ''
+ while self.current_char is not None and self.current_char.isdigit():
+ result += self.current_char
+ self.advance()
+ return int(result)
+
+ def get_next_token(self):
+ """Lexical analyzer (also known as scanner or tokenizer)
+
+ This method is responsible for breaking a sentence
+ apart into tokens.
+ """
+ while self.current_char is not None:
+
+ if self.current_char.isspace():
+ self.skip_whitespace()
+ continue
+
+ if self.current_char.isdigit():
+ return Token(INTEGER, self.integer())
+
+ if self.current_char == '+':
+ self.advance()
+ return Token(PLUS, '+')
+
+ if self.current_char == '-':
+ self.advance()
+ return Token(MINUS, '-')
+
+ self.error()
+
+ return Token(EOF, None)
+
+ def eat(self, token_type):
+ # 将当前的标记类型与传入的标记类型作比较,如果他们相匹配,就
+ # “eat” 掉当前的标记并将下一个标记赋给 self.current_token,
+ # 否则抛出一个异常
+ if self.current_token.type == token_type:
+ self.current_token = self.get_next_token()
+ else:
+ self.error()
+
+ def expr(self):
+ """Parser / Interpreter
+
+ expr -> INTEGER PLUS INTEGER
+ expr -> INTEGER MINUS INTEGER
+ """
+ # 将输入中的第一个标记设置成当前标记
+ self.current_token = self.get_next_token()
+
+ # 当前标记应该是一个整数
+ left = self.current_token
+ self.eat(INTEGER)
+
+ # 当前标记应该是 ‘+’ 或 ‘-’
+ op = self.current_token
+ if op.type == PLUS:
+ self.eat(PLUS)
+ else:
+ self.eat(MINUS)
+
+ # 当前标记应该是一个整数
+ right = self.current_token
+ self.eat(INTEGER)
+ # 在上述函数调用后,self.current_token 就被设为 EOF 标记
+
+ # 这时要么是成功地找到 INTEGER PLUS INTEGER,要么是 INTEGER MINUS INTEGER
+ # 序列的标记,并且这个方法可以仅仅返回两个整数的加或减的结果,就能高效解释客户端的输入
+ if op.type == PLUS:
+ result = left.value + right.value
+ else:
+ result = left.value - right.value
+ return result
+
+
+def main():
+ while True:
+ try:
+ # To run under Python3 replace 'raw_input' call
+ # with 'input'
+ text = raw_input('calc> ')
+ except EOFError:
+ break
+ if not text:
+ continue
+ interpreter = Interpreter(text)
+ result = interpreter.expr()
+ print(result)
+
+
+if __name__ == '__main__':
+ main()
+```
+
+把上面的代码保存到 calc2.py 文件中,或者直接从 [GitHub][2] 上下载。试着运行它。看看它是不是正常工作:它应该能够处理输入中任意位置的空白符;能够接受多位的整数,并且能够对两个整数做减法和加法。
+
+这是我在自己的笔记本上运行的示例:
+```
+$ python calc2.py
+calc> 27 + 3
+30
+calc> 27 - 7
+20
+calc>
+```
+
+与 [第一部分][1] 的版本相比,主要的代码改动有:
+
+ 1. get_next_token 方法重写了很多。增加指针位置的逻辑之前是放在一个单独的方法中。
+ 2. 增加了一些方法:skip_whitespace 用于忽略空白字符,integer 用于处理输入字符的多位整数。
+ 3. expr 方法修改成了可以识别 “整数 -> 减号 -> 整数” 词组和 “整数 -> 加号 -> 整数” 词组。在成功识别相应的词组后,这个方法现在可以解释加法和减法。
+
+[第一部分][1] 中你学到了两个重要的概念,叫做 **标记** 和 **词法分析**。现在我想谈一谈 **词法**, **解析**,和**解析器**。
+
+你已经知道标记。但是为了让我详细的讨论标记,我需要谈一谈词法。词法是什么?**词法** 是一个标记中的字符序列。在下图中你可以看到一些关于标记的例子,还好这可以让它们之间的关系变得清晰:
+
+![][3]
+
+现在还记得我们的朋友,expr 方法吗?我之前说过,这是数学表达式实际被解释的地方。但是你要先识别这个表达式有哪些词组才能解释它,比如它是加法还是减法。expr 方法最重要的工作是:它从 get_next_token 方法中得到流,并找出标记流的结构然后解释已经识别出的词组,产生数学表达式的结果。
+
+在标记流中找出结构的过程,或者换种说法,识别标记流中的词组的过程就叫 **解析**。解释器或者编译器中执行这个任务的部分就叫做 **解析器**。
+
+现在你知道 expr 方法就是你的解释器的部分,**解析** 和 **解释** 都在这里发生 - expr 方法首先尝试识别(**解析**)标记流里的 “整数 -> 加法 -> 整数” 或者 “整数 -> 减法 -> 整数” 词组,成功识别后 (**解析**) 其中一个词组,这个方法就开始解释它,返回两个整数的和或差。
+
+又到了练习的时间。
+
+![][4]
+
+ 1. 扩展这个计算器,让它能够计算两个整数的乘法
+ 2. 扩展这个计算器,让它能够计算两个整数的除法
+ 3. 修改代码,让它能够解释包含了任意数量的加法和减法的表达式,比如 “9 - 5 + 3 + 11”
+
+
+
+**检验你的理解:**
+
+ 1. 词法是什么?
+ 2. 找出标记流结构的过程叫什么,或者换种说法,识别标记流中一个词组的过程叫什么?
+ 3. 解释器(编译器)执行解析的部分叫什么?
+
+
+希望你喜欢今天的内容。在该系列的下一篇文章里你就能扩展计算器从而处理更多复杂的算术表达式。敬请期待。
+
+--------------------------------------------------------------------------------
+
+via: https://ruslanspivak.com/lsbasi-part2/
+
+作者:[Ruslan Spivak][a]
+译者:[BriFuture](https://github.com/BriFuture)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://ruslanspivak.com
+[1]:http://ruslanspivak.com/lsbasi-part1/ (Part 1)
+[2]:https://github.com/rspivak/lsbasi/blob/master/part2/calc2.py
+[3]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_lexemes.png
+[4]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_exercises.png
diff --git a/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md b/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md
deleted file mode 100644
index 29a19792fa..0000000000
--- a/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md
+++ /dev/null
@@ -1,223 +0,0 @@
-在 Ubuntu 上玩玩 LXD 容器
-======
-本文的主角是容器,一种类似虚拟机但更轻量级的构造。你可以轻易地在你的 Ubuntu 桌面系统中创建一堆个容器!
-
-虚拟机会虚拟出正太电脑让你来安装客户机操作系统。**相比之下**,容器**复用**了主机 Linux 内核,只是简单地 **包容** 了我们选择的根文件系统(也就是运行时环境)。Linux 内核有很多功能可以将运行的 Linux 容器与我们的主机分割开(也就是我们的 Ubuntu 桌面)。
-
-Linux 本身需要一些手工操作来直接管理他们。好在,有 LXD( 读音为 Lex-deeh),一款为我们管理 Linux 容器的服务。
-
-我们将会看到如何
-
- 1。在我们的 Ubuntu 桌面上配置容器,
- 2。创建容器,
- 3。安装一台 web 服务器,
- 4。测试一下这台 web 服务器,以及
- 5。清理所有的东西。
-
-### 设置 Ubuntu 容器
-
-如果你安装的是 Ubuntu 16.04,那么你什么都不用做。只要安装下面所列出的一些额外的包就行了。若你安装的是 Ubuntu 14.04.x 或 Ubuntu 15.10,那么按照 [LXD 2.0:Installing and configuring LXD [2/12]][1] 来进行一些操作,然后再回来。
-
-确保已经更新了包列表:
-```
-sudo apt update
-sudo apt upgrade
-```
-
-安装 **lxd** 包:
-```
-sudo apt install lxd
-```
-
-若你安装的是 Ubuntu 16.04,那么还可以让你的容器文件以 ZFS 文件系统的格式进行存储。Ubuntu 16.04 的 Linux kernel 包含了支持 ZFS 必要的内核模块。若要让 LXD 使用 ZFS 进行存储,我们只需要安装 ZFS 工具包。没有 ZFS,容器会在主机文件系统中以单独的文件形式进行存储。通过 ZFS,我们就有了写入时拷贝等功能,可以让任务完成更快一些。
-
-安装 **zfsutils-linux** 包 (若你安装的是 Ubuntu 16.04.x):
-```
-sudo apt install zfsutils-linux
-```
-
-安装好 LXD 后,包安装脚本应该会将你加入 **lxd** 组。该组成员可以使你无需通过 sudo 就能直接使用 LXD 管理容器。根据 Linux 的尿性,**你需要先登出桌面会话然后再登陆** 才能应用 **lxd** 的组成员关系。(若你是高手,也可以通过在当前 shell 中执行 newgrp lxd 命令,就不用重登陆了)。
-
-在开始使用前,LXD 需要初始化存储和网络参数。
-
-运行下面命令:
-```
-$ **sudo lxd init**
-Name of the storage backend to use (dir or zfs):**zfs**
-Create a new ZFS pool (yes/no)?**yes**
-Name of the new ZFS pool:**lxd-pool**
-Would you like to use an existing block device (yes/no)?**no**
-Size in GB of the new loop device (1GB minimum):**30**
-Would you like LXD to be available over the network (yes/no)?**no**
-Do you want to configure the LXD bridge (yes/no)?**yes**
-**> You will be asked about the network bridge configuration。Accept all defaults and continue。**
-Warning:Stopping lxd.service,but it can still be activated by:
- lxd.socket
- LXD has been successfully configured。
-$ _
-```
-
-我们在一个(独立)的文件而不是块设备(即分区)中构建了一个文件系统来作为 ZFS 池,因此我们无需进行额外的分区操作。在本例中我指定了 30GB 大小,这个空间取之于根(/) 文件系统中。这个文件就是 `/var/lib/lxd/zfs.img`。
-
-行了!最初的配置完成了。若有问题,或者想了解其他信息,请阅读 https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/
-
-### 创建第一个容器
-
-所有 LXD 的管理操作都可以通过 **lxc** 命令来进行。我们通过给 **lxc** 不同参数来管理容器。
-```
-lxc list
-```
-可以列出所有已经安装的容器。很明显,这个列表现在是空的,但这表示我们的安装是没问题的。
-
-```
-lxc image list
-```
-列出可以用来启动容器的(已经缓存)镜像列表。很明显这个列表也是空的,但这也说明我们的安装是没问题的。
-
-```
-lxc image list ubuntu:
-```
-列出可以下载并启动容器的远程镜像。而且指定了是显示 Ubuntu 镜像。
-
-```
-lxc image list images:
-```
-列出可以用来启动容器的(已经缓存)各种发行版的镜像列表。这会列出各种发行版的镜像比如 Alpine,Debian,Gentoo,Opensuse 以及 Fedora。
-
-让我们启动一个 Ubuntu 16.04 容器,并称之为 c1:
-```
-$ lxc launch ubuntu:x c1
-Creating c1
-Starting c1
-$
-```
-
-我们使用 launch 动作,然后选择镜像 **ubuntu:x** (x 表示 Xenial/16.04 镜像),最后我们使用名字 `c1` 作为容器的名称。
-
-让我们来看看安装好的首个容器,
-```
-$ lxc list
-
-+---------|---------|----------------------|------|------------|-----------+
-| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
-+---------|---------|----------------------|------|------------|-----------+
-| c1 | RUNNING | 10.173.82.158 (eth0) | | PERSISTENT | 0 |
-+---------|---------|----------------------|------|------------|-----------+
-```
-
-我们的首个容器 c1 已经运行起来了,它还有自己的 IP 地址(可以本地访问)。我们可以开始用它了!
-
-### 安装 web 服务器
-
-我们可以在容器中运行命令。运行命令的动作为 **exec**。
-```
-$ lxc exec c1 -- uptime
- 11:47:25 up 2 min,0 users,load average:0.07,0.05,0.04
-$ _
-```
-
-在 exec 后面,我们指定容器,最后输入要在容器中运行的命令。运行时间只有 2 分钟,这是个新出炉的容器:-)。
-
-命令行中的`--`跟我们 shell 的参数处理过程有关是告诉。若我们的命令没有任何参数,则完全可以省略`-`。
-```
-$ lxc exec c1 -- df -h
-```
-
-这是一个必须要`-`的例子,由于我们的命令使用了参数 -h。若省略了 -,会报错。
-
-然我们运行容器中的 shell 来新包列表。
-```
-$ lxc exec c1 bash
-root@c1:~# apt update
-Ign http://archive.ubuntu.com trusty InRelease
-Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
-Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
-.。。
-Hit http://archive.ubuntu.com trusty/universe Translation-en
-Fetched 11.2 MB in 9s (1228 kB/s)
-Reading package lists。.. Done
-root@c1:~# **apt upgrade**
-Reading package lists。.. Done
-Building dependency tree
-.。。
-Processing triggers for man-db (2.6.7.1-1ubuntu1) .。。
-Setting up dpkg (1.17.5ubuntu5.7) .。。
-root@c1:~# _
-```
-
-我们使用 **nginx** 来做 web 服务器。nginx 在某些方面要比 Apache web 服务器更酷一些。
-```
-root@c1:~# apt install nginx
-Reading package lists。.. Done
-Building dependency tree
-.。。
-Setting up nginx-core (1.4.6-1ubuntu3.5) .。。
-Setting up nginx (1.4.6-1ubuntu3.5) .。。
-Processing triggers for libc-bin (2.19-0ubuntu6.9) .。。
-root@c1:~# _
-```
-
-让我们用浏览器访问一下这个 web 服务器。记住 IP 地址为 10.173.82.158,因此你需要在浏览器中输入这个 IP。
-
-[![lxd-nginx][2]][3]
-
-让我们对页面文字做一些小改动。回到容器中,进入默认 HTML 页面的目录中。
-```
-root@c1:~# **cd /var/www/html/**
-root@c1:/var/www/html# **ls -l**
-total 2
--rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html
-root@c1:/var/www/html#
-```
-
-使用 nano 编辑文件,然后保存
-
-[![lxd-nginx-nano][4]][5]
-
-子后,再刷一下页面看看,
-
-[![lxd-nginx-modified][6]][7]
-
-### 清理
-
-让我们清理一下这个容器,也就是删掉它。当需要的时候我们可以很方便地创建一个新容器出来。
-```
-$ **lxc list**
-+---------|---------|----------------------|------|------------|-----------+
-| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
-+---------|---------|----------------------|------|------------|-----------+
-| c1 | RUNNING | 10.173.82.169 (eth0) | | PERSISTENT | 0 |
-+---------|---------|----------------------|------|------------|-----------+
-$ **lxc stop c1**
-$ **lxc delete c1**
-$ **lxc list**
-+---------|---------|----------------------|------|------------|-----------+
-| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
-+---------|---------|----------------------|------|------------|-----------+
-+---------|---------|----------------------|------|------------|-----------+
-
-```
-
-我们停止(关闭)这个容器,然后删掉它了。
-
-本文至此就结束了。关于容器有很多玩法。而这只是配置 Ubuntu 并尝试使用容器的第一步而已。
-
-
---------------------------------------------------------------------------------
-
-via: https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/
-
-作者:[Simos Xenitellis][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://blog.simos.info/author/simos/
-[1]:https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/
-[2]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?resize=564%2C269&ssl=1
-[3]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?ssl=1
-[4]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?resize=750%2C424&ssl=1
-[5]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?ssl=1
-[6]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?resize=595%2C317&ssl=1
-[7]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?ssl=1
diff --git a/translated/tech/20160808 Top 10 Command Line Games For Linux.md b/translated/tech/20160808 Top 10 Command Line Games For Linux.md
new file mode 100644
index 0000000000..86d5e6fcf7
--- /dev/null
+++ b/translated/tech/20160808 Top 10 Command Line Games For Linux.md
@@ -0,0 +1,237 @@
+Linux 命令行游戏 Top 10
+======
+概要: 本文列举了 **Linux 中最好的命令行游戏**。
+
+Linux 从来都不是游戏的首选操作系统,尽管近来 [Linux 的游戏][1] 已经丰富了很多,你也可以从 [下载 Linux 游戏][2] 中得到许多资源。
+
+这有专门的 [游戏版 Linux][3]。它确实存在。但是今天,我们并不是要欣赏游戏版 Linux。
+
+Linux 有一个超过 Windows 的优势。它拥有一个强大的 Linux 终端。在 Linux 终端上,你可以做很多事情,包括玩 **命令行游戏**。
+
+当然,作为 Linux 终端的核心爱好者和拥护者,我们知道终端游戏轻便、快速,而且玩起来非常有趣。而最有意思的是,你可以在 Linux 终端上重温大量经典游戏。
+
+[推荐阅读:Linux 上游戏,你所需要了解的全部][20]
+
+### 最好的 Linux 终端游戏
+
+来揭秘这张榜单,找出 Linux 终端最好的游戏。
+
+### 1. Bastet
+
+谁还没花上几个小时玩 [俄罗斯方块][4] ?简单而且容易上瘾。 Bastet 就是 Linux 版的俄罗斯方块。
+
+![Linux 终端游戏 Bastet][5]
+
+使用下面的命令获取 Bastet:
+```
+sudo apt install bastet
+```
+
+运行下列命令,在终端上开始这个游戏:
+```
+bastet
+```
+
+使用空格键旋转方块,方向键控制方块移动。
+
+### 2. Ninvaders
+
+Space Invaders(太空侵略者)。我仍记得在这个游戏里,和我兄弟为了抢高分而较劲的情形。这是最好的街机游戏之一。
+
+![Linux 终端游戏 nInvaders][6]
+
+复制粘贴这段代码安装 Ninvaders。
+```
+sudo apt-get install ninvaders
+```
+
+使用下面的命令开始游戏:
+```
+ninvaders
+```
+
+方向键移动太空飞船,空格键射击外星人。
+
+[推荐阅读:2016 你可以开始的 Linux 游戏 Top 10][21]
+
+### 3. Pacman4console
+
+是的,这个就是街机之王。Pacman4console 是最受欢迎的街机游戏 Pacman(吃豆豆)终端版。
+
+![Linux 命令行吃豆豆游戏 Pacman4console][7]
+
+使用以下命令获取 pacman4console:
+```
+sudo apt-get install pacman4console
+```
+
+打开终端,建议使用最大的终端界面(29x32)。键入以下命令启动游戏:
+```
+pacman4console
+```
+
+使用方向键控制移动。
+
+### 4. nSnake
+
+记得在老式诺基亚手机里玩的贪吃蛇游戏吗?
+
+这个游戏曾让我对着手机着迷了很长时间。我曾经琢磨出各种方法来把蛇养得更长。
+
+![nsnake : Linux 终端上的贪吃蛇游戏][8]
+
+多亏了 [nSnake][9],我们才能在 [Linux 终端上玩贪吃蛇游戏][9]。使用下面的命令安装它:
+```
+sudo apt-get install nsnake
+```
+
+键入下面的命令开始游戏:
+```
+nsnake
+```
+
+使用方向键控制蛇身,获取豆豆。
+
+### 5. Greed
+
+Greed 有点像去掉了速度与肾上腺素的 Tron(类似贪吃蛇的进化版)。
+
+你当前的位置由‘@’表示。你被数字包围了,你可以在四个方向任意移动。你选择的移动方向上标识的数字,就是你能移动的步数。走过的路不能再走,如果你无路可走,游戏结束。
+
+听起来,似乎我让它变得更复杂了。
+
+![Greed : 命令行上的 Tron][10]
+
+通过下列命令获取 Greed:
+```
+sudo apt-get install greed
+```
+
+通过下列命令启动游戏,使用方向键控制游戏。
+```
+greed
+```
+
+### 6. Air Traffic Controller
+
+还有什么比做飞行员更有意思的?那就是做空中交通管制员。在你的终端中,你可以模拟管理整个空域的交通。说实话,在终端里管理空中交通蛮有意思的。
+
+![Linux 空中交通管理员][11]
+
+使用下列命令安装游戏:
+```
+sudo apt-get install bsdgames
+```
+
+键入下列命令启动游戏:
+```
+atc
+```
+
+ATC 不是孩子玩的游戏。建议查看官方文档。
+
+### 7. Backgammon(双陆棋)
+
+无论之前你有没有玩过 [双陆棋][12],你都应该看看这个。 它的说明书和控制手册都非常友好。如果你喜欢,可以挑战你的电脑或者你的朋友。
+
+![Linux 终端上的双陆棋][13]
+
+使用下列命令安装双陆棋:
+```
+sudo apt-get install bsdgames
+```
+
+键入下列命令启动游戏:
+```
+backgammon
+```
+
+当你需要提示游戏规则时,回复 ‘y’。
+
+### 8. Moon Buggy
+
+跳跃。疯狂。欢乐时光不必多言。
+
+![Moon buggy][14]
+
+使用下列命令安装游戏:
+```
+sudo apt-get install moon-buggy
+```
+
+使用下列命令启动游戏:
+```
+moon-buggy
+```
+
+空格跳跃,‘a’或者‘l’射击。尽情享受吧。
+
+### 9. 2048
+
+2048 可以活跃你的大脑。[2048][15] 是一个策略游戏,很容易上瘾。目标是合成出数字为 2048 的方块。
+
+![Linux 终端上的 2048][16]
+
+复制粘贴下面的命令安装游戏:
+```
+wget https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c
+
+gcc -o 2048 2048.c
+```
+
+键入下列命令启动游戏:
+```
+./2048
+```
+
+### 10. Tron
+
+没有动作类游戏,这张榜单怎么可能结束?
+
+![Linux 终端游戏 Tron][17]
+
+是的,Linux 终端也能玩上 Tron 这种快节奏的游戏。为接下来迅捷的反应做好准备吧。无需下载和安装,一个命令即可启动游戏,你只需要一个网络连接:
+```
+ssh sshtron.zachlatta.com
+```
+
+如果有别的玩家在线,你们还可以一起多人游戏。了解更多:[Linux 终端游戏 Tron][18]。
+
+### 你看上了哪一款?
+
+朋友,Linux 终端游戏 Top 10 都分享给你了。我猜你现在正准备按下 Ctrl+Alt+T(终端快捷键)了。榜单中哪个是你最喜欢的游戏?或者你还知道其他有趣的终端游戏?尽情分享吧!
+
+在 [Abhishek Prakash][19] 回复。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/best-command-line-games-linux/
+
+作者:[Aquil Roshan][a]
+译者:[CYLeft](https://github.com/CYleft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/aquil/
+[1]:https://itsfoss.com/linux-gaming-guide/
+[2]:https://itsfoss.com/download-linux-games/
+[3]:https://itsfoss.com/manjaro-gaming-linux/
+[4]:https://en.wikipedia.org/wiki/Tetris
+[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/bastet.jpg
+[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/ninvaders.jpg
+[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/pacman.jpg
+[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/nsnake.jpg
+[9]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/
+[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/greed.jpg
+[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/atc.jpg
+[12]:https://en.wikipedia.org/wiki/Backgammon
+[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/backgammon.jpg
+[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/moon-buggy.jpg
+[15]:https://itsfoss.com/2048-offline-play-ubuntu/
+[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/2048.jpg
+[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/tron.jpg
+[18]:https://itsfoss.com/play-tron-game-linux-terminal/
+[19]:https://twitter.com/abhishek_pc
+[20]:https://itsfoss.com/linux-gaming-guide/
+[21]:https://itsfoss.com/best-linux-games/
diff --git a/translated/tech/20170319 ftrace trace your kernel functions.md b/translated/tech/20170319 ftrace trace your kernel functions.md
new file mode 100644
index 0000000000..ccb5b76256
--- /dev/null
+++ b/translated/tech/20170319 ftrace trace your kernel functions.md
@@ -0,0 +1,284 @@
+ftrace:跟踪你的内核函数!
+============================================================
+
+大家好!今天我们将去讨论一个调试工具:ftrace,之前我的博客上还没有讨论过它。还有什么能比一个新的调试工具更让人激动呢?
+
+这个非常棒的 ftrace 并不是个新的工具!它大约在 Linux 的 2.6 内核版本中就有了,时间大约是在 2008 年。[这里是我用谷歌能找到的一些文档][10]。因此,如果你是一个调试系统的“老手”,可能早就已经使用它了!
+
+我知道 ftrace 的存在已经大约 2.5 年了,但是还没有真正地去学习它。假设我明天要主持一个专题研讨会,那么,关于 ftrace 我应该讨论些什么?因此,今天是时候去讨论一下它了!
+
+### 什么是 ftrace?
+
+ftrace 是一个 Linux 内核特性,它可以让你去跟踪 Linux 内核的函数调用。为什么要这么做呢?好吧,假设你调试一个奇怪的问题,而你已经得到了你的内核版本中这个问题在源代码中的开始的位置,而你想知道这里到底发生了什么?
+
+每次在调试的时候,我并不会经常去读内核源代码,但是,极个别的情况下会去读它!例如,本周在工作中,我有一个程序在内核中卡死了。查看到底是调用了什么函数、哪些系统涉及其中,能够帮我更好的理解在内核中发生了什么!(在我的那个案例中,它是虚拟内存系统)
+
+我认为 ftrace 是一个十分好用的工具(它肯定没有 strace 那样广泛被使用,使用难度也低于它),但是它还是值得你去学习。因此,让我们开始吧!
+
+### 使用 ftrace 的第一步
+
+不像 strace 和 perf,ftrace 并不是真正的 **程序** – 你不能只运行 `ftrace my_cool_function`。那样太容易了!
+
+如果你去读 [使用 Ftrace 调试内核][11],它会告诉你从 `cd /sys/kernel/debug/tracing` 开始,然后做很多文件系统的操作。
+
+对于我来说,这种办法太麻烦 – 使用 ftrace 的一个简单例子应该像这样:
+
+```
+cd /sys/kernel/debug/tracing
+echo function > current_tracer
+echo do_page_fault > set_ftrace_filter
+cat trace
+
+```
+
+这个文件系统到跟踪系统的接口(“给这些神奇的文件赋值,然后该发生的事情就会发生”)理论上看起来似乎可用,但是它不是我的首选方式。
+
+幸运的是,ftrace 团队也考虑到这个并不友好的用户界面,因此,它有了一个更易于使用的界面,它就是 **trace-cmd**!!!trace-cmd 是一个带命令行参数的普通程序。我们后面将使用它!我在 LWN 上找到了一个 trace-cmd 的使用介绍:[trace-cmd: Ftrace 的一个前端][12]。
+
+### 开始使用 trace-cmd:让 trace 仅跟踪一个函数
+
+首先,我需要去使用 `sudo apt-get install trace-cmd` 安装 `trace-cmd`,这一步很容易。
+
+对于第一个 ftrace 的演示,我决定去了解我的内核如何去处理一个页面故障。当 Linux 分配内存时,它经常偷懒,(“你并不是 _真的_ 计划去使用内存,对吗?”)。这意味着,当一个应用程序尝试去对分配给它的内存进行写入时,就会发生一个页面故障,而这个时候,内核才会真正的为应用程序去分配物理内存。
+
+我们开始使用 `trace-cmd` 并让它跟踪 `do_page_fault` 函数!
+
+```
+$ sudo trace-cmd record -p function -l do_page_fault
+ plugin 'function'
+Hit Ctrl^C to stop recording
+
+```
+
+我将它运行了几秒钟,然后按下了 `Ctrl+C`。 让我大吃一惊的是,它竟然产生了一个 2.5MB 大小的名为 `trace.dat` 的跟踪文件。我们来看一下这个文件的内容!
+
+```
+$ sudo trace-cmd report
+ chrome-15144 [000] 11446.466121: function: do_page_fault
+ chrome-15144 [000] 11446.467910: function: do_page_fault
+ chrome-15144 [000] 11446.469174: function: do_page_fault
+ chrome-15144 [000] 11446.474225: function: do_page_fault
+ chrome-15144 [000] 11446.474386: function: do_page_fault
+ chrome-15144 [000] 11446.478768: function: do_page_fault
+ CompositorTileW-15154 [001] 11446.480172: function: do_page_fault
+ chrome-1830 [003] 11446.486696: function: do_page_fault
+ CompositorTileW-15154 [001] 11446.488983: function: do_page_fault
+ CompositorTileW-15154 [001] 11446.489034: function: do_page_fault
+ CompositorTileW-15154 [001] 11446.489045: function: do_page_fault
+
+```
+
+看起来很整洁 – 它展示了进程名(chrome)、进程 ID (15144)、CPU(000)、以及它跟踪的函数。
+
+通过查看整个文件(`sudo trace-cmd report | grep chrome`)可以看到,我们跟踪了大约 1.5 秒,在这 1.5 秒的时间段内,Chrome 发生了大约 500 次页面故障。真是太酷了!这就是我们的第一次 ftrace 跟踪!
+
+### 下一个 ftrace 技巧:我们来跟踪一个进程!
+
+好吧,只看一个函数是有点无聊!假如我想知道一个程序中都发生了什么事情。我使用一个名为 Hugo 的静态站点生成器。看看内核为 Hugo 都做了些什么事情?
+
+在我的电脑上 Hugo 的 PID 现在是 25314,因此,我使用如下的命令去记录所有的内核函数:
+
+```
+sudo trace-cmd record --help # I read the help!
+sudo trace-cmd record -p function -P 25314 # record for PID 25314
+
+```
+
+`sudo trace-cmd report` 输出了 18,000 行。如果你对这些感兴趣,你可以看 [这里是所有的 18,000 行的输出][13]。
+
+18,000 行太多了,因此,在这里仅摘录其中几行。
+
+下面是系统调用 `clock_gettime` 运行时所发生的事情:
+
+```
+ compat_SyS_clock_gettime
+ SyS_clock_gettime
+ clockid_to_kclock
+ posix_clock_realtime_get
+ getnstimeofday64
+ __getnstimeofday64
+ arch_counter_read
+ __compat_put_timespec
+
+```
+
+这是与进程调度相关的一些东西:
+
+```
+ cpufreq_sched_irq_work
+ wake_up_process
+ try_to_wake_up
+ _raw_spin_lock_irqsave
+ do_raw_spin_lock
+ _raw_spin_lock
+ do_raw_spin_lock
+ walt_ktime_clock
+ ktime_get
+ arch_counter_read
+ walt_update_task_ravg
+ exiting_task
+
+```
+
+虽然你可能还不理解它们是做什么的,但是,能够看到所有的这些函数调用也是件很酷的事情。
+
+### “function graph” 跟踪
+
+这里有另外一个模式,称为 `function_graph`。它和函数跟踪器基本一样,只不过它会同时记录函数的进入和退出。[这里是那个跟踪器的输出][14]:
+
+```
+sudo trace-cmd record -p function_graph -P 25314
+
+```
+
+同样,这里只是一个片断(这次来自 futex 代码)
+
+```
+ | futex_wake() {
+ | get_futex_key() {
+ | get_user_pages_fast() {
+ 1.458 us | __get_user_pages_fast();
+ 4.375 us | }
+ | __might_sleep() {
+ 0.292 us | ___might_sleep();
+ 2.333 us | }
+ 0.584 us | get_futex_key_refs();
+ | unlock_page() {
+ 0.291 us | page_waitqueue();
+ 0.583 us | __wake_up_bit();
+ 5.250 us | }
+ 0.583 us | put_page();
++ 24.208 us | }
+
+```
+
+我们看到在这个示例中,`futex_wake` 里调用了 `get_futex_key`。这是在源代码中真实发生的事情吗?我们可以检查一下![这里是 Linux 4.4 中 futex_wake 的定义][15](我的内核版本是 4.4)。
+
+为节省时间我直接贴出来,它的内容如下:
+
+```
+static int
+futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
+{
+ struct futex_hash_bucket *hb;
+ struct futex_q *this, *next;
+ union futex_key key = FUTEX_KEY_INIT;
+ int ret;
+ WAKE_Q(wake_q);
+
+ if (!bitset)
+ return -EINVAL;
+
+ ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ);
+
+```
+
+如你所见,`futex_wake` 中的第一个函数调用真的是 `get_futex_key`!太棒了!相比阅读内核代码,阅读函数跟踪输出显然更容易得到这个结论,而且让人高兴的是,还能看到每个函数用了多长时间。
+
+### 如何知道哪些函数可以被跟踪
+
+如果你去运行 `sudo trace-cmd list -f`,你将得到一个你可以跟踪的函数的列表。它很简单但是也很重要。
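+
+这个函数列表通常非常长,直接阅读并不现实,配合 grep 过滤会方便得多。下面是一个小示例(其中用于过滤的关键字只是随手假设的,可以换成你关心的任何名字):
+
+```
+# 统计一共有多少个可跟踪的函数
+sudo trace-cmd list -f | wc -l
+
+# 只看名字里带 page_fault 的函数
+sudo trace-cmd list -f | grep page_fault
+```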
+
+### 最后一件事:事件!
+
+现在,我们已经知道了怎么去跟踪内核中的函数,真是太酷了!
+
+还有一类我们可以跟踪的东西!有些事情并不能很好地对应到某一次函数调用上。例如,你可能想知道当一个程序被调度进入或者离开 CPU 时都发生了什么!你可能想通过“盯着”函数调用把它推算出来,但是,我告诉你,不可行!
+
+因此,跟踪系统还为你提供了一些“事件”,让你可以看到这些重要时刻都发生了什么。你可以使用 `sudo cat /sys/kernel/debug/tracing/available_events` 来查看这些事件的一个列表。
+
+我查看了全部的 sched_switch 事件。我并不完全知道 sched_switch 是什么,但是,我猜测它与调度有关。
+
+```
+sudo cat /sys/kernel/debug/tracing/available_events
+sudo trace-cmd record -e sched:sched_switch
+sudo trace-cmd report
+
+```
+
+输出如下:
+
+```
+ 16169.624862: Chrome_ChildIOT:24817 [112] S ==> chrome:15144 [120]
+ 16169.624992: chrome:15144 [120] S ==> swapper/3:0 [120]
+ 16169.625202: swapper/3:0 [120] R ==> Chrome_ChildIOT:24817 [112]
+ 16169.625251: Chrome_ChildIOT:24817 [112] R ==> chrome:1561 [112]
+ 16169.625437: chrome:1561 [112] S ==> chrome:15144 [120]
+
+```
+
+现在,可以很清楚地看到这些切换:从 PID 24817 -> 15144 -> 内核 -> 24817 -> 1561 -> 15144。(所有的这些事件都发生在同一个 CPU 上)
+
+### ftrace 是如何工作的?
+
+ftrace 是一个动态跟踪系统。当启动 ftracing 去跟踪内核函数时,**函数的代码会被改变**。因此 – 我们假设去跟踪 `do_page_fault` 函数。内核将在那个函数的汇编代码中插入一些额外的指令,以便每次该函数被调用时去提示跟踪系统。内核之所以能够添加额外的指令的原因是,Linux 将额外的几个 NOP 指令编译进每个函数中,因此,当需要的时候,这里有添加跟踪代码的地方。
+
+这种做法相当巧妙:当我不使用 ftrace 去跟踪内核时,它根本就不会影响性能;而当我需要跟踪时,跟踪的函数越多,产生的开销才会越大。
+
+(或许有些是不对的,但是,我认为的 ftrace 就是这样工作的)
+
+### 更容易地使用 ftrace:brendan gregg 的工具 & kernelshark
+
+正如我们在本文中所看到的,直接使用 ftrace 需要你自己去思考每个内核函数/事件究竟意味着什么。能够做到这一点很酷!但是也需要做大量的工作!
+
+Brendan Gregg (我们的 linux 调试工具“大神”)有个工具仓库,它使用 ftrace 去提供关于像 I/O 延迟这样的各种事情的信息。这是它在 GitHub 上全部的 [perf-tools][16] 仓库。
+
+这里有一个权衡(tradeoff),那就是这些工具易于使用,但是被限制仅用于 Brendan Gregg 认可的事情。决定将它做成一个工具,那需要做很多的事情!:)
+
+另一个工具是将 ftrace 的输出可视化,做的比较好的是 [kernelshark][17]。我还没有用过它,但是看起来似乎很有用。你可以使用 `sudo apt-get install kernelshark` 来安装它。
+
+### 一个新的超能力
+
+我很高兴能够花一些时间去学习 ftrace!像所有内核工具一样,它在不同的内核版本上的表现会有所不同,但我希望有一天你会发现它很有用!
+
+### ftrace 系列文章的一个索引
+
+最后,这里是我找到的一些 ftrace 方面的文章。它们大部分在 LWN (Linux 新闻周刊)上,它是 Linux 的一个极好的资源(你可以购买一个 [订阅][18]!)
+
+* [使用 Ftrace 调试内核 - part 1][1] (Dec 2009, Steven Rostedt)
+
+* [使用 Ftrace 调试内核 - part 2][2] (Dec 2009, Steven Rostedt)
+
+* [Linux 函数跟踪器的秘密][3] (Jan 2010, Steven Rostedt)
+
+* [trace-cmd:Ftrace 的一个前端][4] (Oct 2010, Steven Rostedt)
+
+* [使用 KernelShark 去分析实时调度器][5] (2011, Steven Rostedt)
+
+* [Ftrace: 神秘的开关][6] (2014, Brendan Gregg)
+
+* 内核文档:(它十分有用) [Documentation/ftrace.txt][7]
+
+* 你能跟踪的事件的文档 [Documentation/events.txt][8]
+
+* 写给 Linux 内核开发者的一些 ftrace 设计文档(不算特别有用,但是很有趣!)[Documentation/ftrace-design.txt][9]
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2017/03/19/getting-started-with-ftrace/
+
+作者:[Julia Evans ][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://lwn.net/Articles/365835/
+[2]:https://lwn.net/Articles/366796/
+[3]:https://lwn.net/Articles/370423/
+[4]:https://lwn.net/Articles/410200/
+[5]:https://lwn.net/Articles/425583/
+[6]:https://lwn.net/Articles/608497/
+[7]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace.txt
+[8]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/events.txt
+[9]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace-design.txt
+[10]:https://lwn.net/Articles/290277/
+[11]:https://lwn.net/Articles/365835/
+[12]:https://lwn.net/Articles/410200/
+[13]:https://gist.githubusercontent.com/jvns/e5c2d640f7ec76ed9ed579be1de3312e/raw/78b8425436dc4bb5bb4fa76a4f85d5809f7d1ef2/trace-cmd-report.txt
+[14]:https://gist.githubusercontent.com/jvns/f32e9b06bcd2f1f30998afdd93e4aaa5/raw/8154d9828bb895fd6c9b0ee062275055b3775101/function_graph.txt
+[15]:https://github.com/torvalds/linux/blob/v4.4/kernel/futex.c#L1313-L1324
+[16]:https://github.com/brendangregg/perf-tools
+[17]:https://lwn.net/Articles/425583/
+[18]:https://lwn.net/subscribe/Info
diff --git a/translated/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md b/translated/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
new file mode 100644
index 0000000000..d31527b055
--- /dev/null
+++ b/translated/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
@@ -0,0 +1,116 @@
+使用 Vi/Vim 编辑器:高级概念
+======
+早些时候我们已经讨论了一些关于 VI/VIM 编辑器的基础知识,但是 VI 和 VIM 都是非常强大的编辑器,还有很多其他的功能可以和编辑器一起使用。在本教程中,我们将学习 VI/VIM 编辑器的一些高级用法。
+
+(**推荐阅读**:[使用 VI 编辑器:基础知识] [1])
+
+## 使用 VI/VIM 编辑器打开多个文件
+
+要打开多个文件,命令将与打开单个文件相同。我们只要添加第二个文件的名称。
+
+```
+ $ vi file1 file2 file3
+```
+
+要浏览到下一个文件,我们可以使用
+
+```
+$ :n
+```
+
+或者我们也可以使用
+
+```
+$ :e filename
+```
+
+## 在编辑器中运行外部命令
+
+我们可以在 vi 编辑器内部运行外部的 Linux/Unix 命令,也就是说不需要退出编辑器。要在编辑器中运行命令,如果在插入模式下,先返回到命令模式,我们使用 BANG 也就是 “!” 接着是需要使用的命令。运行命令的语法是:
+
+```
+$ :! command
+```
+
+这是一个例子
+
+```
+$ :! df -H
+```
+
+## 根据模板搜索
+
+要在文本文件中搜索一个单词或模板,我们在命令模式下使用以下两个命令:
+
+ * 命令 “/” 代表正向搜索模板
+
+ * 命令 “?” 代表反向搜索模板
+
+
+这两个命令都用于相同的目的,唯一不同的是它们搜索的方向。一个例子是:
+
+ `$ /search pattern` (从当前位置向文件末尾方向搜索)
+
+ `$ ?search pattern` (从当前位置向文件开头方向搜索)
+
+## 搜索并替换一个模板
+
+我们可能需要搜索并替换文本中的某个单词或模板。与其在整个文本中逐个找到该单词再手工修改,我们可以在命令模式下用一条命令自动完成替换。在整个文件范围内搜索并替换的语法是:
+
+```
+$ :%s/pattern_to_be_found/New_pattern/g
+```
+
+假设我们想要把整个文件中的单词 “alpha” 都替换成 “beta”(“%” 表示作用于文件中的所有行,“g” 表示替换一行内的所有匹配),命令就是这样:
+
+```
+$ :%s/alpha/beta/g
+```
+
+如果我们只想替换当前行中第一个出现的 “alpha”,那么命令就是:
+
+```
+$ :s/alpha/beta/
+```
+
+## 使用 set 命令
+
+我们也可以使用 set 命令自定义 vi/vim 编辑器的行为和外观。下面是一些可以使用 set 命令修改 vi/vim 编辑器行为的选项列表:
+
+ `$ :set ic ` 在搜索时忽略大小写
+
+ `$ :set smartcase ` 当搜索模式中含有大写字母时自动区分大小写(与 ic 配合使用)
+
+ `$ :set nu` 在每行开始显示行号
+
+ `$ :set hlsearch ` 高亮显示匹配的单词
+
+ `$ : set ro ` 将文件设置为只读模式
+
+ `$ : set term ` 打印终端类型
+
+ `$ : set ai ` 设置自动缩进
+
+ `$ :set noai ` 取消自动缩进
+
+其他一些修改 vi 编辑器的命令是:
+
+ `$ :colorscheme ` 用来改变编辑器的配色方案 。(仅适用于 VIM 编辑器)
+
+ `$ :syntax on ` 为 .xml、.html 等文件打开语法高亮。(仅适用于 VIM 编辑器)
+
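+如果希望这些设置在每次启动编辑器时自动生效,可以把它们写进主目录下的 ~/.vimrc 配置文件(配置文件中不需要行首的冒号)。下面是一个最小化的示例,其中的选项都来自上文,具体取舍可按个人喜好调整(仅适用于 VIM 编辑器):
+
+```
+" ~/.vimrc 示例:来自上文的一些常用设置
+set ic
+set smartcase
+set nu
+set hlsearch
+set ai
+syntax on
+```
+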
+这篇结束了本系列教程,请在下面的评论栏中提出你的疑问/问题或建议。
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/working-vivim-editor-advanced-concepts/
+
+作者:[Shusain][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/working-vi-editor-basics/
diff --git a/translated/tech/20170918 3 text editor alternatives to Emacs and Vim.md b/translated/tech/20170918 3 text editor alternatives to Emacs and Vim.md
new file mode 100644
index 0000000000..136214ce33
--- /dev/null
+++ b/translated/tech/20170918 3 text editor alternatives to Emacs and Vim.md
@@ -0,0 +1,102 @@
+3 个替代 Emacs 和 Vim 的文本编辑器
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48)
+
+Emacs 和 Vim 的粉丝们,在你们摩拳擦掌开始编辑器之争之前,请先理解:这篇文章并不是要贬低诸位最喜欢的编辑器。我是一个 Emacs 爱好者,但是也很喜欢 Vim。
+
+话虽如此,我也意识到 Emacs 和 Vim 并不适合所有人。也许有人觉得 [编辑器之争][1] 略显幼稚,让人提不起兴趣;也许他们只是想要一个不那么苛刻的、现代化的编辑器。
+
+如果你正寻找可以替代 Emacs 或者 Vim 的编辑器,请继续阅读下去。这里有三个可能会让你感兴趣的编辑器。
+
+### Geany
+
+
+![用 Geany 编辑一个 LaTeX 文档][3]
+
+
+你可以用 Geany 编辑 LaTeX 文档
+
+[Geany][4] 是一个老牌的编辑器。早在我还在过时的硬件上运行轻量级 Linux 发行版的时候,它就已经是一个优秀的编辑器了。我最初用 Geany 来编辑 [LaTeX][5] 文档,但很快它就成了我做所有事情时使用的编辑器。
+
+尽管 Geany 号称是轻量且高速的 [IDE][6](集成开发环境),但是它绝不仅仅是一个面向程序员的工具。Geany 轻便快捷,即便是在过时的机器或 [运行 Linux 的 Chromebook][7] 上也能轻松运行起来。无论是编辑配置文件、维护任务列表、写文章,还是写代码和脚本,Geany 都能轻松胜任。
+
+[插件][8] 给 Geany 带来一些额外的魅力。这些插件拓展了 Geany 的功能,让你编码或是处理一些标记语言变得更高效,帮助你处理文本,甚至做拼写检查。
+
+### Atom
+
+
+![使用 Atom 编辑网页][10]
+
+
+使用 Atom 编辑网页
+
+在文本编辑器领域,[Atom][11] 后来居上。很短的时间内,Atom 就获得了一批忠实的追随者。
+
+Atom 的定制功能让其拥有如此的吸引力。如果有一些技术癖好,你完全可以在这个编辑器上随意设置。如果你不仅仅是忠于技术,Atom 也有 [一些主题][12] ,你可以用来更改编辑器外观。
+
+千万不要低估 Atom 数以千计的 [拓展包][13]。它们能在不同功能上拓展 Atom,能根据你的爱好把 Atom 转化成合适的文本编辑器或是开发环境。Atom 不仅为程序员提供服务。它同样适用于 [作家的文本编辑器][14]。
+
+### Xed
+
+![使用 Xed 编辑文章][16]
+
+
+使用 Xed 编辑文章
+
+对某些人来说,Atom 和 Geany 可能略显臃肿。也许你只想要一个轻量级的编辑器:不要太喧宾夺主,也不要塞满你很少用到的特性。如此看来,[Xed][17] 正是你所期待的。
+
+如果你看着 Xed 眼熟,那是因为它是 MATE 桌面环境中 Pluma 编辑器的一个分支。我发现相比于 Pluma,Xed 的速度更快一点,响应更灵敏一点,不过,因人而异吧。
+
+虽然 Xed 没有那么多的功能,但基本功能都很扎实。它有不错的语法高亮、强于一般水平的搜索替换和拼写检查功能,以及可在单个窗口中编辑多个文件的选项卡式界面。
+
+### 其他值得发掘的编辑器
+
+我算不上 KDE 的拥趸,但当我在 KDE 环境下工作时,[KDevelop][18] 就是我处理繁重工作时的首选。它强大而灵活,又没有过大的体积,这一点很像 Geany。
+
+虽然我自己还没有爱上 [Brackets][19],但我认识的几个人都对它赞誉有加。它很强大,而且不得不承认它的 [扩展][20] 真的很实用。
+
+被称为 “开发者的编辑器” 的 [Notepadqq][21] ,总让人联想到 [Notepad++][22]。虽然它的发展仍处于早期阶段,但至少它看起来还是很有前景的。
+
+对于那些只需要简单文本编辑功能的人来说,[Gedit][23] 和 [Kate][24] 是极好的选择。它们绝不原始简陋,完成大型文本编辑任务的功能绰绰有余。Gedit 和 Kate 都以速度快和容易上手而闻名。
+
+除了 Emacs 和 Vim,你还有其他挚爱的编辑器么?欢迎留言分享。
+
+### 关于作者
+Scott Nesbitt:长期的开源软件用户,出于兴趣(也为了生计)写作各种主题的文章。做自己力所能及的事,并不把自己太当回事。你可以在网络上的这些地方找到我。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/9/3-alternatives-emacs-and-vim
+
+作者:[Scott Nesbitt][a]
+译者:[CYLeft](https://github.com/CYLeft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/scottnesbitt
+[1]:https://en.wikipedia.org/wiki/Editor_war
+[2]:/file/370196
+[3]:https://opensource.com/sites/default/files/u128651/geany.png (Editing a LaTeX document with Geany)
+[4]:https://www.geany.org/
+[5]:https://opensource.com/article/17/6/introduction-latex
+[6]:https://en.wikipedia.org/wiki/Integrated_development_environment
+[7]:https://opensource.com/article/17/4/linux-chromebook-gallium-os
+[8]:http://plugins.geany.org/
+[9]:/file/370191
+[10]:https://opensource.com/sites/default/files/u128651/atom.png (Editing a webpage with Atom)
+[11]:https://atom.io
+[12]:https://atom.io/themes
+[13]:https://atom.io/packages
+[14]:https://opensource.com/article/17/5/atom-text-editor-packages-writers
+[15]:/file/370201
+[16]:https://opensource.com/sites/default/files/u128651/xed.png (Writing this article in Xed)
+[17]:https://github.com/linuxmint/xed
+[18]:https://www.kdevelop.org/
+[19]:http://brackets.io/
+[20]:https://registry.brackets.io/
+[21]:http://notepadqq.altervista.org/s/
+[22]:https://opensource.com/article/16/12/notepad-text-editor
+[23]:https://wiki.gnome.org/Apps/Gedit
+[24]:https://kate-editor.org/
diff --git a/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md b/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md
new file mode 100644
index 0000000000..1c3425d008
--- /dev/null
+++ b/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md
@@ -0,0 +1,131 @@
+Linux 容器安全的 10 个层面
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)
+
+容器提供了打包应用程序的一种简单方法,它实现了从开发到测试到投入生产系统的无缝传递。它也有助于确保跨不同环境的连贯性,包括物理服务器、虚拟机、以及公有云或私有云。这些好处使得一些组织为了更方便地部署和管理为他们提升业务价值的应用程序,而快速部署容器。
+
+企业对安全有着严格的要求,所以任何在容器中运行核心服务的人都会问:“容器安全吗?”以及“我怎么能相信运行在容器中的应用程序是安全的?”
+
+容器的安全,本质上就是许多进程的安全运行。在你部署和运行容器之前,你需要去考虑整个解决方案栈(solution stack)中各个层面的安全。你也需要考虑应用程序和容器在整个生命周期中的安全。
+
+尝试从这十个关键的因素去确保容器解决方案栈不同层面、以及容器生命周期的不同阶段的安全。
+
+### 1. 容器宿主机操作系统和多租户环境
+
+由于容器将应用程序和它的依赖作为一个单元来处理,使得开发者构建和升级应用程序变得更加容易,并且,容器可以启用多租户技术将许多应用程序和服务部署到一台共享主机上。在一台单独的主机上以容器方式部署多个应用程序、按需启动和关闭单个容器都是很容易的。为完全实现这种打包和部署技术的优势,运营团队需要运行容器的合适环境。运营者需要一个安全的操作系统,它能够在边界上保护容器安全、从容器中保护主机内核、以及保护容器彼此之间的安全。
+
+### 2. 容器内容(使用可信来源)
+
+容器是隔离的 Linux 进程,并且在一个共享主机的内核中,容器内使用的资源被限制在仅允许你运行着应用程序的沙箱中。保护容器的方法与保护你的 Linux 中运行的任何进程的方法是一样的。降低权限是非常重要的,也是保护容器安全的最佳实践。甚至是使用尽可能小的权限去创建容器。容器应该以一个普通用户的权限来运行,而不是 root 权限的用户。在 Linux 中可以使用多级安全,Linux 命名空间、安全强化 Linux( [SELinux][1])、[cgroups][2] 、capabilities(译者注:Linux 内核的一个安全特性,它打破了传统的普通用户与 root 用户的概念,在进程级提供更好的安全控制)、以及安全计算模式( [seccomp][3] ),Linux 的这五种安全特性可以用于保护容器的安全。
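+
+下面是一个示意性的例子(假设使用 Docker 命令行,镜像名 myapp:1.0 只是一个占位符),演示上文提到的“以最小权限运行容器”的几种做法:以非 root 用户运行、丢弃全部 capabilities、禁止进程再获取新权限,并把根文件系统挂载为只读:
+
+```
+# 仅为示例:具体参数需要按应用的实际需要调整
+docker run \
+  --user 1000:1000 \
+  --cap-drop ALL \
+  --security-opt no-new-privileges \
+  --read-only \
+  myapp:1.0
+```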
+
+在谈到安全时,首先要考虑你的容器里面有什么?例如,有些时候,应用程序和基础设施是由很多可用的组件所构成的。它们中的一些是开源的软件包,比如 Linux 操作系统、Apache Web 服务器、Red Hat JBoss 企业应用平台、PostgreSQL 以及 Node.js。这些软件包的容器化版本已经可以直接使用了,因此,你没有必要自己去构建它们。但是,对于你从外部来源下载的任何代码,你需要知道这些软件包的原始来源、是谁构建了它们,以及这些包里面是否包含恶意代码。
+
+### 3. 容器注册(安全访问容器镜像)
+
+你的团队构建的容器,是以下载来的公共容器镜像为基础层的,因此,像管理其它类型的二进制文件一样,去管理下载的容器镜像以及内部构建的镜像,这一点至关重要。许多私有的镜像仓库(registry)都支持保存容器镜像。选择一个私有的镜像仓库,可以帮你对存储在其中的容器镜像自动实施各种策略。
+
+### 4. 安全性与构建过程
+
+在一个容器化环境中,构建过程是软件生命周期的一个阶段,它将所需的运行时库和应用程序代码集成到一起。管理这个构建过程对于软件栈安全来说是很关键的。遵守“一次构建,到处部署”的原则,可以确保构建过程的产物正是生产系统中运行的东西。保持容器的不可变性也同样重要,换句话说,不要对正在运行的容器打补丁,而是重新构建并重新部署它们。
+
+不论是因为你处于一个高强度监管的行业中,还是只希望简单地优化你的团队的成果,去设计你的容器镜像管理以及构建过程,可以使用容器层的优势来实现控制分离,因此,你应该去这么做:
+
+ * 运营团队管理基础镜像
+ * 设计者管理中间件、运行时、数据库、以及其它解决方案
+ * 开发者专注于应用程序层面,并且只写代码
+
+
+
+最后,标记好你的定制构建容器,这样可以确保在构建和部署时不会搞混乱。
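+
+下面是一个简单的示意(假设使用 Docker 命令行,仓库地址和版本号都是虚构的),演示在构建时就打上明确的版本标签并推送到私有镜像仓库,而不是复用 latest:
+
+```
+# 构建并推送带有明确版本标签的镜像(地址与版本号仅为示例)
+docker build -t registry.example.com/team/myapp:1.4.2 .
+docker push registry.example.com/team/myapp:1.4.2
+```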
+
+### 5. 控制好在同一个集群内部署应用
+
+为了应对构建过程中出现的问题,以及镜像部署之后才发现的漏洞,可以增加一层基于策略的自动化工具作为额外的安全防护。
+
+我们来看一个例子:一个应用程序的构建使用了三个容器镜像层:基础层(core)、中间件层、以及应用程序层。如果在基础层镜像中发现了问题,那么该镜像会被重新构建。构建完成后,镜像会被发布到容器平台的注册仓库中。平台可以自动检测到发生了变化的镜像,并对基于该镜像的其它构建触发预定义的动作,自动重新构建应用镜像,把修复合并进来。
+
+在基于策略的、自动化工具上添加另外的安全层。
+
+一旦构建完成,镜像将被发布到容器平台的内部注册中。在它的内部注册中,会立即检测到镜像发生变化,应用程序在这里将会被触发一个预定义的动作,自动部署更新镜像,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的这些功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。
+
+### 6. 容器编配:保护容器平台
+
+
+当然了,应用程序很少会部署在单一的容器中。即使是简单的应用程序,一般也会有一个前端、一个后端、以及一个数据库。而在容器中以微服务模式部署应用程序,意味着应用程序将部署在多个容器中:有时它们在同一台宿主机上,有时它们分布在多个宿主机或者节点上。
+
+在大规模的容器部署时,你应该考虑:
+
+ * 哪个容器应该被部署在哪个宿主机上?
+ * 那个宿主机应该有什么样的性能?
+ * 哪个容器需要访问其它容器?它们之间如何发现彼此?
+ * 你如何控制和管理对共享资源的访问,像网络和存储?
+ * 如何监视容器健康状况?
+ * 如何去自动扩展性能以满足应用程序的需要?
+ * 如何在满足安全需求的同时启用开发者的自助服务?
+
+
+
+考虑到开发者和运营者的能力,提供基于角色的访问控制是容器平台的关键要素。例如,编配管理服务器是中心访问点,应该接受最高级别的安全检查。APIs 是规模化的自动容器平台管理的关键,可以用于为 pods、服务、以及复制控制器去验证和配置数据;在入站请求上执行项目验证;以及调用其它主要系统组件上的触发器。
+
+### 7. 网络隔离
+
+在容器中部署现代微服务应用,经常意味着跨多个节点在多个容器上部署。考虑到网络防御,你需要一种在一个集群中的应用之间的相互隔离的方法。一个典型的公有云容器服务,像 Google 容器引擎(GKE)、Azure 容器服务、或者 Amazon Web 服务(AWS)容器服务,是单租户服务。他们让你在你加入的虚拟机集群上运行你的容器。对于多租户容器的安全,你需要容器平台为你启用一个单一集群,并且分割通讯以隔离不同的用户、团队、应用、以及在这个集群中的环境。
+
+使用网络命名空间,每组容器(即大家熟知的“pod”)会得到自己的 IP 和一段可绑定的端口范围,以此在节点上把各个 pod 的网络彼此隔离开。除非另行配置,默认情况下,来自不同命名空间(项目)的 Pod 不能向其它项目的 Pod 和服务发送或接收数据包。你可以使用这些特性在同一个集群内隔离开发环境、测试环境、以及生产环境。但是,这样会导致 IP 地址和端口数量的激增,使得网络管理更加复杂。另外,容器天生就是要被不断地创建和销毁的,你应该在能够处理这种复杂性的工具上进行投入。在容器平台上比较受欢迎的做法是使用 [软件定义网络][4](SDN)去提供一个统一定义的集群网络,它允许跨不同集群的容器进行通讯。
+
+### 8. 存储
+
+容器既可被用于无状态应用,也可被用于有状态应用。保护附加存储是保护有状态服务的一个关键要素。容器平台对多个受欢迎的存储提供了插件,包括网络文件系统(NFS)、AWS 弹性块存储(EBS)、GCE 持久磁盘、GlusterFS、iSCSI、RADOS(Ceph)、Cinder 等等。
+
+一个持久卷(PV)可以通过资源提供者支持的任何方式装载到一个主机上。不同的提供者有不同的性能,而每个 PV 的访问模式会被设置为该卷所支持的特定模式。例如,NFS 能够支持多个客户端同时读/写,但是,一个特定的 NFS PV 也可以在服务器上以只读模式发布。每个 PV 都有自己的一组访问模式描述,它们反映了该 PV 的特性,比如 ReadWriteOnce、ReadOnlyMany、以及 ReadWriteMany。
+
+### 9. API 管理、终端安全、以及单点登陆(SSO)
+
+保护你的应用包括管理应用、以及 API 的认证和授权。
+
+Web SSO 能力是现代应用程序的一个关键部分。在构建它们的应用时,容器平台带来了开发者可以使用的多种容器化服务。
+
+APIs 是微服务构成的应用程序的关键所在。这些应用程序有多个独立的 API 服务,这导致了终端服务数量的激增,它就需要额外的管理工具。推荐使用 API 管理工具。所有的 API 平台应该提供多种 API 认证和安全所需要的标准选项,这些选项既可以单独使用,也可以组合使用,以用于发布证书或者控制访问。
+
+
+这些选项包括标准的 API keys、应用 ID 和密钥对、 以及 OAuth 2.0。
+
+### 10. 在一个联合集群中的角色和访问管理
+
+
+在 2016 年 7 月份,Kubernetes 1.3 引入了 [Kubernetes 联合集群][5]。这是最令人兴奋的新特性之一,目前仍在 Kubernetes 上游持续开发,当前处于 Kubernetes 1.6 beta 阶段。联合集群用于部署和访问跨多个集群运行的应用程序服务,这些集群可以位于公有云或企业数据中心。多个集群可以用来实现应用程序的高可用性(跨多个可用区域部署)、启用统一的部署管理,或者在不同的供应商之间迁移,比如 AWS、Google Cloud 以及 Azure。
+
+当管理联合集群时,你必须确保你的编配工具能够提供,你所需要的跨不同部署平台的实例的安全性。一般来说,认证和授权是很关键的 — 不论你的应用程序运行在什么地方,将数据安全可靠地传递给它们,以及管理跨集群的多租户应用程序。Kubernetes 扩展了联合集群,包括对联合的秘密数据、联合的命名空间、以及 Ingress objects 的支持。
+
+### 选择一个容器平台
+
+当然,它并不仅关乎安全。你需要提供一个你的开发者团队和运营团队有相关经验的容器平台。他们需要一个安全的、企业级的基于容器的应用平台,它能够同时满足开发者和运营者的需要,而且还能够提高操作效率和基础设施利用率。
+
+想从 Daniel 在 [欧洲开源峰会][7] 上的 [容器安全的十个层面][6] 演讲中学习更多知识吗?这个峰会将于 10 月 23 日至 26 日在布拉格(Prague)举行。
+
+### 关于作者
+Daniel Oh:关注微服务、敏捷、DevOps、Java EE、容器、OpenShift 与 JBoss 的布道者。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/10/10-layers-container-security
+
+作者:[Daniel Oh][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/daniel-oh
+[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux
+[2]:https://en.wikipedia.org/wiki/Cgroups
+[3]:https://en.wikipedia.org/wiki/Seccomp
+[4]:https://en.wikipedia.org/wiki/Software-defined_networking
+[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/
+[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223
+[7]:http://events.linuxfoundation.org/events/open-source-summit-europe
diff --git a/translated/tech/20171016 Using the Linux find command with caution.md b/translated/tech/20171016 Using the Linux find command with caution.md
new file mode 100644
index 0000000000..a72ff48c11
--- /dev/null
+++ b/translated/tech/20171016 Using the Linux find command with caution.md
@@ -0,0 +1,93 @@
+谨慎使用 Linux find 命令
+======
+![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg)
+最近有朋友提醒我,有一个有用的选项可以让 find 命令运行得更加谨慎,它就是 -ok。它的工作方式与 -exec 相似,但有一个重要的区别:它会让 find 命令在执行指定的操作之前先请求确认。
+
+这有一个例子。如果你使用 find 命令查找文件并删除它们,则可以运行下面的命令:
+```
+$ find . -name runme -exec rm {} \;
+
+```
+
+在当前目录及其子目录中,任何名为 “runme” 的文件都将被立即删除,当然,前提是你要有权限删除它们。改用 -ok 选项后,你会看到类似下面这样的输出,find 命令会在删除每个文件之前请求确认。回答 **y**(代表 “yes”)将允许 find 命令继续并删除该文件。
+```
+$ find . -name runme -ok rm {} \;
+< rm ... ./bin/runme > ?
+
+```
+
+### -execdir 也是一个选项
+
+另一个可以用来修改 find 命令行为并可能使其更可控的选项是 -execdir 。其中 -exec 运行指定的任何命令,-execdir 从文件所在的目录运行指定的命令,而不是在运行 find 命令的目录运行。这是一个它的例子:
+```
+$ pwd
+/home/shs
+$ find . -name runme -execdir pwd \;
+/home/shs/bin
+
+```
+```
+$ find . -name runme -execdir ls \;
+ls rm runme
+
+```
+
+到现在为止还挺好。但要记住的是,-execdir 也会在匹配文件的目录中执行命令。如果运行下面的命令,并且目录包含一个名为 “ls” 的文件,那么即使该文件没有执行权限,它也将运行该文件。使用 **-exec** 或 **-execdir** 类似于通过 source 来运行命令。
+```
+$ find . -name runme -execdir ls \;
+Running the /home/shs/bin/ls file
+
+```
+```
+$ find . -name runme -execdir rm {} \;
+This is an imposter rm command
+
+```
+```
+$ ls -l bin
+total 12
+-r-x------ 1 shs shs 25 Oct 13 18:12 ls
+-rwxr-x--- 1 shs shs 36 Oct 13 18:29 rm
+-rw-rw-r-- 1 shs shs 28 Oct 13 18:55 runme
+
+```
+```
+$ cat bin/ls
+echo Running the $0 file
+$ cat bin/rm
+echo This is an imposter rm command
+
+```
+
+### -okdir 选项也会请求权限
+
+要更谨慎,可以使用 **-okdir** 选项。类似 **-ok**,该选项会在运行命令之前先请求确认。
+```
+$ find . -name runme -okdir rm {} \;
+< rm ... ./bin/runme > ?
+
+```
+
+你也可以小心地指定你想用的命令的完整路径,以避免像上面那样的冒牌命令出现的任何问题。
+```
+$ find . -name runme -execdir /bin/rm {} \;
+
+```
+
+find 命令除了默认打印之外还有很多选项,有些可以使你的文件搜索更精确,但谨慎一点总是好的。
+
+在 [Facebook][1] 和 [LinkedIn][2] 上加入网络世界社区来进行评论。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3233305/linux/using-the-linux-find-command-with-caution.html
+
+作者:[Sandra Henry-Stocker][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[Locez](https://github.com/locez)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[1]:https://www.facebook.com/NetworkWorld/
+[2]:https://www.linkedin.com/company/network-world
diff --git a/translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md b/translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
new file mode 100644
index 0000000000..929c3ecdf8
--- /dev/null
+++ b/translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
@@ -0,0 +1,65 @@
+无需 Root 实现在 Android 设备上运行 Linux
+======
+![](https://www.ostechnix.com/wp-content/uploads/2017/10/Termux-720x340.jpg)
+
+曾经,我搜索过一种简单的、可以在 Android 上运行 Linux 的方法。我当时唯一的意图只是想使用 Linux 以及一些基本的应用程序,比如 SSH、Git、awk 等。要求并不多!我也不想 root Android 设备。我有一台平板电脑,主要用于阅读电子书、新闻和少数 Linux 博客,除此之外也不怎么用它。因此我决定用它来实现一些 Linux 的功能。在 Google Play 商店上浏览了几分钟后,一个应用程序瞬间引起了我的注意,勾起了我试一试的欲望。如果你也想在 Android 设备上运行 Linux,这个应用可能会有所帮助。
+
+### Termux - 在 Android 和 Chrome OS 上运行的 Android 终端模拟器
+
+**Termux** 是一个 Android 终端模拟器以及提供 Linux 环境的应用程序。跟许多其他应用程序不同,你无需 root 设备也无需进行设置。它是开箱即用的!它会自动安装好一个最基本的 Linux 系统,当然你也可以使用 APT 软件包管理器来安装其他软件包。总之,你可以让你的 Android 设备变成一台袖珍的 Linux 电脑。它不仅适用于 Android,你还能在 Chrome OS 上安装它。
+
+![](http://www.ostechnix.com/wp-content/uploads/2017/10/termux.png)
+
+Termux 提供了许多重要的功能,比您想象的要多。
+
+ * 它允许你通过 OpenSSH 登录远程服务器
+ * 你还能够从远程系统 SSH 到 Android 设备中。
+ * 使用 rsync 和 curl 将您的智能手机通讯录同步到远程系统。
+ * 支持不同的 shell,比如 BASH,ZSH,以及 FISH 等等。
+ * 可以选择不同的文本编辑器来编辑/查看文件,支持 Emacs,Nano 和 Vim。
+ * 使用 APT 软件包管理器在 Android 设备上安装你想要的软件包,支持 Git、Perl、Python、Ruby 和 Node.js 的最新版本(安装示例见下文)。
+ * 可以将 Android 设备与蓝牙键盘,鼠标和外置显示器连接起来,就像是整合在一起的设备一样。Termux 支持键盘快捷键。
+ * Termux 支持几乎所有 GNU/Linux 命令。
+
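+下面是一个简单的示例(软件包名称仅作演示,可以换成任何你需要的工具),展示如何用 Termux 自带的包管理器(APT 的一个封装)安装常用软件:
+
+```
+# 更新软件包索引并升级已安装的软件包
+pkg update && pkg upgrade
+
+# 安装 git、python 和 openssh(示例)
+pkg install git python openssh
+```
+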
+此外通过安装插件可以启用其他一些功能。例如,**Termux:API** 插件允许你访问 Android 和 Chrome 的硬件功能。其他有用的插件包括:
+
+ * Termux:Boot - 设备启动时运行脚本
+ * Termux:Float - 在浮动窗口中运行 Termux
+ * Termux:Styling - 提供配色方案和支持 powerline 的字体来定制 Termux 终端的外观。
+ * Termux:Task - 提供一种从任务栏类的应用中调用 Termux 可执行文件的简易方法。
+ * Termux:Widget - 提供一种从主屏幕启动小脚本的建议方法。
+
+要了解更多有关 Termux 的信息,请长按终端上的任意位置并选择“帮助”菜单选项来打开内置的帮助部分。它唯一的缺点就是**需要 Android 5.0 及更高版本**。如果它能支持 Android 4.x 及更早的版本,那将会有用得多。你可以在 **Google Play 商店**和 **F-Droid** 中找到并安装 Termux。
+
+要在 Google Play 商店中安装 Termux,点击下面按钮。
+
+[![termux][1]][2]
+
+若要在 F-Droid 中安装,则点击下面按钮。
+
+[![][1]][3]
+
+你现在知道如何使用 Termux 在 Android 设备上使用 Linux 了。你有用过其他更好的应用吗?请在下面留言框中留言,我也很乐意去尝试它们!
+
+此致敬礼!
+
+相关资源:
+
++[Termux 官网 ][4]
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/termux-run-linux-android-devices-no-rooting-required/
+
+作者:[SK][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]:https://play.google.com/store/apps/details?id=com.termux
+[3]:https://f-droid.org/packages/com.termux/
+[4]:https://termux.com/
diff --git a/translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md b/translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md
new file mode 100644
index 0000000000..6fd4ee93a3
--- /dev/null
+++ b/translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md
@@ -0,0 +1,123 @@
+如何在 Linux/Unix 之上绑定 ntpd 到特定的 IP 地址
+======
+
+默认情况下,我们的 ntpd/NTP 服务器会在所有接口和 IP 地址上监听,也就是 0.0.0.0:123。怎样才能在一个 Linux 或 FreeBSD Unix 服务器上,确保 ntpd 只监听特定的 IP 地址,比如 localhost 或者 192.168.1.1:123?
+
+NTP 是网络时间协议的首字母简写,这是一个用来同步两台电脑之间时间的协议。ntpd 是一个操作系统守护进程,可以设置并且保证系统的时间与互联网标准时间服务器同步。
+
+[![如何在Linux和Unix服务器,防止 NTPD 监听0.0.0.0:123 并将其绑定到特定的 IP 地址][1]][1]
+
+NTP 使用位于 `/etc/` 目录之下的 `ntp.conf` 作为配置文件。
+
+
+
+## /etc/ntp.conf 之中的 interface 指令
+
+你可以通过设置 interface 指令来防止 ntpd 监听 0.0.0.0:123,语法如下:
+
+```
+interface listen IPv4|IPv6|all
+interface ignore IPv4|IPv6|all
+interface drop IPv4|IPv6|all
+```
+
+上面的配置用来决定 ntpd 在哪些网络地址上监听,或者丢弃哪些地址上的请求而不做任何处理。**其中 ignore 会阻止打开匹配的地址,而 drop 会让 ntpd 打开该地址,但把收到的所有数据包直接丢弃、不做检查。** 举个例子,如果要忽略所有接口上的监听,请在 /etc/ntp.conf 中加入下面的语句:
+
+`interface ignore wildcard`
+
+如果只监听 127.0.0.1 和 192.168.1.1 则是这样:
+
+```
+interface listen 127.0.0.1
+interface listen 192.168.1.1
+```
+
+这是我 FreeBSD 云服务器上的样例 /etc/ntp.conf 文件:
+
+`$ egrep -v '^#|$^' /etc/ntp.conf`
+
+样例输出为:
+
+```
+tos minclock 3 maxclock 6
+pool 0.freebsd.pool.ntp.org iburst
+restrict default limited kod nomodify notrap noquery nopeer
+restrict -6 default limited kod nomodify notrap noquery nopeer
+restrict source limited kod nomodify notrap noquery
+restrict 127.0.0.1
+restrict -6 ::1
+leapfile "/var/db/ntpd.leap-seconds.list"
+interface ignore wildcard
+interface listen 172.16.3.1
+interface listen 10.105.28.1
+```
+
+
+## 重启 ntpd
+
+在 FreeBSD Unix 之上重新加载/重启 ntpd
+
+`$ sudo /etc/rc.d/ntpd restart`
+或者 [在Debian和Ubuntu Linux 之上使用下面的命令][2]:
+`$ sudo systemctl restart ntp`
+或者 [在CentOS/RHEL 7/Fedora Linux 之上使用下面的命令][2]:
+`$ sudo systemctl restart ntpd`
+
+## 校验
+
+使用 `netstat` 和 `ss` 命令来确认 ntpd 只绑定到了特定的 IP 地址:
+
+`$ netstat -tulpn | grep :123`
+或是
+`$ ss -tulpn | grep :123`
+样例输出:
+
+```
+udp 0 0 10.105.28.1:123 0.0.0.0:* -
+udp 0 0 172.16.3.1:123 0.0.0.0:* -
+```
+
+在 FreeBSD Unix 服务器上,可以使用 [sockstat 命令][3]:
+
+```
+$ sudo sockstat
+$ sudo sockstat -4
+$ sudo sockstat -4 | grep :123
+```
+
+
+样例输出:
+
+```
+root ntpd 59914 22 udp4 127.0.0.1:123 *:*
+root ntpd 59914 24 udp4 127.0.1.1:123 *:*
+```
+
+
+
+## Vivek Gite 投稿
+
+本文作者是 nixCraft 的创建者,一位经验丰富的系统管理员,也是一名 Linux 操作系统/Unix shell 脚本方面的培训师。他为全球不同行业的客户工作过,包括 IT、教育、国防、空间研究以及非营利组织。可以在 [Twitter][4]、[Facebook][5]、[Google+][6] 上关注他。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/faq/how-to-bind-ntpd-to-specific-ip-addresses-on-linuxunix/
+
+作者:[Vivek Gite][a]
+译者:[Drshu](https://github.com/Drshu)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/media/new/faq/2017/10/how-to-prevent-ntpd-to-listen-on-all-interfaces-on-linux-unix-box.jpg
+[2]:https://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/
+[3]:https://www.cyberciti.biz/faq/freebsd-unix-find-the-process-pid-listening-on-a-certain-port-commands/
+[4]:https://twitter.com/nixcraft
+[5]:https://facebook.com/nixcraft
+[6]:https://plus.google.com/+CybercitiBiz
diff --git a/translated/tech/20171102 What is huge pages in Linux.md b/translated/tech/20171102 What is huge pages in Linux.md
new file mode 100644
index 0000000000..ee261956ad
--- /dev/null
+++ b/translated/tech/20171102 What is huge pages in Linux.md
@@ -0,0 +1,137 @@
+Linux 中的 huge pages 是个什么玩意?
+======
+学习 Linux 中的 huge pages(巨大页):理解什么是 huge pages,如何进行配置,如何查看当前状态,以及如何禁用它。
+
+![Huge Pages in Linux][1]
+
+本文,我们会详细介绍 huge page,让你能够回答:Linux 中的 huge page 是什么玩意?在 RHEL6,RHEL7,Ubuntu 等 Linux 中,如何启用/禁用 huge pages?如何查看 huge page 的当前值?
+
+首先让我们从 Huge page 的基础知识开始讲起。
+
+### Linux 中的 Huge page 是个什么玩意?
+
+Huge pages 有助于 Linux 系统进行虚拟内存管理。顾名思义,除了标准的 4KB 大小的页面外,它们还能帮助管理内存中更大的页面。使用 huge pages,你最大可以定义 1GB 的页面大小。
+
+在系统启动期间,huge pages 会为应用程序预留一部分内存。这部分被 huge pages 占用的内存永远不会被交换出去,除非你修改了配置,它会一直保留。这会极大地提高像 Oracle 数据库这样需要海量内存的应用程序的性能。
+
+### 为什么使用巨大的页?
+
+在虚拟内存管理中,内核维护一个将虚拟内存地址映射到物理地址的表,对于每次页面操作,内核都需要加载相关的映射表项。如果你的内存页很小,那么页的数量就会很多,内核需要加载和查找的映射表项也就更多,而这会降低性能。
+
+使用巨大页,意味着所需要的页变少了,从而大大减少内核需要加载的映射表项的数量。这提高了内核级别的性能,最终也有利于应用程序的性能。
+
+简而言之,通过启用 huge pages,系统就只需要处理较少的页面映射表项,从而减少访问/维护它们的开销!
+
+### 如何配置 huge pages?
+
+运行下面命令来查看当前 huge pages 的详细内容。
+
+```
+root@kerneltalks # grep Huge /proc/meminfo
+AnonHugePages: 0 kB
+HugePages_Total: 0
+HugePages_Free: 0
+HugePages_Rsvd: 0
+HugePages_Surp: 0
+Hugepagesize: 2048 kB
+```
+
+从上面输出可以看到,每个页的大小为 2MB(`Hugepagesize`) 并且系统中目前有 0 个页 (`HugePages_Total`)。这里巨大页的大小可以从 2MB 增加到 1GB。
+
+运行下面的脚本可以获取系统当前需要多少个巨大页。该脚本取之于 Oracle。
+
+```
+#!/bin/bash
+#
+# hugepages_settings.sh
+#
+# Linux bash script to compute values for the
+# recommended HugePages/HugeTLB configuration
+#
+# Note: This script does calculation for all shared memory
+# segments available when the script is run, no matter it
+# is an Oracle RDBMS shared memory segment or not.
+# Check for the kernel version
+KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
+# Find out the HugePage size
+HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}`
+# Start from 1 pages to be on the safe side and guarantee 1 free HugePage
+NUM_PG=1
+# Cumulative number of pages required to handle the running shared memory segments
+for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"`
+do
+ MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
+ if [ $MIN_PG -gt 0 ]; then
+ NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
+ fi
+done
+# Finish with results
+case $KERN in
+ '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
+ echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
+ '2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
+ *) echo "Unrecognized kernel version $KERN. Exiting." ;;
+esac
+# End
+```
+将它以 `hugepages_settings.sh` 为名保存到 `/tmp` 中,然后运行之:
+```
+root@kerneltalks # sh /tmp/hugepages_settings.sh
+Recommended setting: vm.nr_hugepages = 124
+```
+
+输出如上结果,只是数字会有一些出入。
+
+这意味着,你系统需要 124 个每个 2MB 的巨大页!若你设置页面大小为 4MB,则结果就变成了 62。你明白了吧?
+
+### 配置内核中的 hugepages
+
+本文最后一部分内容是配置上面提到的 [内核参数 ][2] 然后重新加载。将下面内容添加到 `/etc/sysctl.conf` 中,然后输入 `sysctl -p` 命令重新加载配置。
+
+```
+vm.nr_hugepages=126
+```
+
+注意我们这里多加了两个额外的页,因为我们希望在实际需要的页面数量外多一些额外的空闲页。
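+
+下面是把上述配置落地并确认生效的一个简单操作示例(数字 126 来自前文的计算,实际值请以你自己系统上脚本的输出为准,命令需以 root 身份执行):
+
+```
+# 追加内核参数并立即重新加载
+echo "vm.nr_hugepages=126" >> /etc/sysctl.conf
+sysctl -p
+
+# 确认巨大页已经预留出来
+grep HugePages /proc/meminfo
+```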
+
+现在,内核已经配置好了,但是要让应用程序能够使用这些巨大页,还需要提高可锁定内存(memlock)的限额。新的限额应该为 126 个页 x 每页 2 MB = 252 MB,也就是 258048 KB。
+
+你需要编辑 `/etc/security/limits.conf`,加入如下配置(第一列是应用该限制的用户或组,这里用 `*` 表示所有用户,实际使用时也可以换成具体的应用账号):
+
+```
+*    soft    memlock    258048
+*    hard    memlock    258048
+```
+
+某些情况下,这些设置是在指定应用的文件中配置的,比如 Oracle DB 就是在 `/etc/security/limits.d/99-grid-oracle-limits.conf` 中配置的。
+
+这就完成了!你可能还需要重启应用来让应用来使用这些新的巨大页。
+
+### 如何禁用 hugepages?
+
+透明巨大页(transparent hugepages)默认是开启的。使用下面命令来查看它的当前状态。
+
+```
+root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled
+[always] madvise never
+```
+
+输出中的 `[always]` 标志说明系统启用了透明巨大页。
+
+若使用的是基于 RedHat 的系统,则应该要查看的文件路径为 `/sys/kernel/mm/redhat_transparent_hugepage/enabled`。
+
+若想禁用巨大页,则在 `/etc/grub.conf` 中的 `kernel` 行后面加上 `transparent_hugepage=never`,然后重启系统。
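+
+在较新的、使用 GRUB 2 的发行版上,对应的做法通常是修改 `/etc/default/grub` 再重新生成引导配置。下面只是一个示意,文件路径和命令名在不同发行版上可能不同:
+
+```
+# 在 /etc/default/grub 的 GRUB_CMDLINE_LINUX 中追加参数,例如:
+# GRUB_CMDLINE_LINUX="... transparent_hugepage=never"
+
+# 然后重新生成 GRUB 配置并重启(以 RHEL/CentOS 7 的典型路径为例)
+grub2-mkconfig -o /boot/grub2/grub.cfg
+reboot
+```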
+
+--------------------------------------------------------------------------------
+
+via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/
+
+作者:[Shrikant Lavhate][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://kerneltalks.com
+[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png
+[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/
diff --git a/translated/tech/20171106 Autorandr- automatically adjust screen layout.md b/translated/tech/20171106 Autorandr- automatically adjust screen layout.md
deleted file mode 100644
index 4dc8095669..0000000000
--- a/translated/tech/20171106 Autorandr- automatically adjust screen layout.md
+++ /dev/null
@@ -1,50 +0,0 @@
-Autorandr:自动调整屏幕布局
-======
-像许多笔记本用户一样,我经常将笔记本插入到不同的显示器上(桌面上有多台显示器,演示时有投影机等)。运行 xrandr 命令或点击界面非常繁琐,编写脚本也不是很好。
-
-最近,我遇到了 [autorandr][1],它使用 EDID(和其他设置)检测连接的显示器,保存 xrandr 配置并恢复它们。它也可以在加载特定配置时运行任意脚本。我已经打包了它,目前仍在 NEW 状态。如果你不能等待,[这是 deb][2],[这是 git 仓库][3]。
-
-要使用它,只需安装软件包,并创建你的初始配置(我这里是 undocked):
-```
- autorandr --save undocked
-
-```
-
-然后,连接你的笔记本(或者插入你的外部显示器),使用 xrandr(或其他任何)更改配置,然后保存你的新配置(我这里是 workstation):
-```
-autorandr --save workstation
-
-```
-
-对你额外的配置(或当你有新的配置)进行重复操作。
-
-Autorandr 有 `udev`、`systemd` 和 `pm-utils` 钩子,当新的显示器出现时 `autorandr --change` 应该会立即运行。如果需要,也可以手动运行 `autorandr --change` 或 `autorandr - load workstation`。你也可以在加载配置后在 `~/.config/autorandr/$PROFILE/postswitch` 添加自己的脚本来运行。由于我运行 i3,我的工作站配置如下所示:
-```
- #!/bin/bash
-
- xrandr --dpi 92
- xrandr --output DP2-2 --primary
- i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
- i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
- i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'
-
-```
-
-它适当地修正了 dpi,设置主屏幕(可能不需要?),并移动 i3 工作区。你可以通过在配置文件目录中添加一个 `block` 钩子来安排配置永远不会运行。
-
-如果你定期更换显示器,请看一下!
-
---------------------------------------------------------------------------------
-
-via: https://www.donarmstrong.com/posts/autorandr/
-
-作者:[Don Armstrong][a]
-译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.donarmstrong.com
-[1]:https://github.com/phillipberndt/autorandr
-[2]:https://www.donarmstrong.com/autorandr_1.2-1_all.deb
-[3]:https://git.donarmstrong.com/deb_pkgs/autorandr.git
diff --git a/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md b/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md
index e924dcbf28..97bbfe6fb6 100644
--- a/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md
+++ b/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md
@@ -7,7 +7,7 @@
### 在Arch Linux中设置日语环境
-首先,安装必要的日语字体,以正确查看日语 ASCII 格式:
+首先,为了正确查看日语 ASCII 格式,先安装必要的日语字体:
```
sudo pacman -S adobe-source-han-sans-jp-fonts otf-ipafont
```
@@ -27,7 +27,7 @@ pacaur -S ttf-monapo
sudo pacman -S ibus ibus-anthy
```
-在 **~/.xprofile** 中添加以下行(如果不存在,创建一个):
+在 **~/.xprofile** 中添加以下几行(如果不存在,创建一个):
```
# Settings for Japanese input
export GTK_IM_MODULE='ibus'
@@ -38,7 +38,7 @@ export XMODIFIERS=@im='ibus'
ibus-daemon -drx
```
-~/.xprofile 允许我们在窗口管理器启动之前在 X 用户会话开始时执行命令。
+~/.xprofile 允许我们在 X 用户会话开始时且在窗口管理器启动之前执行命令。
保存并关闭文件。重启 Arch Linux 系统以使更改生效。
@@ -72,9 +72,9 @@ ibus-setup
[![][2]][8]
-你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,单击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下**Command/Window 键+空格键**来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。
+你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,点击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下**SUPER 键+空格键**(LCTT译注:SUPER KEY 通常为 Command/Window KEY)来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。
-你现在知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持 OSTechNix。
+现在你知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持 OSTechNix。
@@ -84,7 +84,7 @@ via: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/
作者:[][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[Locez](https://github.com/locez)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20171121 How to organize your passwords using pass password manager.md b/translated/tech/20171121 How to organize your passwords using pass password manager.md
index b129a5daf9..be460cc720 100644
--- a/translated/tech/20171121 How to organize your passwords using pass password manager.md
+++ b/translated/tech/20171121 How to organize your passwords using pass password manager.md
@@ -3,9 +3,9 @@
### 目标
-学习使用 "pass" 密码管理器来组织你的密码
+学习在 Linux 上使用 "pass" 密码管理器来管理你的密码
-### 需求
+### 条件
* 需要 root 权限来安装需要的包
@@ -16,15 +16,15 @@
### 约定
* **#** - 执行指定命令需要 root 权限,可以是直接使用 root 用户来执行或者使用 `sudo` 命令来执行
- * **$** - 使用非特权普通用户执行指定命令
+ * **$** - 使用普通的非特权用户执行指定命令
### 介绍
-如果你有根据目的不同设置不同密码的好习惯,你可能已经感受到要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专利软件(如果你敢的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。
+如果你有根据不同的意图设置不同密码的好习惯,你可能已经感受到需要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专利软件(如果你敢的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。
### First steps
-Pass 作为一个密码管理器,其实际上是对类似 `gpg` 和 `git` 等可信赖的实用工具的一种封装。虽然它也有图形界面,但它专门设计能成在命令行下工作的:因此它也可以在 headless machines 上工作 (LCTT 注:根据 wikipedia 的说法,所谓 headless machines 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。
+Pass 作为一个密码管理器,其实际上是对一些你可能早已每天使用的、可信赖且实用的工具的一种封装,比如 `gpg` 和 `git`。虽然它也有图形界面,但它是专门为在命令行下工作而设计的:因此它也可以在 headless machines 上工作(LCTT 注:根据 wikipedia 的说法,所谓 headless machines 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。
### 步骤 1 - 安装
@@ -42,7 +42,7 @@ Pass 不在官方仓库中,但你可以从 `epel` 中获取道它。要在 Cen
# yum install epel-release
```
-然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从官方的 EPEL 网站上下载它。
+然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从 EPEL 官方网站上下载它。
#### Debian and Ubuntu
```
@@ -95,12 +95,12 @@ Password Store
pass mysite
```
-然而更好的方法是使用 `-c` 选项让 pass 将密码直接拷贝道粘帖板上:
+然而更好的方法是使用 `-c` 选项让 pass 将密码直接拷贝到剪切板上:
```
pass -c mysite
```
-这种情况下粘帖板中的内容会在 `45` 秒后自动清除。两种方法都会要求你输入 gpg 密码。
+这种情况下剪切板中的内容会在 `45` 秒后自动清除。两种方法都会要求你输入 gpg 密码。
### 生成密码
@@ -109,11 +109,11 @@ Pass 也可以为我们自动生成(并自动存储)安全密码。假设我们
pass generate mysite 15
```
-若希望密码只包含字母和数字则可以是使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 pass 吧密码直接拷贝到粘帖板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码:
+若希望密码只包含字母和数字则可以是使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 pass 把密码直接拷贝到剪切板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码:
![qrcode][1]
+从上面的截屏中可以看出,生成了一个二维码,不过由于 `mysite` 的密码已经存在了,pass 会提示我们确认是否要覆盖原密码。
+从上面的截屏中尅看出,生成了一个二维码,不过由于 `mysite` 的密码已经存在了,pass 会提示我们确认是否要覆盖原密码。
Pass 使用 `/dev/urandom` 设备作为(伪)随机数据生成器来生成密码,同时它使用 `xclip` 工具来将密码拷贝到粘帖板中,同时使用 `qrencode` 来将密码以二维码的形式显示出来。在我看来,这种模块化的设计正是它最大的优势:它并不重复造轮子,而只是将常用的工具包装起来完成任务。
@@ -131,9 +131,9 @@ pass git init
pass git remote add
```
-我们可以把这个仓库当成普通密码仓库来用。唯一的不同点在于每次我们新增或修改一个密码,`pass` 都会自动将该文件加入索引并创建一个提交。
+我们可以把这个密码仓库当成普通仓库来用。唯一的不同点在于每次我们新增或修改一个密码,`pass` 都会自动将该文件加入索引并创建一个提交。
-`pass` 有一个叫做 `qtpass` 的图形界面,而且 `pass` 也支持 Windows 和 MacOs。通过使用 `PassFF` 插件,它还能获取 firefox 中存储的密码。在它的项目网站上可以查看更多详细信息。试一下 `pass` 吧,你不会失望的!
+`pass` 有一个叫做 `qtpass` 的图形界面,而且也支持 Windows 和 MacOs。通过使用 `PassFF` 插件,它还能获取 firefox 中存储的密码。在它的项目网站上可以查看更多详细信息。试一下 `pass` 吧,你不会失望的!
--------------------------------------------------------------------------------
@@ -142,7 +142,7 @@ via: https://linuxconfig.org/how-to-organize-your-passwords-using-pass-password-
作者:[Egidio Docile][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[Locez](https://github.com/locez)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20171215 How to find and tar files into a tar ball.md b/translated/tech/20171215 How to find and tar files into a tar ball.md
deleted file mode 100644
index b1cc728635..0000000000
--- a/translated/tech/20171215 How to find and tar files into a tar ball.md
+++ /dev/null
@@ -1,120 +0,0 @@
-如何找出并打包文件成 tar 包
-======
-
-我想找出所有的 \*.doc 文件并将它们创建成一个 tar 包,然后存储在 /nfs/backups/docs/file.tar 中。是否可以在 Linux 或者类 Unix 系统上查找并 tar 打包文件?
-
-find 命令用于按照给定条件在目录层次结构中搜索文件。tar 命令是用于 Linux 和类 Unix 系统创建 tar 包的归档工具。
-
-[![How to find and tar files on linux unix][1]][1]
-
-让我们看看如何将 tar 命令与 find 命令结合在一个命令行中创建一个 tar 包。
-
-## Find 命令
-
-语法是:
-```
-find /path/to/search -name "file-to-search" -options
-## 找出所有 Perl(*.pl)文件 ##
-find $HOME -name "*.pl" -print
-## 找出所有 \*.doc 文件 ##
-find $HOME -name "*.doc" -print
-## 找出所有 *.sh(shell 脚本)并运行 ls -l 命令 ##
-find . -iname "*.sh" -exec ls -l {} +
-```
-最后一个命令的输出示例:
-```
--rw-r--r-- 1 vivek vivek 1169 Apr 4 2017 ./backups/ansible/cluster/nginx.build.sh
--rwxr-xr-x 1 vivek vivek 1500 Dec 6 14:36 ./bin/cloudflare.pure.url.sh
-lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/cmspostupload.sh -> postupload.sh
-lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/cmspreupload.sh -> preupload.sh
-lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/cmssuploadimage.sh -> uploadimage.sh
-lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/faqpostupload.sh -> postupload.sh
-lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/faqpreupload.sh -> preupload.sh
-lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/faquploadimage.sh -> uploadimage.sh
--rw-r--r-- 1 vivek vivek 778 Nov 6 14:44 ./bin/mirror.sh
--rwxr-xr-x 1 vivek vivek 136 Apr 25 2015 ./bin/nixcraft.com.301.sh
--rwxr-xr-x 1 vivek vivek 547 Jan 30 2017 ./bin/paypal.sh
--rwxr-xr-x 1 vivek vivek 531 Dec 31 2013 ./bin/postupload.sh
--rwxr-xr-x 1 vivek vivek 437 Dec 31 2013 ./bin/preupload.sh
--rwxr-xr-x 1 vivek vivek 1046 May 18 2017 ./bin/purge.all.cloudflare.domain.sh
-lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/tipspostupload.sh -> postupload.sh
-lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/tipspreupload.sh -> preupload.sh
-lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/tipsuploadimage.sh -> uploadimage.sh
--rwxr-xr-x 1 vivek vivek 1193 Oct 18 2013 ./bin/uploadimage.sh
--rwxr-xr-x 1 vivek vivek 29 Nov 6 14:33 ./.vim/plugged/neomake/tests/fixtures/errors.sh
--rwxr-xr-x 1 vivek vivek 215 Nov 6 14:33 ./.vim/plugged/neomake/tests/helpers/trap.sh
-```
-
-## Tar 命令
-
-要[创建 /home/vivek/projects 目录的 tar 包][2],运行:
-```
-$ tar -cvf /home/vivek/projects.tar /home/vivek/projects
-```
-
-## 结合 find 和 tar 命令
-
-语法是:
-```
-find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} \;
-```
-或者
-```
-find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} +
-```
-例子:
-```
-find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" \;
-```
-或者
-```
-find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" +
-```
-这里,find 命令的选项:
-
- * **-name "*.doc"** : 按照给定的模式/标准查找文件。在这里,在 $HOME 中查找所有 \*.doc 文件。
- * **-exec tar ...** : 对 find 命令找到的所有文件执行 tar 命令。
-
-这里,tar 命令的选项:
-
- * **-r** : 将文件追加到归档末尾。参数与 -c 选项具有相同的含义。
- * **-v** : 详细输出。
- * **-f** : out.tar : 将所有文件追加到 out.tar 中。
-
-
-
-也可以像下面这样将 find 命令的输出通过管道输入到 tar 命令中:
-```
-find $HOME -name "*.doc" -print0 | tar -cvf /tmp/file.tar --null -T -
-```
-传递给 find 命令的 -print0 选项处理特殊的文件名。-null 和 -T 选项告诉 tar 命令从标准输入/管道读取输入。也可以使用 xargs 命令:
-```
-find $HOME -type f -name "*.sh" | xargs tar cfvz /nfs/x230/my-shell-scripts.tgz
-```
-有关更多信息,请参阅下面的 man 页面:
-```
-$ man tar
-$ man find
-$ man xargs
-$ man bash
-```
-
-------------------------------
-
-作者简介:
-
-作者是 nixCraft 的创造者,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 Twitter、Facebook 和 Google+ 上关注他。
-
---------------------------------------------------------------------------------
-
-via: https://www.cyberciti.biz/faq/linux-unix-find-tar-files-into-tarball-command/
-
-作者:[Vivek Gite][a]
-译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.cyberciti.biz
-[1]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-find-and-tar-files-on-linux-unix.jpg
-[2]:https://www.cyberciti.biz/faq/creating-a-tar-file-linux-command-line/
diff --git a/translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md b/translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md
new file mode 100644
index 0000000000..3681dfa3c6
--- /dev/null
+++ b/translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md
@@ -0,0 +1,137 @@
+六个例子带你入门 size 命令
+======
+
+正如你所知道的那样,Linux 中的目标文件或者说可执行文件由多个段组成(比如 text 段和 data 段)。若你想知道每个段的大小,那么确实存在这么一个命令行工具,那就是 `size`。在本教程中,我们将会用几个简单易懂的案例来讲解该工具的基本用法。
+
+在我们开始前,有必要先声明一下,本文的所有案例都在 Ubuntu 16.04 LTS 中测试过。
+
+## Linux size 命令
+
+size 命令基本上就是输出指定目标文件各段的大小及其总和。下面是该命令的语法:
+```
+size [-A|-B|--format=compatibility]
+ [--help]
+ [-d|-o|-x|--radix=number]
+ [--common]
+ [-t|--totals]
+ [--target=bfdname] [-V|--version]
+ [objfile...]
+```
+
+man 页是这样描述它的:
+```
+GNU 的 size 程序列出参数列表 objfile 中各目标文件(object)或存档库文件(archive)的段(section)大小,以及总大小。默认情况下,对每个目标文件或存档库中的每个模块都会产生一行输出。
+
+objfile... 是待检查的目标文件(object)。如果没有指定,则默认为文件 "a.out"。
+```
+
+下面是一些问答方式的案例,希望能让你对 size 命令有所了解。
+
+## Q1。如何使用 size 命令?
+
+size 的基本用法很简单。你只需要将目标文件/可执行文件名称作为输入就行了。下面是一个例子:
+
+```
+size apl
+```
+
+该命令在我的系统中的输出如下:
+
+[![How to use size command][1]][2]
+
+前三列分别是 text、data 和 bss 段及其相应的大小。接下来是十进制格式(dec)和十六进制格式(hex)的总大小。最后一列是文件名。
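+
+如果想自己动手试一试,可以编译一个最小的 C 程序再用 size 查看它。下面的文件名和代码只是演示用的假设,前提是系统中装有 gcc:
+
+```
+# 写一个最小的 C 程序并编译(仅作演示)
+cat > hello.c << 'EOF'
+#include <stdio.h>
+int main(void) { printf("hello\n"); return 0; }
+EOF
+gcc -o hello hello.c
+
+# 查看各段大小(默认为 Berkeley 格式)
+size hello
+```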
+
+## Q2。如何切换不同的输出格式?
+
+根据 man 页的说法,size 的默认输出格式类似于 Berkeley 的格式。然而,如果你想的话,你也可以使用 System V 规范。要做到这一点,你可以使用 `--format` 选项加上 `SysV` 值。
+
+```
+size apl --format=SysV
+```
+
+下面是它的输出:
+
+[![How to switch between different output formats][3]][4]
+
+## Q3。如何切换使用其他的单位?
+
+默认情况下,段的大小是以十进制的方式来展示。然而,如果你想的话,也可以使用八进制或十六进制来表示。对应的命令行参数分别为 `o` 和 `-x`。
+
+[![How to switch between different size units][5]][6]
+
+关于这些参数,man 页是这么说的:
+```
+-d
+-o
+-x
+--radix=number
+
+使用这几个选项,你可以让各个段节的大小以十进制(`-d',或`--radix 10');八进制(`-o',或`--radix 8');或十六进制(`-x',或`--radix 16')数字的格式显示.`--radix number' 只支持三个数值参数 (8, 10, 16).总共大小以两种进制给出; `-d'或`-x'的十进制和十六进制输出,或`-o'的 八进制和 十六进制 输出.
+```
+
+## Q4。如何让 size 命令显示所有对象文件的总大小?
+
+如果你用 size 一次性查找多个文件的段大小,则通过使用 `-t` 选项还可以让它显示各列值的总和。
+
+```
+size -t [file1] [file2] ...
+```
+
+下面是该命令的执行的截屏:
+
+[![How to make size command show totals of all object files][7]][8]
+
+`-t` 选项让它多加了最后那一行。
+
+## Q5。如何让 size 输出每个文件中公共符号的总大小?
+
+若你为 size 提供多个输入文件作为参数,而且想让它显示每个文件中公共符号(指 common segment 中的 symbol) 的大小,则你可以带上 `--common` 选项。
+
+```
+size --common [file1] [file2] ...
+```
+
+另外需要指出的是,当使用 Berkeley 格式时,这些公共符号的大小会被计入 bss 段的大小中。
+
+## Q6。还有什么其他的选项?
+
+除了刚才提到的那些选项外,size 还有一些通用的命令行选项,比如 `-v`(显示版本信息)和 `-h`(显示可用参数和选项的概要)。
+
+[![What are the other available command line options][9]][10]
+
+除此之外,你也可以使用 `@file` 选项来让 size 从文件中读取命令行选项。下面是详细的相关说明:
+```
+读出来的选项会插入到原来 @file 选项所在的位置。若文件不存在或者无法读取,则该选项不会被移除,而是会按字面意义来解释(即当作普通参数处理)。
+
+文件中的选项以空格分隔。当选项中要包含空格时需要用单引号或双引号将整个选项包起来。
+通过在字符前面添加一个反斜杠可以将任何字符(包括反斜杠本身)纳入到选项中。
+文件本身也能包含其他的@file选项;任何这样的选项都会被递归处理。
+```
+
+## 结论
+
+很明显,size 命令并不适用于所有人。它的目标群体是那些需要处理 Linux 中目标文件/可执行文件结构的人。因此,如果你刚好是目标受众,那么多试试我们这里提到的那些选项,你应该做好每天都使用这个工具的准备。想了解关于 size 的更多信息,请阅读它的 [man 页 ][11]。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-size-command/
+
+作者:[Himanshu Arora][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/command-tutorial/size-basic-usage.png
+[2]:https://www.howtoforge.com/images/command-tutorial/big/size-basic-usage.png
+[3]:https://www.howtoforge.com/images/command-tutorial/size-format-option.png
+[4]:https://www.howtoforge.com/images/command-tutorial/big/size-format-option.png
+[5]:https://www.howtoforge.com/images/command-tutorial/size-o-x-options.png
+[6]:https://www.howtoforge.com/images/command-tutorial/big/size-o-x-options.png
+[7]:https://www.howtoforge.com/images/command-tutorial/size-t-option.png
+[8]:https://www.howtoforge.com/images/command-tutorial/big/size-t-option.png
+[9]:https://www.howtoforge.com/images/command-tutorial/size-v-x1.png
+[10]:https://www.howtoforge.com/images/command-tutorial/big/size-v-x1.png
+[11]:https://linux.die.net/man/1/size