From 0de6dfe349bcbfc028cb935d2253a099ddf8b984 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 18 Sep 2018 16:58:26 +0800
Subject: [PATCH 001/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Backup=20Installe?=
 =?UTF-8?q?d=20Packages=20And=20Restore=20Them=20On=20Freshly=20Installed?=
 =?UTF-8?q?=20Ubuntu?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...estore Them On Freshly Installed Ubuntu.md | 107 ++++++++++++++++++
 1 file changed, 107 insertions(+)
 create mode 100644 sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md

diff --git a/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md b/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md
new file mode 100644
index 0000000000..d5927effee
--- /dev/null
+++ b/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md
@@ -0,0 +1,107 @@
+Backup Installed Packages And Restore Them On Freshly Installed Ubuntu
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/apt-clone-720x340.png)
+
+Installing the same set of packages on multiple Ubuntu systems is a time-consuming and boring task. You don’t want to spend your time installing the same packages over and over on multiple systems. When it comes to installing packages on Ubuntu systems of the same architecture, there are many methods available to make this task easier. You could simply migrate your old Ubuntu system’s applications, settings and data to a newly installed system with a couple of mouse clicks using [**Aptik**][1]. Or, you can [**back up the entire list of installed packages**][2] using your package manager (e.g. APT) and install them later on a freshly installed system. Today, I learned that there is yet another dedicated utility available for this job. 
Say hello to **apt-clone**, a simple tool that lets you create a list of installed packages on a Debian/Ubuntu system and restore it on a freshly installed system, in a container, or into a directory.
+
+Apt-clone will help you in situations where you want to:
+
+ * Install a consistent set of applications across multiple systems running the same Ubuntu release (or a derivative of it).
+ * Install the same set of packages on multiple systems often.
+ * Back up the entire list of installed applications and restore them on demand wherever and whenever necessary.
+
+
+
+In this brief guide, we will be discussing how to install and use Apt-clone on Debian-based systems. I tested this utility on an Ubuntu 18.04 LTS system; however, it should work on all Debian and Ubuntu-based systems.
+
+### Backup Installed Packages And Restore Them Later On Freshly Installed Ubuntu System
+
+Apt-clone is available in the default repositories. To install it, just enter the following command from the Terminal:
+
+```
+$ sudo apt install apt-clone
+```
+
+Once installed, simply create the list of installed packages and save it in any location of your choice.
+
+```
+$ mkdir ~/mypackages
+
+$ sudo apt-clone clone ~/mypackages
+```
+
+The above command saves the list of all packages installed on my Ubuntu system in a file named **apt-clone-state-ubuntuserver.tar.gz** under the **~/mypackages** directory.
+
+To view the details of the backup file, run:
+
+```
+$ apt-clone info mypackages/apt-clone-state-ubuntuserver.tar.gz
+Hostname: ubuntuserver
+Arch: amd64
+Distro: bionic
+Meta:
+Installed: 516 pkgs (33 automatic)
+Date: Sat Sep 15 10:23:05 2018
+```
+
+As you can see, I have 516 packages in total on my Ubuntu server.
+
+Now, copy this file to a USB or external drive and go to any other system where you want to install the same set of packages. 
Alternatively, you can transfer the backup file to the target system over the network and install the packages using the following command:
+
+```
+$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz
+```
+
+Please be mindful that this command will overwrite your existing **/etc/apt/sources.list** and will install/remove packages. You have been warned! Also, make sure the destination system is of the same architecture and runs the same OS. For example, if the source system is running 18.04 LTS 64-bit, the destination system must be as well.
+
+If you don’t want to restore packages onto the running system, you can use the `--destination /some/location` option to debootstrap the clone into that directory instead.
+
+```
+$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz --destination ~/oldubuntu
+```
+
+In this case, the above command will restore the packages into a folder named **~/oldubuntu**.
+
+For more details, refer to the help section:
+
+```
+$ apt-clone -h
+```
+
+Or the man page:
+
+```
+$ man apt-clone
+```
+
+**Suggested read:**
+
++ [Systemback – Restore Ubuntu Desktop and Server to previous state][3]
++ [Cronopete – An Apple’s Time Machine Clone For Linux][4]
+
+And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
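One more practical note on the restore step described above: before running `apt-clone restore`, it is worth comparing the `Arch:` and `Distro:` fields that `apt-clone info` prints against the destination machine. The helper below is an illustrative sketch, not part of apt-clone; on a real destination system the right-hand values would come from `dpkg --print-architecture` and `lsb_release -sc`.

```shell
# Compare a field from the backup against the destination system.
check_field() {
    # $1 = field name, $2 = value from `apt-clone info`, $3 = local value
    if [ "$2" != "$3" ]; then
        echo "Mismatch in $1: backup has '$2', this system has '$3'" >&2
        return 1
    fi
    echo "$1 matches ($2)"
}

# Real use on the destination system (these commands are assumed to
# be available there; they are standard on Debian/Ubuntu):
#   check_field Arch   amd64  "$(dpkg --print-architecture)"
#   check_field Distro bionic "$(lsb_release -sc)"
```

If either check fails, don’t run `apt-clone restore` on that machine.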
+ + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/backup-installed-packages-and-restore-them-on-freshly-installed-ubuntu-system/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/how-to-migrate-system-settings-and-data-from-an-old-system-to-a-newly-installed-ubuntu-system/ +[2]: https://www.ostechnix.com/create-list-installed-packages-install-later-list-centos-ubuntu/#comment-12598 + +[3]: https://www.ostechnix.com/systemback-restore-ubuntu-desktop-and-server-to-previous-state/ + +[4]: https://www.ostechnix.com/cronopete-apples-time-machine-clone-linux/ From f21ff0975c34ca978acaabb841b06eeb284d10f4 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Fri, 21 Sep 2018 19:43:03 +0800 Subject: [PATCH 002/736] Translated by qhwdw --- ...feinated 6.828:Lab 2 Memory Management.md | 234 ------------------ ...feinated 6.828:Lab 2 Memory Management.md | 232 +++++++++++++++++ 2 files changed, 232 insertions(+), 234 deletions(-) delete mode 100644 sources/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md create mode 100644 translated/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md diff --git a/sources/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md b/sources/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md deleted file mode 100644 index a52f7ac36a..0000000000 --- a/sources/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md +++ /dev/null @@ -1,234 +0,0 @@ -Translating by qhwdw - -# Caffeinated 6.828:Lab 2: Memory Management - -### Introduction - -In this lab, you will write the memory management code for your operating system. Memory management has two components. 
- -The first component is a physical memory allocator for the kernel, so that the kernel can allocate memory and later free it. Your allocator will operate in units of 4096 bytes, called pages. Your task will be to maintain data structures that record which physical pages are free and which are allocated, and how many processes are sharing each allocated page. You will also write the routines to allocate and free pages of memory. - -The second component of memory management is virtual memory, which maps the virtual addresses used by kernel and user software to addresses in physical memory. The x86 hardware’s memory management unit (MMU) performs the mapping when instructions use memory, consulting a set of page tables. You will modify JOS to set up the MMU’s page tables according to a specification we provide. - -### Getting started - -In this and future labs you will progressively build up your kernel. We will also provide you with some additional source. To fetch that source, use Git to commit changes you’ve made since handing in lab 1 (if any), fetch the latest version of the course repository, and then create a local branch called lab2 based on our lab2 branch, origin/lab2: - -``` -athena% cd ~/6.828/lab -athena% add git -athena% git pull -Already up-to-date. -athena% git checkout -b lab2 origin/lab2 -Branch lab2 set up to track remote branch refs/remotes/origin/lab2. -Switched to a new branch "lab2" -athena% -``` - -You will now need to merge the changes you made in your lab1 branch into the lab2 branch, as follows: - -``` -athena% git merge lab1 -Merge made by recursive. 
- kern/kdebug.c | 11 +++++++++-- - kern/monitor.c | 19 +++++++++++++++++++ - lib/printfmt.c | 7 +++---- - 3 files changed, 31 insertions(+), 6 deletions(-) -athena% -``` - -Lab 2 contains the following new source files, which you should browse through: - -- inc/memlayout.h -- kern/pmap.c -- kern/pmap.h -- kern/kclock.h -- kern/kclock.c - -memlayout.h describes the layout of the virtual address space that you must implement by modifying pmap.c. memlayout.h and pmap.h define the PageInfo structure that you’ll use to keep track of which pages of physical memory are free. kclock.c and kclock.h manipulate the PC’s battery-backed clock and CMOS RAM hardware, in which the BIOS records the amount of physical memory the PC contains, among other things. The code in pmap.c needs to read this device hardware in order to figure out how much physical memory there is, but that part of the code is done for you: you do not need to know the details of how the CMOS hardware works. - -Pay particular attention to memlayout.h and pmap.h, since this lab requires you to use and understand many of the definitions they contain. You may want to review inc/mmu.h, too, as it also contains a number of definitions that will be useful for this lab. - -Before beginning the lab, don’t forget to add exokernel to get the 6.828 version of QEMU. - -### Hand-In Procedure - -When you are ready to hand in your lab code and write-up, add your answers-lab2.txt to the Git repository, commit your changes, and then run make handin. - -``` -athena% git add answers-lab2.txt -athena% git commit -am "my answer to lab2" -[lab2 a823de9] my answer to lab2 4 files changed, 87 insertions(+), 10 deletions(-) -athena% make handin -``` - -### Part 1: Physical Page Management - -The operating system must keep track of which parts of physical RAM are free and which are currently in use. JOS manages the PC’s physical memory with page granularity so that it can use the MMU to map and protect each piece of allocated memory. 
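The page-granularity tracking just described can be modeled in a few lines of plain C. The sketch below is an illustration of the idea only, not the JOS code: JOS really does keep a `pages` array of `struct PageInfo` entries and thread free pages through a `pp_link` field, but the real `page_init()` must also leave the kernel image, the I/O hole, and other reserved ranges off the free list.

```c
#include <assert.h>
#include <stddef.h>

#define NPAGES 64          /* pretend machine with 64 physical pages */
#define PGSIZE 4096

struct PageInfo {
    struct PageInfo *pp_link;   /* next free page, if this page is free */
    unsigned short pp_ref;      /* how many mappings reference this page */
};

struct PageInfo pages[NPAGES];
struct PageInfo *page_free_list;

/* Page i stands for physical address i * PGSIZE. */
size_t page2pa(struct PageInfo *pp) { return (size_t)(pp - pages) * PGSIZE; }

void page_init(void)
{
    /* Build the free list; here every page is assumed usable. */
    page_free_list = NULL;
    for (int i = 0; i < NPAGES; i++) {
        pages[i].pp_ref = 0;
        pages[i].pp_link = page_free_list;
        page_free_list = &pages[i];
    }
}

struct PageInfo *page_alloc(void)
{
    struct PageInfo *pp = page_free_list;
    if (pp != NULL) {
        page_free_list = pp->pp_link;
        pp->pp_link = NULL;     /* allocated pages are off the free list */
    }
    return pp;                  /* NULL means out of memory */
}

void page_free(struct PageInfo *pp)
{
    assert(pp->pp_ref == 0);    /* only unreferenced pages may be freed */
    pp->pp_link = page_free_list;
    page_free_list = pp;
}
```

Both operations are O(1) list manipulations; all of the policy sits in which pages `page_init()` chooses to put on the list in the first place.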
-
-You’ll now write the physical page allocator. It keeps track of which pages are free with a linked list of struct PageInfo objects, each corresponding to a physical page. You need to write the physical page allocator before you can write the rest of the virtual memory implementation, because your page table management code will need to allocate physical memory in which to store page tables.
-
-> Exercise 1
->
-> In the file kern/pmap.c, you must implement code for the following functions (probably in the order given).
->
-> boot_alloc()
->
-> mem_init() (only up to the call to check_page_free_list())
->
-> page_init()
->
-> page_alloc()
->
-> page_free()
->
-> check_page_free_list() and check_page_alloc() test your physical page allocator. You should boot JOS and see whether check_page_alloc() reports success. Fix your code so that it passes. You may find it helpful to add your own assert()s to verify that your assumptions are correct.
-
-This lab, and all the 6.828 labs, will require you to do a bit of detective work to figure out exactly what you need to do. This assignment does not describe all the details of the code you’ll have to add to JOS. Look for comments in the parts of the JOS source that you have to modify; those comments often contain specifications and hints. You will also need to look at related parts of JOS, at the Intel manuals, and perhaps at your 6.004 or 6.033 notes.
-
-### Part 2: Virtual Memory
-
-Before doing anything else, familiarize yourself with the x86’s protected-mode memory management architecture: namely segmentation and page translation.
-
-> Exercise 2
->
-> Look at chapters 5 and 6 of the Intel 80386 Reference Manual, if you haven’t done so already. Read the sections about page translation and page-based protection closely (5.2 and 6.4). 
We recommend that you also skim the sections about segmentation; while JOS uses paging for virtual memory and protection, segment translation and segment-based protection cannot be disabled on the x86, so you will need a basic understanding of it. - -### Virtual, Linear, and Physical Addresses - -In x86 terminology, a virtual address consists of a segment selector and an offset within the segment. A linear address is what you get after segment translation but before page translation. A physical address is what you finally get after both segment and page translation and what ultimately goes out on the hardware bus to your RAM. - -![屏幕快照 2018-09-04 11.22.20](/Users/qhwdw/Desktop/屏幕快照 2018-09-04 11.22.20.png) - -Recall that in part 3 of lab 1, we installed a simple page table so that the kernel could run at its link address of 0xf0100000, even though it is actually loaded in physical memory just above the ROM BIOS at 0x00100000. This page table mapped only 4MB of memory. In the virtual memory layout you are going to set up for JOS in this lab, we’ll expand this to map the first 256MB of physical memory starting at virtual address 0xf0000000 and to map a number of other regions of virtual memory. - -> Exercise 3 -> -> While GDB can only access QEMU’s memory by virtual address, it’s often useful to be able to inspect physical memory while setting up virtual memory. Review the QEMU monitor commands from the lab tools guide, especially the xp command, which lets you inspect physical memory. To access the QEMU monitor, press Ctrl-a c in the terminal (the same binding returns to the serial console). -> -> Use the xp command in the QEMU monitor and the x command in GDB to inspect memory at corresponding physical and virtual addresses and make sure you see the same data. 
->
-> Our patched version of QEMU provides an info pg command that may also prove useful: it shows a compact but detailed representation of the current page tables, including all mapped memory ranges, permissions, and flags. Stock QEMU also provides an info mem command that shows an overview of which ranges of virtual memory are mapped and with what permissions.
-
-From code executing on the CPU, once we’re in protected mode (which we entered first thing in boot/boot.S), there’s no way to directly use a linear or physical address. All memory references are interpreted as virtual addresses and translated by the MMU, which means all pointers in C are virtual addresses.
-
-The JOS kernel often needs to manipulate addresses as opaque values or as integers, without dereferencing them, for example in the physical memory allocator. Sometimes these are virtual addresses, and sometimes they are physical addresses. To help document the code, the JOS source distinguishes the two cases: the type uintptr_t represents opaque virtual addresses, and physaddr_t represents physical addresses. Both these types are really just synonyms for 32-bit integers (uint32_t), so the compiler won’t stop you from assigning one type to another! Since they are integer types (not pointers), the compiler will complain if you try to dereference them.
-
-The JOS kernel can dereference a uintptr_t by first casting it to a pointer type. In contrast, the kernel can’t sensibly dereference a physical address, since the MMU translates all memory references. If you cast a physaddr_t to a pointer and dereference it, you may be able to load and store to the resulting address (the hardware will interpret it as a virtual address), but you probably won’t get the memory location you intended. 
-
-To summarize:
-
-| C type | Address type |
-| ------------ | ------------ |
-| `T*` | Virtual |
-| `uintptr_t` | Virtual |
-| `physaddr_t` | Physical |
-
->Question
->
->Assuming that the following JOS kernel code is correct, what type should variable x have,
->uintptr_t or physaddr_t?
->
->![屏幕快照 2018-09-04 11.48.54](/Users/qhwdw/Desktop/屏幕快照 2018-09-04 11.48.54.png)
->
-
-The JOS kernel sometimes needs to read or modify memory for which it knows only the physical address. For example, adding a mapping to a page table may require allocating physical memory to store a page directory and then initializing that memory. However, the kernel, like any other software, cannot bypass virtual memory translation and thus cannot directly load and store to physical addresses. One reason JOS remaps all of physical memory starting from physical address 0 at virtual address 0xf0000000 is to help the kernel read and write memory for which it knows just the physical address. In order to translate a physical address into a virtual address that the kernel can actually read and write, the kernel must add 0xf0000000 to the physical address to find its corresponding virtual address in the remapped region. You should use KADDR(pa) to do that addition.
-
-The JOS kernel also sometimes needs to be able to find a physical address given the virtual address of the memory in which a kernel data structure is stored. Kernel global variables and memory allocated by boot_alloc() are in the region where the kernel was loaded, starting at 0xf0000000, the very region where we mapped all of physical memory. Thus, to turn a virtual address in this region into a physical address, the kernel can simply subtract 0xf0000000. You should use PADDR(va) to do that subtraction.
-
-### Reference counting
-
-In future labs you will often have the same physical page mapped at multiple virtual addresses simultaneously (or in the address spaces of multiple environments). 
You will keep a count of the number of references to each physical page in the pp_ref field of the struct PageInfo corresponding to the physical page. When this count goes to zero for a physical page, that page can be freed because it is no longer used. In general, this count should equal the number of times the physical page appears below UTOP in all page tables (the mappings above UTOP are mostly set up at boot time by the kernel and should never be freed, so there’s no need to reference count them). We’ll also use it to keep track of the number of pointers we keep to the page directory pages and, in turn, of the number of references the page directories have to page table pages.
-
-Be careful when using page_alloc. The page it returns will always have a reference count of 0, so pp_ref should be incremented as soon as you’ve done something with the returned page (like inserting it into a page table). Sometimes this is handled by other functions (for example, page_insert) and sometimes the function calling page_alloc must do it directly.
-
-### Page Table Management
-
-Now you’ll write a set of routines to manage page tables: to insert and remove linear-to-physical mappings, and to create page table pages when needed.
-
-> Exercise 4
->
-> In the file kern/pmap.c, you must implement code for the following functions.
->
-> pgdir_walk()
->
-> boot_map_region()
->
-> page_lookup()
->
-> page_remove()
->
-> page_insert()
->
-> check_page(), called from mem_init(), tests your page table management routines. You should make sure it reports success before proceeding.
-
-### Part 3: Kernel Address Space
-
-JOS divides the processor’s 32-bit linear address space into two parts. User environments (processes), which we will begin loading and running in lab 3, will have control over the layout and contents of the lower part, while the kernel always maintains complete control over the upper part. 
The dividing line is defined somewhat arbitrarily by the symbol ULIM in inc/memlayout.h, reserving approximately 256MB of virtual address space for the kernel. This explains why we needed to give the kernel such a high link address in lab 1: otherwise there would not be enough room in the kernel’s virtual address space to map in a user environment below it at the same time.
-
-You’ll find it helpful to refer to the JOS memory layout diagram in inc/memlayout.h both for this part and for later labs.
-
-### Permissions and Fault Isolation
-
-Since kernel and user memory are both present in each environment’s address space, we will have to use permission bits in our x86 page tables to allow user code access only to the user part of the address space. Otherwise bugs in user code might overwrite kernel data, causing a crash or more subtle malfunction; user code might also be able to steal other environments’ private data.
-
-The user environment will have no permission to any of the memory above ULIM, while the kernel will be able to read and write this memory. For the address range [UTOP,ULIM), both the kernel and the user environment have the same permission: they can read but not write this address range. This range of addresses is used to expose certain kernel data structures read-only to the user environment. Lastly, the address space below UTOP is for the user environment to use; the user environment will set permissions for accessing this memory.
-
-### Initializing the Kernel Address Space
-
-Now you’ll set up the address space above UTOP: the kernel part of the address space. inc/memlayout.h shows the layout you should use. You’ll use the functions you just wrote to set up the appropriate linear-to-physical mappings.
-
-> Exercise 5
->
-> Fill in the missing code in mem_init() after the call to check_page().
-
-Your code should now pass the check_kern_pgdir() and check_page_installed_pgdir() checks. 
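When reasoning about which page-directory entries have been filled in, it helps to be able to compute the directory index of a given linear address. JOS does this with the PDX and PTX macros in inc/mmu.h; the standalone model below mirrors the standard x86 two-level split (top 10 bits select the page-directory entry, the next 10 bits select the page-table entry):

```c
#include <assert.h>
#include <stdint.h>

#define PTXSHIFT 12   /* first bit of the page-table index */
#define PDXSHIFT 22   /* first bit of the page-directory index */

/* Index into the page directory: top 10 bits of a linear address. */
uint32_t pdx(uint32_t la) { return (la >> PDXSHIFT) & 0x3FF; }

/* Index into a page table: next 10 bits. */
uint32_t ptx(uint32_t la) { return (la >> PTXSHIFT) & 0x3FF; }
```

Each page-directory entry therefore covers 4MB of linear address space: pdx(0xf0000000) is 960, so KERNBASE falls in entry 960, and the top entry, 1023, covers addresses 0xffc00000 and up.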
-
-> Question
->
-> 1. What entries (rows) in the page directory have been filled in at this point? What addresses do they map and where do they point? In other words, fill out this table as much as possible:
->
-> | Entry | Base Virtual Address | Points to (logically) |
-> | ----- | -------------------- | --------------------- |
-> | 1023 | ? | Page table for top 4MB of phys memory |
-> | 1022 | ? | ? |
-> | . | ? | ? |
-> | . | ? | ? |
-> | . | ? | ? |
-> | 2 | 0x00800000 | ? |
-> | 1 | 0x00400000 | ? |
-> | 0 | 0x00000000 | [see next question] |
->
-> 2. (From Lecture 3) We have placed the kernel and user environment in the same address space. Why will user programs not be able to read or write the kernel’s memory? What specific mechanisms protect the kernel memory?
->
-> 3. What is the maximum amount of physical memory that this operating system can support? Why?
->
-> 4. How much space overhead is there for managing memory, if we actually had the maximum amount of physical memory? How is this overhead broken down?
->
-> 5. Revisit the page table setup in kern/entry.S and kern/entrypgdir.c. Immediately after we turn on paging, EIP is still a low number (a little over 1MB). At what point do we transition to running at an EIP above KERNBASE? What makes it possible for us to continue executing at a low EIP between when we enable paging and when we begin running at an EIP above KERNBASE? Why is this transition necessary?
-
-### Address Space Layout Alternatives
-
-The address space layout we use in JOS is not the only one possible. An operating system might map the kernel at low linear addresses while leaving the upper part of the linear address space for user processes. x86 kernels generally do not take this approach, however, because one of the x86’s backward-compatibility modes, known as virtual 8086 mode, is “hard-wired” in the processor to use the bottom part of the linear address space, and thus cannot be used at all if the kernel is mapped there. 
-
-It is even possible, though much more difficult, to design the kernel so as not to have to reserve any fixed portion of the processor’s linear or virtual address space for itself, but instead effectively to allow user-level processes unrestricted use of the entire 4GB of virtual address space - while still fully protecting the kernel from these processes and protecting different processes from each other!
-
-Generalize the kernel’s memory allocation system to support pages of a variety of power-of-two allocation unit sizes from 4KB up to some reasonable maximum of your choice. Be sure you have some way to divide larger allocation units into smaller ones on demand, and to coalesce multiple small allocation units back into larger units when possible. Think about the issues that might arise in such a system.
-
-This completes the lab. Make sure you pass all of the make grade tests and don’t forget to write up your answers to the questions in answers-lab2.txt. Commit your changes (including adding answers-lab2.txt) and type make handin in the lab directory to hand in your lab.
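For the power-of-two challenge above, a buddy-style design is the classic answer, and its core index arithmetic is small enough to sketch. This is an illustration of the idea, not a worked solution: a real allocator would add one free list per order plus the split and coalesce loops around these helpers (an order-k block spans 2^k pages and starts at a page number that is a multiple of 2^k).

```c
#include <assert.h>
#include <stdint.h>

/* A block's "buddy" is the adjacent same-order block it was split from:
 * flip bit `order` of the starting page number. */
uint32_t buddy_of(uint32_t page, unsigned order)
{
    return page ^ (1u << order);
}

/* Splitting an order-k block at `page` yields two order-(k-1) halves;
 * the low half keeps `page`, the high half starts here. */
uint32_t split_high_half(uint32_t page, unsigned order)
{
    return page + (1u << (order - 1));
}

/* Two same-order free blocks may be merged only if they are buddies,
 * so that the result is one aligned block of the next order up. */
int can_coalesce(uint32_t a, uint32_t b, unsigned order)
{
    return buddy_of(a, order) == b;
}
```

The alignment invariant is what keeps coalescing cheap: given a freed block, its candidate merge partner is computed in one XOR rather than found by searching.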
- ------- - -via: - -作者:[Mit][] -译者:[译者ID](https://github.com/%E8%AF%91%E8%80%85ID) -校对:[校对者ID](https://github.com/%E6%A0%A1%E5%AF%B9%E8%80%85ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file diff --git a/translated/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md b/translated/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md new file mode 100644 index 0000000000..0e2a348679 --- /dev/null +++ b/translated/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md @@ -0,0 +1,232 @@ +# Caffeinated 6.828:实验 2:内存管理 + +### 简介 + +在本实验中,你将为你的操作系统写内存管理方面的代码。内存管理有两部分组成。 + +第一部分是内核的物理内存分配器,内核通过它来分配内存,以及在不需要时释放所分配的内存。分配器以页为单位分配内存,每个页的大小为 4096 字节。你的任务是去维护那个数据结构,它负责记录物理页的分配和释放,以及每个分配的页有多少进程共享它。本实验中你将要写出分配和释放内存页的全套代码。 + +第二个部分是虚拟内存的管理,它负责由内核和用户软件使用的虚拟内存地址到物理内存地址之间的映射。当使用内存时,x86 架构的硬件是由内存管理单元(MMU)负责执行映射操作来查阅一组页表。接下来你将要修改 JOS,以根据我们提供的特定指令去设置 MMU 的页表。 + +### 预备知识 + +在本实验及后面的实验中,你将逐步构建你的内核。我们将会为你提供一些附加的资源。使用 Git 去获取这些资源、提交自实验 1 以来的改变(如有需要的话)、获取课程仓库的最新版本、以及在我们的实验 2 (origin/lab2)的基础上创建一个称为 lab2 的本地分支: + +``` +athena% cd ~/6.828/lab +athena% add git +athena% git pull +Already up-to-date. +athena% git checkout -b lab2 origin/lab2 +Branch lab2 set up to track remote branch refs/remotes/origin/lab2. +Switched to a new branch "lab2" +athena% +``` + +现在,你需要将你在 lab1 分支中的改变合并到 lab2 分支中,命令如下: + +``` +athena% git merge lab1 +Merge made by recursive. 
+ kern/kdebug.c | 11 +++++++++-- + kern/monitor.c | 19 +++++++++++++++++++ + lib/printfmt.c | 7 +++---- + 3 files changed, 31 insertions(+), 6 deletions(-) +athena% +``` + +实验 2 包含如下的新源代码,后面你将遍历它们: + +- inc/memlayout.h +- kern/pmap.c +- kern/pmap.h +- kern/kclock.h +- kern/kclock.c + +`memlayout.h` 描述虚拟地址空间的布局,这个虚拟地址空间是通过修改 `pmap.c`、`memlayout.h` 和 `pmap.h` 所定义的 *PageInfo* 数据结构来实现的,这个数据结构用于跟踪物理内存页面是否被释放。`kclock.c` 和 `kclock.h` 维护 PC 基于电池的时钟和 CMOS RAM 硬件,在 BIOS 中记录了 PC 上安装的物理内存数量,以及其它的一些信息。在 `pmap.c` 中的代码需要去读取这个设备硬件信息,以算出在这个设备上安装了多少物理内存,这些只是由你来完成的一部分代码:你不需要知道 CMOS 硬件工作原理的细节。 + +特别需要注意的是 `memlayout.h` 和 `pmap.h`,因为本实验需要你去使用和理解的大部分内容都包含在这两个文件中。你或许还需要去复习 `inc/mmu.h` 这个文件,因为它也包含了本实验中用到的许多定义。 + +开始本实验之前,记得去添加 `exokernel` 以获取 QEMU 的 6.828 版本。 + +### 实验过程 + +在你准备进行实验和写代码之前,先添加你的 `answers-lab2.txt` 文件到 Git 仓库,提交你的改变然后去运行 `make handin`。 + +``` +athena% git add answers-lab2.txt +athena% git commit -am "my answer to lab2" +[lab2 a823de9] my answer to lab2 4 files changed, 87 insertions(+), 10 deletions(-) +athena% make handin +``` + +### 第 1 部分:物理页面管理 + +操作系统必须跟踪物理内存页是否使用的状态。JOS 以页为最小粒度来管理 PC 的物理内存,以便于它使用 MMU 去映射和保护每个已分配的内存片段。 + +现在,你将要写内存的物理页分配器的代码。它使用链接到 `PageInfo` 数据结构的一组列表来保持对物理页的状态跟踪,每个列表都对应到一个物理内存页。在你能够写出剩下的虚拟内存实现之前,你需要先写出物理内存页面分配器,因为你的页表管理代码将需要去分配物理内存来存储页表。 + +> 练习 1 +> +> 在文件 `kern/pmap.c` 中,你需要去实现以下函数的代码(或许要按给定的顺序来实现)。 +> +> boot_alloc() +> +> mem_init()(只要能够调用 check_page_free_list() 即可) +> +> page_init() +> +> page_alloc() +> +> page_free() +> +> `check_page_free_list()` 和 `check_page_alloc()` 可以测试你的物理内存页分配器。你将需要引导 JOS 然后去看一下 `check_page_alloc()` 是否报告成功即可。如果没有报告成功,修复你的代码直到成功为止。你可以添加你自己的 `assert()` 以帮助你去验证是否符合你的预期。 + +本实验以及所有的 6.828 实验中,将要求你做一些检测工作,以便于你搞清楚它们是否按你的预期来工作。这个任务不需要详细描述你添加到 JOS 中的代码的细节。查找 JOS 源代码中你需要去修改的那部分的注释;这些注释中经常包含有技术规范和提示信息。你也可能需要去查阅 JOS、和 Intel 的技术手册、以及你的 6.004 或 6.033 课程笔记的相关部分。 + +### 第 2 部分:虚拟内存 + +在你开始动手之前,需要先熟悉 x86 内存管理架构的保护模式:即分段和页面转换。 + +> 练习 2 +> +> 如果你对 x86 的保护模式还不熟悉,可以查看 Intel 80386 参考手册的第 5 章和第 6 章。阅读这些章节(5.2 和 
6.4)中关于页面转换和基于页面的保护。我们建议你也去了解关于段的章节;在虚拟内存和保护模式中,JOS 使用了分页、段转换、以及在 x86 上不能禁用的基于段的保护,因此你需要去理解这些基础知识。 + +### 虚拟地址、线性地址和物理地址 + +在 x86 的专用术语中,一个虚拟地址是由一个段选择器和在段中的偏移量组成。一个线性地址是在页面转换之前、段转换之后得到的一个地址。一个物理地址是段和页面转换之后得到的最终地址,它最终将进入你的物理内存中的硬件总线。 + +![屏幕快照 2018-09-04 11.22.20](https://ws1.sinaimg.cn/large/0069RVTdly1fuxgrc398jj30gx04bgm1.jpg) + +回顾实验 1 中的第 3 部分,我们安装了一个简单的页表,这样内核就可以在 0xf0100000 链接的地址上运行,尽管它实际上是加载在 0x00100000 处的 ROM BIOS 的物理内存上。这个页表仅映射了 4MB 的内存。在实验中,你将要为 JOS 去设置虚拟内存布局,我们将从虚拟地址 0xf0000000 处开始扩展它,首先将物理内存扩展到 256MB,并映射许多其它区域的虚拟内存。 + +> 练习 3 +> +> 虽然 GDB 能够通过虚拟地址访问 QEMU 的内存,它经常用于在配置虚拟内存期间检查物理内存。在实验工具指南中复习 QEMU 的监视器命令,尤其是 `xp` 命令,它可以让你去检查物理内存。访问 QEMU 监视器,可以在终端中按 `Ctrl-a c`(相同的绑定返回到串行控制台)。 +> +> 使用 QEMU 监视器的 `xp` 命令和 GDB 的 `x` 命令去检查相应的物理内存和虚拟内存,以确保你看到的是相同的数据。 +> +> 我们的打过补丁的 QEMU 版本提供一个非常有用的 `info pg` 命令:它可以展示当前页表的一个简单描述,包括所有已映射的内存范围、权限、以及标志。Stock QEMU 也提供一个 `info mem` 命令用于去展示一个概要信息,这个信息包含了已映射的虚拟内存范围和使用了什么权限。 + +在 CPU 上运行的代码,一旦处于保护模式(这是在 boot/boot.S 中所做的第一件事情)中,是没有办法去直接使用一个线性地址或物理地址的。所有的内存引用都被解释为虚拟地址,然后由 MMU 来转换,这意味着在 C 语言中的指针都是虚拟地址。 + +例如在物理内存分配器中,JOS 内存经常需要在不反向引用的情况下,去维护作为地址的一个很难懂的值或一个整数。有时它们是虚拟地址,而有时是物理地址。为便于在代码中证明,JOS 源文件中将它们区分为两种:类型 `uintptr_t` 表示一个难懂的虚拟地址,而类型 `physaddr_trepresents` 表示物理地址。这些类型其实不过是 32 位整数(uint32_t)的同义词,因此编译器不会阻止你将一个类型的数据指派为另一个类型!因为它们都是整数(而不是指针)类型,如果你想去反向引用它们,编译器将报错。 + +JOS 内核能够通过将它转换为指针类型的方式来反向引用一个 `uintptr_t` 类型。相反,内核不能反向引用一个物理地址,因为这是由 MMU 来转换所有的内存引用。如果你转换一个 `physaddr_t` 为一个指针类型,并反向引用它,你或许能够加载和存储最终结果地址(硬件将它解释为一个虚拟地址),但你并不会取得你想要的内存位置。 + +总结如下: + +| C type | Address type | +| ------------ | ------------ | +| `T*` | Virtual | +| `uintptr_t` | Virtual | +| `physaddr_t` | Physical | + +>问题: +> +>假设下面的 JOS 内核代码是正确的,那么变量 `x` 应该是什么类型?uintptr_t 还是 physaddr_t ? 
+> +>![屏幕快照 2018-09-04 11.48.54](https://ws3.sinaimg.cn/large/0069RVTdly1fuxgrbkqd3j30m302bmxc.jpg) +> + +JOS 内核有时需要去读取或修改它知道物理地址的内存。例如,添加一个映射到页表,可以要求分配物理内存去存储一个页目录,然后去初始化它们。然而,内核也和其它的软件一样,并不能跳过虚拟地址转换,内核并不能直接加载和存储物理地址。一个原因是 JOS 将重映射从虚拟地址 0xf0000000 处物理地址 0 开始的所有的物理地址,以帮助内核去读取和写入它知道物理地址的内存。为转换一个物理地址为一个内核能够真正进行读写操作的虚拟地址,内核必须添加 0xf0000000 到物理地址以找到在重映射区域中相应的虚拟地址。你应该使用 KADDR(pa) 去做那个添加操作。 + +JOS 内核有时也需要能够通过给定的内核数据结构中存储的虚拟地址找到内存中的物理地址。内核全局变量和通过 `boot_alloc()` 分配的内存是加载到内核的这些区域中,从 0xf0000000 处开始,到全部物理内存映射的区域。因此,在这些区域中转变一个虚拟地址为物理地址时,内核能够只是简单地减去 0xf0000000 即可得到物理地址。你应该使用 PADDR(va) 去做那个减法操作。 + +### 引用计数 + +在以后的实验中,你将会经常遇到多个虚拟地址(或多个环境下的地址空间中)同时映射到相同的物理页面上。你将在 PageInfo 数据结构中用 pp_ref 字段来提供一个引用到每个物理页面的计数器。如果一个物理页面的这个计数器为 0,表示这个页面已经被释放,因为它不再被使用了。一般情况下,这个计数器应该等于相应的物理页面出现在所有页表下面的 UTOP 的次数(UTOP 上面的映射大都是在引导时由内核设置的,并且它从不会被释放,因此不需要引用计数器)。我们也使用它去跟踪到页目录的指针数量,反过来就是,页目录到页表的数量。 + +使用 `page_alloc` 时要注意。它返回的页面引用计数总是为 0,因此,一旦对返回页做了一些操作(比如将它插入到页表),`pp_ref` 就应该增加。有时这需要通过其它函数(比如,`page_instert`)来处理,而有时这个函数是直接调用 `page_alloc` 来做的。 + +### 页表管理 + +现在,你将写一套管理页表的代码:去插入和删除线性地址到物理地址的映射表,并且在需要的时候去创建页表。 + +> 练习 4 +> +> 在文件 `kern/pmap.c` 中,你必须去实现下列函数的代码。 +> +> pgdir_walk() +> +> boot_map_region() +> +> page_lookup() +> +> page_remove() +> +> page_insert() +> +> `check_page()`,调用 `mem_init()`,测试你的页表管理动作。在进行下一步流程之前你应该确保它成功运行。 + +### 第 3 部分:内核地址空间 + +JOS 分割处理器的 32 位线性地址空间为两部分:用户环境(进程),我们将在实验 3 中开始加载和运行,它将控制其上的布局和低位部分的内容,而内核总是维护对高位部分的完全控制。线性地址的定义是在 `inc/memlayout.h` 中通过符号 ULIM 来划分的,它为内核保留了大约 256MB 的虚拟地址空间。这就解释了为什么我们要在实验 1 中给内核这样的一个高位链接地址的原因:如是不这样做的话,内核的虚拟地址空间将没有足够的空间去同时映射到下面的用户空间中。 + +你可以在 `inc/memlayout.h` 中找到一个图表,它有助于你去理解 JOS 内存布局,这在本实验和后面的实验中都会用到。 + +### 权限和缺页隔离 + +由于内核和用户的内存都存在于它们各自环境的地址空间中,因此我们需要在 x86 的页表中使用权限位去允许用户代码只能访问用户所属地址空间的部分。否则的话,用户代码中的 bug 可能会覆写内核数据,导致系统崩溃或者发生各种莫名其妙的的故障;用户代码也可能会偷窥其它环境的私有数据。 + +对于 ULIM 以上部分的内存,用户环境没有任何权限,只有内核才可以读取和写入这部分内存。对于 [UTOP,ULIM] 地址范围,内核和用户都有相同的权限:它们可以读取但不能写入这个地址范围。这个地址范围是用于向用户环境暴露某些只读的内核数据结构。最后,低于 UTOP 的地址空间是为用户环境所使用的;用户环境将为访问这些内核设置权限。 + +### 初始化内核地址空间 + +现在,你将去配置 UTOP 
以上的地址空间:内核部分的地址空间。`inc/memlayout.h` 中展示了你将要使用的布局。我将使用函数去写相关的线性地址到物理地址的映射配置。 + +> 练习 5 +> +> 完成调用 `check_page()` 之后在 `mem_init()` 中缺失的代码。 + +现在,你的代码应该通过了 `check_kern_pgdir()` 和 `check_page_installed_pgdir()` 的检查。 + +> 问题: +> +> ​ 1、在这个时刻,页目录中的条目(行)是什么?它们映射的址址是什么?以及它们映射到哪里了?换句话说就是,尽可能多地填写这个表: +> +> EntryBase Virtual AddressPoints to (logically): +> +> 1023 ? Page table for top 4MB of phys memory +> +> 1022 ? ? +> +> . ? ? +> +> . ? ? +> +> . ? ? +> +> 2 0x00800000 ? +> +> 1 0x00400000 ? +> +> 0 0x00000000 [see next question] +> +> ​ 2、(来自课程 3) 我们将内核和用户环境放在相同的地址空间中。为什么用户程序不能去读取和写入内核的内存?有什么特殊机制保护内核内存? +> +> ​ 3、这个操作系统能够支持的最大的物理内存数量是多少?为什么? +> +> ​ 4、我们真实地拥有最大数量的物理内存吗?管理内存的开销有多少?这个开销可以减少吗? +> +> ​ 5、复习在 `kern/entry.S` 和 `kern/entrypgdir.c` 中的页表设置。一旦我们打开分页,EIP 中是一个很小的数字(稍大于 1MB)。在什么情况下,我们转而去运行在 KERNBASE 之上的一个 EIP?当我们启用分页并开始在 KERNBASE 之上运行一个 EIP 时,是什么让我们能够持续运行一个很低的 EIP?为什么这种转变是必需的? + +### 地址空间布局的其它选择 + +在 JOS 中我们使用的地址空间布局并不是我们唯一的选择。一个操作系统可以在低位的线性地址上映射内核,而为用户进程保留线性地址的高位部分。然而,x86 内核一般并不采用这种方法,而 x86 向后兼容模式是不这样做的其中一个原因,这种模式被称为“虚拟 8086 模式”,处理器使用线性地址空间的最下面部分是“不可改变的”,所以,如果内核被映射到这里是根本无法使用的。 + +虽然很困难,但是设计这样的内核是有这种可能的,即:不为处理器自身保留任何固定的线性地址或虚拟地址空间,而有效地允许用户级进程不受限制地使用整个 4GB 的虚拟地址空间 —— 同时还要在这些进程之间充分保护内核以及不同的进程之间相互受保护! 
+ +将内核的内存分配系统进行概括类推,以支持二次幂为单位的各种页大小,从 4KB 到一些你选择的合理的最大值。你务必要有一些方法,将较大的分配单位按需分割为一些较小的单位,以及在需要时,将多个较小的分配单位合并为一个较大的分配单位。想一想在这样的一个系统中可能会出现些什么样的问题。 + +这个实验做完了。确保你通过了所有的等级测试,并记得在 `answers-lab2.txt` 中写下你对上述问题的答案。提交你的改变(包括添加 `answers-lab2.txt` 文件),并在 `lab` 目录下输入 `make handin` 去提交你的实验。 + +------ + +via: + +作者:[Mit][] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/%E6%A0%A1%E5%AF%B9%E8%80%85ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file From 9356085f52f50a68fab2ae45ad45b0fb63a1c34f Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 25 Sep 2018 09:02:54 +0800 Subject: [PATCH 003/736] translating --- ...ss usr bin dpkg returned an error code 1- Error in Ubuntu.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md b/sources/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md index 3cde4c7e9e..0200dfffdb 100644 --- a/sources/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md +++ b/sources/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md @@ -1,3 +1,5 @@ +translating---geekpi + [Solved] “sub process usr bin dpkg returned an error code 1″ Error in Ubuntu ====== If you are encountering “sub process usr bin dpkg returned an error code 1” while installing software on Ubuntu Linux, here is how you can fix it. 
From 79394ff5dd8147a8e8d0db5060d605bf892ebdf9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 25 Sep 2018 11:41:53 +0800
Subject: =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20Simple,=20Bea?=
 =?UTF-8?q?utiful=20And=20Cross-platform=20Podcast=20App?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...eautiful And Cross-platform Podcast App.md | 112 ++++++++++++++++++
 1 file changed, 112 insertions(+)
 create mode 100644 sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md

diff --git a/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md b/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md
new file mode 100644
index 0000000000..ae9f91b548
--- /dev/null
+++ b/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md
@@ -0,0 +1,112 @@
+A Simple, Beautiful And Cross-platform Podcast App
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/cpod-720x340.png)
+
+Podcasts have become very popular in the last few years. Podcasts are what’s called “infotainment”: they are generally light-hearted, but they usually give you valuable information. Podcasts have blown up in the last few years, and if you like something, chances are there is a podcast about it. There are a lot of podcast players out there for the Linux desktop, but if you want something that is visually beautiful, has slick animations, and works on every platform, there aren’t a lot of alternatives to **CPod**. CPod (formerly known as **Cumulonimbus**) is an open source, slick podcast app that works on Linux, macOS and Windows.
+
+CPod runs on something called **Electron** – a tool that allows developers to build cross-platform (e.g. Windows, macOS and Linux) desktop GUI applications. In this brief guide, we will be discussing how to install and use the CPod podcast app in Linux.
+
+### Installing CPod
+
+Go to the [**releases page**][1] of CPod.
Download and Install the binary for your platform of choice. If you use Ubuntu/Debian, you can just download and install the .deb file from the releases page as shown below. + +``` +$ wget https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb + +$ sudo apt update + +$ sudo apt install gdebi + +$ sudo gdebi CPod_1.25.7_amd64.deb +``` + +If you use any other distribution, you probably should use the **AppImage** in the releases page. + +Download the AppImage file from the releases page. + +Open your terminal, and go to the directory where the AppImage file has been stored. Change the permissions to allow execution: + +``` +$ chmod +x CPod-1.25.7-x86_64.AppImage +``` + +Execute the AppImage File: + +``` +$ ./CPod-1.25.7-x86_64.AppImage +``` + +You’ll be presented a dialog asking whether to integrate the app with the system. Click **Yes** if you want to do so. + +### Features + +**Explore Tab** + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-features-tab.png) + +CPod uses the Apple iTunes database to find podcasts. This is good, because the iTunes database is the biggest one out there. If there is a podcast out there, chances are it’s on iTunes. To find podcasts, just use the top search bar in the Explore section. The Explore Section also shows a few popular podcasts. + +**Home Tab** + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/CPod-home-tab.png) + +The Home Tab is the tab that opens by default when you open the app. The Home Tab shows a chronological list of all the episodes of all the podcasts that you have subscribed to. + +From the home tab, you can: + + 1. Mark episodes read. + 2. Download them for offline playing + 3. Add them to the queue. + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/The-podcasts-queue.png) + +**Subscriptions Tab** + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-subscriptions-tab.png) + +You can of course, subscribe to podcasts that you like. 
A few other things you can do in the Subscriptions Tab are:
+
+ 1. Refresh Podcast Artwork
+ 2. Export and Import Subscriptions to/from an .OPML file.
+
+**The Player**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-Podcast-Player.png)
+
+The player is perhaps the most beautiful part of CPod. The app changes the overall look and feel according to the banner of the podcast. There’s a sound visualiser at the bottom. To the right, you can see and search for other episodes of this podcast.
+
+**Cons/Missing Features**
+
+While I love this app, it does have a few missing features and disadvantages:
+
+ 1. Poor MPRIS Integration – You can play/pause the podcast from the media player dialog of your desktop environment, but not much more. The name of the podcast is not shown, and you can’t go to the next/previous episode.
+ 2. No support for chapters.
+ 3. No auto-downloading – you have to manually download episodes.
+ 4. CPU usage during use is pretty high (even for an Electron app).
+
+### Verdict
+
+While it does have its cons, CPod is clearly the most aesthetically pleasing podcast player app out there, and it has most of the basic features down. If you love using visually beautiful apps and don’t need the advanced features, this is the perfect app for you. I know for a fact that I’m going to use it.
+
+Do you like CPod? Please share your opinions in the comments below!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/ + +作者:[EDITOR][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[1]: https://github.com/z-------------/CPod/releases From ead74acd359cd2bb75829d292b3770c2b3fbd4f2 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 25 Sep 2018 11:44:51 +0800 Subject: [PATCH 005/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Make=20The=20Outp?= =?UTF-8?q?ut=20Of=20Ping=20Command=20Prettier=20And=20Easier=20To=20Read?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ing Command Prettier And Easier To Read.md | 129 ++++++++++++++++++ 1 file changed, 129 insertions(+) create mode 100644 sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md diff --git a/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md b/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md new file mode 100644 index 0000000000..39ca57bc43 --- /dev/null +++ b/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md @@ -0,0 +1,129 @@ +Make The Output Of Ping Command Prettier And Easier To Read +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-720x340.png) + +As we all know, the **ping** command is used to check if a target host is reachable or not. Using Ping command, we can send ICMP Echo request to our target host, and verify whether the destination host is up or down. If you use ping command often, I’d like to recommend you to try **“Prettyping”**. 
Prettyping is just a wrapper for the standard ping tool that makes the output of the ping command prettier, easier to read, colorful and compact. Prettyping runs the standard ping command in the background and parses the output with colors and unicode characters. It is free and open source, written in **Bash** and **awk**, and supports most Unix-like operating systems such as GNU/Linux, FreeBSD and Mac OS X. Prettyping not only makes the output of the ping command prettier, but also ships with other notable features, as listed below.
+
+ * Detects lost or missing packets and marks them in the output.
+ * Shows live statistics. The statistics are constantly updated after each response is received, while ping only shows them after it ends.
+ * Smart enough to handle “unknown messages” (like error messages) without messing up the output.
+ * Avoids printing repeated messages.
+ * You can use most common ping parameters with Prettyping.
+ * Can run as a normal user.
+ * Can redirect the output to a file.
+ * Requires no installation. Just download the binary, make it executable and run.
+ * Fast and lightweight.
+ * And, finally, makes the output pretty, colorful and very intuitive.
+
+### Installing Prettyping
+
+Like I said already, Prettyping does not require any installation. It is a portable application! Just download the Prettyping binary file using the command:
+
+```
+$ curl -O https://raw.githubusercontent.com/denilsonsa/prettyping/master/prettyping
+```
+
+Move the binary file to your $PATH, for example **/usr/local/bin**.
+
+```
+$ sudo mv prettyping /usr/local/bin
+```
+
+And make it executable as shown below:
+
+```
+$ sudo chmod +x /usr/local/bin/prettyping
+```
+
+It’s that simple.
+
+### Let us Make The Output Of Ping Command Prettier And Easier To Read
+
+Once installed, ping any host or IP address and see the ping command output in a graphical way.
+
+```
+$ prettyping ostechnix.com
+```
+
+Here is the visually displayed ping output:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-in-action.gif)
+
+If you run Prettyping without any arguments, it will keep running until you manually stop it by pressing **Ctrl+c**.
+
+Since Prettyping is just a wrapper to the ping command, you can use most common ping parameters. For instance, you can use the **-c** flag to ping a host only a specific number of times, for example **5** :
+
+```
+$ prettyping -c 5 ostechnix.com
+```
+
+By default, prettyping displays the output in colored format. Don’t like the colored output? No problem! Use the `--nocolor` option.
+
+```
+$ prettyping --nocolor ostechnix.com
+```

+Similarly, you can disable multi-color support using the `--nomulticolor` option:
+
+```
+$ prettyping --nomulticolor ostechnix.com
+```
+
+To disable unicode characters, use the `--nounicode` option:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-without-unicode-support.png)
+
+This can be useful if your terminal does not support **UTF-8**. If you can’t fix the unicode (fonts) in your system, simply pass the `--nounicode` option.
+
+Prettyping can redirect the output to a file as well. The following command will write the output of the `prettyping ostechnix.com` command to the `ostechnix.txt` file.
+
+```
+$ prettyping ostechnix.com | tee ostechnix.txt
+```
+
+Prettyping has a few more options which help you to do various tasks, such as:
+
+ * Enable/disable the latency legend. (default value is: enabled)
+ * Force the output designed to a terminal. (default: auto)
+ * Use the last “n” pings at the statistics line. (default: 60)
+ * Override auto-detection of terminal dimensions.
+ * Override the awk interpreter. (default: awk)
+ * Override the ping tool.
(default: ping) + + + +For more details, view the help section: + +``` +$ prettyping --help +``` + +Even though Prettyping doesn’t add any extra functionality, I personally like the following feature implementations in it: + + * Live statistics – You can see all the live statistics all the time. The standard ping command will only shows the statistics after it ends. + * Compact – You can see a longer timespan at your terminal. + * Prettyping detects missing responses. + + + +If you’re ever looking for a way to visually display the output of the ping command, Prettyping will definitely help. Give it a try, you won’t be disappointed. + +And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-prettier-and-easier-to-read/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ From 1e27322f2b424de259f85fd13ea40080b19ac66d Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 25 Sep 2018 11:46:23 +0800 Subject: [PATCH 006/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20IssueHunt:=20A=20?= =?UTF-8?q?New=20Bounty=20Hunting=20Platform=20for=20Open=20Source=20Softw?= =?UTF-8?q?are?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nting Platform for Open Source Software.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/talk/20180921 IssueHunt- A New Bounty Hunting Platform for Open Source Software.md diff --git a/sources/talk/20180921 IssueHunt- A New Bounty Hunting Platform for Open Source Software.md b/sources/talk/20180921 IssueHunt- A New Bounty Hunting Platform for Open Source Software.md new 
file mode 100644 index 0000000000..2deeb75547 --- /dev/null +++ b/sources/talk/20180921 IssueHunt- A New Bounty Hunting Platform for Open Source Software.md @@ -0,0 +1,66 @@ +IssueHunt: A New Bounty Hunting Platform for Open Source Software +====== +One of the issues that many open-source developers and companies struggle with is funding. There is an assumption, an expectation even, among the community that Free and Open Source Software must be provided free of cost. But even FOSS needs funding for continued development. How can we keep expecting better quality software if we don’t create systems that enable continued development? + +We already wrote an article about [open source funding platforms][1] out there that try to tackle this shortcoming, as of this July there is a new contender in the market that aims to help fill this gap: [IssueHunt][2]. + +### IssueHunt: A Bounty Hunting platform for Open Source Software + +![IssueHunt website][3] + +IssueHunt offers a service that pays freelance developers for contributing to open-source code. It does so through what are called bounties: financial rewards granted to whoever solves a given problem. The funding for these bounties comes from anyone who is willing to donate to have any given bug fixed or feature added. + +If there is a problem with a piece of open-source software that you want fixed, you can offer up a reward amount of your choosing to whoever fixes it. + +Do you want your own product snapped? Offer a bounty on IssueHunt to whoever snaps it. It’s as simple as that. + +And if you are a programmer, you can browse through open issues. Fix the issue (if you could), submit a pull request on the GitHub repository and if your pull request is merged, you get the money. + +#### IssueHunt was originally an internal project for Boostnote + +![IssueHunt][4] + +The product came to be when the developers behind the note-taking app [Boostnote][5] reached out to the community for contributions to their own product. 
+
+In the first two years of utilizing IssueHunt, Boostnote received over 8,400 GitHub stars through hundreds of contributors and overwhelming donations.
+
+The product was so successful that the team decided to open it up to the rest of the community.
+
+Today, [a number of projects utilize this service][6], offering thousands of dollars in bounties among them.
+
+Boostnote boasts [$2,800 in total bounties][7], while Settings Sync, previously known as Visual Studio Code Settings Sync, offers [more than $1,600 in bounties.][8]
+
+There are other services that provide something similar to what IssueHunt is offering here. Perhaps the most notable is [Bountysource][9], which offers a similar bounty service to IssueHunt, while also offering subscription payment processing similar to [Liberapay][10].
+
+#### What do you think of IssueHunt?
+
+At the time of writing this article, IssueHunt is in its infancy, but I am incredibly excited to see where this project ends up in the coming years.
+
+I don’t know about you, but I am more than happy to pay for FOSS. If the product is high quality and adds value to my life, then I will happily pay the developer for the product. Especially since FOSS developers are creating products that respect my freedom in the process.
+
+That being said, I will definitely keep my eye on IssueHunt moving forward for ways I can support the community, either with my own money or by spreading the word where contribution is needed.
+
+But what do you think? Do you agree with me, or do you think software should be gratis and that contributions should be made on a volunteer basis? Let us know what you think in the comments below.
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/issuehunt/ + +作者:[Phillip Prado][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/phillip/ +[1]: https://itsfoss.com/open-source-funding-platforms/ +[2]: https://issuehunt.io +[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/issuehunt-website.png +[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/issuehunt.jpg +[5]: https://itsfoss.com/boostnote-linux-review/ +[6]: https://issuehunt.io/repos +[7]: https://issuehunt.io/repos/53266139 +[8]: https://issuehunt.io/repos/47984369 +[9]: https://www.bountysource.com/ +[10]: https://liberapay.com/ From 631b6af13e46e307d2b70a27b95e2f7a55812916 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 25 Sep 2018 11:49:22 +0800 Subject: [PATCH 007/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=205=20ways=20to=20p?= =?UTF-8?q?lay=20old-school=20games=20on=20a=20Raspberry=20Pi?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...play old-school games on a Raspberry Pi.md | 169 ++++++++++++++++++ 1 file changed, 169 insertions(+) create mode 100644 sources/tech/20180924 5 ways to play old-school games on a Raspberry Pi.md diff --git a/sources/tech/20180924 5 ways to play old-school games on a Raspberry Pi.md b/sources/tech/20180924 5 ways to play old-school games on a Raspberry Pi.md new file mode 100644 index 0000000000..539ac42082 --- /dev/null +++ b/sources/tech/20180924 5 ways to play old-school games on a Raspberry Pi.md @@ -0,0 +1,169 @@ +5 ways to play old-school games on a Raspberry Pi +====== + +Relive the golden age of gaming with these open source platforms for Raspberry Pi. 
+ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arcade_game_gaming.jpg?itok=84Rjk_32) + +They don't make 'em like they used to, do they? Video games, I mean. + +Sure, there's a bit more grunt in the gear now. Princess Zelda used to be 16 pixels in each direction; there's now enough graphics power for every hair on her head. Today's processors could beat up 1988's processors in a cage-fight deathmatch without breaking a sweat. + +But you know what's missing? The fun. + +You've got a squillion and one buttons to learn just to get past the tutorial mission. There's probably a storyline, too. You shouldn't need a backstory to kill bad guys. All you need is jump and shoot. So, it's little wonder that one of the most enduring popular uses for a Raspberry Pi is to relive the 8- and 16-bit golden age of gaming in the '80s and early '90s. But where to start? + +There are a few ways to play old-school games on the Pi. Each has its strengths and weaknesses, which I'll discuss here. + +### Retropie + +[Retropie][1] is probably the most popular retro-gaming platform for the Raspberry Pi. It's a solid all-rounder and a great default option for emulating classic desktop and console gaming systems. + +#### What is it? + +Retropie is built to run on [Raspbian][2]. It can also be installed over an existing Raspbian image if you'd prefer. It uses [EmulationStation][3] as a graphical front-end for a library of open source emulators, including the [Libretro][4] emulators. + +You don't need to understand a word of that to play your games, though. + +#### What's great about it + +It's very easy to get started. All you need to do is burn the image to an SD card, configure your controllers, copy your games over, and start killing bad guys. + +The huge user base means that there is a wealth of support and information out there, and active online communities to turn to for questions. 
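The “burn the image to an SD card” step mentioned above is, on a Linux machine, just a block copy. Here is a rough sketch of how that could be scripted with `dd` — note that this is my own hypothetical helper, not part of Retropie itself, that the image file name is a placeholder, and that the target device must be verified with `lsblk` first (writing to the wrong device destroys its contents; a real SD card will also need root privileges):

```shell
# flash_sd: copy a Retropie (or any) disk image onto a target block device.
# Usage: flash_sd <image-file> <target-device>
# CAUTION: the target is overwritten entirely -- verify it with `lsblk` first.
flash_sd() {
    img="$1"
    dev="$2"
    # bs=4M keeps the copy fast; conv=fsync flushes writes before dd returns
    dd if="$img" of="$dev" bs=4M conv=fsync 2>/dev/null
    sync
}
```

After flashing, the card goes into the Pi and Retropie boots straight into EmulationStation.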
+ +In addition to the emulators that come installed with the Retropie image, there's a huge library of emulators you can install from the package manager, and it's growing all the time. Retropie also offers a user-friendly menu system to manage this, saving you time. + +From the Retropie menu, it's easy to add Kodi and the Raspbian desktop, which comes with the Chromium web browser. This means your retro-gaming rig is also good for home theatre, [YouTube][5], [SoundCloud][6], and all those other “lounge room computer” goodies. + +Retropie also has a number of other customization options: You can change the graphics in the menus, set up different control pad configurations for different emulators, make your Raspberry Pi file system visible to your local Windows network—all sorts of stuff. + +Retropie is built on Raspbian, which means you have the Raspberry Pi's most popular operating system to explore. Most Raspberry Pi projects and tutorials you find floating around are written for Raspbian, making it easy to customize and install new things on it. I've used my Retropie rig as a wireless bridge, installed MIDI synthesizers on it, taught myself a bit of Python, and more—all without compromising its use as a gaming machine. + +#### What's not so great about it + +Retropie's simple installation and ease of use is, in a way, a double-edged sword. You can go for a long time with Retropie without ever learning simple stuff like `sudo apt-get`, which means you're missing out on a lot of the Raspberry Pi experience. + +It doesn't have to be this way; the command line is still there under the hood when you want it, but perhaps users are a bit too insulated from a Bash shell that's ultimately a lot less scary than it looks. Retropie's main menu is operable only with a control pad, which can be annoying when you don't have one plugged in because you've been using the system for things other than gaming. + +#### Who's it for? 
+ +Anyone who wants to get straight into some gaming, anyone who wants the biggest and best library of emulators, and anyone who wants a great way to start exploring Linux when they're not playing games. + +### Recalbox + +[Recalbox][7] is a newer open source suite of emulators for the Raspberry Pi. It also supports other ARM-based small-board computers. + +#### What is it? + +Like Retropie, Recalbox is built on EmulationStation and Libretro. Where it differs is that it's not built on Raspbian, but on its own flavor of Linux: RecalboxOS. + +#### What's great about it + +The setup for Recalbox is even easier than for Retropie. You don't even need to image an SD card; simply copy some files over and go. It also has out-of-the-box support for some game controllers, getting you to Level 1 that little bit faster. Kodi comes preinstalled. This is a ready-to-go gaming and media rig. + +#### What's not so great about it + +Recalbox has fewer emulators than Retropie, fewer customization options, and a smaller user community. + +Your Recalbox rig is probably always just going to be for emulators and Kodi, the same as when you installed it. If you feel like getting deeper into Linux, you'll probably want a new SD card for Raspbian. + +#### Who's it for? + +Recalbox is great if you want the absolute easiest retro gaming experience and can happily go without some of the more obscure gaming platforms, or if you are intimidated by the idea of doing anything a bit technical (and have no interest in growing out of that). + +For most opensource.com readers, Recalbox will probably come in most handy to recommend to your not-so-technical friend or relative. Its super-simple setup and overall lack of options might even help you avoid having to help them with it. + +### Roll your own + +Ok, if you've been paying attention, you might have noticed that both Retropie and Recalbox are built from many of the same open source components. 
So what's to stop you from putting them together yourself? + +#### What is it? + +Whatever you want it to be, baby. The nature of open source software means you could use an existing emulator suite as a starting point, or pilfer from them at will. + +#### What's great about it + +If you have your own custom interface in mind, I guess there's nothing to do but roll your sleeves up and get to it. This is also a way to install emulators that haven't quite found their way into Retropie yet, such as [BeebEm][8] or [ArcEm][9]. + +#### What's not so great about it + +Well, it's a bit of work, isn't it? + +#### Who's it for? + +Hackers, tinkerers, builders, seasoned hobbyists, and such. + +### Native RISC OS gaming + +Now here's a dark horse: [RISC OS][10], the original operating system for ARM devices. + +#### What is it? + +Before ARM went on to become the world's most popular CPU architecture, it was originally built to be the heart of the Acorn Archimedes. That's kind of a forgotten beast nowadays, but for a few years it was light years ahead as the most powerful desktop computer in the world, and it attracted a lot of games development. + +Because the ARM processor in the Pi is the great-grandchild of the one in the Archimedes, we can still install RISC OS on it, and with a little bit of work, get these games running. This is different to the emulator options we've covered so far because we're playing our games on the operating system and CPU architecture for which they were written. + +#### What's great about it + +It's the perfect introduction to RISC OS. This is an absolute gem of an operating system and well worth checking out in its own right. + +The fact that you're using much the same operating system as back in the day to load and play your games makes your retro gaming rig just that little bit more of a time machine. This definitely adds some charm and retro value to the project. + +There are a few superb games that were released only on the Archimedes. 
The massive hardware advantage of the Archimedes also means that it often had the best graphics and smoothest gameplay of a lot of multi-platform titles. The rights holders to many of these games have been generous enough to make them legally available for free download. + +#### What's not so great about it + +Once you have installed RISC OS, it still takes a bit of elbow grease to get the games working. Here's a [guide to getting started][11]. + +This is definitely not a great all-rounder for the lounge room. There's nothing like [Kodi][12]. There's a web browser, [NetSurf][13], but it's struggling to catch up to the modern web. You won't get the range of titles to play as you would with an emulator suite. RISC OS Open is free for hobbyists to download and use and much of the source code has been made open. But despite the name, it's not a 100% open source operating system. + +#### Who's it for? + +This one's for novelty seekers, absolute retro heads, people who want to explore an interesting operating system from the '80s, people who are nostalgic for Acorn machines from back in the day, and people who want a totally different retro gaming project. + +### Command line gaming + +Do you really need to install an emulator or an exotic operating system just to relive the glory days? Why not just install some native linux games from the command line? + +#### What is it? + +There's a whole range of native Linux games tested to work on the [Raspberry Pi][14]. + +#### What's great about it + +You can install most of these from packages using the command line and start playing. Easy. If you've already got Raspbian up and running, it's probably your fastest path to getting a game running. + +#### What's not so great about it + +This isn't, strictly speaking, actual retro gaming. Linux was born in 1991 and took a while longer to come together as a gaming platform. 
This isn't quite gaming from the classic 8- and 16-bit era; these are ports and retro-influenced games that were built later. + +#### Who's it for? + +If you're just after a bucket of fun, no problem. But if you're trying to relive the actual era, this isn't quite it. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/retro-gaming-raspberry-pi + +作者:[James Mawson][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dxmjames +[1]: https://retropie.org.uk/ +[2]: https://www.raspbian.org/ +[3]: https://emulationstation.org/ +[4]: https://www.libretro.com/ +[5]: https://www.youtube.com/ +[6]: https://soundcloud.com/ +[7]: https://www.recalbox.com/ +[8]: http://www.mkw.me.uk/beebem/ +[9]: http://arcem.sourceforge.net/ +[10]: https://opensource.com/article/18/7/gentle-intro-risc-os +[11]: https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/ +[12]: https://kodi.tv/ +[13]: https://www.netsurf-browser.org/ +[14]: https://www.raspberrypi.org/forums/viewtopic.php?f=78&t=51794 From be92daaafcc72653662b7510887e207d8bdd2fe8 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 25 Sep 2018 12:04:38 +0800 Subject: [PATCH 008/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Gunpoint=20is=20a?= =?UTF-8?q?=20Delight=20for=20Stealth=20Game=20Fans?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...oint is a Delight for Stealth Game Fans.md | 104 ++++++++++++++++++ 1 file changed, 104 insertions(+) create mode 100644 sources/tech/20180923 Gunpoint is a Delight for Stealth Game Fans.md diff --git a/sources/tech/20180923 Gunpoint is a Delight for Stealth Game Fans.md b/sources/tech/20180923 Gunpoint is a Delight for Stealth Game Fans.md new 
file mode 100644 index 0000000000..3ff6857f78 --- /dev/null +++ b/sources/tech/20180923 Gunpoint is a Delight for Stealth Game Fans.md @@ -0,0 +1,104 @@ +Gunpoint is a Delight for Stealth Game Fans +====== +Gunpoint is a 2D stealth game in which you play as a spy stealing secrets and hacking networks like Ethan Hunt of Mission Impossible movie series. + + + +Hi, Fellow Linux gamers. Let’s take a look at a fun stealth game. Let’s take a look at [Gunpoint][1]. + +Gunpoint is neither free nor open source. It is an independent game you can purchase directly from the creator or from Steam. + +![][2] + +### The Interesting History of Gunpoint + +> The instant success of Gunpoint enabled its creator to become a full time game developer. + +Gunpoint is a stealth game created by [Tom Francis][3]. Francis was inspired to create the game after he heard about Spelunky, which was created by one person. Francis played games as part of his day job, as an editor for PC Gamer UK magazine. He had no previous programming experience but used the easy-to-use Game Maker. He planned to create a demo with the hopes of getting a job as a developer. + +He released his first prototype in May 2010 under the name Private Dick. Based on the response, Francis continued to work on the game. The final version was released in June of 2013 to high praise. + +In a [blog post][4] weeks after Gunpoint’s launch, Francis revealed that he made back all the money he spent on development ($30 for Game Maker 8) in 64 seconds. Francis didn’t reveal Gunpoint’s sales figures, but he did quit his job and today creates [games][5] full time. + +### Experiencing the Gunpoint Gameplay + +![Gunpoint Gameplay][6] + +Like I said earlier, Gunpoint is a stealth game. You play a freelance spy named Richard Conway. As Conway, you will use a pair of Bullfrog hypertrousers to infiltrate buildings for clients. The hypertrousers allow you to jump very high, even through windows. 
You can also cling to walls or ceilings like a ninja. + +Another tool you have is the Crosslink, which allows you to rewire circuits. Often you will need to use the Crosslink to reroute motion detectors to unlock doors instead of setting off an alarm, or rewire a light switch to turn off the light on another floor to distract a guard. + +When you sneak into a building, your biggest concern is the on-site security guards. If they see Conway, they will shoot, and in this game it’s one shot, one kill. You can jump off a three-story building no problem, but bullets will take you down. Thankfully, if Conway is killed you can just jump back a few seconds and try again. + +Along the way, you will earn money to upgrade your tools and unlock new features. For example, I just unlocked the ability to rewire a guard’s gun. Don’t ask me how that works. + +### Minimum System Requirements + +Here are the minimum system requirements for Gunpoint: + +##### Linux + + * Processor: 2GHz + * Memory: 1GB RAM + * Video card: 512MB + * Hard Drive: 700MB HD space + + + +##### Windows + + * OS: Windows XP, Vista, 7 or 8 + * Processor: 2GHz + * Memory: 1GB RAM + * Video card: 512MB + * DirectX®: 9.0 + * Hard Drive: 700MB HD space + + + +##### macOS + + * OS: OS X 10.7 or later + * Processor: 2GHz + * Memory: 1GB RAM + * Video card: 512MB + * Hard Drive: 700MB HD space + + + +### Thoughts on Gunpoint + +![Gunpoint game on Linux][7] +Image Courtesy: Steam Community + +Gunpoint is a very fun game. The early levels are easy to get through, but the later levels make you put your thinking cap on. The hypertrousers and Crosslink are fun to play with. There is nothing like turning the lights off on a guard and bouncing over his head to hack a terminal. + +Besides the fun mechanics, it also has an interesting [noir][8] murder mystery story. Several different (and conflicting) clients hire you to look into different aspects of the case.
Some of them seem to have ulterior motives that are not in your best interest. + +I always enjoy good mysteries, and this one is no different. If you like noir or platforming games, be sure to check out [Gunpoint][1]. + +Have you ever played Gunpoint? What other games should we review for your entertainment? Let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][9]. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/gunpoint-game-review/ + +作者:[John Paul][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[1]: http://www.gunpointgame.com/ +[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/gunpoint.jpg +[3]: https://www.pentadact.com/ +[4]: https://www.pentadact.com/2013-06-18-gunpoint-recoups-development-costs-in-64-seconds/ +[5]: https://www.pentadact.com/2014-08-09-what-im-working-on-and-what-ive-done/ +[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/gunpoint-gameplay-1.jpeg +[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/gunpoint-game-1.jpeg +[8]: https://en.wikipedia.org/wiki/Noir_fiction +[9]: http://reddit.com/r/linuxusersgroup From 75e68628bb808d678aedc7c39f2692800e9fea08 Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Tue, 25 Sep 2018 12:43:07 +0800 Subject: [PATCH 009/736] Update 20180516 Manipulating Directories in Linux.md (#10335) request to translate --- sources/tech/20180516 Manipulating Directories in Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180516 Manipulating Directories in Linux.md b/sources/tech/20180516 Manipulating Directories in
Linux.md index 9c6df23e43..4cc8ca4ea1 100644 --- a/sources/tech/20180516 Manipulating Directories in Linux.md +++ b/sources/tech/20180516 Manipulating Directories in Linux.md @@ -1,3 +1,4 @@ +Translating by way-ww Manipulating Directories in Linux ====== From faf726be70bf8e91d6be6ebd60e72f321248e985 Mon Sep 17 00:00:00 2001 From: Hank Chow <280630620@qq.com> Date: Tue, 25 Sep 2018 12:48:10 +0800 Subject: [PATCH 010/736] translated (#10334) --- ...Understand Fedora memory usage with top.md | 62 ------------------- ...Understand Fedora memory usage with top.md | 60 ++++++++++++++++++ 2 files changed, 60 insertions(+), 62 deletions(-) delete mode 100644 sources/tech/20180919 Understand Fedora memory usage with top.md create mode 100644 translated/tech/20180919 Understand Fedora memory usage with top.md diff --git a/sources/tech/20180919 Understand Fedora memory usage with top.md b/sources/tech/20180919 Understand Fedora memory usage with top.md deleted file mode 100644 index ef72988469..0000000000 --- a/sources/tech/20180919 Understand Fedora memory usage with top.md +++ /dev/null @@ -1,62 +0,0 @@ -HankChow translating - -Understand Fedora memory usage with top -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/09/memory-top-816x345.jpg) - -Have you used the top utility in a terminal to see memory usage on your Fedora system? If so, you might be surprised to see some of the numbers there. It might look like a lot more memory is consumed than your system has available. This article will explain a little more about memory usage, and how to read these numbers. - -### Memory usage in real terms - -The way the operating system (OS) uses memory may not be self-evident. In fact, some ingenious, behind-the-scenes techniques are at play. They help your OS use memory more efficiently, without involving you. - -Most applications are not self contained. Instead, each relies on sets of functions collected in libraries. 
These libraries are also installed on the system. In Fedora, the RPM packaging system ensures that when you install an app, any libraries on which it relies are installed, too. - -When an app runs, the OS doesn’t necessarily load all the information it uses into real memory. Instead, it builds a map to the storage where that code is stored, called virtual memory. The OS then loads only the parts it needs. When it no longer needs portions of memory, it might release or swap them out as appropriate. - -This means an app can map a very large amount of virtual memory, while using less real memory on the system at one time. It might even map more RAM than the system has available! In fact, across a whole OS that’s often the case. - -In addition, related applications may rely on the same libraries. The Linux kernel in your Fedora system often shares memory between applications. It doesn’t need to load multiple copies of the same library for related apps. This works similarly for separate instances of the same app, too. - -Without understanding these details, the output of the top application can be confusing. The following example will clarify this view into memory usage. - -### Viewing memory usage in top - -If you haven’t tried yet, open a terminal and run the top command to see some output. Hit **Shift+M** to see the list sorted by memory usage. Your display may look slightly different than this example from a running Fedora Workstation: - - - -There are three columns showing memory usage to examine: VIRT, RES, and SHR. The measurements are currently shown in kilobytes (KB). - -The VIRT column is the virtual memory mapped for this process. Recall from the earlier description that virtual memory is not actual RAM consumed. For example, the GNOME Shell process gnome-shell is not actually consuming over 3.1 gigabytes of actual RAM. However, it’s built on a number of lower and higher level libraries. 
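Those library mappings are easy to inspect for a running process through the /proc filesystem. The following is a quick illustrative sketch, not from the article itself: it inspects the current shell, and the exact list of libraries will vary from system to system.

```shell
# Show the distinct shared libraries mapped into this shell's address
# space. $$ is the shell's own PID; any PID from top works as well.
grep -o '/[^ ]*\.so[^ ]*' "/proc/$$/maps" | sort -u
```

Each path in the output is a library the process has mapped into virtual memory, whether or not its pages are currently resident.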
The system must map each of those to ensure they can be loaded when necessary. - -The RES column shows you how much actual (resident) memory is consumed by the app. In the case of GNOME Shell, that’s about 180788 KB. The example system has roughly 7704 MB of physical memory, which is why the memory usage shows up as 2.3%. - -However, of that number, at least 88212 KB is shared memory, shown in the SHR column. This memory might be, for example, library functions that other apps also use. This means the GNOME Shell is using about 92 MB on its own not shared with other processes. Notice that other apps in the example share an even higher percentage of their resident memory. In some apps, the shared portion is the vast majority of the memory usage. - -There is a wrinkle here, which is that sometimes processes communicate with each other via memory. That memory is also shared, but can’t necessarily be detected by a utility like top. So yes — even the above clarifications still have some uncertainty! - -### A note about swap - -Your system has another facility it uses to store information, which is swap. Typically this is an area of slower storage (like a hard disk). If the physical memory on the system fills up as needs increase, the OS looks for portions of memory that haven’t been needed in a while. It writes them out to the swap area, where they sit until needed later. - -Therefore, prolonged, high swap usage usually means a system is suffering from too little memory for its demands. Sometimes an errant application may be at fault. Or, if you see this often on your system, consider upgrading your machine’s memory, or restricting what you run. - -Photo courtesy of [Stig Nygaard][1], via [Flickr][2] (CC BY 2.0). - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/understand-fedora-memory-usage-top/ - -作者:[Paul W. 
Frields][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/pfrields/ -[1]: https://www.flickr.com/photos/stignygaard/ -[2]: https://www.flickr.com/photos/stignygaard/3138001676/ diff --git a/translated/tech/20180919 Understand Fedora memory usage with top.md b/translated/tech/20180919 Understand Fedora memory usage with top.md new file mode 100644 index 0000000000..a55d5d7b55 --- /dev/null +++ b/translated/tech/20180919 Understand Fedora memory usage with top.md @@ -0,0 +1,60 @@ +使用 `top` 命令了解 Fedora 的内存使用情况 +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/memory-top-816x345.jpg) + +如果你使用过 `top` 命令来查看 Fedora 系统中的内存使用情况,你可能会惊讶,显示的数值看起来比系统可用的内存消耗更多。下面会详细介绍内存使用情况以及如何理解这些数据。 + +### 内存实际使用情况 + +操作系统对内存的使用方式并不是太通俗易懂,而是有很多不为人知的巧妙方式。通过这些方式,可以在无需用户干预的情况下,让操作系统更有效地使用内存。 + +大多数应用程序都不是完全独立的,每个应用程序都依赖于安装在系统中的库所提供的一些函数集。在 Fedora 中,RPM 包管理系统能够确保在安装应用程序时也会安装所依赖的库。 + +当应用程序运行时,操作系统并不需要将它要用到的所有信息都加载到物理内存中。而是会为存放代码的存储构建一个映射,称为虚拟内存。操作系统只把需要的部分加载到内存中,当某一个部分不再需要后,这一部分内存就会被释放掉。 + +这意味着应用程序可以映射大量的虚拟内存,而使用较少的系统物理内存。映射的虚拟内存甚至可以比系统实际可用的物理内存更多!而且纵观整个操作系统,这种情况并不少见。 + +另外,不同的应用程序可能会对同一个库都有依赖。Fedora 中的 Linux 内核通常会在各个应用程序之间共享内存,而不需要为不同应用分别加载同一个库的多个副本。类似地,对于同一个应用程序的不同实例也是采用这种方式共享内存。 + +如果不首先了解这些细节,`top` 命令显示的数据可能会让人摸不着头脑。下面就举例说明如何正确查看内存使用量。 + +### 使用 `top` 命令查看内存使用量 + +如果你还没有使用过 `top` 命令,可以打开终端直接执行查看。使用 **Shift + M** 可以按照内存使用量来进行排序。下图是在 Fedora Workstation 中执行的结果,在你的机器上显示的结果可能会略有不同: + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-09-17-14-23-17.png) + +主要通过以下三列来查看内存使用情况:VIRT,RES 和 SHR。目前以 KB 为单位显示相关数值。 + +VIRT 列代表该进程映射的虚拟内存。如上所述,虚拟内存不是实际消耗的物理内存。例如, GNOME Shell 进程 gnome-shell 实际上没有消耗超过 3.1 GB 的物理内存,但它对很多更低或更高级的库都有依赖,系统必须对每个库都进行映射,以确保在有需要时可以加载这些库。 + +RES 列代表应用程序消耗了多少实际(驻留)内存。对于 GNOME Shell 大约是 180788 KB。例子中的系统拥有大约 7704 MB 的物理内存,因此内存使用率显示为
2.3%。 + +但根据 SHR 列显示,其中至少有 88212 KB 是共享内存,这部分内存可能是其它应用程序也在使用的库函数。这意味着 GNOME Shell 本身大约有 92 MB 内存不与其他进程共享。需要注意的是,上述例子中的其它程序也共享了很多内存。在某些应用程序中,共享内存在内存使用量中会占很大的比例。 + +值得一提的是,有时进程之间会通过内存进行通信,这部分内存也是共享的,但 `top` 这样的工具却不一定能检测得到。因此,即使是以上的说明,也仍然存在一定的不确定性。 + +### 关于交换分区 + +系统还有另一个用来存储信息的地方,那就是交换空间,它通常位于速度较慢的存储设备上(例如硬盘)。当物理内存渐渐用满,操作系统就会查找内存中暂时不会使用的部分,将其写出到交换区域等待需要的时候使用。 + +因此,如果交换内存的使用量一直偏高,表明系统的物理内存已经供不应求了。有时这可能是某个行为异常的应用程序导致的;但如果这种现象经常出现,就需要考虑提升机器的物理内存,或者限制某些程序的运行了。 + +感谢 [Stig Nygaard][1] 在 [Flickr][2] 上提供的图片(CC BY 2.0)。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/understand-fedora-memory-usage-top/ + +作者:[Paul W. Frields][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/pfrields/ +[1]: https://www.flickr.com/photos/stignygaard/ +[2]: https://www.flickr.com/photos/stignygaard/3138001676/ + From 57f6de97706dabb015b14dd3caa0fbf72ca6ebaa Mon Sep 17 00:00:00 2001 From: GraveAccent <39041505+GraveAccent@users.noreply.github.com> Date: Tue, 25 Sep 2018 12:50:22 +0800 Subject: [PATCH 011/736] =?UTF-8?q?GraveAccent=E7=94=B3=E8=AF=B7=E7=BF=BB?= =?UTF-8?q?=E8=AF=91=20(#10336)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0201 Conditional Rendering in React using Ternaries and.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20180201 Conditional Rendering in React using Ternaries and.md b/sources/tech/20180201 Conditional Rendering in React using Ternaries and.md index b5f740c92c..b99c787e31 100644 --- a/sources/tech/20180201 Conditional Rendering in React using Ternaries and.md +++ b/sources/tech/20180201 Conditional Rendering in React using Ternaries and.md @@ -1,4 +1,4 @@ -Conditional Rendering in React using
Ternaries and Logical AND +GraveAccent 翻译中 Conditional Rendering in React using Ternaries and Logical AND ============================================================ @@ -203,4 +203,4 @@ via: https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternar [a]:https://medium.freecodecamp.org/@donavon [1]:https://unsplash.com/photos/pKeF6Tt3c08?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText [2]:https://unsplash.com/search/photos/road-sign?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators \ No newline at end of file +[3]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators From 6e0980cf07857c1143cdb41248d6914d8e0cf001 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 25 Sep 2018 14:58:55 +0800 Subject: [PATCH 012/736] PUB:20180308 What is open source programming.md @Valoniakim https://linux.cn/article-10045-1.html --- .../20180308 What is open source programming.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/talk => published}/20180308 What is open source programming.md (100%) diff --git a/translated/talk/20180308 What is open source programming.md b/published/20180308 What is open source programming.md similarity index 100% rename from translated/talk/20180308 What is open source programming.md rename to published/20180308 What is open source programming.md From d45f7d43cfa7d04899c93af48c6fbc1d6020e78d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 25 Sep 2018 15:09:02 +0800 Subject: [PATCH 013/736] PRF:20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md @geekpi --- ...buntu 18.04 and other Linux Distributions.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/translated/tech/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux 
Distributions.md b/translated/tech/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md index 2109a450fa..6dc4f4dba6 100644 --- a/translated/tech/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md +++ b/translated/tech/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md @@ -1,16 +1,17 @@ -如何在 Ubuntu 18.04 和其他 Linux 发行版中创建照片幻灯片 +如何在 Ubuntu 和其他 Linux 发行版中创建照片幻灯片 ====== + 创建照片幻灯片只需点击几下。以下是如何在 Ubuntu 18.04 和其他 Linux 发行版中制作照片幻灯片。 ![How to create slideshow of photos in Ubuntu Linux][1] -想象一下,你的朋友和亲戚正在拜访你,并请求你展示最近的活动/旅行照片。 +想象一下,你的朋友和亲戚正在拜访你,并请你展示最近的活动/旅行照片。 你将照片保存在计算机上,并整齐地放在单独的文件夹中。你邀请计算机附近的所有人。你进入该文件夹​​,单击其中一张图片,然后按箭头键逐个显示照片。 但那太累了!如果这些图片每隔几秒自动更改一次,那将会好很多。 -这称之为为幻灯片,我将向你展示如何在 Ubuntu 中创建照片幻灯片。这能让你在文件夹中循环播放图片并以全屏模式显示它们。 +这称之为幻灯片,我将向你展示如何在 Ubuntu 中创建照片幻灯片。这能让你在文件夹中循环播放图片并以全屏模式显示它们。 ### 在 Ubuntu 18.04 和其他 Linux 发行版中创建照片幻灯片 @@ -20,19 +21,19 @@ 如果你在 Ubuntu 18.04 或任何其他发行版中使用 GNOME,那么你很幸运。Gnome 的默认图像浏览器,Eye of GNOME,能够在当前文件夹中显示图片的幻灯片。 -只需单击其中一张图片,你将在程序的右上角菜单中看到设置选项。它看起来像三条横栏堆在彼此的顶部。 +只需单击其中一张图片,你将在程序的右上角菜单中看到设置选项。它看起来像堆叠在一起的三条横栏。 你会在这里看到几个选项。勾选幻灯片选项,它将全屏显示图像。 ![How to create slideshow of photos in Ubuntu Linux][2] -默认情况下,图像以 5 秒的间隔变化。你可以进入 Preferences->Slideshow 来更改幻灯片放映间隔。 +默认情况下,图像以 5 秒的间隔变化。你可以进入 “Preferences -> Slideshow” 来更改幻灯片放映间隔。 -![change slideshow interval in Ubuntu][3]Changing slideshow interval +![change slideshow interval in Ubuntu][3] #### 方法 2:使用 Shotwell Photo Manager 进行照片幻灯片放映 -[Shotwell][4] 是一种流行的[ Linux 照片管理程序][5]。适用于所有主要的 Linux 发行版。 +[Shotwell][4] 是一款流行的 [Linux 照片管理程序][5]。适用于所有主要的 Linux 发行版。 如果尚未安装,请在你的发行版软件中心中搜索 Shotwell 并安装。 @@ -55,7 +56,7 @@ via: https://itsfoss.com/photo-slideshow-ubuntu/ 作者:[Abhishek Prakash][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 3bb040ed3aa0459f4dff48de08e295e96e71890d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 25 Sep 2018 15:09:24 +0800 Subject: [PATCH 014/736] PUB:20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md @geekpi https://linux.cn/article-10046-1.html --- ...how of Photos in Ubuntu 18.04 and other Linux Distributions.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md (100%) diff --git a/translated/tech/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md b/published/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md similarity index 100% rename from translated/tech/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md rename to published/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md From bdcad5518f9988f5eccf4e9355b577651eff1819 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Tue, 25 Sep 2018 15:13:57 +0800 Subject: [PATCH 015/736] HankChow translating --- .../20180917 Linux tricks that can save you time and trouble.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180917 Linux tricks that can save you time and trouble.md b/sources/tech/20180917 Linux tricks that can save you time and trouble.md index 786e2df2c1..61fae6d4bc 100644 --- a/sources/tech/20180917 Linux tricks that can save you time and trouble.md +++ b/sources/tech/20180917 Linux tricks that can save you time and trouble.md @@ -1,3 +1,5 @@ +HankChow translating + Linux tricks that can save you time and trouble ====== Some command line tricks can make you even more productive on the Linux command line. 
From 5fe5882d962b8e906d5bb28c29c74c7b7888f0eb Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 25 Sep 2018 15:51:33 +0800 Subject: [PATCH 016/736] PRF:20180905 5 tips to improve productivity with zsh.md @tnuoccalanosrep --- ...5 tips to improve productivity with zsh.md | 91 +++++++++---------- 1 file changed, 43 insertions(+), 48 deletions(-) diff --git a/translated/tech/20180905 5 tips to improve productivity with zsh.md b/translated/tech/20180905 5 tips to improve productivity with zsh.md index 05c7e845ab..22cd2dfc2b 100644 --- a/translated/tech/20180905 5 tips to improve productivity with zsh.md +++ b/translated/tech/20180905 5 tips to improve productivity with zsh.md @@ -1,24 +1,24 @@ -用 zsh 提高生产力的5个 tips +用 zsh 提高生产力的 5 个技巧 ====== +> zsh 提供了数之不尽的功能和特性,这里有五个可以让你在命令行暴增效率的方法。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK) -Z shell (亦称 zsh) 是 *unx 系统中的命令解析器 。 它跟 `sh` (Bourne shell) 家族的其他解析器 ( 如 `bash` 和 `ksh` ) 有着相似的特点,但它还提供了大量的高级特性以及强大的命令行编辑功能(选项?),如增强版tab补全。 +Z shell([zsh][1])是 Linux 和类 Unix 系统中的一个[命令解析器][2]。 它跟 sh (Bourne shell) 家族的其它解析器(如 bash 和 ksh)有着相似的特点,但它还提供了大量的高级特性以及强大的命令行编辑功能,如增强版 Tab 补全。 -由于 zsh 有好几百页的文档去描述他的特性,所以我无法在这里阐明 zsh 的所有功能。在本文,我会列出5个 tips,让你通过使用 zsh 来提高你的生产力。 +在这里不可能涉及到 zsh 的所有功能,[描述][3]它的特性需要好几百页。在本文中,我会列出 5 个技巧,让你通过在命令行使用 zsh 来提高你的生产力。 -### 1\. 
主题和插件 -多年来,开源社区已经为 zsh 开发了数不清的主题和插件。主题是预定义提示符的配置,而插件则是一组常用的别名命令和功能,让你更方便的使用一种特定的命令或者编程语言。 +多年来,开源社区已经为 zsh 开发了数不清的主题和插件。主题是一个预定义提示符的配置,而插件则是一组常用的别名命令和函数,可以让你更方便的使用一种特定的命令或者编程语言。 -如果你现在想开始用 zsh 的主题和插件,那么使用 zsh 的配置框架 (configuiration framework) 是你最快的入门方式。在众多的配置框架中,最受欢迎的则是 [Oh My Zsh][4]。在默认配置中,他就已经为 zsh 启用了一些合理的配置,同时它也自带多个主题和插件。 +如果你现在想开始用 zsh 的主题和插件,那么使用一种 zsh 的配置框架是你最快的入门方式。在众多的配置框架中,最受欢迎的则是 [Oh My Zsh][4]。在默认配置中,它就已经为 zsh 启用了一些合理的配置,同时它也自带上百个主题和插件。 -由于主题会在你的命令行提示符之前添加一些常用的信息,比如你 Git 仓库的状态,或者是当前使用的 Python 虚拟环境,所以它会让你的工作更高效。只需要看到这些信息,你就不用再敲命令去重新获取它们,而且这些提示也相当酷炫。 -下图就是我(作者)选用的主题 [Powerlevel9k][5] +主题会在你的命令行提示符之前添加一些有用的信息,比如你 Git 仓库的状态,或者是当前使用的 Python 虚拟环境,所以它会让你的工作更高效。只需要看到这些信息,你就不用再敲命令去重新获取它们,而且这些提示也相当酷炫。下图就是我选用的主题 [Powerlevel9k][5]: ![zsh Powerlevel9K theme][7] -zsh 主题 Powerlevel9k +*zsh 主题 Powerlevel9k* 除了主题,Oh my Zsh 还自带了大量常用的 zsh 插件。比如,通过启用 Git 插件,你可以用一组简便的命令别名操作 Git, 比如 ``` gcs='git commit -S' glg='git log --stat' ``` -zsh 还有许多插件是用于多种编程语言,打包系统和一些平时在命令行中常用的工具。 -以下是我(作者) Fedora 工作站中用到的插件表: +zsh 还有许多插件可以用于许多编程语言、打包系统和一些平时在命令行中常用的工具。以下是我 Fedora 工作站中用到的插件表: ``` git golang fedora docker oc sudo vi-mode virtualenvwrapper ``` -### 2\.
智能的命令别名 +### 2、智能的命令别名 -命令别名在 zsh 中十分常用。为你常用的命令定义别名可以节省你的打字时间。Oh My Zsh 默认配置了一些常用的命令别名,包括目录导航命令别名,为常用的命令添加额外的选项,比如: +命令别名在 zsh 中十分有用。为你常用的命令定义别名可以节省你的打字时间。Oh My Zsh 默认配置了一些常用的命令别名,包括目录导航命令别名,为常用的命令添加额外的选项,比如: ``` ls='ls --color=tty' grep='grep  --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}' ``` +除了命令别名以外, zsh 还自带两种额外常用的别名类型:后缀别名和全局别名。 -除了命令别名意外, zsh 还自带两种额外常用的别名类型:后缀别名和全局别名。 - -后缀别名可以让你在基于文件后缀的前提下,在命令行中利用指定程序打开这个文件。比如,要用 vim 打开 YAML 文件,可以定义以下命令行别名: +后缀别名可以让你基于文件后缀,在命令行中利用指定程序打开这个文件。比如,要用 vim 打开 YAML 文件,可以定义以下命令行别名: ``` alias -s {yml,yaml}=vim ``` -现在,如果你在命令行中输入任何后缀名为 `yml` 或 `yaml` 文件, zsh 都会用 vim 打开这个文件 +现在,如果你在命令行中输入任何后缀名为 `yml` 或 `yaml` 文件, zsh 都会用 vim 打开这个文件。 ``` $ playbook.yml # Opens file playbook.yml using vim ``` -全局别名可以让你在使用命令行的任何时刻创建命令别名,而不仅仅是在开始的时候。这个在你想替换常用文件名或者管道命令的时候就显得非常有用了。比如 +全局别名可以让你创建一个可在命令行的任何地方展开的别名,而不仅仅是在命令开始的时候。这个在你想替换常用文件名或者管道命令的时候就显得非常有用了。比如: ``` alias -g G='| grep -i' @@ -84,9 +82,9 @@ drwxr-xr-x.  6 rgerardi rgerardi 4096 Aug 24 14:51 Downloads 接着,我们就来看看 zsh 是如何导航文件系统的。 -### 3\. 便捷的目录导航 +### 3、便捷的目录导航 -当你使用命令行的时候, 在不同的目录之间切换访问是最常见的工作了。 zsh 提供了一些十分有用的目录导航功能来简化这个操作。这些功能已经集成到 Oh My Zsh 中了, 而你可以用以下命令来启用它 +当你使用命令行的时候,在不同的目录之间切换访问是最常见的工作了。 zsh 提供了一些十分有用的目录导航功能来简化这个操作。这些功能已经集成到 Oh My Zsh 中了, 而你可以用以下命令来启用它 ``` setopt  autocd autopushd \ pushdignoredups @@ -104,7 +102,7 @@ $ pwd 如果想要回退,只要输入 `-`: -Zsh 会记录你访问过的目录,这样下次你就可以快速切换到这些目录中。如果想要看这个目录列表,只要输入 `dirs -v`: +zsh 会记录你访问过的目录,这样下次你就可以快速切换到这些目录中。如果想要看这个目录列表,只要输入 `dirs -v`: ``` $ dirs -v @@ -168,7 +166,7 @@ $ pwd /tmp ``` -最后,你可以在 zsh 中利用 Tab 来自动补全目录名称。你可以先输入目录的首字母,然后用 `TAB` 来补全它们: +最后,你可以在 zsh 中利用 Tab 来自动补全目录名称。你可以先输入目录的首字母,然后按 `TAB` 键来补全它们: ``` $ pwd @@ -179,22 +177,22 @@ $ Projects/Opensource.com/zsh-5tips/ 以上仅仅是 zsh 强大的 Tab 补全系统中的一个功能。接来下我们来探索它更多的功能。 -### 4\. 
先进的 Tab 补全 -Zsh 强大的补全系统是它其中一个卖点。为了简便起见,我称它为 Tab 补全,然而在系统底层,它不仅仅只做一件事。这里通常包括扩展以及命令的补全,我会在这里同时讨论它们。如果想了解更多,详见 [用户手册][8] ( [User's Guide][8] )。 +zsh 强大的补全系统是它的卖点之一。为了简便起见,我称它为 Tab 补全,然而在系统底层,它起到了几个作用。这里通常包括展开以及命令补全,我会在这里一并讨论它们。如果想了解更多,详见 [用户手册][8]。 + +在 Oh My Zsh 中,命令补全是默认启用的。要启用它,你只要在 `.zshrc` 文件中添加以下命令: -在 Oh My Zsh 中,命令补全是默认可用的。要启用它,你只要在 `.zshrc` 文件中添加以下命令: + ``` autoload -U compinit compinit ``` -Zsh 的补全系统非常智能。他会根据当前上下文来进行命令的提示——比如,你输入了 `cd` 和 `TAB`,zsh 只会为你提示目录名,因为它知道 -当前的 `cd` 没有任何作用。 +zsh 的补全系统非常智能。它会尝试只提示在当前上下文环境中可用的项目 —— 比如,你输入了 `cd` 和 `TAB`,zsh 只会为你提示目录名,因为它知道其它的项目放在 `cd` 后面没用。 -反之,如果你使用 `ssh` 或者 `ping` 这类与用户或者主机相关的命令, zsh 便会提示用户名。 +反之,如果你使用与用户相关的命令便会提示用户名,而 `ssh` 或者 `ping` 这类则会提示主机名。 -`zsh` 拥有一个巨大而又完整的库,因此它能识别许多不同的命令。比如,如果你使用 `tar` 命令, 你可以按 Tab 键,他会为你展示一个可以用于解压的文件列表: +zsh 拥有一个巨大而又完整的库,因此它能识别许多不同的命令。比如,如果你使用 `tar` 命令, 你可以按 `TAB` 键,它会为你展示一个可以用于解压的文件列表: ``` $ tar -xzvf test1.tar.gz test1/file1 (TAB) test1/file1 test1/file2 ``` ..
命令行编辑与历史记录 -Zsh 的命令行编辑功能也十分有效。默认条件下,他是模拟 emacs 编辑器的。如果你是跟我一样更喜欢用 vi/vim,你可以用以下命令启用 vi 编辑。 +zsh 的命令行编辑功能也十分有用。默认条件下,它是模拟 emacs 编辑器的。如果你是跟我一样更喜欢用 vi/vim,你可以用以下命令启用 vi 的键绑定。 ``` $ bindkey -v ``` -如果你使用 Oh My Zsh,`vi-mode` 插件可以启用额外的绑定,同时会在你的命令提示符上增加 vi 的模式提示--这个非常有用。 +如果你使用 Oh My Zsh,`vi-mode` 插件可以启用额外的绑定,同时会在你的命令提示符上增加 vi 的模式提示 —— 这个非常有用。 -当启用 vi 的绑定后,你可以再命令行中使用 vi 命令进行编辑。比如,输入 `ESC+/` 来查找命令行记录。在查找的时候,输入 `n` 来找下一个匹配行,输入 `N` 来找上一个。输入 `ESC` 后,最常用的 vi 命令有以下几个,如输入 `0` 跳转到第一行,输入 `$` 跳转到最后一行,输入 `i` 来插入文本,输入 `a` 来追加文本等等,一些直接操作的命令也同样有效,比如输入 `cw` 来修改单词。 +当启用 vi 的绑定后,你可以在命令行中使用 vi 命令进行编辑。比如,输入 `ESC+/` 来查找命令行记录。在查找的时候,输入 `n` 来找下一个匹配行,输入 `N` 来找上一个。输入 `ESC` 后,常用的 vi 命令都可以使用,如输入 `0` 跳转到行首,输入 `$` 跳转到行尾,输入 `i` 来插入文本,输入 `a` 来追加文本等等,即使是跟随的命令也同样有效,比如输入 `cw` 来修改单词。 除了命令行编辑,如果你想修改或重新执行之前使用过的命令,zsh 还提供几个常用的命令行历史功能。比如,你打错了一个命令,输入 `fc`,你可以在你偏好的编辑器中修复最后一条命令。使用哪个编辑器是参照 `$EDITOR` 变量的,而默认是使用 vi。 -另外一个有用的命令是 `r`, 他会重新执行上一条命令;而 `r <WORD>` 则会执行上一条包含 `WORD` 的命令。 +另外一个有用的命令是 `r`, 它会重新执行上一条命令;而 `r <WORD>` 则会执行上一条包含 `WORD` 的命令。 -最后,输入两个感叹号( `!!` ),可以在命令行中回溯最后一条命令。这个十分有用,比如,当你忘记使用 `sudo` 去执行需要权限的命令时: +最后,输入两个感叹号(`!!`),可以在命令行中回溯最后一条命令。这个十分有用,比如,当你忘记使用 `sudo` 去执行需要权限的命令时: ``` $ less /var/log/dnf.log $ sudo less /var/log/dnf.log ``` 这个功能让查找并且重新执行之前命令的操作更加方便。 -### 何去何从?
-这里仅仅介绍了几个可以让你提高生产率的 zsh 特性;其实还有更多功能带你发掘;想知道更多的信息,你可以访问以下的资源: +这里仅仅介绍了几个可以让你提高生产率的 zsh 特性;其实还有更多功能有待你的发掘;想知道更多的信息,你可以访问以下的资源: -[An Introduction to the Z Shell][9] +- [An Introduction to the Z Shell][9] +- [A User's Guide to ZSH][10] +- [Archlinux Wiki][11] +- [zsh-lovers][12] -[A User's Guide to ZSH][10] - -[Archlinux Wiki][11] - -[zsh-lovers][12] - -你有使用 zsh 提高生产力的tips可以分享吗?我(作者)很乐意在下方评论看到它们。 +你有使用 zsh 提高生产力的技巧可以分享吗?我很乐意在下方评论中看到它们。 -------------------------------------------------------------------------------- @@ -295,7 +290,7 @@ via: https://opensource.com/article/18/9/tips-productivity-zsh 作者:[Ricardo Gerardi][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[tnuoccalanosrep](https://github.com/tnuoccalanosrep) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f8fd032ec1bdf826324da2233a7a6d972dad4691 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 25 Sep 2018 15:52:10 +0800 Subject: [PATCH 017/736] PUB: 20180905 5 tips to improve productivity with zsh.md @tnuoccalanosrep https://linux.cn/article-10047-1.html --- .../20180905 5 tips to improve productivity with zsh.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180905 5 tips to improve productivity with zsh.md (100%) diff --git a/translated/tech/20180905 5 tips to improve productivity with zsh.md b/published/20180905 5 tips to improve productivity with zsh.md similarity index 100% rename from translated/tech/20180905 5 tips to improve productivity with zsh.md rename to published/20180905 5 tips to improve productivity with zsh.md From d54d204ffb0eaab55fdd1c23a46ffc245fbb00bb Mon Sep 17 00:00:00 2001 From: jrg Date: Tue, 25 Sep 2018 22:08:36 +0800 Subject: [PATCH 018/736] Delete 20180814 Automating backups on a Raspberry Pi NAS.md --- ...utomating backups on a Raspberry Pi NAS.md | 223 ------------------ 1 file changed, 223 deletions(-) 
delete mode 100644 sources/tech/20180814 Automating backups on a Raspberry Pi NAS.md diff --git a/sources/tech/20180814 Automating backups on a Raspberry Pi NAS.md b/sources/tech/20180814 Automating backups on a Raspberry Pi NAS.md deleted file mode 100644 index 28f7a5db35..0000000000 --- a/sources/tech/20180814 Automating backups on a Raspberry Pi NAS.md +++ /dev/null @@ -1,223 +0,0 @@ -[翻译中]translating by jrg! - -Automating backups on a Raspberry Pi NAS -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) - -In the [first part][1] of this three-part series using a Raspberry Pi for network-attached storage (NAS), we covered the fundamentals of the NAS setup, attached two 1TB hard drives (one for data and one for backups), and mounted the data drive on a remote device via the network filesystem (NFS). In part two, we will look at automating backups. Automated backups allow you to continually secure your data and recover from a hardware defect or accidental file removal. - -![](https://opensource.com/sites/default/files/uploads/nas_part2.png) - -### Backup strategy - -Let's get started by coming up with a backup strategy for our small NAS. I recommend creating daily backups of your data and scheduling them for a time they won't interfere with other NAS activities, including when you need to access or store your files. For example, you could trigger the backup activities each day at 2am. - -You also need to decide how long you'll keep each backup, since you would quickly run out of storage if you kept each daily backup indefinitely. Keeping your daily backups for one week allows you to travel back into your recent history if you realize something went wrong over the previous seven days. But what if you need something from further in the past? Keeping each Monday backup for a month and one monthly backup for a longer period of time should be sufficient.
Let's keep the monthly backups for a year and one backup every year for long-distance time travels, e.g., for the last five years. - -This results in a bunch of backups on your backup drive over a five-year period: - - * 7 daily backups - * 4 (approx.) weekly backups - * 12 monthly backups - * 5 annual backups - - - -You may recall that your backup drive and your data drive are of equal size (1TB each). How will more than 10 backups of 1TB from your data drive fit onto a 1TB backup disk? If you create full backups, they won't. Instead, you will create incremental backups, reusing the data from the last backup if it didn't change and creating replicas of new or changed files. That way, the backup doesn't double every night, but only grows a little bit depending on the changes that happen to your data over a day. - -Here is my situation: My NAS has been running since August 2016, and 20 backups are on the backup drive. Currently, I store 406GB of files on the data drive. The backups take up 726GB on my backup drive. Of course, this depends heavily on your data's change frequency, but as you can see, the incremental backups don't consume as much space as 20 full backups would. Nevertheless, over time the 1TB disk will probably become insufficient for your backups. Once your data grows close to the 1TB limit (or whatever your backup drive capacity), you should choose a bigger backup drive and move your data there. - -### Creating backups with rsync - -To create a full backup, you can use the rsync command line tool. Here is an example command to create the initial full backup. -``` -pi@raspberrypi:~ $ rsync -a /nas/data/ /nas/backup/2018-08-01 - -``` - -This command creates a full replica of all data stored on the data drive, mounted on `/nas/data`, on the backup drive. There, it will create the folder `2018-08-01` and create the backup inside it. 
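If you are unsure what a command like this will do, rsync can first be asked to rehearse it: the `--dry-run` (`-n`) option reports what would be transferred without writing anything. Here is a self-contained sketch, using temporary directories in place of the real `/nas` paths:

```shell
# Preview a backup without copying anything. Temporary directories
# stand in for /nas/data and /nas/backup; substitute the real paths
# on the NAS when running this for real.
src=$(mktemp -d) && dst=$(mktemp -d)
echo "hello" > "$src/file1.txt"
rsync -a --dry-run --itemize-changes "$src/" "$dst/2018-08-01"
# The destination was never created, because nothing was copied:
test ! -e "$dst/2018-08-01" && echo "dry run only"
rm -rf "$src" "$dst"
```

Once the itemized output looks right, drop the `--dry-run` flag to perform the actual copy.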
The `-a` flag starts rsync in archive-mode, which means it preserves all kinds of metadata, like modification dates, permissions, and owners, and copies soft links as soft links. - -Now that you have created your full, initial backup as of August 1, on August 2, you will create your first daily incremental backup. -``` -pi@raspberrypi:~ $ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/backup/2018-08-02 - -``` - -This command tells rsync to again create a backup of `/nas/data`. The target directory this time is `/nas/backup/2018-08-02`. The script also specified the `--link-dest` option and passed the location of the last backup as an argument. With this option specified, rsync looks at the folder `/nas/backup/2018-08-01` and checks what data files changed compared to that folder's content. Unchanged files will not be copied, rather they will be hard-linked to their counterparts in yesterday's backup folder. - -When using a hard-linked file from a backup, you won't notice any difference between the initial copy and the link. They behave exactly the same, and if you delete either the link or the initial file, the other will still exist. You can imagine them as two equal entry points to the same file. Here is an example: - -![](https://opensource.com/sites/default/files/uploads/backup_flow.png) - -The left box reflects the state shortly after the second backup. The box in the middle is yesterday's replica. The `file2.txt` didn't exist yesterday, but the image `file1.jpg` did and was copied to the backup drive. The box on the right reflects today's incremental backup. The incremental backup command created `file2.txt`, which didn't exist yesterday. Since `file1.jpg` didn't change since yesterday, today a hard link is created so it doesn't take much additional space on the disk. - -### Automate your backups - -You probably don't want to execute your daily backup command by hand at 2am each day. 
Instead, you can automate your backup by using a script like the following, which you may want to start with a cron job. -``` -#!/bin/bash - - - -TODAY=$(date +%Y-%m-%d) - -DATADIR=/nas/data/ - -BACKUPDIR=/nas/backup/ - -SCRIPTDIR=/nas/data/backup_scripts - -LASTDAYPATH=${BACKUPDIR}/$(ls ${BACKUPDIR} | tail -n 1) - -TODAYPATH=${BACKUPDIR}/${TODAY} - -if [[ ! -e ${TODAYPATH} ]]; then - -        mkdir -p ${TODAYPATH} - -fi - - - -rsync -a --link-dest ${LASTDAYPATH} ${DATADIR} ${TODAYPATH} $@ - - - -${SCRIPTDIR}/deleteOldBackups.sh - -``` - -The first block calculates the last backup's folder name to use for links and the name of today's backup folder. The second block has the rsync command (as described above). The last block executes a `deleteOldBackups.sh` script. It will clean up the old, unnecessary backups based on the backup strategy outlined above. You could also execute the cleanup script independently from the backup script if you want it to run less frequently. - -The following script is an example implementation of the backup strategy in this how-to article. 
-``` -#!/bin/bash - -BACKUPDIR=/nas/backup/ - - - -function listYearlyBackups() { - -        for i in 0 1 2 3 4 5 - -                do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1 - -        done - -} - - - -function listMonthlyBackups() { - -        for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 - -                do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1 - -        done - -} - - - -function listWeeklyBackups() { - -        for i in 0 1 2 3 4 - -                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")" - -        done - -} - - - -function listDailyBackups() { - -        for i in 0 1 2 3 4 5 6 - -                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")" - -        done - -} - - - -function getAllBackups() { - -        listYearlyBackups - -        listMonthlyBackups - -        listWeeklyBackups - -        listDailyBackups - -} - - - -function listUniqueBackups() { - -        getAllBackups | sort -u - -} - - - -function listBackupsToDelete() { - -        ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")" - -} - - - -cd ${BACKUPDIR} - -listBackupsToDelete | while read file_to_delete; do - -        rm -rf ${file_to_delete} - -done - -``` - -This script will first list all the backups to keep (according to our backup strategy), then it will delete all the backup folders that are not necessary anymore. - -To execute the scripts every night to create daily backups, schedule the backup script by running `crontab -e` as the root user. (You need to be in root to make sure it has permission to read all the files on the data drive, no matter who created them.) Add a line like the following, which starts the script every night at 2am. -``` -0 2 * * * /nas/data/backup_scripts/daily.sh - -``` - -For more information, read about [scheduling tasks with cron][2]. 
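The hard-link mechanism that `--link-dest` relies on is easy to verify with nothing but coreutils. The sketch below is illustrative only: it uses throwaway `mktemp` directories rather than the `/nas` paths from this article, and reproduces the `file1.jpg`/`file2.txt` example from the figure by hand.

```shell
#!/bin/bash
# Sketch: two "backup" folders sharing an unchanged file via a hard link,
# the same mechanism rsync's --link-dest uses. All paths are temporary.
set -e
demo=$(mktemp -d)

mkdir -p "${demo}/2018-08-01" "${demo}/2018-08-02"
printf 'holiday photo\n' > "${demo}/2018-08-01/file1.jpg"

# "Day 2": file1.jpg is unchanged, so link it instead of copying it;
# file2.txt is new, so it gets a real copy.
ln "${demo}/2018-08-01/file1.jpg" "${demo}/2018-08-02/file1.jpg"
printf 'new file\n' > "${demo}/2018-08-02/file2.txt"

# Both directory entries point at one inode; its link count is now 2.
stat -c 'inode=%i links=%h' "${demo}/2018-08-01/file1.jpg"
stat -c 'inode=%i links=%h' "${demo}/2018-08-02/file1.jpg"

# Deleting one entry leaves the other intact.
rm "${demo}/2018-08-01/file1.jpg"
cat "${demo}/2018-08-02/file1.jpg"

rm -rf "${demo}"
```

Because the linked file consumes no additional data blocks, yesterday's backup folder can be deleted by the cleanup script without touching today's copy.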
- - * Unmount your backup drive or mount it as read-only when no backups are running - * Attach the backup drive to a remote server and sync the files over the internet - - - -There are additional things you can do to fortify your backups against accidental removal or damage, including the following: - -This example backup strategy enables you to back up your valuable data to make sure it won't get lost. You can also easily adjust this technique for your personal needs and preferences. - -In part three of this series, we will talk about [Nextcloud][3], a convenient way to store and access data on your NAS system that also provides offline access as it synchronizes your data to the client devices. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/automate-backups-raspberry-pi - -作者:[Manuel Dewald][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/ntlx -[1]:https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi -[2]:https://opensource.com/article/17/11/how-use-cron-linux -[3]:https://nextcloud.com/ From 7dd12969ab56bd8c16a703673e8c9c28f28ebe88 Mon Sep 17 00:00:00 2001 From: jrg Date: Tue, 25 Sep 2018 22:11:10 +0800 Subject: [PATCH 019/736] Create 20180814 Automating backups on a Raspberry Pi NAS.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 20180814 Automating backups on a Raspberry Pi NAS.md --- ...utomating backups on a Raspberry Pi NAS.md | 229 ++++++++++++++++++ 1 file changed, 229 insertions(+) create mode 100644 translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md diff --git a/translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md b/translated/tech/20180814 Automating backups on a 
Raspberry Pi NAS.md new file mode 100644 index 0000000000..111b508245 --- /dev/null +++ b/translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md @@ -0,0 +1,229 @@ +Part-II 树莓派自建 NAS 云盘之数据自动备份 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) + +在《树莓派自建 NAS 云盘》系列的 [第一篇][1] 文章中,我们讨论了建立 NAS 的一些基本步骤,添加了两块 1TB 的存储硬盘驱动(一个用于数据存储,一个用于数据备份),并且通过 网络文件系统(NFS)将数据存储盘挂载到远程终端上。本文是此系列的第二篇文章,我们将探讨数据自动备份。数据自动备份保证了数据的安全,为硬件损坏后的数据恢复提供便利以及减少了文件误操作带来的不必要的麻烦。 + + + +![](https://opensource.com/sites/default/files/uploads/nas_part2.png) + + + +### 备份策略 + +我们就从为小型 NAS 构想一个备份策略着手开始吧。我建议每天有时间节点有计划的去备份数据,以防止干扰到我们正常的访问 NAS,比如备份时间点避开正在访问 NAS 并写入文件的时间点。举个例子,你可以每天凌晨 2 点去进行数据备份。 + +另外,你还得决定每天的备份需要被保留的时间长短,因为如果没有时间限制,存储空间很快就会被用完。一般每天的备份保留一周便可以,如果数据出了问题,你便可以很方便的从备份中恢复出来原数据。但是如果需要恢复数据到更久之前怎么办?可以将每周一的备份文件保留一个月、每个月的备份保留更长时间。让我们把每月的备份保留一年时间,每一年的备份保留更长时间、例如五年。 + +这样,五年内在备份盘上产生大量备份: + +* 每周 7 个日备份 +* 每月 4 个周备份 +* 每年 12 个月备份 +* 每五年 5 个年备份 + + +你应该还记得,我们搭建的备份盘和数据盘大小相同(每个 1 TB)。如何将不止 10 个 1TB 数据的备份从数据盘存放到只有 1TB 大小的备份盘呢?如果你创建的是完整备份,这显然不可能。因此,你需要创建增量备份,它是每一份备份都基于上一份备份数据而创建的。增量备份方式不会每隔一天就成倍的去占用存储空间,它每天只会增加一点占用空间。 + +以下是我的情况:我的 NAS 自 2016 年 8 月开始运行,备份盘上有 20 个备份。目前,我在数据盘上存储了 406GB 的文件。我的备份盘用了 726GB。当然,备份盘空间使用率在很大程度上取决于数据的更改频率,但正如你所看到的,增量备份不会占用 20 个完整备份所需的空间。然而,随着时间的推移,1TB 空间也可能不足以进行备份。一旦数据增长接近 1TB 限制(或任何备份盘容量),应该选择更大的备份盘空间并将数据移动转移过去。 + +### 利用 rsync 进行数据备份 + +利用 rsync 命令行工具可以生成完整备份。 + +``` +pi@raspberrypi:~ $ rsync -a /nas/data/ /nas/backup/2018-08-01 + +``` + +这段命令将挂载在 /nas/data/ 目录下的数据盘中的数据进行了完整的复制备份。备份文件保存在 /nas/backup/2018-08-01 目录下。`-a` 参数是以归档模式进行备份,这将会备份所有的元数据,例如文件的修改日期、权限、拥有者以及软连接文件。 + +现在,你已经在 8 月 1 日创建了完整的初始备份,你将在 8 月 2 日创建第一个增量备份。 + +``` +pi@raspberrypi:~ $ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/backup/2018-08-02 + +``` + +上面这行代码又创建了一个关于 `/nas/data` 目录中数据的备份。备份路径是 `/nas/backup/2018-08-02`。这里的参数 `--link-dest` 指定了一个备份文件所在的路径。这样,这次备份会与 `/nas/backup/2018-08-01` 
的备份进行比对,只备份已经修改过的文件,未做修改的文件将不会被复制,而是创建一个到上一个备份文件中它们的硬链接。
+
+使用备份文件中的硬链接文件时,你一般不会注意到硬链接和初始拷贝之间的差别。它们表现得完全一样,如果删除其中一个硬链接或者文件,其他的依旧存在。你可以把它们看做是同一个文件的两个不同入口。下面就是一个例子:
+
+![](https://opensource.com/sites/default/files/uploads/backup_flow.png)
+
+左侧框是在进行了第二次备份后的原数据状态。中间的框是昨天的备份。昨天的备份中只有图片 `file1.jpg`,并没有 `file2.txt`。右侧的框反映了今天的增量备份。增量备份命令创建昨天不存在的 `file2.txt`。由于 `file1.jpg` 自昨天以来没有被修改,所以今天创建了一个硬链接,它不会额外占用磁盘上的空间。
+
+### 自动化备份
+
+你肯定也不想每天凌晨去输入命令进行数据备份吧。你可以创建一个定时任务去调用下面的脚本,让它自动完成备份。
+
+```
+#!/bin/bash
+
+
+
+TODAY=$(date +%Y-%m-%d)
+
+DATADIR=/nas/data/
+
+BACKUPDIR=/nas/backup/
+
+SCRIPTDIR=/nas/data/backup_scripts
+
+LASTDAYPATH=${BACKUPDIR}/$(ls ${BACKUPDIR} | tail -n 1)
+
+TODAYPATH=${BACKUPDIR}/${TODAY}
+
+if [[ ! -e ${TODAYPATH} ]]; then
+
+        mkdir -p ${TODAYPATH}
+
+fi
+
+
+
+rsync -a --link-dest ${LASTDAYPATH} ${DATADIR} ${TODAYPATH} $@
+
+
+
+${SCRIPTDIR}/deleteOldBackups.sh
+
+```
+
+第一段代码指定了数据路径、备份路径、脚本路径以及昨天和今天的备份路径。第二段代码调用 rsync 命令。最后一段代码执行 `deleteOldBackups.sh` 脚本,它会清除一些过期的没有必要的备份数据。如果不想频繁的调用 `deleteOldBackups.sh`,你也可以手动去执行它。
+
+下面是今天讨论的备份策略的一个简单完整的示例脚本。
+
+```
+#!/bin/bash
+
+BACKUPDIR=/nas/backup/
+
+
+
+function listYearlyBackups() {
+
+        for i in 0 1 2 3 4 5
+
+                do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1
+
+        done
+
+}
+
+
+
+function listMonthlyBackups() {
+
+        for i in 0 1 2 3 4 5 6 7 8 9 10 11 12
+
+                do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1
+
+        done
+
+}
+
+
+
+function listWeeklyBackups() {
+
+        for i in 0 1 2 3 4
+
+                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")"
+
+        done
+
+}
+
+
+
+function listDailyBackups() {
+
+        for i in 0 1 2 3 4 5 6
+
+                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")"
+
+        done
+
+}
+
+
+
+function getAllBackups() {
+
+        listYearlyBackups
+
+        listMonthlyBackups
+
+        listWeeklyBackups
+
+        listDailyBackups
+
+}
+
+
+
+function listUniqueBackups() {
+
+        getAllBackups | sort -u
+
+}
+
+
+
+function listBackupsToDelete() {
+
+        ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")"
+
+}
+
+
+
+cd ${BACKUPDIR}
+
+listBackupsToDelete | while read file_to_delete; do
+
+        rm -rf ${file_to_delete}
+
+done
+
+```
+
+这段脚本会首先根据你的备份策略列出所有需要保存的备份文件,然后它会删除那些再也不需要的备份目录。
+
+下面创建一个定时任务去执行上面这段代码。以 root 用户权限打开 `crontab -e`,输入以下这段命令,它将会创建一个每天凌晨 2 点去执行 `/nas/data/backup_scripts/daily.sh` 的定时任务。
+
+```
+0 2 * * * /nas/data/backup_scripts/daily.sh
+
+```
+
+有关创建定时任务请参考 [cron 创建定时任务][2]。
+
+你也可以用下面的方法来加强你的备份策略,以防止备份数据的误删除或者被破坏:
+
+* 当没有备份任务时,卸载你的备份盘或者将它挂载为只读盘;
+* 利用远程服务器作为你的备份盘,这样就可以通过互联网同步数据。
+
+本文中备份策略示例是备份一些我觉得有价值的数据,你也可以根据个人需求去修改这些策略。
+
+我将会在 《树莓派自建 NAS 云盘》 系列的第三篇文章中讨论 [Nextcloud][3]。Nextcloud 提供了更方便的方式去访问 NAS 云盘上的数据并且它还提供了离线操作,你还可以在客户端中同步你的数据。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/8/automate-backups-raspberry-pi
+
+作者:[Manuel Dewald][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[jrg](https://github.com/jrglinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ntlx
+[1]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
+[2]: https://opensource.com/article/17/11/how-use-cron-linux
+[3]: https://nextcloud.com/
+

From 7c229082b54a20d46fb508bbc634431fe6704d02 Mon Sep 17 00:00:00 2001
From: jrg
Date: Tue, 25 Sep 2018 22:13:42 +0800
Subject: [PATCH 020/736] Delete 20180919 Host your own cloud with Raspberry Pi NAS.md

---
 ...st your own cloud with Raspberry Pi NAS.md | 113 ------------------
 1 file changed, 113 deletions(-)
 delete mode 100644 sources/tech/20180919 Host your own cloud with Raspberry Pi NAS.md

diff --git
a/sources/tech/20180919 Host your own cloud with Raspberry Pi NAS.md b/sources/tech/20180919 Host your own cloud with Raspberry Pi NAS.md deleted file mode 100644 index 0ecb3801c9..0000000000 --- a/sources/tech/20180919 Host your own cloud with Raspberry Pi NAS.md +++ /dev/null @@ -1,113 +0,0 @@ -[翻译中]translating by jrg! - -Host your own cloud with Raspberry Pi NAS -====== - -Protect and secure your data with a self-hosted cloud powered by your Raspberry Pi. - -In the first two parts of this series, we discussed the [hardware and software fundamentals][1] for building network-attached storage (NAS) on a Raspberry Pi. We also put a proper [backup strategy][2] in place to secure the data on the NAS. In this third part, we will talk about a convenient way to store, access, and share your data with [Nextcloud][3]. - -### Prerequisites - -To use Nextcloud conveniently, you have to meet a few prerequisites. First, you should have a domain you can use for the Nextcloud instance. For the sake of simplicity in this how-to, we'll use **nextcloud.pi-nas.com**. This domain should be directed to your Raspberry Pi. If you want to run it on your home network, you probably need to set up dynamic DNS for this domain and enable port forwarding of ports 80 and 443 (if you go for an SSL setup, which is highly recommended; otherwise port 80 should be sufficient) from your router to the Raspberry Pi. - -You can automate dynamic DNS updates from the Raspberry Pi using [ddclient][4]. - -### Install Nextcloud - -To run Nextcloud on your Raspberry Pi (using the setup described in the [first part][1] of this series), install the following packages as dependencies to Nextcloud using **apt**. - -``` -sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl -``` - -The next step is to download Nextcloud. [Get the latest release's URL][5] and copy it to download via **wget** on the Raspberry Pi. 
In the first article in this series, we attached two disk drives to the Raspberry Pi, one for current data and one for backups. Install Nextcloud on the data drive to make sure data is backed up automatically every night. - -``` -sudo mkdir -p /nas/data/nextcloud -sudo chown pi /nas/data/nextcloud -cd /nas/data/ -wget https://download.nextcloud.com/server/releases/nextcloud-14.0.0.zip -O /nas/data/nextcloud.zip -unzip nextcloud.zip -sudo ln -s /nas/data/nextcloud /var/www/nextcloud -sudo chown -R www-data:www-data /nas/data/nextcloud -``` - -When I wrote this, the latest release (as you see in the code above) was 14. Nextcloud is under heavy development, so you may find a newer version when installing your copy of Nextcloud onto your Raspberry Pi. - -### Database setup - -When we installed Nextcloud above, we also installed MySQL as a dependency to use it for all the metadata Nextcloud generates (for example, the users you create to access Nextcloud). If you would rather use a Postgres database, you'll need to adjust some of the modules installed above. - -To access the MySQL database as root, start the MySQL client as root: - -``` -sudo mysql -``` - -This will open a SQL prompt where you can insert the following commands—substituting the placeholder with the password you want to use for the database connection—to create a database for Nextcloud. - -``` -CREATE USER nextcloud IDENTIFIED BY ''; -CREATE DATABASE nextcloud; -GRANT ALL ON nextcloud.* TO nextcloud; -``` - -You can exit the SQL prompt by pressing **Ctrl+D** or entering **quit**. - -### Web server configuration - -Nextcloud can be configured to run using Nginx or other web servers, but for this how-to, I decided to go with the Apache web server on my Raspberry Pi NAS. (Feel free to try out another alternative and let me know if you think it performs better.) - -To set it up, configure a virtual host for the domain you created for your Nextcloud instance **nextcloud.pi-nas.com**. 
To create a virtual host, create the file **/etc/apache2/sites-available/001-nextcloud.conf** with content similar to the following. Make sure to adjust the ServerName to your domain and paths, if you didn't use the ones suggested earlier in this series. - -``` - -ServerName nextcloud.pi-nas.com -ServerAdmin admin@pi-nas.com -DocumentRoot /var/www/nextcloud/ - - -AllowOverride None - - -``` - -To enable this virtual host, run the following two commands. - -``` -a2ensite 001-nextcloud -sudo systemctl reload apache2 -``` - -With this configuration, you should now be able to reach the web server with your domain via the web browser. To secure your data, I recommend using HTTPS instead of HTTP to access Nextcloud. A very easy (and free) way is to obtain a [Let's Encrypt][6] certificate with [Certbot][7] and have a cron job automatically refresh it. That way you don't have to mess around with self-signed or expiring certificates. Follow Certbot's simple how-to [instructions to install it on your Raspberry Pi][8]. During Certbot configuration, you can even decide to automatically forward HTTP to HTTPS, so visitors to **** will be redirected to ****. Please note, if your Raspberry Pi is running behind your home router, you must have port forwarding enabled for ports 443 and 80 to obtain Let's Encrypt certificates. - -### Configure Nextcloud - -The final step is to visit your fresh Nextcloud instance in a web browser to finish the configuration. To do so, open your domain in a browser and insert the database details from above. You can also set up your first Nextcloud user here, the one you can use for admin tasks. By default, the data directory should be inside the Nextcloud folder, so you don't need to change anything for the backup mechanisms from the [second part of this series][2] to pick up the data stored by users in Nextcloud. - -Afterward, you will be directed to your Nextcloud and can log in with the admin user you created previously. 
To see a list of recommended steps to ensure a performant and secure Nextcloud installation, visit the Basic Settings tab in the Settings page (in our example: settings/admin) and see the Security & Setup Warnings section. - -Congratulations! You've set up your own Nextcloud powered by a Raspberry Pi. Go ahead and [download a Nextcloud client][9] from the Nextcloud page to sync data with your client devices and access it offline. Mobile clients even provide features like instant upload of pictures you take, so they'll automatically sync to your desktop PC without wondering how to get them there. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi - -作者:[Manuel Dewald][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ntlx -[1]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi -[2]: https://opensource.com/article/18/8/automate-backups-raspberry-pi -[3]: https://nextcloud.com/ -[4]: https://sourceforge.net/p/ddclient/wiki/Home/ -[5]: https://nextcloud.com/install/#instructions-server -[6]: https://letsencrypt.org/ -[7]: https://certbot.eff.org/ -[8]: https://certbot.eff.org/lets-encrypt/debianother-apache -[9]: https://nextcloud.com/install/#install-clients From ab1707a35d097d10671d3f724db5b26a25626029 Mon Sep 17 00:00:00 2001 From: jrg Date: Tue, 25 Sep 2018 22:15:10 +0800 Subject: [PATCH 021/736] Create 20180919 Host your own cloud with Raspberry Pi NAS.md --- ...st your own cloud with Raspberry Pi NAS.md | 112 ++++++++++++++++++ 1 file changed, 112 insertions(+) create mode 100644 translated/tech/20180919 Host your own cloud with Raspberry Pi NAS.md diff --git a/translated/tech/20180919 Host your own cloud with Raspberry Pi NAS.md 
b/translated/tech/20180919 Host your own cloud with Raspberry Pi NAS.md
new file mode 100644
index 0000000000..312fed7c4c
--- /dev/null
+++ b/translated/tech/20180919 Host your own cloud with Raspberry Pi NAS.md
@@ -0,0 +1,112 @@
+Part-III 树莓派自建 NAS 云盘之云盘构建
+======
+
+用树莓派 NAS 云盘来保护数据的安全!
+
+在前面两篇文章中(译注:文章链接 [Part-I][1],[Part-II][2]),我们讨论了用树莓派搭建一个 NAS(network-attached storage) 所需要的一些 [软硬件环境及其操作步骤][1]。我们还制定了适当的 [备份策略][2] 来保护 NAS 上的数据。本文中,我们将讨论利用 [Nextcloud][3] 来方便快捷的存储、获取以及分享你的数据。
+
+### 必要的准备工作
+
+想要方便的使用 Nextcloud,需要一些必要的准备工作。首先,你需要一个指向 Nextcloud 的域名。方便起见,本文将使用 **nextcloud.pi-nas.com**。如果你是在家庭网络里运行,你需要为该域名配置 DNS 服务(动态域名解析服务)并在路由器中开启 80 端口和 443 端口转发功能(如果需要使用 https,则需要开启 443 端口转发,如果只用 http,80 端口足以)。
+
+你可以使用 [ddclient][4] 在树莓派中自动更新 DNS。
+
+### 安装 Nextcloud
+
+为了在树莓派(参考 [Part-I][1] 中步骤设置)中运行 Nextcloud,首先用命令 **apt** 安装以下的一些依赖软件包。
+
+```
+sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl
+```
+
+其次,下载 Nextcloud。在树莓派中利用 **wget** 下载其 [最新的版本][5]。在 [Part-I][1] 文章中,我们将两个磁盘驱动器连接到树莓派,一个用于存储当前数据,另一个用于备份。这里在数据存储盘上安装 Nextcloud,以确保每晚自动备份数据。
+
+```
+sudo mkdir -p /nas/data/nextcloud
+sudo chown pi /nas/data/nextcloud
+cd /nas/data/
+wget https://download.nextcloud.com/server/releases/nextcloud-14.0.0.zip -O /nas/data/nextcloud.zip
+unzip nextcloud.zip
+sudo ln -s /nas/data/nextcloud /var/www/nextcloud
+sudo chown -R www-data:www-data /nas/data/nextcloud
+```
+
+截止到写作本文时,Nextcloud 最新版更新到如上述代码中所示的 14.0.0 版本。Nextcloud 正在快速的迭代更新中,所以你可以在你的树莓派中安装更新一点的版本。
+
+### 配置数据库
+
+如上所述,Nextcloud 安装完毕。之前安装依赖软件包时就已经安装了 MySQL 数据库来存储 Nextcloud 的一些重要数据(例如,那些你创建的可以访问 Nextcloud 的用户的信息)。如果你更愿意使用 Postgres 数据库,则上面的依赖软件包需要做一些调整。
+
+以 root 权限启动 MySQL:
+
+```
+sudo mysql
+```
+
+这将会打开 SQL 提示符界面,在那里可以插入如下指令--使用数据库连接密码替换其中的占位符--为 Nextcloud 创建一个数据库。
+
+```
+CREATE USER nextcloud IDENTIFIED BY '';
+CREATE DATABASE nextcloud;
+GRANT ALL ON nextcloud.* TO nextcloud;
+```
+
+按 **Ctrl+D** 或输入 **quit** 退出 SQL 提示符界面。
+
+### Web 服务器配置
+
+Nextcloud 可以配置以适配于 Nginx 服务器或者其他 Web 服务器运行的环境。但本文中,我决定在我的树莓派 NAS 中运行 Apache 服务器(如果你有其他效果更好的服务器选择方案,不妨也跟我分享一下)。
+
+首先为你的 Nextcloud 域名创建一个虚拟主机,创建配置文件 **/etc/apache2/sites-available/001-nextcloud.conf**,在其中输入下面的参数内容。修改其中 ServerName 为你的域名。
+
+```
+
+ServerName nextcloud.pi-nas.com
+ServerAdmin admin@pi-nas.com
+DocumentRoot /var/www/nextcloud/
+
+
+AllowOverride None
+
+
+```
+
+使用下面的命令来启动该虚拟主机。
+
+```
+a2ensite 001-nextcloud
+sudo systemctl reload apache2
+```
+
+现在,你应该可以通过浏览器中输入域名访问到 web 服务器了。这里我推荐使用 HTTPS 协议而不是 HTTP 协议来访问 Nextcloud。一个简单而且免费的方法就是利用 [Certbot][7] 下载 [Let's Encrypt][6] 证书,然后设置定时任务自动刷新。这样就避免了自签证书等的麻烦。参考 [如何在树莓派中安装][8] Certbot。在配置 Certbot 的时候,你甚至可以配置将 HTTP 自动转到 HTTPS,例如访问 **** 自动跳转到 ****。注意,如果你的树莓派 NAS 运行在家庭路由器的下面,别忘了设置路由器的 443 端口和 80 端口转发。
+
+### 配置 Nextcloud
+
+最后一步,通过浏览器访问 Nextcloud 来配置它。在浏览器中输入域名地址,插入上文中的数据库设置信息。这里,你可以创建 Nextcloud 管理员用户。默认情况下,数据保存目录在 Nextcloud 目录下,所以你也无需修改我们在 [Part-II][2] 一文中设置的备份策略。
+
+然后,页面会跳转到 Nextcloud 登陆界面,用刚才创建的管理员用户登陆。在设置页面中会有基础操作教程和安全安装教程(这里是访问 settings/admin)。
+
+恭喜你,到此为止,你已经成功在树莓派中安装了你自己的云 Nextcloud。去 Nextcloud 主页 [下载 Nextcloud 客户端][9],客户端可以同步数据并且离线访问服务器。移动端甚至可以上传图片等资源,然后电脑桌面都可以去访问它们。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi
+
+作者:[Manuel Dewald][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[jrg](https://github.com/jrglinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ntlx
+[1]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
+[2]: https://opensource.com/article/18/8/automate-backups-raspberry-pi
+[3]: https://nextcloud.com/
+[4]: https://sourceforge.net/p/ddclient/wiki/Home/
+[5]: https://nextcloud.com/install/#instructions-server
+[6]: https://letsencrypt.org/
+[7]: https://certbot.eff.org/
+[8]: https://certbot.eff.org/lets-encrypt/debianother-apache
+[9]:
https://nextcloud.com/install/#install-clients + From 86dac844d8de74b9d770b30cb3bccf534df1db15 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 25 Sep 2018 22:38:20 +0800 Subject: [PATCH 022/736] PRF:20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @jrglinux 翻译的很好,就是 markdown 格式上再注意点,如果使用 Mac,可以使用 MacDown 编辑器。 --- ..., a static site generator written in Go.md | 232 ++++++++---------- 1 file changed, 104 insertions(+), 128 deletions(-) diff --git a/translated/tech/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md b/translated/tech/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md index f5366353b3..648073a246 100644 --- a/translated/tech/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md +++ b/translated/tech/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md @@ -1,179 +1,155 @@ -Hugo,30分钟搭建博客,一个Go语言开发的静态站点生成工具 +用 Hugo 30 分钟搭建静态博客 ====== - ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy) +> 了解 Hugo 如何使构建网站变得有趣。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy) - 你是不是强烈的想搭建博客来将自己对软件框架等的探索学习成果分享呢? +你是不是强烈地想搭建博客来将自己对软件框架等的探索学习成果分享呢?你是不是面对缺乏指导文档而一团糟的项目就有一种想去改变它的冲动呢?或者换个角度,你是不是十分期待能创建一个属于自己的个人博客网站呢? - 你是不是面对缺乏指导文档而一团糟的项目就有一种想去改变它的冲动呢? +很多人在想搭建博客之前都有一些严重的迟疑顾虑:感觉自己缺乏内容管理系统(CMS)的相关知识,更缺乏时间去学习这些知识。现在,如果我说不用花费大把的时间去学习 CMS 系统、学习如何创建一个静态网站、更不用操心如何去强化网站以防止它受到黑客攻击的问题,你就可以在 30 分钟之内创建一个博客?你信不信?利用 Hugo 工具,就可以实现这一切。 - 或者换个角度,你是不是十分期待能创建一个属于自己的个人博客网站呢? 
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_1.png?itok=JgxBSOBG) - 很多人在想搭建博客之前都有一些严重的迟疑顾虑:感觉自己缺乏内容管理系统(CMS)的相关知识,更缺乏时间去学习这些知识。现在,如果我说不用花费大把的时间去学习 CMS 系统、学习如何创建一个静态网站、更不用操心如何去强化网站以防止它受到黑客攻击的问题,你就可以在 30 分钟之内创建一个博客?你信不信?利用 Hugo 工具,就可以实现这一切。 +Hugo 是一个基于 Go 语言开发的静态站点生成工具。也许你会问,为什么选择它? + +* 无需数据库、无需需要各种权限的插件、无需跑在服务器上的底层平台,更没有额外的安全问题。 +* 都是静态站点,因此拥有轻量级、快速响应的服务性能。此外,所有的网页都是在部署的时候生成,所以服务器负载很小。 +* 极易操作的版本控制。一些 CMS 平台使用它们自己的版本控制软件(VCS)或者在网页上集成 Git 工具。而 Hugo,所有的源文件都可以用你所选的 VCS 软件来管理。 - ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_1.png?itok=JgxBSOBG) +### 0-5 分钟:下载 Hugo,生成一个网站 - Hugo 是一个基于 Go 语言开发的静态站点生成工具。也许你会问,为什么选择它? - * 无需数据库、无需需要各种权限的插件、无需跑在服务器上的底层平台,更没有额外的安全问题。 - * 都是静态站点,因此拥有轻量级、快速响应的服务性能。此外,所有的网页都是在部署的时候呈现,所以服务器负载很小。 - * 极易操作的版本控制。一些 CMS 平台使用它们自己的版本控制软件(VCS)或者在网页上集成 Git 工具。而 Hugo,所有的源文件都可以用你所选的 VCS 软件来管理。 +直白的说,Hugo 使得写一个网站又一次变得有趣起来。让我们来个 30 分钟计时,搭建一个网站。 + +为了简化 Hugo 安装流程,这里直接使用 Hugo 可执行安装文件。 - ### 0-5 分钟:下载 Hugo,生成一个网站 +1. 下载和你操作系统匹配的 Hugo [版本][2]; +2. 压缩包解压到指定路径,例如 windows 系统的 `C:\hugo_dir` 或者 Linux 系统的 `~/hugo_dir` 目录;下文中的变量 `${HUGO_HOME}` 所指的路径就是这个安装目录; +3. 打开命令行终端,进入安装目录:`cd ${HUGO_HOME}`; +4. 确认 Hugo 已经启动: + * Unix 系统:`${HUGO_HOME}/[hugo version]`; + * Windows 系统:`${HUGO_HOME}\[hugo.exe version]`,例如:cmd 命令行中输入:`c:\hugo_dir\hugo version`。 + + 为了书写上的简化,下文中的 `hugo` 就是指 hugo 可执行文件所在的路径(包括可执行文件),例如命令 `hugo version` 就是指命令 `c:\hugo_dir\hugo version` 。(LCTT 译注:可以把 hugo 可执行文件所在的路径添加到系统环境变量下,这样就可以直接在终端中输入 `hugo version`) + + 如果命令 `hugo version` 报错,你可能下载了错误的版本。当然,有很多种方法安装 Hugo,更多详细信息请查阅 [官方文档][3]。最稳妥的方法就是把 Hugo 可执行文件放在某个路径下,然后执行的时候带上路径名 +5. 创建一个新的站点来作为你的博客,输入命令:`hugo new site awesome-blog`; +6. 进入新创建的路径下: `cd awesome-blog`; + +恭喜你!你已经创建了自己的新博客。 - 直白的说,Hugo 使得写一个网站又一次变得有趣起来。让我们来个 30 分钟计时,搭建一个网站。 +### 5-10 分钟:为博客设置主题 - 为了简化 Hugo 安装流程,这里直接使用 Hugo 可执行安装文件。 +Hugo 中你可以自己构建博客的主题或者使用网上已经有的一些主题。这里选择 [Kiera][4] 主题,因为它简洁漂亮。按以下步骤来安装该主题: + +1. 进入主题所在目录:`cd themes`; +2. 
克隆主题:`git clone https://github.com/avianto/hugo-kiera kiera`。如果你没有安装 Git 工具:
+ * 从 [Github][5] 上下载 hugo 的 .zip 格式的文件;
+ * 解压该 .zip 文件到你的博客主题 `theme` 路径;
+ * 重命名 `hugo-kiera-master` 为 `kiera`;
+3. 返回博客主路径:`cd awesome-blog`;
+4. 激活主题;通常来说,主题(包括 Kiera)都自带文件夹 `exampleSite`,里面存放了内容配置的示例文件。激活 Kiera 主题需要拷贝它提供的 `config.toml` 到你的博客下:
+ * Unix 系统:`cp themes/kiera/exampleSite/config.toml .`;
+ * Windows 系统:`copy themes\kiera\exampleSite\config.toml .`;
+ * 选择 `Yes` 来覆盖原有的 `config.toml`;
+
+5. (可选操作)你可以选择可视化的方式启动服务器来验证主题是否生效:`hugo server -D` 然后在浏览器中输入 `http://localhost:1313`。可以通过在终端中输入 `Ctrl+C` 来停止服务器运行。现在你的博客还是空的,但这也给你留了写作的空间。它看起来如下所示:
+
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_2.png?itok=PINOIOSU)
- 1. 下载和你操作系统匹配的 Hugo [版本][2];
-
- 2. 压缩包解压到指定路径,例如 windows 系统的 `C:\hugo_dir` 或者 Linux 系统的 `~/hugo_dir` 目录;下文中的变量 `${HUGO_HOME}` 所指的路径就是这个安装目录;
-
- 3. 打开命令行终端,进入安装目录:`cd ${HUGO_HOME}`;
-
- 4. 确认 Hugo 已经启动:
- * Unix 系统:`${HUGO_HOME}/[hugo version]`;
- * Windows 系统:`${HUGO_HOME}\[hugo.exe version]`;
-
- 例如:Windows 系统下,cmd 命令行中输入:`c:\hugo_dir\hugo version`。
-
- 为了书写上的简化,下文中的 `hugo` 就是指 hugo 可执行文件所在的路径(包括可执行文件),例如命令 `hugo version` 就是指命令 `c:\hugo_dir\hugo version` 。(译者注:可以把 hugo 可执行文件所在的路径添加到系统环境变量下,这样就可以直接在终端中输入 `hugo version`)
- 如果命令 `hugo version` 报错,你可能下载了错误的版本。当然,有很多种方法安装 Hugo,更多详细信息请查阅 [官方文档][3]。最稳妥的方法就是把 Hugo 可执行文件放在某个路径下,然后执行的时候带上路径名
-
- 5. 创建一个新的站点来作为你的博客,输入命令:`hugo new site awesome-blog`;
-
- 6. 进入新创建的路径下: `cd awesome-blog`;
-
- 恭喜你!你已经创建了自己的新博客。
+你已经成功地给博客设置了主题!你可以在官方 [Hugo 主题][4] 网站上找到上百种漂亮的主题供你使用。
- ### 5-10 分钟:为博客设置主题
+### 10-20 分钟:给博客添加内容
- Hugo 中你可以自己构建博客的主题或者使用网上已经有的一些主题。这里选择 [Kiera][4] 主题,因为它简洁漂亮。按以下步骤来安装该主题:
-
- 1. 进入主题所在目录:`cd themes`;
-
- 2. 克隆主题:`git clone https://github.com/avianto/hugo-kiera kiera`。如果你没有安装 Git 工具:
- * 从 [Github][5] 上下载 hugo 的 .zip 格式的文件;
- * 解压该 .zip 文件到你的博客主题 `theme` 路径;
- * 重命名 `hugo-kiera-master` 为 `kiera`;
-
- 3. 返回博客主路径:`cd awesome-blog`;
-
- 4. 激活主题;通常来说,主题(包括 Kiera )都自带文件夹 `exampleSite`,里面存放了内容配置的示例文件。激活 Kiera 主题需要拷贝它提供的 `config.toml` 到你的博客下:
- * Unix 系统:`cp themes/kiera/exampleSite/config.toml .`;
- * Windows 系统:`copy themes\kiera\exampleSite\config.toml .`;
- * 选择 `Yes` 来覆盖原有的 `config.toml`;
-
- 5. ( 可选操作 )你可以选择可视化的方式启动服务器来验证主题是否生效:`hugo server -D` 然后在浏览器中输入 `http://localhost:1313`。可用通过在终端中输入 `Crtl+C` 来停止服务器运行。现在你的博客还是空的,但这也给你留了写作的空间。它看起来如下所示:
-
- ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_2.png?itok=PINOIOSU)
+对于碗来说,它是空的时候用处最大,可以用来盛放东西;但对于博客来说不是这样,空博客几乎毫无用处。在这一步,你将会给博客添加内容。Hugo 和 Kiera 主题都为这个工作提供了方便性。按以下步骤来进行你的第一次提交:
- 你已经成功的给博客设置了主题!你可以在官方 [Hugo 主题][4] 网站上找到上百种漂亮的主题供你使用。
-
- ### 10-20 分钟:给博客添加内容
-
- 对于碗来说,它是空的时候用处最大,可以用来盛放东西;但对于博客来说不是这样,空博客几乎毫无用处。在这一步,你将会给博客添加内容。Hugo 和 Kiera 主题都为这个工作提供了方便性。按以下步骤来进行你的第一次提交:
-
- 1. archetypes 将会是你的内容模板。
-
- 2. 添加主题中的 archtypes 至你的博客:
- * Unix 系统: `cp themes/kiera/archetypes/* archetypes/`
- * Windows 系统:`copy themes\kiera\archetypes\* archetypes\`
- * 选择 `Yes` 来覆盖原来的 `default.md` 内容架构类型
-
- 3. 创建博客 posts 目录:
- * Unix 系统: `mkdir content/posts`
- * Windows 系统: `mkdir content\posts`
-
- 4. 利用 Hugo 生成你的 post:
+1. archetypes 将会是你的内容模板。
+2. 添加主题中的 archetypes 至你的博客:
+ * Unix 系统: `cp themes/kiera/archetypes/* archetypes/`
+ * Windows 系统:`copy themes\kiera\archetypes\* archetypes\`
+ * 选择 `Yes` 来覆盖原来的 `default.md` 内容架构类型
+
+3. 创建博客 posts 目录:
+ * Unix 系统: `mkdir content/posts`
+ * Windows 系统: `mkdir content\posts`
+
+4. 利用 Hugo 生成你的 post:
 * Unix 系统:`hugo new posts/first-post.md`;
 * Windows 系统:`hugo new posts\first-post.md`;
- 5. 在文本编辑器中打开这个新建的 post 文件:
+5. 在文本编辑器中打开这个新建的 post 文件:
 * Unix 系统:`gedit content/posts/first-post.md`;
 * Windows 系统:`notepad content\posts\first-post.md`;
- 此刻,你可以疯狂起来了。注意到你的提交文件中包括两个部分。第一部分是以 `+++` 符号分隔开的。它包括了提交文档的主要数据,例如名称、时间等。在 Hugo 中,这叫做前缀。在前缀之后,才是正文。下面编辑第一个提交文件内容:
+此刻,你可以疯狂起来了。注意到你的提交文件中包括两个部分。第一部分是以 `+++` 符号分隔开的。它包括了提交文档的主要数据,例如名称、时间等。在 Hugo 中,这叫做前缀。在前缀之后,才是正文。下面编辑第一个提交文件内容:
- ``` +```
 +++
- title = "First Post"
- date = 2018-03-03T13:23:10+01:00
- draft = false
- tags = ["Getting started"]
- categories = []
- +++
- Hello Hugo world! No more excuses for having no blog or documentation now!
- ```
+title = "First Post"
+date = 2018-03-03T13:23:10+01:00
+draft = false
+tags = ["Getting started"]
+categories = []
++++
+
+Hello Hugo world! No more excuses for having no blog or documentation now!
+```
- 现在你要做的就是启动你的服务器:`hugo server -D`;然后打开浏览器,输入 `http://localhost:1313/`。
+现在你要做的就是启动你的服务器:`hugo server -D`;然后打开浏览器,输入 `http://localhost:1313/`。
 
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/hugo_3.png?itok=I-_v0qLx)
- ### 20-30 分钟:调整网站
+### 20-30 分钟:调整网站
- 前面的工作很完美,但还有一些问题需要解决。例如,简单地命名你的站点:
-
- 1. 终端中按下 `Ctrl+C` 以停止服务器。
+前面的工作很完美,但还有一些问题需要解决。例如,简单地命名你的站点:
+
+1. 终端中按下 `Ctrl+C` 以停止服务器。
+2. 打开 `config.toml`,编辑博客的名称,版权,你的姓名,社交网站等等。
- 2. 
打开 `config.toml`,编辑博客的名称,版权,你的姓名,社交网站等等。
-
- 当你再次启动服务器后,你会发现博客私人订制味道更浓了。不过,还少一个重要的基础内容:主菜单。快速的解决这个问题。返回 `config.toml` 文件,在末尾插入如下一段:
+当你再次启动服务器后,你会发现博客私人订制味道更浓了。不过,还少一个重要的基础内容:主菜单。快速地解决这个问题。返回 `config.toml` 文件,在末尾插入如下一段:
 
```
[[menu.main]]
- name = "Home" #Name in the navigation bar
- weight = 10 #The larger the weight, the more on the right this item will be
- url = "/" #URL address
- [[menu.main]]
- name = "Posts"
- weight = 20
- url = "/posts/"
- ```
+ name = "Home" #Name in the navigation bar
+ weight = 10 #The larger the weight, the more on the right this item will be
+ url = "/" #URL address
+[[menu.main]]
+ name = "Posts"
+ weight = 20
+ url = "/posts/"
+```
 
- 上面这段代码添加了 `Home` 和 `Posts` 到主菜单中。你还需要一个 `About` 页面。这次是创建一个 `.md` 文件,而不是编辑 `config.toml` 文件:
+上面这段代码添加了 `Home` 和 `Posts` 到主菜单中。你还需要一个 `About` 页面。这次是创建一个 `.md` 文件,而不是编辑 `config.toml` 文件:
 
- 1. 创建 `about.md` 文件:`hugo new about.md` 。注意它是 `about.md`,不是 `posts/about.md`。该页面不是博客提交内容,所以你不想它显示到博客内容提交当中吧。
-
- 2. 用文本编辑器打开该文件,输入如下一段:
-
+1. 创建 `about.md` 文件:`hugo new about.md`。注意它是 `about.md`,不是 `posts/about.md`。该页面不是一篇博客文章,所以你并不希望它出现在博客文章列表当中。
+2. 用文本编辑器打开该文件,输入如下一段:
+
```
 +++
- title = "About"
- date = 2018-03-03T13:50:49+01:00
- menu = "main" #Display this page on the nav menu
- weight = "30" #Right-most nav item
- meta = "false" #Do not display tags or categories
- +++
- > Waves are the practice of the water. Shunryu Suzuki
-
- ```
-
- 当你启动你的服务器并输入:`http://localhost:1313/`,你将会看到你的博客。(访问我 Gihub 主页上的 [例子][6] )如果你想让文章的菜单栏和 Github 相似,给 `themes/kiera/static/css/styles.css` 打上这个 [补丁][7]。
-
+title = "About"
+date = 2018-03-03T13:50:49+01:00
+menu = "main" #Display this page on the nav menu
+weight = "30" #Right-most nav item
+meta = "false" #Do not display tags or categories
++++
+> Waves are the practice of the water. Shunryu Suzuki
+```
+
+当你启动你的服务器并输入:`http://localhost:1313/`,你将会看到你的博客。(访问我 GitHub 主页上的 [例子][6] )如果你想让文章的菜单栏和 GitHub 相似,给 `themes/kiera/static/css/styles.css` 打上这个 [补丁][7]。
+
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/start-blog-30-minutes-hugo

-作者:[Marek Czernek][a]
-
-译者:[jrg](https://github.com/jrglinux)
-
-校对:[校对者ID](https://github.com/校对者ID)
+作者:[Marek Czernek][a] 
译者:[jrg](https://github.com/jrglinux) 
校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://opensource.com/users/mczernek - [1]:https://gohugo.io/ - [2]:https://github.com/gohugoio/hugo/releases - [3]:https://gohugo.io/getting-started/installing/ - [4]:https://themes.gohugo.io/ - [5]:https://github.com/avianto/hugo-kiera - [6]:https://m-czernek.github.io/awesome-blog/ - [7]:https://github.com/avianto/hugo-kiera/pull/18/files From 91ba4a1a4e21569a207d790e899be1a857c8c3e2 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 25 Sep 2018 22:38:49 +0800 Subject: [PATCH 023/736] PUB:20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md @jrglinux https://linux.cn/article-10048-1.html --- ...30 minutes with Hugo, a static site generator written in Go.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md (100%) diff --git a/translated/tech/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md b/published/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md similarity index 100% rename from translated/tech/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md rename to published/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md From 244a857e0f0f69d469a8ff6b1f96b6a6ca0f0a5e Mon Sep 17 00:00:00 2001 From: GraveAccent Date: Tue, 25 Sep 2018 23:25:55 +0800 Subject: [PATCH 024/736] =?UTF-8?q?GraveAccent=E7=BF=BB=E8=AF=91=E5=AE=8C?= =?UTF-8?q?=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Rendering in React using Ternaries and.md | 206 ------------------ ... 
Rendering in React using Ternaries and.md | 205 +++++++++++++++++ 2 files changed, 205 insertions(+), 206 deletions(-) delete mode 100644 sources/tech/20180201 Conditional Rendering in React using Ternaries and.md create mode 100644 translated/tech/20180201 Conditional Rendering in React using Ternaries and.md diff --git a/sources/tech/20180201 Conditional Rendering in React using Ternaries and.md b/sources/tech/20180201 Conditional Rendering in React using Ternaries and.md deleted file mode 100644 index b99c787e31..0000000000 --- a/sources/tech/20180201 Conditional Rendering in React using Ternaries and.md +++ /dev/null @@ -1,206 +0,0 @@ -GraveAccent 翻译中 Conditional Rendering in React using Ternaries and Logical AND -============================================================ - - -![](https://cdn-images-1.medium.com/max/2000/1*eASRJrCIVgsy5VbNMAzD9w.jpeg) -Photo by [Brendan Church][1] on [Unsplash][2] - -There are several ways that your React component can decide what to render. You can use the traditional `if` statement or the `switch` statement. In this article, we’ll explore a few alternatives. But be warned that some come with their own gotchas, if you’re not careful. - -### Ternary vs if/else - -Let’s say we have a component that is passed a `name` prop. If the string is non-empty, we display a greeting. Otherwise we tell the user they need to sign in. - -Here’s a Stateless Function Component (SFC) that does just that. - -``` -const MyComponent = ({ name }) => { - if (name) { - return ( -
- Hello {name} -
- ); - } - return ( -
- Please sign in -
- ); -}; -``` - -Pretty straightforward. But we can do better. Here’s the same component written using a conditional ternary operator. - -``` -const MyComponent = ({ name }) => ( -
- {name ? `Hello ${name}` : 'Please sign in'} -
-); -``` - -Notice how concise this code is compared to the example above. - -A few things to note. Because we are using the single statement form of the arrow function, the `return` statement is implied. Also, using a ternary allowed us to DRY up the duplicate `
` markup. 🎉 - -### Ternary vs Logical AND - -As you can see, ternaries are wonderful for `if/else` conditions. But what about simple `if` conditions? - -Let’s look at another example. If `isPro` (a boolean) is `true`, we are to display a trophy emoji. We are also to render the number of stars (if not zero). We could go about it like this. - -``` -const MyComponent = ({ name, isPro, stars}) => ( -
-
- Hello {name} - {isPro ? '🏆' : null} -
- {stars ? ( -
- Stars:{'⭐️'.repeat(stars)} -
- ) : null} -
-); -``` - -But notice the “else” conditions return `null`. This is becasue a ternary expects an else condition. - -For simple `if` conditions, we could use something a little more fitting: the logical AND operator. Here’s the same code written using a logical AND. - -``` -const MyComponent = ({ name, isPro, stars}) => ( -
-
- Hello {name} - {isPro && '🏆'} -
- {stars && ( -
- Stars:{'⭐️'.repeat(stars)} -
- )} -
-); -``` - -Not too different, but notice how we eliminated the `: null` (i.e. else condition) at the end of each ternary. Everything should render just like it did before. - - -Hey! What gives with John? There is a `0` when nothing should be rendered. That’s the gotcha that I was referring to above. Here’s why. - -[According to MDN][3], a Logical AND (i.e. `&&`): - -> `expr1 && expr2` - -> Returns `expr1` if it can be converted to `false`; otherwise, returns `expr2`. Thus, when used with Boolean values, `&&` returns `true` if both operands are true; otherwise, returns `false`. - -OK, before you start pulling your hair out, let me break it down for you. - -In our case, `expr1` is the variable `stars`, which has a value of `0`. Because zero is falsey, `0` is returned and rendered. See, that wasn’t too bad. - -I would write this simply. - -> If `expr1` is falsey, returns `expr1`, else returns `expr2`. - -So, when using a logical AND with non-boolean values, we must make the falsey value return something that React won’t render. Say, like a value of `false`. - -There are a few ways that we can accomplish this. Let’s try this instead. - -``` -{!!stars && ( -
- {'⭐️'.repeat(stars)} -
-)} -``` - -Notice the double bang operator (i.e. `!!`) in front of `stars`. (Well, actually there is no “double bang operator”. We’re just using the bang operator twice.) - -The first bang operator will coerce the value of `stars` into a boolean and then perform a NOT operation. If `stars` is `0`, then `!stars` will produce `true`. - -Then we perform a second NOT operation, so if `stars` is 0, `!!stars` would produce `false`. Exactly what we want. - -If you’re not a fan of `!!`, you can also force a boolean like this (which I find a little wordy). - -``` -{Boolean(stars) && ( -``` - -Or simply give a comparator that results in a boolean value (which some might say is even more semantic). - -``` -{stars > 0 && ( -``` - -#### A word on strings - -Empty string values suffer the same issue as numbers. But because a rendered empty string is invisible, it’s not a problem that you will likely have to deal with, or will even notice. However, if you are a perfectionist and don’t want an empty string on your DOM, you should take similar precautions as we did for numbers above. - -### Another solution - -A possible solution, and one that scales to other variables in the future, would be to create a separate `shouldRenderStars` variable. Then you are dealing with boolean values in your logical AND. - -``` -const shouldRenderStars = stars > 0; -``` - -``` -return ( -
- {shouldRenderStars && ( -
- {'⭐️'.repeat(stars)} -
- )} -
-); -``` - -Then, if in the future, the business rule is that you also need to be logged in, own a dog, and drink light beer, you could change how `shouldRenderStars` is computed, and what is returned would remain unchanged. You could also place this logic elsewhere where it’s testable and keep the rendering explicit. - -``` -const shouldRenderStars = - stars > 0 && loggedIn && pet === 'dog' && beerPref === 'light`; -``` - -``` -return ( -
- {shouldRenderStars && ( -
- {'⭐️'.repeat(stars)} -
- )} -
-); -``` - -### Conclusion - -I’m of the opinion that you should make best use of the language. And for JavaScript, this means using conditional ternary operators for `if/else`conditions and logical AND operators for simple `if` conditions. - -While we could just retreat back to our safe comfy place where we use the ternary operator everywhere, you now possess the knowledge and power to go forth AND prosper. - --------------------------------------------------------------------------------- - -作者简介: - -Managing Editor at the American Express Engineering Blog http://aexp.io and Director of Engineering @AmericanExpress. MyViews !== ThoseOfMyEmployer. - ----------------- - -via: https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternaries-and-logical-and-7807f53b6935 - -作者:[Donavon West][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://medium.freecodecamp.org/@donavon -[1]:https://unsplash.com/photos/pKeF6Tt3c08?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[2]:https://unsplash.com/search/photos/road-sign?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators diff --git a/translated/tech/20180201 Conditional Rendering in React using Ternaries and.md b/translated/tech/20180201 Conditional Rendering in React using Ternaries and.md new file mode 100644 index 0000000000..aa7ba0017e --- /dev/null +++ b/translated/tech/20180201 Conditional Rendering in React using Ternaries and.md @@ -0,0 +1,205 @@ +在 React 条件渲染中使用三元表达式和 “&&” +============================================================ + +![](https://cdn-images-1.medium.com/max/2000/1*eASRJrCIVgsy5VbNMAzD9w.jpeg) +Photo by [Brendan Church][1] on [Unsplash][2] + +React 组件可以通过多种方式决定渲染内容。你可以使用传统的 if 语句或 switch 
语句。在本文中,我们将探讨一些替代方案。但要注意,如果你不小心,有些方案会带来自己的陷阱。 + +### 三元表达式 vs if/else + +假设我们有一个组件被传进来一个 `name` prop。 如果这个字符串非空,我们会显示一个问候语。否则,我们会告诉用户他们需要登录。 + +这是一个只实现了如上功能的无状态函数式组件。 + +``` +const MyComponent = ({ name }) => { + if (name) { + return ( +
+ Hello {name} +
+ ); + } + return ( +
+ Please sign in +
+ ); +}; +``` + +这个很简单但是我们可以做得更好。这是使用三元运算符编写的相同组件。 + +``` +const MyComponent = ({ name }) => ( +
+ {name ? `Hello ${name}` : 'Please sign in'} +
+); +``` + +请注意这段代码与上面的例子相比是多么简洁。 + +有几点需要注意。因为我们使用了箭头函数的单语句形式,所以隐含了return语句。另外,使用三元运算符允许我们省略掉重复的 `
` 标记。🎉 + +### 三元表达式 vs && + +正如您所看到的,三元表达式用于表达 if/else 条件式非常好。但是对于简单的 if 条件式怎么样呢? + +让我们看另一个例子。如果 isPro(一个布尔值)为真,我们将显示一个奖杯表情符号。我们也要渲染星星的数量(如果不是0)。我们可以这样写。 + +``` +const MyComponent = ({ name, isPro, stars}) => ( +
+
+ Hello {name} + {isPro ? '🏆' : null} +
+ {stars ? ( +
+ Stars:{'⭐️'.repeat(stars)} +
+ ) : null} +
+); +``` + +请注意 “else” 条件返回 null 。 这是因为三元表达式要有"否则"条件。 + +对于简单的 “if” 条件式,我们可以使用更合适的东西:&& 运算符。这是使用 “&&” 编写的相同代码。 + +``` +const MyComponent = ({ name, isPro, stars}) => ( +
+
+ Hello {name} + {isPro && '🏆'} +
+ {stars && ( +
+ Stars:{'⭐️'.repeat(stars)} +
+ )} +
+); +``` + +没有太多区别,但是注意我们消除了每个三元表达式最后面的 `: null` (else 条件式)。一切都应该像以前一样渲染。 + + +嘿!约翰得到了什么?当什么都不应该渲染时,只有一个0。这就是我上面提到的陷阱。这里有解释为什么。 + +[根据 MDN][3],一个逻辑运算符“和”(也就是`&&`): + +> `expr1 && expr2` + +> 如果 `expr1` 可以被转换成 `false` ,返回 `expr1`;否则返回 `expr2`。 如此,当与布尔值一起使用时,如果两个操作数都是 true,`&&` 返回 `true` ;否则,返回 `false`。 + +好的,在你开始拔头发之前,让我为你解释它。 + +在我们这个例子里, `expr1` 是变量 `stars`,它的值是 `0`,因为0是 falsey 的值, `0` 会被返回和渲染。看,这还不算太坏。 + +我会简单地这么写。 + +> 如果 `expr1` 是 falsey,返回 `expr1` ,否则返回 `expr2` + +所以,当对非布尔值使用 “&&” 时,我们必须让 falsy 的值返回 React 无法渲染的东西,比如说,`false` 这个值。 + +我们可以通过几种方式实现这一目标。让我们试试吧。 + +``` +{!!stars && ( +
+ {'⭐️'.repeat(stars)} +
+)} +``` + +注意 `stars` 前的双感叹操作符( `!!`)(呃,其实没有双感叹操作符。我们只是用了感叹操作符两次)。 + +第一个感叹操作符会强迫 `stars` 的值变成布尔值并且进行一次“非”操作。如果 `stars` 是 `0` ,那么 `!stars` 会 是 `true`。 + +然后我们执行第二个`非`操作,所以如果 `stars` 是0,`!!stars` 会是 `false`。正好是我们想要的。 + +如果你不喜欢 `!!`,那么你也可以强制转换出一个布尔数比如这样(这种方式我觉得有点冗长)。 + +``` +{Boolean(stars) && ( +``` + +或者只是用比较符产生一个布尔值(有些人会说这样甚至更加语义化)。 + +``` +{stars > 0 && ( +``` + +#### 关于字符串 + +空字符串与数字有一样的毛病。但是因为渲染后的空字符串是不可见的,所以这不是那种你很可能会去处理的难题,甚至可能不会注意到它。然而,如果你是完美主义者并且不希望DOM上有空字符串,你应采取我们上面对数字采取的预防措施。 + +### 其它解决方案 + +一种可能的将来可扩展到其他变量的解决方案,是创建一个单独的 `shouldRenderStars` 变量。然后你用“&&”处理布尔值。 + +``` +const shouldRenderStars = stars > 0; +``` + +``` +return ( +
+ {shouldRenderStars && ( +
+ {'⭐️'.repeat(stars)} +
+ )} +
+); +``` + +之后,在将来,如果业务规则要求你还需要已登录,拥有一条狗以及喝淡啤酒,你可以改变 `shouldRenderStars` 的得出方式,而返回的内容保持不变。你还可以把这个逻辑放在其它可测试的地方,并且保持渲染明晰。 + +``` +const shouldRenderStars = + stars > 0 && loggedIn && pet === 'dog' && beerPref === 'light`; +``` + +``` +return ( +
+ {shouldRenderStars && ( +
+ {'⭐️'.repeat(stars)} +
+ )} +
+); +``` + +### 结论 + +我认为你应该充分利用这种语言。对于 JavaScript,这意味着为 `if/else` 条件式使用三元表达式,以及为 `if` 条件式使用 `&&` 操作符。 + +我们可以回到每处都使用三元运算符的舒适区,但你现在消化了这些知识和力量,可以继续前进 && 取得成功了。 + +-------------------------------------------------------------------------------- + +作者简介: + +美国运通工程博客的执行编辑 http://aexp.io 以及 @AmericanExpress 的工程总监。MyViews !== ThoseOfMyEmployer. + +---------------- + +via: https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternaries-and-logical-and-7807f53b6935 + +作者:[Donavon West][a] +译者:[GraveAccent](https://github.com/GraveAccent) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://medium.freecodecamp.org/@donavon +[1]:https://unsplash.com/photos/pKeF6Tt3c08?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[2]:https://unsplash.com/search/photos/road-sign?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[3]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators From bb41fabb48c81c9f48faacdb09519cb1cc94c037 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 26 Sep 2018 08:57:21 +0800 Subject: [PATCH 025/736] translated --- ...age Manager To Use IPv4 In Ubuntu 16.04.md | 49 ------------------- ...age Manager To Use IPv4 In Ubuntu 16.04.md | 47 ++++++++++++++++++ 2 files changed, 47 insertions(+), 49 deletions(-) delete mode 100644 sources/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md create mode 100644 translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md diff --git a/sources/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md b/sources/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md deleted file mode 100644 index f81d944570..0000000000 --- a/sources/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md +++ /dev/null @@ -1,49 +0,0 @@ 
-translating---geekpi - -How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04 -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/ipv4-720x340.png) - -**APT** , short or **A** dvanced **P** ackage **T** ool, is the default package manager for Debian-based systems. Using APT, we can install, update, upgrade and remove applications from the system. Lately, I have been facing a strange error. Whenever I try update my Ubuntu 16.04 box, I get this error – **“0% [Connecting to in.archive.ubuntu.com (2001:67c:1560:8001::14)]”** and the update process gets stuck for a long time. My Internet connection is working well and I can able to ping all websites including Ubuntu official site. After a couple Google searches, I realized that sometimes the Ubuntu mirrors are not reachable over IPv6. This problem is solved after I force APT package manager to use IPv4 in place of IPv6 to access Ubuntu mirrors while updating the system. If you ever encountered with this error, you can solve it as described below. - -### Force APT Package Manager To Use IPv4 In Ubuntu 16.04 - -To force APT to use IPv4 in place of IPv6 while updating and upgrading your Ubuntu 16.04 LTS systems, simply use the following commands: - -``` -$ sudo apt-get -o Acquire::ForceIPv4=true update - -$ sudo apt-get -o Acquire::ForceIPv4=true upgrade -``` - -Voila! This time update process run and completed quickly. - -You can also make this persistent for all **apt-get** transactions in the future by adding the following line in **/etc/apt/apt.conf.d/99force-ipv4** file using command: - -``` -$ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4 -``` - -**Disclaimer:** - -I don’t know if anyone is having this issue lately, but I kept getting this error today at least four to five times in my Ubuntu 16.04 LTS virtual machine and I solved it as described above. I am not sure that it is the recommended solution. 
Go through Ubuntu forums and make sure this method is legitimate. Since mine is just a VM which I use it only for testing and learning purposes, I don’t mind about the authenticity of this method. Use it on your own risk. - -Hope this helps. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-force-apt-package-manager-to-use-ipv4-in-ubuntu-16-04/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ diff --git a/translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md b/translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md new file mode 100644 index 0000000000..02bc39addc --- /dev/null +++ b/translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md @@ -0,0 +1,47 @@ +如何在 Ubuntu 16.04 强制 APT 包管理器使用 IPv4 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/ipv4-720x340.png) + +**APT**, 是 **A** dvanced **P** ackage **T** ool 的缩写,是基于 Debian 的系统的默认包管理器。我们可以使用 APT 安装、更新、升级和删除应用程序。最近,我一直遇到一个奇怪的错误。每当我尝试更新我的 Ubuntu 16.04 时,我都会收到此错误 - **“0% [Connecting to in.archive.ubuntu.com (2001:67c:1560:8001::14)]”** ,同时更新流程会卡住很长时间。我的网络连接没问题,我可以 ping 通所有网站,包括 Ubuntu 官方网站。在搜索了一番谷歌后,我意识到 Ubuntu 镜像有时无法通过 IPv6 访问。在我强制将 APT 包管理器在更新系统时使用 IPv4 代替 IPv6 访问 Ubuntu 镜像后,此问题得以解决。如果你遇到过此错误,可以按照以下说明解决。 + +### 强制 APT 包管理器在 Ubuntu 16.04 中使用 IPv4 + +要在更新和升级 Ubuntu 16.04 LTS 系统时强制 APT 使用 IPv4 代替 IPv6,只需使用以下命令: + +``` +$ sudo apt-get -o Acquire::ForceIPv4=true update + +$ sudo apt-get -o Acquire::ForceIPv4=true upgrade +``` + +瞧!这次更新很快就完成了。 + +你还可以使用以下命令在 **/etc/apt/apt.conf.d/99force-ipv4** 中添加以下行,以便将来对所有 **apt-get** 事务保持持久性: + +``` +$ echo 
'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4
+```
+
+**免责声明:**
+
+我不知道最近是否有人遇到这个问题,但我今天在我的 Ubuntu 16.04 LTS 虚拟机中遇到了至少四五次这样的错误,我按照上面的说法解决了这个问题。我不确定这是推荐的解决方案。请浏览 Ubuntu 论坛,确认这种方法是否妥当。由于我的系统只是一个用于测试和学习目的的虚拟机,我并不在意这种方法是否权威。请自行承担使用风险。
+
+希望这有帮助。还有更多的好东西。敬请关注!
+
+干杯!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-force-apt-package-manager-to-use-ipv4-in-ubuntu-16-04/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
From 557ab76142c14188ffce06d086757f49eff391b8 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Wed, 26 Sep 2018 09:03:16 +0800
Subject: [PATCH 026/736] translating

---
 ...Your Linux Desktop With browser-mpris2 (Chrome Extension).md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md
index 1a0f1e9dbe..acc8f56e0c 100644
--- a/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md
+++ b/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension)
 ======
 A Unity feature that I miss (it only actually worked for a short while though) is automatically getting player controls in the Ubuntu Sound Indicator when visiting a website like YouTube in a web browser, so you could pause or stop the video directly from the top bar, as well as see the video / song information and a preview. 
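在把上文提到的 `Acquire::ForceIPv4` 选项持久化到 `/etc/apt/apt.conf.d/` 之前,可以先在一个临时目录里演练同样的写入并检查内容。下面只是一个演示草稿(其中的临时目录与文件名均为假设的示例,并非 APT 的要求):

```shell
# 先在临时目录里演练 99force-ipv4 的写入(目录仅为示例),
# 确认内容无误后,再按正文的命令用 sudo tee 写入 /etc/apt/apt.conf.d/99force-ipv4。
conf_dir=$(mktemp -d)
echo 'Acquire::ForceIPv4 "true";' | tee "$conf_dir/99force-ipv4"
# 检查写入的内容:应当正好是一行 APT 配置
cat "$conf_dir/99force-ipv4"
rm -r "$conf_dir"
```

真正写入系统目录之后,可以用 `apt-config dump | grep ForceIPv4` 确认 APT 是否读到了该选项。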
From db69db3e9257d0ff232615c8cb6d5691373f43ee Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 09:09:53 +0800 Subject: [PATCH 027/736] PRF:20180725 How do private keys work in PKI and cryptography.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @pinewall 翻译的棒极了 --- ...ivate keys work in PKI and cryptography.md | 31 ++++++++++--------- 1 file changed, 16 insertions(+), 15 deletions(-) diff --git a/translated/tech/20180725 How do private keys work in PKI and cryptography.md b/translated/tech/20180725 How do private keys work in PKI and cryptography.md index 6c42531396..e52e7282a7 100644 --- a/translated/tech/20180725 How do private keys work in PKI and cryptography.md +++ b/translated/tech/20180725 How do private keys work in PKI and cryptography.md @@ -1,47 +1,48 @@ -PKI 和 密码学中的私钥的角色 +公钥基础设施和密码学中的私钥的角色 ====== +> 了解如何验证某人所声称的身份。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx) -在[上一篇文章][1]中,我们概述了密码学并讨论了密码学的核心概念:保密性confidentiality (让数据保密),完整性integrity (防止数据被篡改)和身份认证authentication (确认数据源的身份identity)。由于要在存在各种身份混乱的现实世界中完成身份认证,人们逐渐建立起一个复杂的技术生态体系technological ecosystem,用于证明某人就是其声称的那个人。在本文中,我们将大致介绍这些体系是如何工作的。 +在[上一篇文章][1]中,我们概述了密码学并讨论了密码学的核心概念:保密性confidentiality (让数据保密)、完整性integrity (防止数据被篡改)和身份认证authentication (确认数据源的身份identity)。由于要在存在各种身份混乱的现实世界中完成身份认证,人们逐渐建立起一个复杂的技术生态体系technological ecosystem,用于证明某人就是其声称的那个人。在本文中,我们将大致介绍这些体系是如何工作的。 -### 公钥密码学及数字签名快速回顾 +### 快速回顾公钥密码学及数字签名 互联网世界中的身份认证依赖于公钥密码学,其中密钥分为两部分:拥有者需要保密的私钥和可以对外公开的公钥。经过公钥加密过的数据,只能用对应的私钥解密。举个例子,对于希望与[记者][2]建立联系的举报人来说,这个特性非常有用。但就本文介绍的内容而言,私钥更重要的用途是与一个消息一起创建一个数字签名digital signature,用于提供完整性和身份认证。 -在实际应用中,我们签名的并不是真实消息,而是经过密码学哈希函数cryptographic hash function处理过的消息摘要digest。要发送一个包含源代码的压缩文件,发送者会对该压缩文件的 256 比特长度的 [SHA-256][3] 摘要而不是文件本身进行签名,然后用明文发送该压缩包(和签名)。接收者会独立计算收到文件的 SHA-256 
摘要,然后结合该摘要、收到的签名及发送者的公钥,使用签名验证算法进行验证。验证过程取决于加密算法,加密算法不同,验证过程也相应不同;而且,由于不断发现微妙的触发条件,签名验证[漏洞][4]依然[层出不穷][5]。如果签名验证通过,说明文件在传输过程中没有被篡改而且来自于发送者,这是因为只有发送者拥有创建签名所需的私钥。 +在实际应用中,我们签名的并不是真实消息,而是经过密码学哈希函数cryptographic hash function处理过的消息摘要digest。要发送一个包含源代码的压缩文件,发送者会对该压缩文件的 256 比特长度的 [SHA-256][3] 摘要进行签名,而不是文件本身进行签名,然后用明文发送该压缩包(和签名)。接收者会独立计算收到文件的 SHA-256 摘要,然后结合该摘要、收到的签名及发送者的公钥,使用签名验证算法进行验证。验证过程取决于加密算法,加密算法不同,验证过程也相应不同;而且,很微妙的是签名验证[漏洞][4]依然[层出不穷][5]。如果签名验证通过,说明文件在传输过程中没有被篡改而且来自于发送者,这是因为只有发送者拥有创建签名所需的私钥。 ### 方案中缺失的环节 上述方案中缺失了一个重要的环节:我们从哪里获得发送者的公钥?发送者可以将公钥与消息一起发送,但除了发送者的自我宣称,我们无法核验其身份。假设你是一名银行柜员,一名顾客走过来向你说,“你好,我是 Jane Doe,我要取一笔钱”。当你要求其证明身份时,她指着衬衫上贴着的姓名标签说道,“看,Jane Doe!”。如果我是这个柜员,我会礼貌的拒绝她的请求。 -如果你认识发送者,你们可以私下见面并彼此交换公钥。如果你并不认识发送者,你们可以私下见面,检查对方的证件,确认真实性后接受对方的公钥。为提高流程效率,你可以举办聚会并邀请一堆人,检查他们的证件,然后接受他们的公钥。此外,如果你认识并信任 Jane Doe (尽管她在银行的表现比较反常),Jane 可以参加聚会,收集大家的公钥然后交给你。事实上,Jane 可以使用她自己的私钥对这些公钥(及对应的身份信息)进行签名,进而你可以从一个[线上密钥库][7]获取公钥(及对应的身份信息)并信任已被 Jane 签名的那部分。如果一个人的公钥被很多你信任的人(即使你并不认识他们)签名,你也可能选择信任这个人。按照这种方式,你可以建立一个[信任网络Web of Trust][8]。 +如果你认识发送者,你们可以私下见面并彼此交换公钥。如果你并不认识发送者,你们可以私下见面,检查对方的证件,确认真实性后接受对方的公钥。为提高流程效率,你可以举办[聚会][6]并邀请一堆人,检查他们的证件,然后接受他们的公钥。此外,如果你认识并信任 Jane Doe(尽管她在银行的表现比较反常),Jane 可以参加聚会,收集大家的公钥然后交给你。事实上,Jane 可以使用她自己的私钥对这些公钥(及对应的身份信息)进行签名,进而你可以从一个[线上密钥库][7]获取公钥(及对应的身份信息)并信任已被 Jane 签名的那部分。如果一个人的公钥被很多你信任的人(即使你并不认识他们)签名,你也可能选择信任这个人。按照这种方式,你可以建立一个[信任网络][8]Web of Trust。 -但事情也变得更加复杂:我们需要建立一种标准的编码机制,可以将公钥和其对应的身份信息编码成一个数字捆绑digital bundle,以便我们进一步进行签名。更准确的说,这类数字捆绑被称为证书cerificates。我们还需要可以创建、使用和管理这些证书的工具链。满足诸如此类的各种需求的方案构成了公钥基础设施public key infrastructure, PKI。 +但事情也变得更加复杂:我们需要建立一种标准的编码机制,可以将公钥和其对应的身份信息编码成一个数字捆绑digital bundle,以便我们进一步进行签名。更准确的说,这类数字捆绑被称为证书cerificate。我们还需要可以创建、使用和管理这些证书的工具链。满足诸如此类的各种需求的方案构成了公钥基础设施public key infrastructure(PKI)。 ### 比信任网络更进一步 -你可以用人际关系网类比信任网络。如果人们之间广泛互信,可以很容易找到(两个人之间的)一条短信任链short path of trust:不妨以社交圈为例。基于 [GPG][9] 加密的邮件依赖于信任网络,([理论上][10])只适用于与少量朋友、家庭或同事进行联系的情形。 +你可以用人际关系网类比信任网络。如果人们之间广泛互信,可以很容易找到(两个人之间的)一条短信任链short path of trust:就像一个社交圈。基于 [GPG][9] 
加密的邮件依赖于信任网络,([理论上][10])只适用于与少量朋友、家庭或同事进行联系的情形。 (LCTT 译注:作者提到的“短信任链”应该是暗示“六度空间理论”,即任意两个陌生人之间所间隔的人一般不会超过 6 个。对 GPG 的唱衰,一方面是因为密钥管理的复杂性没有改善,另一方面 Yahoo 和 Google 都提出了更便利的端到端加密方案。) -在实际应用中,信任网络有一些["硬伤"significant problems][11],主要是在可扩展性方面。当网络规模逐渐增大或者人们之间的连接逐渐降低时,信任网络就会慢慢失效。如果信任链逐渐变长,信任链中某人有意或无意误签证书的几率也会逐渐增大。如果信任链不存在,你不得不自己创建一条信任链;具体而言,你与其它组织建立联系,验证它们的密钥符合你的要求。考虑下面的场景,你和你的朋友要访问一个从未使用过的在线商店。你首先需要核验网站所用的公钥属于其对应的公司而不是伪造者,进而建立安全通信信道,最后完成下订单操作。核验公钥的方法包括去实体店、打电话等,都比较麻烦。这样会导致在线购物变得不那么便利(或者说不那么安全,毕竟很多人会图省事,不去核验密钥)。 +在实际应用中,信任网络有一些“[硬伤][11]significant problems”,主要是在可扩展性方面。当网络规模逐渐增大或者人们之间的连接较少时,信任网络就会慢慢失效。如果信任链逐渐变长,信任链中某人有意或无意误签证书的几率也会逐渐增大。如果信任链不存在,你不得不自己创建一条信任链,与其它组织建立联系,验证它们的密钥以符合你的要求。考虑下面的场景,你和你的朋友要访问一个从未使用过的在线商店。你首先需要核验网站所用的公钥属于其对应的公司而不是伪造者,进而建立安全通信信道,最后完成下订单操作。核验公钥的方法包括去实体店、打电话等,都比较麻烦。这样会导致在线购物变得不那么便利(或者说不那么安全,毕竟很多人会图省事,不去核验密钥)。 -如果世界上有那么几个格外值得信任的人,他们专门负责核验和签发网站证书,情况会怎样呢?你可以只信任他们,那么浏览互联网也会变得更加容易。整体来看,这就是当今互联网的工作方式。那些“格外值得信任的人”就是被称为证书颁发机构cerificate authorities, CAs的公司。当网站希望获得公钥签名时,只需向 CA 提交证书签名请求certificate signing request。 +如果世界上有那么几个格外值得信任的人,他们专门负责核验和签发网站证书,情况会怎样呢?你可以只信任他们,那么浏览互联网也会变得更加容易。整体来看,这就是当今互联网的工作方式。那些“格外值得信任的人”就是被称为证书颁发机构cerificate authoritie(CA)的公司。当网站希望获得公钥签名时,只需向 CA 提交证书签名请求certificate signing request(CSR)。 -CSR 类似于包括公钥和身份信息(在本例中,即服务器的主机名)的存根stub证书,但CA 并不会直接对 CSR 本身进行签名。CA 在签名之前会进行一些验证。对于一些证书类型(LCTT 译注:DVDomain Validated 类型),CA 只验证申请者的确是 CSR 中列出主机名对应域名的控制者(例如通过邮件验证,让申请者完成指定的域名解析)。[对于另一些证书类型][12] (LCTT 译注:链接中提到EVExtended Validated 类型,其实还有 OVOrganization Validated 类型),CA 还会检查相关法律文书,例如公司营业执照等。一旦验证完成,CA(一般在申请者付费后)会从 CSR 中取出数据(即公钥和身份信息),使用 CA 自己的私钥进行签名,创建一个(签名)证书并发送给申请者。申请者将该证书部署在网站服务器上,当用户使用 HTTPS (或其它基于 [TLS][13] 加密的协议)与服务器通信时,该证书被分发给用户。 +CSR 类似于包括公钥和身份信息(在本例中,即服务器的主机名)的存根stub证书,但 CA 并不会直接对 CSR 本身进行签名。CA 在签名之前会进行一些验证。对于一些证书类型(LCTT 译注:域名证实Domain Validated(DV) 类型),CA 只验证申请者的确是 CSR 中列出主机名对应域名的控制者(例如通过邮件验证,让申请者完成指定的域名解析)。[对于另一些证书类型][12] (LCTT 译注:链接中提到扩展证实Extended Validated(EV)类型,其实还有 OVOrganization Validated 类型),CA 还会检查相关法律文书,例如公司营业执照等。一旦验证完成,CA(一般在申请者付费后)会从 CSR 
中取出数据(即公钥和身份信息),使用 CA 自己的私钥进行签名,创建一个(签名)证书并发送给申请者。申请者将该证书部署在网站服务器上,当用户使用 HTTPS (或其它基于 [TLS][13] 加密的协议)与服务器通信时,该证书被分发给用户。 -当用户访问该网站时,浏览器获取该证书,接着检查证书中的主机名是否与当前正在连接的网站一致(下文会详细说明),核验 CA 签名有效性。如果其中一步验证不通过,浏览器会给出安全警告并切断与网站的连接。反之,如果验证通过,浏览器会使用证书中的公钥核验服务器发送的签名信息,确认该服务器持有该证书的私钥。有几种算法用于协商后续通信用到的共享密钥shared secret key,其中一种也用到了服务器发送的签名信息。密钥交换Key exchange算法不在本文的讨论范围,可以参考这个[视频][14],其中仔细说明了一种密钥交换算法。 +当用户访问该网站时,浏览器获取该证书,接着检查证书中的主机名是否与当前正在连接的网站一致(下文会详细说明),核验 CA 签名有效性。如果其中一步验证不通过,浏览器会给出安全警告并切断与网站的连接。反之,如果验证通过,浏览器会使用证书中的公钥来核验该服务器发送的签名信息,确认该服务器持有该证书的私钥。有几种算法用于协商后续通信用到的共享密钥shared secret key,其中一种也用到了服务器发送的签名信息。密钥交换key exchange算法不在本文的讨论范围,可以参考这个[视频][14],其中仔细说明了一种密钥交换算法。 ### 建立信任 你可能会问,“如果 CA 使用其私钥对证书进行签名,也就意味着我们需要使用 CA 的公钥验证证书。那么 CA 的公钥从何而来,谁对其进行签名呢?” 答案是 CA 对自己签名!可以使用证书公钥对应的私钥,对证书本身进行签名!这类签名证书被称为是自签名的self-signed;在 PKI 体系下,这意味着对你说“相信我”。(为了表达方便,人们通常说用证书进行了签名,虽然真正用于签名的私钥并不在证书中。) -通过遵守[浏览器][15]和[操作系统][16]供应商建立的规则,CA 表明自己足够可靠并寻求加入到浏览器或操作系统预装的一组自签名证书中。这些证书被称为“信任锚trust anchors”或 CA 根证书root CA certificates,被存储在根证书区,我们约定implicitly信任该区域内的证书。 +通过遵守[浏览器][15]和[操作系统][16]供应商建立的规则,CA 表明自己足够可靠并寻求加入到浏览器或操作系统预装的一组自签名证书中。这些证书被称为“信任锚trust anchor”或 CA 根证书root CA certificate,被存储在根证书区,我们约定implicitly信任该区域内的证书。 CA 也可以签发一种特殊的证书,该证书自身可以作为 CA。在这种情况下,它们可以生成一个证书链。要核验证书链,需要从“信任锚”(也就是 CA 根证书)开始,使用当前证书的公钥核验下一层证书的签名(或其它一些信息)。按照这个方式依次核验下一层证书,直到证书链底部。如果整个核验过程没有问题,信任链也建立完成。当向 CA 付费为网站签发证书时,实际购买的是将证书放置在证书链下的权利。CA 将卖出的证书标记为“不可签发子证书”,这样它们可以在适当的长度终止信任链(防止其继续向下扩展)。 -为何要使用长度超过 2 的信任链呢?毕竟网站的证书可以直接被 CA 根证书签名。在实际应用中,很多因素促使 CA 创建中间 CA 证书intermediate CA certificate,最主要是为了方便。由于价值连城,CA 根证书对应的私钥通常被存放在特定的设备中,一种需要多人解锁的硬件安全模块hardware security module, HSM,该模块完全离线并被保管在配备监控和报警设备的[地下室][18]中。 +为何要使用长度超过 2 的信任链呢?毕竟网站的证书可以直接被 CA 根证书签名。在实际应用中,很多因素促使 CA 创建中间 CA 证书intermediate CA certificate,最主要是为了方便。由于价值连城,CA 根证书对应的私钥通常被存放在特定的设备中,一种需要多人解锁的硬件安全模块hardware security module(HSM),该模块完全离线并被保管在配备监控和报警设备的[地下室][18]中。 CA/浏览器论坛CAB Forum, CA/Browser Forum负责管理 CA,[要求][19]任何与 CA 根证书(LCTT 
译注:就像前文提到的那样,这里是指对应的私钥)相关的操作必须由人工完成。设想一下,如果每个证书请求都需要员工将请求内容拷贝到保密介质中、进入地下室、与同事一起解锁 HSM、(使用 CA 根证书对应的私钥)签名证书,最后将签名证书从保密介质中拷贝出来;那么每天为大量网站签发证书是相当繁重乏味的工作。因此,CA 创建内部使用的中间 CA,用于证书签发自动化。 @@ -72,12 +73,12 @@ via: https://opensource.com/article/18/7/private-keys 作者:[Alex Wood][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[pinewall](https://github.com/pinewall) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://opensource.com/users/awood -[1]:https://opensource.com/article/18/5/cryptography-pki +[1]:https://linux.cn/article-9792-1.html [2]:https://theintercept.com/2014/10/28/smuggling-snowden-secrets/ [3]:https://en.wikipedia.org/wiki/SHA-2 [4]:https://www.ietf.org/mail-archive/web/openpgp/current/msg00999.html From 33031b4e014adb372bf439f1293bcf50a1b00f99 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 09:10:24 +0800 Subject: [PATCH 028/736] PUB:20180725 How do private keys work in PKI and cryptography.md @pinewall https://linux.cn/article-10049-1.html --- .../20180725 How do private keys work in PKI and cryptography.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180725 How do private keys work in PKI and cryptography.md (100%) diff --git a/translated/tech/20180725 How do private keys work in PKI and cryptography.md b/published/20180725 How do private keys work in PKI and cryptography.md similarity index 100% rename from translated/tech/20180725 How do private keys work in PKI and cryptography.md rename to published/20180725 How do private keys work in PKI and cryptography.md From 832df53f305c831565142cb9d5b3475713ecde01 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 09:55:09 +0800 Subject: [PATCH 029/736] PRF:20180826 How to capture and analyze packets with tcpdump command on Linux.md @ypingcn --- ...e packets with tcpdump command on Linux.md | 99 
+++++++++---------- 1 file changed, 45 insertions(+), 54 deletions(-) diff --git a/translated/tech/20180826 How to capture and analyze packets with tcpdump command on Linux.md b/translated/tech/20180826 How to capture and analyze packets with tcpdump command on Linux.md index 307aeeb0ec..431bbefe5e 100644 --- a/translated/tech/20180826 How to capture and analyze packets with tcpdump command on Linux.md +++ b/translated/tech/20180826 How to capture and analyze packets with tcpdump command on Linux.md @@ -1,26 +1,27 @@ 如何在 Linux 上使用 tcpdump 命令捕获和分析数据包 ====== -tcpdump 是一个有名的命令行**数据包分析**工具。我们可以使用 tcpdump 命令捕获实时 TCP/IP 数据包,这些数据包也可以保存到文件中。之后这些捕获的数据包可以通过 tcpdump 命令进行分析。tcpdump 命令在网络级故障排除时变得非常方便。 + +`tcpdump` 是一个有名的命令行**数据包分析**工具。我们可以使用 `tcpdump` 命令捕获实时 TCP/IP 数据包,这些数据包也可以保存到文件中。之后这些捕获的数据包可以通过 `tcpdump` 命令进行分析。`tcpdump` 命令在网络层面进行故障排除时变得非常方便。 ![](https://www.linuxtechi.com/wp-content/uploads/2018/08/tcpdump-command-examples-linux.jpg) -tcpdump 在大多数 Linux 发行版中都能用,对于基于 Debian 的Linux,可以使用 apt 命令安装它 +`tcpdump` 在大多数 Linux 发行版中都能用,对于基于 Debian 的Linux,可以使用 `apt` 命令安装它。 ``` # apt install tcpdump -y ``` -在基于 RPM 的 Linux 操作系统上,可以使用下面的 yum 命令安装 tcpdump +在基于 RPM 的 Linux 操作系统上,可以使用下面的 `yum` 命令安装 `tcpdump`。 ``` # yum install tcpdump -y ``` -当我们在没用任何选项的情况下运行 tcpdump 命令时,它将捕获所有接口的数据包。因此,要停止或取消 tcpdump 命令,请输入 '**ctrl+c**'。在本教程中,我们将使用不同的实例来讨论如何捕获和分析数据包, +当我们在没用任何选项的情况下运行 `tcpdump` 命令时,它将捕获所有接口的数据包。因此,要停止或取消 `tcpdump` 命令,请键入 `ctrl+c`。在本教程中,我们将使用不同的实例来讨论如何捕获和分析数据包。 -### 示例: 1) 从特定接口捕获数据包 +### 示例:1)从特定接口捕获数据包 -当我们在没用任何选项的情况下运行 tcpdump 命令时,它将捕获所有接口上的数据包,因此,要从特定接口捕获数据包,请使用选项 '**-i**',后跟接口名称。 +当我们在没用任何选项的情况下运行 `tcpdump` 命令时,它将捕获所有接口上的数据包,因此,要从特定接口捕获数据包,请使用选项 `-i`,后跟接口名称。 语法: @@ -28,7 +29,7 @@ tcpdump 在大多数 Linux 发行版中都能用,对于基于 Debian 的Linux # tcpdump -i {接口名} ``` -假设我想从接口“enp0s3”捕获数据包 +假设我想从接口 `enp0s3` 捕获数据包。 输出将如下所示, @@ -49,21 +50,21 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes ``` -### 示例: 2) 从特定接口捕获特定数量数据包 +### 示例:2)从特定接口捕获特定数量数据包 
-假设我们想从特定接口(如“enp0s3”)捕获12个数据包,这可以使用选项 '**-c {数量} -I {接口名称}**' 轻松实现 +假设我们想从特定接口(如 `enp0s3`)捕获 12 个数据包,这可以使用选项 `-c {数量} -I {接口名称}` 轻松实现。 ``` root@compute-0-1 ~]# tcpdump -c 12 -i enp0s3 ``` -上面的命令将生成如下所示的输出 +上面的命令将生成如下所示的输出, [![N-Number-Packsets-tcpdump-interface][1]][2] -### 示例: 3) 显示 tcpdump 的所有可用接口 +### 示例:3)显示 tcpdump 的所有可用接口 -使用 '**-D**' 选项显示 tcpdump 命令的所有可用接口, +使用 `-D` 选项显示 `tcpdump` 命令的所有可用接口, ``` [root@compute-0-1 ~]# tcpdump -D @@ -86,11 +87,11 @@ root@compute-0-1 ~]# tcpdump -c 12 -i enp0s3 [[email protected] ~]# ``` -我正在我的一个openstack计算节点上运行tcpdump命令,这就是为什么在输出中你会看到数字接口、标签接口、网桥和vxlan接口 +我正在我的一个 openstack 计算节点上运行 `tcpdump` 命令,这就是为什么在输出中你会看到数字接口、标签接口、网桥和 vxlan 接口 -### 示例: 4) 捕获带有可读时间戳(-tttt 选项)的数据包 +### 示例:4)捕获带有可读时间戳的数据包(`-tttt` 选项) -默认情况下,在tcpdump命令输出中,没有显示可读性好的时间戳,如果您想将可读性好的时间戳与每个捕获的数据包相关联,那么使用 '**-tttt**'选项,示例如下所示, +默认情况下,在 `tcpdump` 命令输出中,不显示可读性好的时间戳,如果您想将可读性好的时间戳与每个捕获的数据包相关联,那么使用 `-tttt` 选项,示例如下所示, ``` [[email protected] ~]# tcpdump -c 8 -tttt -i enp0s3 @@ -108,12 +109,11 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes 134 packets received by filter 69 packets dropped by kernel [[email protected] ~]# - ``` -### 示例: 5) 捕获数据包并将其保存到文件( -w 选项) +### 示例:5)捕获数据包并将其保存到文件(`-w` 选项) -使用 tcpdump 命令中的 '**-w**' 选项将捕获的 TCP/IP 数据包保存到一个文件中,以便我们可以在将来分析这些数据包以供进一步分析。 +使用 `tcpdump` 命令中的 `-w` 选项将捕获的 TCP/IP 数据包保存到一个文件中,以便我们可以在将来分析这些数据包以供进一步分析。 语法: @@ -121,9 +121,9 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes # tcpdump -w 文件名.pcap -i {接口名} ``` -注意:文件扩展名必须为 **.pcap** +注意:文件扩展名必须为 `.pcap`。 -假设我要把 '**enp0s3**' 接口捕获到的包保存到文件名为 **enp0s3-26082018.pcap** +假设我要把 `enp0s3` 接口捕获到的包保存到文件名为 `enp0s3-26082018.pcap`。 ``` [root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3 @@ -140,24 +140,23 @@ tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 b [root@compute-0-1 ~]# ls anaconda-ks.cfg enp0s3-26082018.pcap [root@compute-0-1 ~]# - ``` -捕获并保存大小**大于 N 字节**的数据包 +捕获并保存大小**大于 N 字节**的数据包。 ``` 
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-2.pcap greater 1024 ``` -捕获并保存大小**小于 N 字节**的数据包 +捕获并保存大小**小于 N 字节**的数据包。 ``` [root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-3.pcap less 1024 ``` -### 示例: 6) 从保存的文件中读取数据包( -r 选项) +### 示例:6)从保存的文件中读取数据包(`-r` 选项) -在上面的例子中,我们已经将捕获的数据包保存到文件中,我们可以使用选项 '**-r**' 从文件中读取这些数据包,例子如下所示, +在上面的例子中,我们已经将捕获的数据包保存到文件中,我们可以使用选项 `-r` 从文件中读取这些数据包,例子如下所示, ``` [root@compute-0-1 ~]# tcpdump -r enp0s3-26082018.pcap @@ -183,12 +182,11 @@ p,TS val 81359114 ecr 81350901], length 508 2018-08-25 22:03:17.647502 IP controller0.example.com.amqp > compute-0-1.example.com.57788: Flags [.], ack 1956, win 1432, options [nop,nop,TS val 813 52753 ecr 81359114], length 0 ......................................................................................................................... - ``` -### 示例: 7) 仅捕获特定接口上的 IP 地址数据包( -n 选项) +### 示例:7)仅捕获特定接口上的 IP 地址数据包(`-n` 选项) -使用 tcpdump 命令中的 -n 选项,我们能只捕获特定接口上的 IP 地址数据包,示例如下所示, +使用 `tcpdump` 命令中的 `-n` 选项,我们能只捕获特定接口上的 IP 地址数据包,示例如下所示, ``` [root@compute-0-1 ~]# tcpdump -n -i enp0s3 @@ -211,19 +209,18 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes 22:22:28.539595 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1572, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0 22:22:28.539760 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1572:1912, ack 1, win 291, options [nop,nop,TS val 82510007 ecr 20666614], length 340 ......................................................................... 
- ``` -您还可以使用 tcpdump 命令中的 -c 和 -N 选项捕获 N 个 IP 地址包, +您还可以使用 `tcpdump` 命令中的 `-c` 和 `-N` 选项捕获 N 个 IP 地址包, ``` [root@compute-0-1 ~]# tcpdump -c 25 -n -i enp0s3 ``` -### 示例: 8) 仅捕获特定接口上的TCP数据包 +### 示例:8)仅捕获特定接口上的 TCP 数据包 -在 tcpdump 命令中,我们能使用 '**tcp**' 选项来只捕获TCP数据包, +在 `tcpdump` 命令中,我们能使用 `tcp` 选项来只捕获 TCP 数据包, ``` [root@compute-0-1 ~]# tcpdump -i enp0s3 tcp @@ -241,9 +238,9 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes ................................................................................................................................................... ``` -### 示例: 9) 从特定接口上的特定端口捕获数据包 +### 示例:9)从特定接口上的特定端口捕获数据包 -使用 tcpdump 命令,我们可以从特定接口 enp0s3 上的特定端口(例如 22 )捕获数据包 +使用 `tcpdump` 命令,我们可以从特定接口 `enp0s3` 上的特定端口(例如 22)捕获数据包。 语法: @@ -262,13 +259,12 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes 22:54:55.038564 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 940, win 9177, options [nop,nop,TS val 21153238 ecr 84456505], length 0 22:54:55.038708 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 940:1304, ack 1, win 291, options [nop,nop,TS val 84456506 ecr 21153238], length 364 ............................................................................................................................ 
-[root@compute-0-1 ~]# ``` -### 示例: 10) 在特定接口上捕获来自特定来源 IP 的数据包 +### 示例:10)在特定接口上捕获来自特定来源 IP 的数据包 -在tcpdump命令中,使用 '**src**' 关键字后跟 '**IP 地址**',我们可以捕获来自特定来源 IP 的数据包, +在 `tcpdump` 命令中,使用 `src` 关键字后跟 IP 地址,我们可以捕获来自特定来源 IP 的数据包, 语法: @@ -296,17 +292,16 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes 10 packets captured 12 packets received by filter 0 packets dropped by kernel -[root@compute-0-1 ~]# - ``` -### 示例: 11) 在特定接口上捕获来自特定目的IP的数据包 +### 示例:11)在特定接口上捕获来自特定目的 IP 的数据包 语法: ``` # tcpdump -n -i {接口名} dst {IP 地址} ``` + ``` [root@compute-0-1 ~]# tcpdump -n -i enp0s3 dst 169.144.0.1 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode @@ -318,42 +313,39 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes 23:10:43.522157 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 800:996, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196 23:10:43.522346 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 996:1192, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196 ......................................................................................... 
- ``` -### 示例: 12) 捕获两台主机之间的 TCP 数据包通信 +### 示例:12)捕获两台主机之间的 TCP 数据包通信 假设我想捕获两台主机 169.144.0.1 和 169.144.0.20 之间的 TCP 数据包,示例如下所示, ``` [root@compute-0-1 ~]# tcpdump -w two-host-tcp-comm.pcap -i enp0s3 tcp and \(host 169.144.0.1 or host 169.144.0.20\) - ``` -使用 tcpdump 命令只捕获两台主机之间的 SSH 数据包流, +使用 `tcpdump` 命令只捕获两台主机之间的 SSH 数据包流, ``` [root@compute-0-1 ~]# tcpdump -w ssh-comm-two-hosts.pcap -i enp0s3 src 169.144.0.1 and port 22 and dst 169.144.0.20 and port 22 - ``` -示例: 13) 捕获两台主机之间的 UDP 网络数据包(来回) +### 示例:13)捕获两台主机之间(来回)的 UDP 网络数据包 语法: ``` # tcpdump -w -s -i udp and \(host and host \) ``` + ``` [root@compute-0-1 ~]# tcpdump -w two-host-comm.pcap -s 1000 -i enp0s3 udp and \(host 169.144.0.10 and host 169.144.0.20\) - ``` -### 示例: 14) 捕获十六进制和ASCII格式的数据包 +### 示例:14)捕获十六进制和 ASCII 格式的数据包 -使用 tcpdump 命令,我们可以以 ASCII 和十六进制格式捕获 TCP/IP 数据包, +使用 `tcpdump` 命令,我们可以以 ASCII 和十六进制格式捕获 TCP/IP 数据包, -要使用** -A **选项捕获ASCII格式的数据包,示例如下所示: +要使用 `-A` 选项捕获 ASCII 格式的数据包,示例如下所示: ``` [root@compute-0-1 ~]# tcpdump -c 10 -A -i enp0s3 @@ -376,7 +368,7 @@ root@compute-0-1 @.......... .................................................................................................................................................. ``` -要同时以十六进制和 ASCII 格式捕获数据包,请使用** -XX **选项 +要同时以十六进制和 ASCII 格式捕获数据包,请使用 `-XX` 选项。 ``` [root@compute-0-1 ~]# tcpdump -c 10 -XX -i enp0s3 @@ -406,10 +398,9 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes 0x0030: 3693 7c0e 0000 0101 080a 015a a734 0568 6.|........Z.4.h 0x0040: 39af ....................................................................... 
- ``` -这就是本文的全部内容,我希望您能了解如何使用 tcpdump 命令捕获和分析 TCP/IP 数据包。请分享你的反馈和评论。 +这就是本文的全部内容,我希望您能了解如何使用 `tcpdump` 命令捕获和分析 TCP/IP 数据包。请分享你的反馈和评论。 -------------------------------------------------------------------------------- @@ -418,7 +409,7 @@ via: https://www.linuxtechi.com/capture-analyze-packets-tcpdump-command-linux/ 作者:[Pradeep Kumar][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[ypingcn](https://github.com/ypingcn) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f1d5a8dabc1fdca137acc696399688aa5e7b59c4 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 09:57:04 +0800 Subject: [PATCH 030/736] PUB:20180826 How to capture and analyze packets with tcpdump command on Linux.md @ypingcn https://linux.cn/article-10050-1.html --- ...o capture and analyze packets with tcpdump command on Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180826 How to capture and analyze packets with tcpdump command on Linux.md (100%) diff --git a/translated/tech/20180826 How to capture and analyze packets with tcpdump command on Linux.md b/published/20180826 How to capture and analyze packets with tcpdump command on Linux.md similarity index 100% rename from translated/tech/20180826 How to capture and analyze packets with tcpdump command on Linux.md rename to published/20180826 How to capture and analyze packets with tcpdump command on Linux.md From e15bdabdf6660c98492672a8eb0d3a0a8278d018 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 10:16:42 +0800 Subject: [PATCH 031/736] PRF:20180919 Understand Fedora memory usage with top.md @HankChow --- ...Understand Fedora memory usage with top.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/translated/tech/20180919 Understand Fedora memory usage with top.md b/translated/tech/20180919 Understand 
Fedora memory usage with top.md index a55d5d7b55..a8c6906ca1 100644 --- a/translated/tech/20180919 Understand Fedora memory usage with top.md +++ b/translated/tech/20180919 Understand Fedora memory usage with top.md @@ -1,17 +1,17 @@ -使用 `top` 命令了解 Fedora 的内存使用情况 +使用 top 命令了解 Fedora 的内存使用情况 ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/09/memory-top-816x345.jpg) -如果你使用过 `top` 命令来查看 Fedora 系统中的内存使用情况,你可能会惊讶,显示的数值看起来比系统可用的内存消耗更多。下面会详细介绍内存使用情况以及如何理解这些数据。 +如果你使用过 `top` 命令来查看 Fedora 系统中的内存使用情况,你可能会惊讶,看起来消耗的数量比系统可用的内存更多。下面会详细介绍内存使用情况以及如何理解这些数据。 ### 内存实际使用情况 -操作系统对内存的使用方式并不是太通俗易懂,而是有很多不为人知的巧妙方式。通过这些方式,可以在无需用户干预的情况下,让操作系统更有效地使用内存。 +操作系统对内存的使用方式并不是太通俗易懂。事实上,其背后有很多不为人知的巧妙技术在发挥着作用。通过这些方式,可以在无需用户干预的情况下,让操作系统更有效地使用内存。 大多数应用程序都不是系统自带的,但每个应用程序都依赖于安装在系统中的库中的一些函数集。在 Fedora 中,RPM 包管理系统能够确保在安装应用程序时也会安装所依赖的库。 -当应用程序运行时,操作系统并不需要将它要用到的所有信息都加载到物理内存中。而是会为存放代码的存储构建一个映射,称为虚拟内存。操作系统只把需要的部分加载到内存中,当某一个部分不再需要后,这一部分内存就会被释放掉。 +当应用程序运行时,操作系统并不需要将它要用到的所有信息都加载到物理内存中。而是会为存放代码的存储空间构建一个映射,称为虚拟内存。操作系统只把需要的部分加载到内存中,当某一个部分不再需要后,这一部分内存就会被释放掉。 这意味着应用程序可以映射大量的虚拟内存,而使用较少的系统物理内存。特殊情况下,映射的虚拟内存甚至可以比系统实际可用的物理内存更多!而且在操作系统中这种情况也并不少见。 @@ -21,25 +21,25 @@ ### 使用 `top` 命令查看内存使用量 -如果你还没有使用过 `top` 命令,可以打开终端直接执行查看。使用 **Shift + M** 可以按照内存使用量来进行排序。下图是在 Fedora Workstation 中执行的结果,在你的机器上显示的结果可能会略有不同: +如果你还没有使用过 `top` 命令,可以打开终端直接执行查看。使用 `Shift + M` 可以按照内存使用量来进行排序。下图是在 Fedora Workstation 中执行的结果,在你的机器上显示的结果可能会略有不同: ![](https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-09-17-14-23-17.png) -主要通过一下三列来查看内存使用情况:VIRT,RES 和 SHR。目前以 KB 为单位显示相关数值。 +主要通过以下三列来查看内存使用情况:`VIRT`、`RES` 和 `SHR`。目前以 KB 为单位显示相关数值。 -VIRT 列代表该进程映射的虚拟内存。如上所述,虚拟内存不是实际消耗的物理内存。例如, GNOME Shell 进程 gnome-shell 实际上没有消耗超过 3.1 GB 的物理内存,但它对很多更低或更高级的库都有依赖,系统必须对每个库都进行映射,以确保在有需要时可以加载这些库。 +`VIRT` 列代表该进程映射的虚拟virtual内存。如上所述,虚拟内存不是实际消耗的物理内存。例如, GNOME Shell 进程 `gnome-shell` 实际上没有消耗超过 3.1 GB 的物理内存,但它对很多更低或更高级的库都有依赖,系统必须对每个库都进行映射,以确保在有需要时可以加载这些库。 -RES 列代表应用程序消耗了多少实际(驻留)内存。对于 GNOME Shell 大约是 180788 KB。例子中的系统拥有大约 7704 MB 的物理内存,因此内存使用率显示为 2.3%。 
+`RES` 列代表应用程序消耗了多少实际(驻留resident)内存。对于 GNOME Shell 大约是 180788 KB。例子中的系统拥有大约 7704 MB 的物理内存,因此内存使用率显示为 2.3%。 -但根据 SHR 列显示,其中至少有 88212 KB 是共享内存,这部分内存可能是其它应用程序也在使用的库函数。这意味着 GNOME Shell 本身大约有 92 MB 内存不与其他进程共享。需要注意的是,上述例子中的其它程序也共享了很多内存。在某些应用程序中,共享内存在内存使用量中会占很大的比例。 +但根据 `SHR` 列显示,其中至少有 88212 KB 是共享shared内存,这部分内存可能是其它应用程序也在使用的库函数。这意味着 GNOME Shell 本身大约有 92 MB 内存不与其他进程共享。需要注意的是,上述例子中的其它程序也共享了很多内存。在某些应用程序中,共享内存在内存使用量中会占很大的比例。 -值得一提的是,有时进程之间通过内存通信,这些内存也是共享的,但 `top` 工具却不一定能检测到,所以以上的说明也不一定准确。(这一句不太会翻译出来,烦请校对大佬帮忙看看,谢谢) +值得一提的是,有时进程之间通过内存通信,这些内存也是共享的,但 `top` 这样的工具却不一定能检测到,所以以上的说明也不一定准确。 ### 关于交换分区 系统还可以通过交换分区来存储数据(例如硬盘),但读写的速度相对较慢。当物理内存渐渐用满,操作系统就会查找内存中暂时不会使用的部分,将其写出到交换区域等待需要的时候使用。 -因此,如果交换内存的使用量一直偏高,表明系统的物理内存已经供不应求了。尽管错误的内存申请也有可能导致出现这种情况,但如果这种现象经常出现,就需要考虑提升物理内存或者限制某些程序的运行了。 +因此,如果交换内存的使用量一直偏高,表明系统的物理内存已经供不应求了。有时候一个不正常的应用也有可能导致出现这种情况,但如果这种现象经常出现,就需要考虑提升物理内存或者限制某些程序的运行了。 感谢 [Stig Nygaard][1] 在 [Flickr][2] 上提供的图片(CC BY 2.0)。 @@ -50,7 +50,7 @@ via: https://fedoramagazine.org/understand-fedora-memory-usage-top/ 作者:[Paul W. 
Frields][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From ddfd8e8baad7cdea4fdfbe862b8bf36f93e511e8 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 10:17:09 +0800 Subject: [PATCH 032/736] PUB:20180919 Understand Fedora memory usage with top.md @HankChow https://linux.cn/article-10051-1.html --- .../20180919 Understand Fedora memory usage with top.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180919 Understand Fedora memory usage with top.md (100%) diff --git a/translated/tech/20180919 Understand Fedora memory usage with top.md b/published/20180919 Understand Fedora memory usage with top.md similarity index 100% rename from translated/tech/20180919 Understand Fedora memory usage with top.md rename to published/20180919 Understand Fedora memory usage with top.md From 6d4b83d471ed9b473d7aa7a4260c50df36ac328b Mon Sep 17 00:00:00 2001 From: qhwdw <33189910+qhwdw@users.noreply.github.com> Date: Wed, 26 Sep 2018 12:13:48 +0800 Subject: [PATCH 033/736] Translating by qhwdw (#10351) --- ... 
of the Best Linux Educational Software and Games for Kids.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md b/sources/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md index f6013baab2..66850b2260 100644 --- a/sources/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md +++ b/sources/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md @@ -1,3 +1,4 @@ +Translating by qhwdw 5 of the Best Linux Educational Software and Games for Kids ====== From 9c982a4d75ee5612d522be463c93164bf336d6b4 Mon Sep 17 00:00:00 2001 From: Hank Chow <280630620@qq.com> Date: Wed, 26 Sep 2018 12:16:30 +0800 Subject: [PATCH 034/736] translated (#10352) --- ...icks that can save you time and trouble.md | 171 ------------------ ...icks that can save you time and trouble.md | 170 +++++++++++++++++ 2 files changed, 170 insertions(+), 171 deletions(-) delete mode 100644 sources/tech/20180917 Linux tricks that can save you time and trouble.md create mode 100644 translated/tech/20180917 Linux tricks that can save you time and trouble.md diff --git a/sources/tech/20180917 Linux tricks that can save you time and trouble.md b/sources/tech/20180917 Linux tricks that can save you time and trouble.md deleted file mode 100644 index 61fae6d4bc..0000000000 --- a/sources/tech/20180917 Linux tricks that can save you time and trouble.md +++ /dev/null @@ -1,171 +0,0 @@ -HankChow translating - -Linux tricks that can save you time and trouble -====== -Some command line tricks can make you even more productive on the Linux command line. - -![](https://images.idgesg.net/images/article/2018/09/boy-jumping-off-swing-100772498-large.jpg) - -Good Linux command line tricks don’t only save you time and trouble. They also help you remember and reuse complex commands, making it easier for you to focus on what you need to do, not how you should go about doing it. 
In this post, we’ll look at some handy command line tricks that you might come to appreciate. - -### Editing your commands - -When making changes to a command that you're about to run on the command line, you can move your cursor to the beginning or the end of the command line to facilitate your changes using the ^a (control key plus “a”) and ^e (control key plus “e”) sequences. - -You can also fix and rerun a previously entered command with an easy text substitution by putting your before and after strings between **^** characters -- as in ^before^after^. - -``` -$ eho hello world <== oops! - -Command 'eho' not found, did you mean: - - command 'echo' from deb coreutils - command 'who' from deb coreutils - -Try: sudo apt install - -$ ^e^ec^ <== replace text -echo hello world -hello world - -``` - -### Logging into a remote system with just its name - -If you log into other systems from the command line (I do this all the time), you might consider adding some aliases to your system to supply the details. Your alias can provide the username you want to use (which may or may not be the same as your username on your local system) and the identity of the remote server. Use an alias server_name=’ssh -v -l username IP-address' type of command like this: - -``` -$ alias butterfly=”ssh -v -l jdoe 192.168.0.11” -``` - -You can use the system name in place of the IP address if it’s listed in your /etc/hosts file or available through your DNS server. - -And remember you can list your aliases with the **alias** command. 
- -``` -$ alias -alias butterfly='ssh -v -l jdoe 192.168.0.11' -alias c='clear' -alias egrep='egrep --color=auto' -alias fgrep='fgrep --color=auto' -alias grep='grep --color=auto' -alias l='ls -CF' -alias la='ls -A' -alias list_repos='grep ^[^#] /etc/apt/sources.list /etc/apt/sources.list.d/*' -alias ll='ls -alF' -alias ls='ls --color=auto' -alias show_dimensions='xdpyinfo | grep '\''dimensions:'\''' -``` - -It's good practice to test new aliases and then add them to your ~/.bashrc or similar file to be sure they will be available any time you log in. - -### Freezing and thawing out your terminal window - -The ^s (control key plus “s”) sequence will stop a terminal from providing output by running an XOFF (transmit off) flow control. This affects PuTTY sessions, as well as terminal windows on your desktop. Sometimes typed by mistake, however, the way to make the terminal window responsive again is to enter ^q (control key plus “q”). The only real trick here is remembering ^q since you aren't very likely run into this situation very often. - -### Repeating commands - -Linux provides many ways to reuse commands. The key to command reuse is your history buffer and the commands it collects for you. The easiest way to repeat a command is to type an ! followed by the beginning letters of a recently used command. Another is to press the up-arrow on your keyboard until you see the command you want to reuse and then press enter. You can also display previously entered commands and then type ! followed by the number shown next to the command you want to reuse in the displayed command history entries. - -``` -!! 
<== repeat previous command -!ec <== repeat last command that started with "ec" -!76 <== repeat command #76 from command history -``` - -### Watching a log file for updates - -Commands such as tail -f /var/log/syslog will show you lines as they are being added to the specified log file — very useful if you are waiting for some particular activity or want to track what’s happening right now. The command will show the end of the file and then additional lines as they are added. - -``` -$ tail -f /var/log/auth.log -Sep 17 09:41:01 fly CRON[8071]: pam_unix(cron:session): session closed for user smmsp -Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session opened for user root -Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session closed for user root -Sep 17 09:47:00 fly sshd[8124]: Accepted password for shs from 192.168.0.22 port 47792 -Sep 17 09:47:00 fly sshd[8124]: pam_unix(sshd:session): session opened for user shs by -Sep 17 09:47:00 fly systemd-logind[776]: New session 215 of user shs. -Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session opened for user root -Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session closed for user root - <== waits for additional lines to be added -``` - -### Asking for help - -For most Linux commands, you can enter the name of the command followed by the option **\--help** to get some fairly succinct information on what the command does and how to use it. Less extensive than the man command, the --help option often tells you just what you need to know without expanding on all of the options available. - -``` -$ mkdir --help -Usage: mkdir [OPTION]... DIRECTORY... -Create the DIRECTORY(ies), if they do not already exist. - -Mandatory arguments to long options are mandatory for short options too. 
- -m, --mode=MODE set file mode (as in chmod), not a=rwx - umask - -p, --parents no error if existing, make parent directories as needed - -v, --verbose print a message for each created directory - -Z set SELinux security context of each created directory - to the default type - --context[=CTX] like -Z, or if CTX is specified then set the SELinux - or SMACK security context to CTX - --help display this help and exit - --version output version information and exit - -GNU coreutils online help: -Full documentation at: -or available locally via: info '(coreutils) mkdir invocation' -``` - -### Removing files with care - -To add a little caution to your use of the rm command, you can set it up with an alias that asks you to confirm your request to delete files before it goes ahead and deletes them. Some sysadmins make this the default. In that case, you might like the next option even more. - -``` -$ rm -i <== prompt for confirmation -``` - -### Turning off aliases - -You can always disable an alias interactively by using the unalias command. It doesn’t change the configuration of the alias in question; it just disables it until the next time you log in or source the file in which the alias is set up. - -``` -$ unalias rm -``` - -If the **rm -i** alias is set up as the default and you prefer to never have to provide confirmation before deleting files, you can put your **unalias** command in one of your startup files (e.g., ~/.bashrc). - -### Remembering to use sudo - -If you often forget to precede commands that only root can run with “sudo”, there are two things you can do. You can take advantage of your command history by using the “sudo !!” (use sudo to run your most recent command with sudo prepended to it), or you can turn some of these commands into aliases with the required "sudo" attached. - -``` -$ alias update=’sudo apt update’ -``` - -### More complex tricks - -Some useful command line tricks require a little more than a clever alias. 
An alias, after all, replaces a command, often inserting options so you don't have to enter them and allowing you to tack on additional information. If you want something more complex than an alias can manage, you can write a simple script or add a function to your .bashrc or other start-up file. The function below, for example, creates a directory and moves you into it. Once it's been set up, source your .bashrc or other file and you can use commands such as "md temp" to set up a directory and cd into it. - -``` -md () { mkdir -p "$@" && cd "$1"; } -``` - -### Wrap-up - -Working on the Linux command line remains one of the most productive and enjoyable ways to get work done on my Linux systems, but a group of command line tricks and clever aliases can make that experience even better. - -Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind. - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3305811/linux/linux-tricks-that-even-you-can-love.html - -作者:[Sandra Henry-Stocker][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ -[1]: https://www.facebook.com/NetworkWorld/ -[2]: https://www.linkedin.com/company/network-world diff --git a/translated/tech/20180917 Linux tricks that can save you time and trouble.md b/translated/tech/20180917 Linux tricks that can save you time and trouble.md new file mode 100644 index 0000000000..1dbc81bfbd --- /dev/null +++ b/translated/tech/20180917 Linux tricks that can save you time and trouble.md @@ -0,0 +1,170 @@ +让你提高效率的 Linux 技巧 +====== +想要在 Linux 命令行工作中提高效率,你需要使用一些技巧。 + +![](https://images.idgesg.net/images/article/2018/09/boy-jumping-off-swing-100772498-large.jpg) + 
+巧妙的 Linux 命令行技巧能让你节省时间、避免出错,还能让你记住和复用各种复杂的命令,专注在需要做的事情本身,而不是做事的方式。以下介绍一些好用的命令行技巧。 + +### 命令编辑 + +如果要对一个已输入的命令进行修改,可以使用 ^a(ctrl + a)或 ^e(ctrl + e)将光标快速移动到命令的开头或命令的末尾。 + +还可以使用 `^` 字符实现对上一个命令的文本替换并重新执行命令,例如 `^before^after^` 相当于把上一个命令中的 `before` 替换为 `after` 然后重新执行一次。 + +``` +$ eho hello world <== 错误的命令 + +Command 'eho' not found, did you mean: + + command 'echo' from deb coreutils + command 'who' from deb coreutils + +Try: sudo apt install + +$ ^e^ec^ <== 替换 +echo hello world +hello world + +``` + +### 使用远程机器的名称登录到机器上 + +如果使用命令行登录其它机器上,可以考虑添加别名。在别名中,可以填入需要登录的用户名(与本地系统上的用户名可能相同,也可能不同)以及远程机器的登录信息。例如使用 `server_name ='ssh -v -l username IP-address'` 这样的别名命令: + +``` +$ alias butterfly=”ssh -v -l jdoe 192.168.0.11” +``` + +也可以通过在 `/etc/hosts` 文件中添加记录或者在 DNS 服务器中加入解析记录来把 IP 地址替换成易记的机器名称。 + +执行 `alias` 命令可以列出机器上已有的别名。 + +``` +$ alias +alias butterfly='ssh -v -l jdoe 192.168.0.11' +alias c='clear' +alias egrep='egrep --color=auto' +alias fgrep='fgrep --color=auto' +alias grep='grep --color=auto' +alias l='ls -CF' +alias la='ls -A' +alias list_repos='grep ^[^#] /etc/apt/sources.list /etc/apt/sources.list.d/*' +alias ll='ls -alF' +alias ls='ls --color=auto' +alias show_dimensions='xdpyinfo | grep '\''dimensions:'\''' +``` + +只要将新的别名添加到 `~/.bashrc` 或类似的文件中,就可以让别名在每次登录后都能立即生效。 + +### 冻结、解冻终端界面 + +^s(ctrl + s)将通过执行流量控制命令 XOFF 来停止终端输出内容,这会对 PuTTY 会话和桌面终端窗口产生影响。如果误输入了这个命令,可以使用 ^q(ctrl + q)让终端重新响应。所以只需要记住^q 这个组合键就可以了,毕竟这种情况并不多见。 + +### 复用命令 + +Linux 提供了很多让用户复用命令的方法,其核心是通过历史缓冲区收集执行过的命令。复用命令的最简单方法是输入 `!` 然后接最近使用过的命令的开头字母;当然也可以按键盘上的向上箭头,直到看到要复用的命令,然后按 Enter 键。还可以先使用 `history` 显示命令历史,然后输入 `!` 后面再接命令历史记录中需要复用的命令旁边的数字。 + +``` +!! 
<== 复用上一条命令 +!ec <== 复用上一条以 “ec” 开头的命令 +!76 <== 复用命令历史中的 76 号命令 +``` + +### 查看日志文件并动态显示更新内容 + +使用形如 `tail -f /var/log/syslog` 的命令可以查看指定的日志文件,并动态显示文件中增加的内容,需要监控向日志文件中追加内容的的事件时相当有用。这个命令会输出文件内容的末尾部分,并逐渐显示新增的内容。 + +``` +$ tail -f /var/log/auth.log +Sep 17 09:41:01 fly CRON[8071]: pam_unix(cron:session): session closed for user smmsp +Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session opened for user root +Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session closed for user root +Sep 17 09:47:00 fly sshd[8124]: Accepted password for shs from 192.168.0.22 port 47792 +Sep 17 09:47:00 fly sshd[8124]: pam_unix(sshd:session): session opened for user shs by +Sep 17 09:47:00 fly systemd-logind[776]: New session 215 of user shs. +Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session opened for user root +Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session closed for user root + <== 等待显示追加的内容 +``` + +### 寻求帮助 + +对于大多数 Linux 命令,都可以通过在输入命令后加上选项 `--help` 来获得这个命令的作用、用法以及它的一些相关信息。除了 `man` 命令之外, `--help` 选项可以让你在不使用所有扩展选项的情况下获取到所需要的内容。 + +``` +$ mkdir --help +Usage: mkdir [OPTION]... DIRECTORY... +Create the DIRECTORY(ies), if they do not already exist. + +Mandatory arguments to long options are mandatory for short options too. 
+ -m, --mode=MODE set file mode (as in chmod), not a=rwx - umask + -p, --parents no error if existing, make parent directories as needed + -v, --verbose print a message for each created directory + -Z set SELinux security context of each created directory + to the default type + --context[=CTX] like -Z, or if CTX is specified then set the SELinux + or SMACK security context to CTX + --help display this help and exit + --version output version information and exit + +GNU coreutils online help: +Full documentation at: +or available locally via: info '(coreutils) mkdir invocation' +``` + +### 谨慎删除文件 + +如果要谨慎使用 `rm` 命令,可以为它设置一个别名,在删除文件之前需要进行确认才能删除。有些系统管理员会默认使用这个别名,对于这种情况,你可能需要看看下一个技巧。 + +``` +$ rm -i <== 请求确认 +``` + +### 关闭别名 + +你可以使用 `unalias` 命令以交互方式禁用别名。它不会更改别名的配置,而仅仅是暂时禁用,直到下次登录或重新设置了这一个别名才会重新生效。 + +``` +$ unalias rm +``` + +如果已经将 `rm -i` 默认设置为 `rm` 的别名,但你希望在删除文件之前不必进行确认,则可以将 `unalias` 命令放在一个启动文件(例如 ~/.bashrc)中。 + +### 使用 sudo + +如果你经常在只有 root 用户才能执行的命令前忘记使用 `sudo`,这里有两个方法可以解决。一是利用命令历史记录,可以使用 `sudo !!`(使用 `!!` 来运行最近的命令,并在前面添加 `sudo`)来重复执行,二是设置一些附加了所需 `sudo` 的命令别名。 + +``` +$ alias update=’sudo apt update’ +``` + +### 更复杂的技巧 + +有时命令行技巧并不仅仅是一个别名。毕竟,别名能帮你做的只有替换命令以及增加一些命令参数,节省了输入的时间。但如果需要比别名更复杂功能,可以通过编写脚本、向 `.bashrc` 或其他启动文件添加函数来实现。例如,下面这个函数会在创建一个目录后进入到这个目录下。在设置完毕后,执行 `source .bashrc`,就可以使用 `md temp` 这样的命令来创建目录立即进入这个目录下。 + +``` +md () { mkdir -p "$@" && cd "$1"; } +``` + +### 总结 + +使用 Linux 命令行是在 Linux 系统上工作最有效也最有趣的方法,但配合命令行技巧和巧妙的别名可以让你获得更好的体验。 + +加入 [Facebook][1] 和 [LinkedIn][2] 上的 Network World 社区可以和我们一起讨论。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3305811/linux/linux-tricks-that-even-you-can-love.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: 
https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[1]: https://www.facebook.com/NetworkWorld/ +[2]: https://www.linkedin.com/company/network-world + From 5fe1a903760a28633a343e7b56f634b2269fd12a Mon Sep 17 00:00:00 2001 From: LuMing <784315443@qq.com> Date: Wed, 26 Sep 2018 12:17:18 +0800 Subject: [PATCH 035/736] LuuMing translating (#10353) --- ...How to Use the Netplan Network Configuration Tool on Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md index 9ba21a367f..a9d3eb0895 100644 --- a/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md +++ b/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md @@ -1,3 +1,4 @@ +LuuMing translating How to Use the Netplan Network Configuration Tool on Linux ====== From 24b0867ee36f12df2aadfa08fa65fa95350217e7 Mon Sep 17 00:00:00 2001 From: hopefully2333 <787016457@qq.com> Date: Wed, 26 Sep 2018 12:20:36 +0800 Subject: [PATCH 036/736] translated over (#10354) * translated over translated over * Update 20180824 Steam Makes it Easier to Play Windows Games on Linux.md * translated over translated over --- ...t Easier to Play Windows Games on Linux.md | 74 ------------------- ...t Easier to Play Windows Games on Linux.md | 73 ++++++++++++++++++ 2 files changed, 73 insertions(+), 74 deletions(-) delete mode 100644 sources/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md create mode 100644 translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md diff --git a/sources/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md b/sources/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md deleted file mode 100644 index 73080089bf..0000000000 --- a/sources/tech/20180824 Steam Makes it Easier to Play Windows Games on 
Linux.md +++ /dev/null @@ -1,74 +0,0 @@ -translated by hopefully2333 - -Steam Makes it Easier to Play Windows Games on Linux -====== -![Steam Wallpaper][1] - -It’s no secret that the [Linux gaming][2] library offers only a fraction of what the Windows library offers. In fact, many people wouldn’t even consider [switching to Linux][3] simply because most of the games they want to play aren’t available on the platform. - -At the time of writing this article, Linux has just over 5,000 games available on Steam compared to the library’s almost 27,000 total games. Now, 5,000 games may be a lot, but it isn’t 27,000 games, that’s for sure. - -And though almost every new indie game seems to launch with a Linux release, we are still left without a way to play many [Triple-A][4] titles. For me, though there are many titles I would love the opportunity to play, this has never been a make-or-break problem since almost all of my favorite titles are available on Linux since I primarily play indie and [retro games][5] anyway. - -### Meet Proton: a WINE Fork by Steam - -Now, that problem is a thing of the past since this week Valve [announced][6] a new update to Steam Play that adds a forked version of Wine to the Linux and Mac Steam clients called Proton. Yes, the tool is open-source, and Valve has made the source code available on [Github][7]. The feature is still in beta though, so you must opt into the beta Steam client in order to take advantage of this functionality. - -#### With proton, more Windows games are available for Linux on Steam - -What does that actually mean for us Linux users? In short, it means that both Linux and Mac computers can now play all 27,000 of those games without needing to configure something like [PlayOnLinux][8] or [Lutris][9] to do so! Which, let me tell you, can be quite the headache at times. - -The more complicated answer to this is that it sounds too good to be true for a reason. 
Though, in theory, you can play literally every Windows game on Linux this way, there is only a short list of games that are officially supported at launch, including DOOM, Final Fantasy VI, Tekken 7, Star Wars: Battlefront 2, and several more. - -#### You can play all Windows games on Linux (in theory) - -Though the list only has about 30 games thus far, you can force enable Steam to install and play any game through Proton by marking the “Enable Steam Play for all titles” checkbox. But don’t get your hopes too high. They do not guarantee the stability and performance you may be hoping for, so keep your expectations reasonable. - -![Steam Play][10] - -#### Experiencing Proton: Not as bad as I expected - -For example, I installed a few moderately taxing games to put Proton through its paces. One of which was The Elder Scrolls IV: Oblivion, and in the two hours I played the game, it only crashed once, and it was almost immediately after an autosave point during the tutorial. - -I have an Nvidia Gtx 1050 Ti, so I was able to play the game at 1080p with high settings, and I didn’t see a single problem outside of that one crash. The only negative feedback I really have is that the framerate was not nearly as high as it would have been if it was a native game. I got above 60 frames 90% of the time, but I admit it could have been better. - -Every other game that I have installed and launched has also worked flawlessly, granted I haven’t played any of them for an extended amount of time yet. Some games I installed include The Forest, Dead Rising 4, H1Z1, and Assassin’s Creed II (can you tell I like horror games?). - -#### Why is Steam (still) betting on Linux? - -Now, this is all fine and dandy, but why did this happen? Why would Valve spend the time, money, and resources needed to implement something like this? I like to think they did so because they value the Linux community, but if I am honest, I don’t believe we had anything to do with it. 
- -If I had to put money on it, I would say Valve has developed Proton because they haven’t given up on [Steam machines][11] yet. And since [Steam OS][12] is running on Linux, it is in their best interest financially to invest in something like this. The more games available on Steam OS, the more people might be willing to buy a Steam Machine. - -Maybe I am wrong, but I bet this means we will see a new wave of Steam machines coming in the not-so-distant future. Maybe we will see them in one year, or perhaps we won’t see them for another five, who knows! - -Either way, all I know is that I am beyond excited to finally play the games from my Steam library that I have slowly accumulated over the years from all of the Humble Bundles, promo codes, and random times I bought a game on sale just in case I wanted to try to get it running in Lutris. - -#### Excited for more gaming on Linux? - -What do you think? Are you excited about this, or are you afraid fewer developers will create native Linux games because there is almost no need to now? Does Valve love the Linux community, or do they love money? Let us know what you think in the comment section below, and check back in for more FOSS content like this. 
-
--------------------------------------------------------------------------------
-
-via: https://itsfoss.com/steam-play-proton/
-
-作者:[Phillip Prado][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://itsfoss.com/author/phillip/
-[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/steam-wallpaper.jpeg
-[2]:https://itsfoss.com/linux-gaming-guide/
-[3]:https://itsfoss.com/reasons-switch-linux-windows-xp/
-[4]:https://itsfoss.com/triplea-game-review/
-[5]:https://itsfoss.com/play-retro-games-linux/
-[6]:https://steamcommunity.com/games/221410
-[7]:https://github.com/ValveSoftware/Proton/
-[8]:https://www.playonlinux.com/en/
-[9]:https://lutris.net/
-[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/SteamProton.jpg
-[11]:https://store.steampowered.com/sale/steam_machines
-[12]:https://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/
diff --git a/translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md b/translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md
new file mode 100644
index 0000000000..75c42b3ab3
--- /dev/null
+++ b/translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md
@@ -0,0 +1,73 @@
+
+Steam 让我们在 Linux 上玩 Windows 的游戏更加容易
+======
+![Steam Wallpaper][1]
+
+众所周知,Linux 游戏库中的游戏只有 Windows 游戏库中的一部分,实际上,许多人甚至都不会考虑将操作系统转换为 Linux,原因很简单,因为他们喜欢的游戏,大多数都不能在这个平台上运行。
+
+在撰写本文时,Steam 上已有超过 5000 种游戏可以在 Linux 上运行,而 Steam 上的游戏总数已经接近 27000 种了。现在 5000 种游戏可能看起来很多,但还没有达到 27000 种,确实没有。
+
+虽然几乎所有新的独立游戏都是在 Linux 上推出的,但我们仍然无法在这上面玩很多的 3A 大作。对我而言,虽然这其中有很多游戏我都很希望能有机会玩,但这从来都不是一个足以左右我去留的问题。因为我主要是玩独立游戏和复古游戏,所以几乎所有我喜欢的游戏都可以在 Linux 系统上运行。
+
+### 认识 Proton:Steam 的一个 WINE 分叉
+
+现在,这个问题已经成为过去式了,因为本周 Valve 宣布要对 Steam Play 进行一次更新,此次更新会将一个名为 Proton 的分叉版本的 Wine 添加到 Linux 和 
Mac 的客户端中。是的,这个工具是开源的,Valve 已经在 GitHub 上开源了源代码,但该功能仍然处于测试阶段,所以你必须使用测试版的 Steam 客户端才能使用这项功能。
+
+#### 使用 Proton,可以在 Linux 系统上通过 Steam 运行更多的 Windows 游戏
+
+这对我们这些 Linux 用户来说,实际上意味着什么?简单来说,这意味着我们可以在 Linux 和 Mac 这两种操作系统的电脑上运行全部 27000 种游戏,而无需配置像 PlayOnLinux 或 Lutris 这样的服务。我要告诉你的是,配置这些东西有时候会非常让人头疼。
+
+对此更复杂的回答是:这事听起来太美好了,反而不太真实。虽然在理论上,你可以用这种方式在 Linux 上玩所有的 Windows 平台上的游戏,但只有一小部分游戏在推出时会正式支持 Linux。这一小部分游戏包括 DOOM、最终幻想 VI、铁拳 7、星球大战:前线 2,和其他几个。
+
+#### 你可以在 Linux 上玩所有的 Windows 平台的游戏(理论上)
+
+虽然目前该列表只有大约 30 个游戏,你可以点击“为所有游戏使用 Steam Play 进行运行”复选框来强制使用 Steam 的 Proton 来安装和运行任意游戏。但你最好不要有太高的期待,它们的稳定性和性能表现不一定有你希望的那么好,所以请把期望值压低一点。
+
+![Steam Play][10]
+
+#### 体验 Proton:没有我想的那么烂
+
+例如,为了考验一下 Proton 的表现,我通过它安装了几个对配置要求不低的游戏。其中一个是上古卷轴 4:湮没,在我玩这个游戏的两个小时里,它只崩溃了一次,而且几乎是紧跟在游戏教程的自动保存点之后。
+
+我有一块英伟达 GTX 1050 Ti 显卡,所以我可以使用 1080p 的高画质设置来玩这个游戏,而且除了这次崩溃之外没有遇到任何问题。我唯一真正不满的是,它的帧数没有原生游戏那么高。在 90% 的时间里,游戏的帧数都在 60 帧以上,但我知道它的帧数应该能更高。
+
+我安装并运行过的其他所有游戏都运行得很完美,虽然我还没有较长时间地玩过它们中的任何一个。我安装的游戏包括森林、丧尸围城 4、H1Z1 和刺客信条 2(看得出来我喜欢恐怖游戏吧?)。
+
+#### 为什么 Steam(仍然)要下注在 Linux 上?
+
+现在,一切都很好,这件事为什么会发生呢?为什么 Valve 要花费时间、金钱和资源来做这样的事?我倾向于认为,他们这样做是因为他们懂得 Linux 社区的价值,但是如果要我老实地说,我不相信我们和它有任何的关系。
+
+如果要我拿钱打赌,我会说 Valve 开发 Proton 是因为他们还没有放弃 Steam 机器。因为 Steam OS 是基于 Linux 的发行版,在这类东西上投资最符合他们的经济利益:Steam OS 上可用的游戏越多,就会有更多的人愿意购买 Steam 机器。
+
+可能我是错的,但是我敢打赌,我们会在不远的未来看到新一批的 Steam 机器。可能我们会在一年内看到它们,也有可能我们再等五年都见不到,谁知道呢!
+
+无论如何,我只知道,我终于可以兴奋地玩到自己 Steam 游戏库里的游戏了。这些游戏是我多年来通过各种慈善包、促销活动,以及趁打折随手买下攒起来的,当时还想着万一哪天可以试着让它们在 Lutris 中运行。
+
+#### 为 Linux 上越来越多的游戏而激动? 
+
+你怎么看?你对此感到激动吗?或者说你会担心,因为现在几乎没有需求,愿意开发 Linux 原生游戏的开发者会越来越少?Valve 喜欢 Linux 社区,还是说他们喜欢钱?请在下面的评论区告诉我们您的想法,并继续关注更多类似的开源软件文章。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/steam-play-proton/
+
+作者:[Phillip Prado][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[hopefully2333](https://github.com/hopefully2333)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/phillip/
+[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/steam-wallpaper.jpeg
+[2]:https://itsfoss.com/linux-gaming-guide/
+[3]:https://itsfoss.com/reasons-switch-linux-windows-xp/
+[4]:https://itsfoss.com/triplea-game-review/
+[5]:https://itsfoss.com/play-retro-games-linux/
+[6]:https://steamcommunity.com/games/221410
+[7]:https://github.com/ValveSoftware/Proton/
+[8]:https://www.playonlinux.com/en/
+[9]:https://lutris.net/
+[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/SteamProton.jpg
+[11]:https://store.steampowered.com/sale/steam_machines
+[12]:https://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/

From d5e112d38945d772a97645a90acd97eeb0290ebc Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 26 Sep 2018 14:42:58 +0800
Subject: [PATCH 037/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Replac?=
 =?UTF-8?q?e=20one=20Linux=20Distro=20With=20Another=20in=20Dual=20Boot=20?=
 =?UTF-8?q?[Guide]?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Distro With Another in Dual Boot -Guide.md | 160 ++++++++++++++++++
 1 file changed, 160 insertions(+)
 create mode 100644 sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md

diff --git a/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md b/sources/tech/20180925 How to Replace one
Linux Distro With Another in Dual Boot -Guide.md new file mode 100644 index 0000000000..ab9fa8acc3 --- /dev/null +++ b/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md @@ -0,0 +1,160 @@ +How to Replace one Linux Distro With Another in Dual Boot [Guide] +====== +**If you have a Linux distribution installed, you can replace it with another distribution in the dual boot. You can also keep your personal documents while switching the distribution.** + +![How to Replace One Linux Distribution With Another From Dual Boot][1] + +Suppose you managed to [successfully dual boot Ubuntu and Windows][2]. But after reading the [Linux Mint versus Ubuntu discussion][3], you realized that [Linux Mint][4] is more suited for your needs. What would you do now? How would you [remove Ubuntu][5] and [install Mint in dual boot][6]? + +You might think that you need to uninstall [Ubuntu][7] from dual boot first and then repeat the dual booting steps with Linux Mint. Let me tell you something. You don’t need to do all of that. + +If you already have a Linux distribution installed in dual boot, you can easily replace it with another. You don’t have to uninstall the existing Linux distribution. You simply delete its partition and install the new distribution on the disk space vacated by the previous distribution. + +Another good news is that you may be able to keep your Home directory with all your documents and pictures while switching the Linux distributions. + +Let me show you how to switch Linux distributions. + +### Replace one Linux with another from dual boot + + + +Let me describe the scenario I am going to use here. I have Linux Mint 19 installed on my system in dual boot mode with Windows 10. I am going to replace it with elementary OS 5. I’ll also keep my personal files (music, pictures, videos, documents from my home directory) while switching distributions. 
+
+Let’s first take a look at the requirements:
+
+ * A system with Linux and Windows dual boot
+ * A live USB of the Linux distribution you want to install
+ * A backup of your important files in Windows and in Linux on an external disk (optional yet recommended)
+
+
+
+#### Things to keep in mind for keeping your home directory while changing Linux distribution
+
+If you want to keep the files from your existing Linux install as they are, you must have separate root and home partitions. You might have noticed that in my [dual boot tutorials][8], I always go for the ‘Something Else’ option and then manually create root and home partitions instead of choosing the ‘Install alongside Windows’ option. This is where the trouble of manually creating a separate home partition pays off.
+
+Keeping Home on a separate partition is helpful in situations where you want to replace your existing Linux install with another without losing your files.
+
+Note: You must remember the exact username and password of your existing Linux install in order to use the same home directory as it is in the new distribution.
+
+If you don’t have a separate Home partition, you may create it later as well BUT I won’t recommend that. That process is slightly complicated and I don’t want you to mess up your system.
+
+With that much background information, it’s time to see how to replace one Linux distribution with another.
+
+#### Step 1: Create a live USB of the new Linux distribution
+
+Alright! I already mentioned it in the requirements, but I still included it in the main steps to avoid confusion.
+
+You can create a live USB using a startup disk creator like [Etcher][9] in Windows or Linux. The process is simple, so I am not going to list the steps here.
+
+#### Step 2: Boot into the live USB and proceed to installing Linux
+
+Since you have already dual booted before, you probably know the drill. Plug in the live USB, restart your system and, at boot time, press F10 or F12 repeatedly to enter the BIOS settings. 
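The home-directory backup recommended in the requirements above can be scripted before you touch any partitions. A minimal sketch using tar (the `alice` user and paths are placeholders of mine, not from the article; in practice, point the destination at an external disk):

```shell
# Hedged sketch: archive a home directory before repartitioning, so the
# files can be restored if anything goes wrong. Paths are stand-ins.
backup_home() {
    src="$1"    # directory to preserve, e.g. /home/alice
    dest="$2"   # archive file, ideally on an external disk
    tar -czf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
}

# Demo against a scratch directory standing in for a real home:
demo=$(mktemp -d)
mkdir -p "$demo/home/alice"
echo "my document" > "$demo/home/alice/notes.txt"
backup_home "$demo/home/alice" "$demo/home-backup.tar.gz"
tar -tzf "$demo/home-backup.tar.gz"   # lists alice/ and alice/notes.txt
```

Restoring on the new install is just the reverse: extract the archive into /home after creating the matching user.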
+
+In here, choose to boot from the USB. You’ll then see the option to try the live environment or to install it immediately.
+
+You should start the installation procedure. When you reach the ‘Installation type’ screen, choose the ‘Something else’ option.
+
+![Replacing one Linux with another from dual boot][10]
+Select ‘Something else’ here
+
+#### Step 3: Prepare the partition
+
+You’ll see the partitioning screen now. Look closely and you’ll see your Linux installation with the Ext4 file system type.
+
+![Identifying Linux partition in dual boot][11]
+Identify where your Linux is installed
+
+In the above picture, the Ext4 partition labeled as Linux Mint 19 is the root partition. The second Ext4 partition of 82691 MB is the Home partition. I [haven’t used any swap space][12] here.
+
+Now, if you have just one Ext4 partition, that means your home directory is on the same partition as root. In this case, you won’t be able to keep your Home directory. I suggest you copy the important files to an external disk, or else you’ll lose them forever.
+
+It’s time to delete the root partition. Select the root partition and click the – sign. This will create some free space.
+
+![Delete root partition of your existing Linux install][13]
+Delete root partition
+
+When you have the free space, click on the + sign.
+
+![Create root partition for the new Linux][14]
+Create a new root partition
+
+Now you should create a new partition out of this free space. If you had just one root partition in your previous Linux install, you should create root and home partitions here. You can also create a swap partition if you want to.
+
+If you had separate root and home partitions, just create a root partition from the deleted root partition.
+
+![Create root partition for the new Linux][15]
+Creating root partition
+
+You may ask why I used delete and add instead of the ‘change’ option. It’s because a few years ago, using change didn’t work for me. 
So I prefer to do a – and +. Is it superstition? Maybe.
+
+One important thing to do here is to mark the newly created partition to be formatted. If you don’t change the size of the partition, it won’t be formatted unless you explicitly ask it to format. And if the partition is not formatted, you’ll have issues.
+
+![][16]
+It’s important to format the root partition
+
+Now if you already had a separate Home partition on your existing Linux install, you should select it and click on change.
+
+![Recreate home partition][17]
+Retouch the already existing home partition (if any)
+
+You just have to specify that you are mounting it as the home partition.
+
+![Specify the home mount point][18]
+Specify the home mount point
+
+If you had a swap partition, you can repeat the same steps as for the home partition. This time, specify that you want to use the space as swap.
+
+At this stage, you should have a root partition (with its format option selected) and a home partition (and a swap if you want to). Hit the Install Now button to start the installation.
+
+![Verify partitions while replacing one Linux with another][19]
+Verify the partitions
+
+The next few screens will be familiar to you. What matters is the screen where you are asked to create a user and password.
+
+If you had a separate home partition previously and you want to use the same home directory, you MUST use the same username and password that you had before. The computer name doesn’t matter.
+
+![To keep the home partition intact, use the previous user and password][20]
+To keep the home partition intact, use the previous user and password
+
+Your struggle is almost over. You don’t have to do anything else other than wait for the installation to finish.
+
+![Wait for installation to finish][21]
+Wait for installation to finish
+
+Once the installation is over, restart your system. You’ll have a new Linux distribution or version.
+
+In my case, I had the entire home directory of Linux Mint 19 intact in elementary OS. 
All the videos and pictures I had remained as they were. Isn’t that nice?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/replace-linux-from-dual-boot/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Replace-Linux-Distro-from-dual-boot.png
+[2]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
+[3]: https://itsfoss.com/linux-mint-vs-ubuntu/
+[4]: https://www.linuxmint.com/
+[5]: https://itsfoss.com/uninstall-ubuntu-linux-windows-dual-boot/
+[6]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
+[7]: https://www.ubuntu.com/
+[8]: https://itsfoss.com/guide-install-elementary-os-luna/
+[9]: https://etcher.io/
+[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-1.jpg
+[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-2.jpg
+[12]: https://itsfoss.com/swap-size/
+[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-3.jpg
+[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-4.jpg
+[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-5.jpg
+[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-6.jpg
+[17]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-7.jpg
+[18]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-8.jpg
+[19]: 
https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-9.jpg +[20]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-10.jpg +[21]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-11.jpg From bade2cc0b46f98aaa8067426f60b1f9fc1ae2d6e Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 26 Sep 2018 14:46:34 +0800 Subject: [PATCH 038/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Why=20Linux=20use?= =?UTF-8?q?rs=20should=20try=20Rust?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0180924 Why Linux users should try Rust.md | 171 ++++++++++++++++++ 1 file changed, 171 insertions(+) create mode 100644 sources/tech/20180924 Why Linux users should try Rust.md diff --git a/sources/tech/20180924 Why Linux users should try Rust.md b/sources/tech/20180924 Why Linux users should try Rust.md new file mode 100644 index 0000000000..db60883eb9 --- /dev/null +++ b/sources/tech/20180924 Why Linux users should try Rust.md @@ -0,0 +1,171 @@ +Why Linux users should try Rust +====== + +![](https://images.idgesg.net/images/article/2018/09/rust-rusted-metal-100773678-large.jpg) + +Rust is a fairly young and modern programming language with a lot of features that make it incredibly flexible and very secure. It's also becoming quite popular, having won first place for the "most loved programming language" in the Stack Overflow Developer Survey three years in a row — [2016][1], [2017][2], and [2018][3]. + +Rust is also an _open-source_ language with a suite of special features that allow it to be adapted to many different programming projects. It grew out of what was a personal project of a Mozilla employee back in 2006, was picked up as a special project by Mozilla a few years later (2009), and then announced for public use in 2010. + +Rust programs run incredibly fast, prevent segfaults, and guarantee thread safety. 
These attributes make the language tremendously appealing to developers focused on application security. Rust is also a very readable language and one that can be used for anything from simple programs to very large and complex projects.
+
+Rust is:
+
+ * Memory safe — Rust will not suffer from dangling pointers, buffer overflows, or other memory-related errors. And it provides memory safety without garbage collection.
+ * General purpose — Rust is an appropriate language for any type of programming.
+ * Fast — Rust is comparable in performance to C/C++ but with far better security features.
+ * Efficient — Rust is built to facilitate concurrent programming.
+ * Project-oriented — Rust has a built-in dependency and build management system called Cargo.
+ * Well supported — Rust has an impressive [support community][4].
+
+
+
+Rust also enforces RAII (Resource Acquisition Is Initialization). That means when an object goes out of scope, its destructor will be called and its resources will be freed, providing a shield against resource leaks. It provides functional abstractions and a great [type system][5] together with speed and mathematical soundness.
+
+In short, Rust is an impressive systems programming language with features that most other languages lack, making it a serious alternative to languages like C, C++ and Objective-C that have been used for years.
+
+### Installing Rust
+
+Installing Rust is a fairly simple process.
+
+```
+$ curl https://sh.rustup.rs -sSf | sh
+```
+
+Once Rust is installed, calling rustc with the **\--version** argument displays version information, and the **which** command shows where it was installed.
+
+```
+$ which rustc
+/home/shs/.cargo/bin/rustc
+$ rustc --version
+rustc 1.27.2 (58cc626de 2018-07-18)
+```
+
+### Getting started with Rust
+
+The simplest code example is not all that different from what you'd enter if you were using one of many scripting languages. 
+
+```
+$ cat hello.rs
+fn main() {
+ // Print a greeting
+ println!("Hello, world!");
+}
+```
+
+In these lines, we are setting up a function (main), adding a comment describing the function, and using the println! macro to create output. You could compile and then run a program like this using the commands shown below.
+
+```
+$ rustc hello.rs
+$ ./hello
+Hello, world!
+```
+
+Alternately, you might create a "project" (generally used only for more complex programs than this one!) to keep your code organized.
+
+```
+$ mkdir ~/projects
+$ cd ~/projects
+$ mkdir hello_world
+$ cd hello_world
+```
+
+Notice that even a simple program, once compiled, becomes a fairly large executable.
+
+```
+$ ./hello
+Hello, world!
+$ ls -l hello*
+-rwxrwxr-x 1 shs shs 5486784 Sep 23 19:02 hello <== executable
+-rw-rw-r-- 1 shs shs 68 Sep 23 15:25 hello.rs
+```
+
+And, of course, that's just a start — the traditional "Hello, world!" program. The Rust language has a suite of features to get you moving quickly to advanced levels of programming skill.
+
+### Learning Rust
+
+![rust programming language book cover][6]
+No Starch Press
+
+The Rust Programming Language book by Steve Klabnik and Carol Nichols (2018) provides one of the best ways to learn Rust. Written by two members of the core development team, the book is available in print from [No Starch Press][7] or in ebook format at [rust-lang.org][8]. It has earned the nickname "the book" in the Rust developer community.
+
+Among the many subjects covered, you will learn about these advanced topics:
+
+ * Ownership and borrowing
+ * Safety guarantees
+ * Testing and error handling
+ * Smart pointers and multi-threading
+ * Advanced pattern matching
+ * Using Cargo (the built-in package manager)
+ * Using Rust's advanced compiler
+
+
+
+#### Table of Contents
+
+The table of contents is shown below. 
+
+```
+Foreword by Nicholas Matsakis and Aaron Turon
+Acknowledgements
+Introduction
+Chapter 1: Getting Started
+Chapter 2: Guessing Game
+Chapter 3: Common Programming Concepts
+Chapter 4: Understanding Ownership
+Chapter 5: Structs
+Chapter 6: Enums and Pattern Matching
+Chapter 7: Modules
+Chapter 8: Common Collections
+Chapter 9: Error Handling
+Chapter 10: Generic Types, Traits, and Lifetimes
+Chapter 11: Testing
+Chapter 12: An Input/Output Project
+Chapter 13: Iterators and Closures
+Chapter 14: More About Cargo and Crates.io
+Chapter 15: Smart Pointers
+Chapter 16: Concurrency
+Chapter 17: Is Rust Object Oriented?
+Chapter 18: Patterns
+Chapter 19: More About Lifetimes
+Chapter 20: Advanced Type System Features
+Appendix A: Keywords
+Appendix B: Operators and Symbols
+Appendix C: Derivable Traits
+Appendix D: Macros
+Index
+
+```
+
+[The Rust Programming Language][7] takes you from basic installation and language syntax to complex topics, such as error handling, crates (synonymous with a ‘library’ or ‘package’ in other languages), modules (allowing you to partition your code within the crate itself), lifetimes, etc.
+
+Probably the most important thing to say is that the book can move you from basic programming skills to building and compiling complex, secure and very useful programs.
+
+### Wrap-up
+
+If you're ready to get into some serious programming with a language that's well worth the time and effort to study and is becoming increasingly popular, Rust is a good bet!
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind. 
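As a small taste of the ownership and borrowing rules covered in Chapter 4, here is a minimal sketch (my own illustration, not an example from the book or the article):

```rust
fn main() {
    let s = String::from("hello"); // `s` owns the heap-allocated string
    let n = count(&s);             // lend `s` immutably; ownership stays with `s`
    println!("{} has {} chars", s, n); // `s` is still usable after the borrow
    let t = s;                     // ownership moves to `t` ...
    // println!("{}", s);          // ... so using `s` here would not compile
    println!("moved into t: {}", t);
}

// Borrows the string rather than taking ownership of it.
fn count(s: &str) -> usize {
    s.chars().count()
}
```

Uncommenting the second use of `s` makes the compiler reject the program with a "borrow of moved value" error, which is exactly the class of memory bug the language rules out at compile time.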
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3308162/linux/why-you-should-try-rust.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[1]: https://insights.stackoverflow.com/survey/2016#technology-most-loved-dreaded-and-wanted +[2]: https://insights.stackoverflow.com/survey/2017#technology-most-loved-dreaded-and-wanted-languages +[3]: https://insights.stackoverflow.com/survey/2018#technology-most-loved-dreaded-and-wanted-languages +[4]: https://www.rust-lang.org/en-US/community.html +[5]: https://doc.rust-lang.org/reference/type-system.html +[6]: https://images.idgesg.net/images/article/2018/09/rust-programming-language_book-cover-100773679-small.jpg +[7]: https://nostarch.com/Rust +[8]: https://doc.rust-lang.org/book/2018-edition/index.html +[9]: https://www.facebook.com/NetworkWorld/ +[10]: https://www.linkedin.com/company/network-world From e0285aa5ab48a6ce7082ae736f307391f2291387 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 26 Sep 2018 14:50:23 +0800 Subject: [PATCH 039/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Taking=20the=20Au?= =?UTF-8?q?diophile=20Linux=20distro=20for=20a=20spin?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
the Audiophile Linux distro for a spin.md | 161 ++++++++++++++++++ 1 file changed, 161 insertions(+) create mode 100644 sources/tech/20180925 Taking the Audiophile Linux distro for a spin.md diff --git a/sources/tech/20180925 Taking the Audiophile Linux distro for a spin.md b/sources/tech/20180925 Taking the Audiophile Linux distro for a spin.md new file mode 100644 index 0000000000..1c813cb30a --- /dev/null +++ b/sources/tech/20180925 Taking the Audiophile Linux distro for a spin.md @@ -0,0 +1,161 @@ +Taking the Audiophile Linux distro for a spin +====== + +This lightweight open source audio OS offers a rich feature set and high-quality digital sound. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_givingmusic.jpg?itok=xVKF1dlb) + +I recently stumbled on the [Audiophile Linux project][1], one of a number of special-purpose music-oriented Linux distributions. Audiophile Linux: + + 1. is based on [ArchLinux][2] + + 2. provides a real-time Linux kernel customized for playing music + + 3. uses the lightweight [Fluxbox][3] window manager + + 4. avoids unnecessary daemons and services + + 5. allows playback of DSF and supports the usual PCM formats + + 6. supports various music players, including one of my favorite combos: MPD + Cantata + + + + +The Audiophile Linux site hasn’t shown a lot of activity since April 2017, but it does contain some updates and commentary from this year. Given its orientation and feature set, I decided to take it for a spin on my old Toshiba laptop. + +### Installing Audiophile Linux + +The site provides [a clear set of install instructions][4] that require the use of the terminal. The first step after downloading the .iso is burning it to a USB stick. I used the GNOME Disks utility’s Restore Disk Image for this purpose. Once I had the USB set up and ready to go, I plugged it into the Toshiba and booted it. 
When the splash screen came up, I set the boot device to the USB stick and a minute or so later, the Arch Grub menu was displayed. I booted Linux from that menu, which put me in a root shell session, where I could carry out the install to the hard drive: + +![](https://opensource.com/sites/default/files/uploads/root_shell_session.jpg) + +I was willing to sacrifice the 320-GB hard drive in the Toshiba for this test, so I was able to use the previous Linux partitioning (from the last experiment). I then proceeded as follows: + +``` +fdisk -l              # find the disk / partition, in my case /dev/sda and /dev/sda1 +mkfs.ext4 /dev/sda1   # build the ext4 filesystem in the root partition +mount /dev/sda1 /mnt  # mount the new file system +time cp -ax / /mnt    # copy over the OS +        # reported back cp -ax / /mnt 1.36s user 136.54s system 88% cpu 2:36.37 total +arch-chroot /mnt /bin/bash # run in the new system root +cd /etc/apl-files +./runme.sh            # do the rest of the install +grub-install --target=i386-pc /dev/sda # make the new OS bootable part 1 +grub-mkconfig -o /boot/grub/grub.cfg   # part 2 +passwd root           # set root’s password +ln -s /usr/share/zoneinfo/America/Vancouver /etc/localtime # set my time zone +hwclock --systohc --utc # update the hardware clock +./autologin.sh        # set the system up so that it automatically logs in +exit                  # done with the chroot session +genfstab -U /mnt >> /mnt/etc/fstab # create the fstab for the new system +``` + +At that point, I was ready to boot the new operating system, so I did—and voilà, up came the system! + +![](https://opensource.com/sites/default/files/uploads/audiophile_linux.jpg) + +### Finishing the configuration + +Once Audiophile Linux was up and running, I needed to [finish the configuration][4] and load some music. 
Grabbing the application menu by right-clicking on the screen background, I started **X-terminal** and entered the remaining configuration commands:
+
+```
+ping 8.8.8.8 # check connectivity (works fine)
+su # become root
+pacman-key --init # create pacman’s encryption data part 1
+pacman-key --populate archlinux # part 2
+pacman -Sy # part 3
+pacman -S archlinux-keyring # part 4
+```
+
+At this point, the install instructions note that there is a problem with updating software with the `pacman -Suy` command, and that the **libxfont** package must first be removed using `pacman -Rc libxfont`. I followed this instruction, but the second run of `pacman -Suy` led to another dependency error, this time with the **x265** package. I looked further down the page in the install instructions and saw this recommendation:
+
+_Again there is an error in upstream repo of Arch packages. Try to remove conflicting packages with “pacman -R ffmpeg2.8” and then do pacman -Suy later._
+
+I chose to use `pacman -Rc ffmpeg2.8`, and then reran `pacman -Suy`. (As an aside, typing all these **pacman** commands made me realize how familiar I am with **apt**, and how much this whole process made me feel like I was trying to write an email in some language I don’t know using an online translator.)
+
+To be clear, here was my sequence of operations:
+
+```
+pacman -Suy # failed
+pacman -Rc libxfont
+pacman -Suy # failed, again
+pacman -Rc ffmpeg2.8 # uninstalled Cantata, have to fix that later!
+pacman -Suy # worked!
+```
+
+Now back to the rest of the instructions:
+
+```
+pacman -S terminus-font
+pacman -S xorg-server
+pacman -S firefox # the docs suggested installing chromium but I prefer FF
+reboot
+```
+
+And the last little bit: fiddling `/etc/fstab` to avoid access-time modifications. I also thought I’d try installing [Cantata][5] once more using `pacman -S cantata`, and it worked just fine (no `ffmpeg2.8` problems).
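The access-time tweak mentioned above boils down to adding the `noatime` mount option to the root entry in `/etc/fstab`. A minimal sketch of what the edited line might look like (the UUID is a placeholder for whatever `blkid` reports on your own system):

```
# root filesystem; noatime avoids writing an access-time update on every file read
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime  0  1
```

After saving, `mount -o remount /` (or a reboot) applies the new option.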
+
+I found the `DAC Setup > List cards` on the application menu, which showed the built-in Intel sound hardware plus my USB DAC that I had plugged in earlier. Then I selected `DAC Setup > Edit mpd.conf` and adjusted the output stanza of `mpd.conf`. I used `scp` to copy an album over from my main music server into **~/Music**. And finally, I used the application menu `DAC Setup > Restart mpd`. And… nothing… the **conky** info on the screen indicated “MPD not responding”. So I scanned again through the comments at the bottom of the installation instructions and spotted this:
+
+_After every update of mpd, you have to do:_
+
+_1. Become root_
+```
+$su
+```
+_2. run this commands_
+```
+# cat /etc/apl-files/mpd.service > /usr/lib/systemd/system/mpd.service
+# systemctl daemon-reload
+# systemctl restart mpd.service
+```
+_And this will be fixed._
+
+![](https://opensource.com/sites/default/files/uploads/library.png)
+
+And it works! Right now I’m enjoying [Nils Frahm’s "All Melody"][6] from the album of the same name, playing over my [Schiit Fulla 2][7] in glorious high-resolution sound. Time to copy in some more music so I can give it a better listen.
+
+So… does it sound better than the same DAC connected to my regular work laptop and playing back through [Guayadeque][8] or [GogglesMM][9]? I’m going to see if I can detect a difference at some point, but right now all I can say is it sounds just wonderful; plus [I like the Cantata / mpd combo a lot][10], and I really enjoy having the heads-up display in the upper right of the screen.
+
+### As for the music...
+
+The other day I was reorganizing my work hard drive a bit and I decided to check to make sure that 1) all the music on it was also on the house music servers and 2) _vice versa_ (gotta set up `rsync` for that purpose one day soon). In doing so, I found some music I hadn’t enjoyed for a while, which is kind of like buying a brand-new album, except it costs much less.
+
+[Six Degrees Records][11] has long been one of my favorite purveyors of unusual music. A great example is the group [Zuco 103][12]'s album [Whaa!][13], whose CD version I purchased from Six Degrees’ online store some years ago. Check out [this fun documentary about the group][14].
+
+
+
+For a completely different experience, take a look at the [Ragazze Quartet’s performance of Terry Riley’s "Four Four Three."][15] I picked up a high-resolution version of this fascinating music from [Channel Classics][16], which operates a Linux-friendly download store (no bloatware to install on your computer).
+
+And finally, I was saddened to hear of the recent passing of [Rachid Taha][17], whose wonderful blend of North African and French musical traditions, along with his frank confrontation of the challenges of being North African and living in Europe, has made for some powerful—and fun—music. Check out [Taha’s version of "Rock the Casbah."][18] I have a few of his songs scattered around various compilation albums, and some time ago bought the CD version of [Rachid Taha: The Definitive Collection][19], which I’ve been enjoying again recently.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/audiophile-linux-distro + +作者:[Chris Hermansen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[1]: https://www.ap-linux.com/ +[2]: https://www.archlinux.org/ +[3]: http://fluxbox.org/ +[4]: https://www.ap-linux.com/documentation/ap-linux-v4-install-instructions/ +[5]: https://github.com/CDrummond/cantata +[6]: https://www.youtube.com/watch?v=1PTj1qIqcWM +[7]: https://www.audiostream.com/content/listening-session-history-lesson-bw-schiit-and-shinola-together-last +[8]: http://www.guayadeque.org/ +[9]: https://gogglesmm.github.io/ +[10]: https://opensource.com/article/17/8/cantata-music-linux +[11]: https://www.sixdegreesrecords.com/ +[12]: https://www.sixdegreesrecords.com/?s=zuco+103 +[13]: https://www.musicomh.com/reviews/albums/zuco-103-whaa +[14]: https://www.youtube.com/watch?v=ncaqD92cjQ8 +[15]: https://www.youtube.com/watch?v=DwMaO7bMVD4 +[16]: https://www.channelclassics.com/catalogue/37816-Riley-Four-Four-Three/ +[17]: https://en.wikipedia.org/wiki/Rachid_Taha +[18]: https://www.youtube.com/watch?v=n1p_dkJo6Y8 +[19]: http://www.bbc.co.uk/music/reviews/26rg/ From 02b69bf88a8005574b7cb77cd11908d2bb97e1b5 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 26 Sep 2018 14:58:15 +0800 Subject: [PATCH 040/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Clinews=20?= =?UTF-8?q?=E2=80=93=20Read=20News=20And=20Latest=20Headlines=20From=20Com?= =?UTF-8?q?mandline?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...s And Latest Headlines From Commandline.md | 136 ++++++++++++++++++ 1 file changed, 136 insertions(+) create mode 100644 sources/tech/20180921 Clinews - Read News And Latest 
Headlines From Commandline.md
diff --git a/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md
new file mode 100644
index 0000000000..24ae89f461
--- /dev/null
+++ b/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md
@@ -0,0 +1,136 @@
+Clinews – Read News And Latest Headlines From Commandline
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-720x340.jpeg)
+
+A while ago, we wrote about a CLI news client named [**InstantNews**][1] that helps you read news and the latest headlines from the command line instantly. Today, I stumbled upon a similar utility named **Clinews** which serves the same purpose – reading news and the latest headlines from popular websites and blogs in the Terminal. You don’t need to install GUI applications or mobile apps. You can read what’s happening in the world right from your Terminal. It is a free, open source utility written in **NodeJS**.
+
+### Installing Clinews
+
+Since Clinews is written in NodeJS, you can install it using the NPM package manager. If you haven’t installed NodeJS yet, install it as described in the following link.
+
+Once Node is installed, run the following command to install Clinews:
+
+```
+$ npm i -g clinews
+```
+
+You can also install Clinews using **Yarn**:
+
+```
+$ yarn global add clinews
+```
+
+Yarn itself can be installed using npm:
+
+```
+$ npm i -g yarn
+```
+
+### Configure News API
+
+Clinews retrieves all news headlines from [**News API**][2]. News API is a simple and easy-to-use API that returns JSON metadata for the headlines currently published on a range of news sources and blogs.
It currently provides live headlines from 70 popular sources, including Ars Technica, BBC, Bloomberg, CNN, Daily Mail, Engadget, ESPN, Financial Times, Google News, Hacker News, IGN, Mashable, National Geographic, Reddit r/all, Reuters, Spiegel Online, TechCrunch, The Guardian, The Hindu, The Huffington Post, The New York Times, The Next Web, The Wall Street Journal, USA Today and [**more**][3].
+
+First, you need an API key from News API. Go to the [**https://newsapi.org/register**][4] URL and register a free account to get the API key.
+
+Once you have the API key from the News API site, edit your **.bashrc** file:
+
+```
+$ vi ~/.bashrc
+```
+
+Add the News API key at the end like below:
+
+```
+export IN_API_KEY="Paste-API-key-here"
+```
+
+Please note that you need to paste the key inside the double quotes. Save and close the file.
+
+Run the following command to apply the changes:
+
+```
+$ source ~/.bashrc
+```
+
+Done. Now let us go ahead and fetch the latest headlines from news sources.
+
+### Read News And Latest Headlines From Commandline
+
+To read news and the latest headlines from a specific news source, for example **The Hindu**, run:
+
+```
+$ news fetch the-hindu
+```
+
+Here, **“the-hindu”** is the news source id (fetch id).
+
+The above command will fetch the latest 10 headlines from The Hindu news portal and display them in the Terminal. It also displays a brief description of the news, the published date and time, and the actual link to the source.
+
+**Sample output:**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-1.png)
+
+To read a news story in your browser, hold the Ctrl key and click on the URL. It will open in your default web browser.
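As a side note, the **.bashrc** step above can be scripted rather than done in an editor. Here is a minimal sketch that appends the export line and verifies it takes effect — it writes to a scratch file so it is safe to try as-is (swap in `~/.bashrc` for real use, and your real key for the placeholder):

```shell
# Append the export line to a scratch copy standing in for ~/.bashrc
rcfile="$(mktemp)"
echo 'export IN_API_KEY="Paste-API-key-here"' >> "$rcfile"

# Source it and confirm the variable is now visible to the shell
. "$rcfile"
[ -n "$IN_API_KEY" ] && echo "IN_API_KEY is set"

rm -f "$rcfile"
```

Running it prints `IN_API_KEY is set`, confirming the export line works before you touch your real **.bashrc**.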
+
+To view all the sources you can get news from, run:
+
+```
+$ news sources
+```
+
+**Sample output:**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-2.png)
+
+As you can see in the above screenshot, Clinews lists all news sources, including the name of the news source, the fetch id, a description of the site, the website URL and the country where it is located. As of writing this guide, Clinews supports 70+ news sources.
+
+Clinews is also able to search for news stories across all sources matching a search criterion/term. For example, to list all news stories with titles containing the word **“Tamilnadu”**, use the following command:
+
+```
+$ news search "Tamilnadu"
+```
+
+This command will scrape all news sources for stories that match the term **Tamilnadu**.
+
+Clinews has some extra flags that help you to:
+
+ * limit the number of news stories you want to see,
+ * sort news stories (top, latest, popular),
+ * display news stories by category (e.g. business, entertainment, gaming, general, music, politics, science-and-nature, sport, technology)
+
+For more details, see the help section:
+
+```
+$ clinews -h
+```
+
+And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-commandline/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/get-news-instantly-commandline-linux/ +[2]: https://newsapi.org/ +[3]: https://newsapi.org/sources +[4]: https://newsapi.org/register From d3f075d5e18638fbd3054c4536d78bae33961719 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 14:59:14 +0800 Subject: [PATCH 041/736] PRF:20180907 What do open source and cooking have in common.md @sd886393 --- ... open source and cooking have in common.md | 31 ++++++++----------- 1 file changed, 13 insertions(+), 18 deletions(-) diff --git a/translated/talk/20180907 What do open source and cooking have in common.md b/translated/talk/20180907 What do open source and cooking have in common.md index 2f5f150f1a..ae99bed725 100644 --- a/translated/talk/20180907 What do open source and cooking have in common.md +++ b/translated/talk/20180907 What do open source and cooking have in common.md @@ -3,48 +3,46 @@ ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/waffles-recipe-eggs-cooking-mix.png?itok=Fp06VOBx) -有什么好的方法,既可以宣传开源的精神又不用写代码呢?这里有个点子:“开源食堂”。在过去的8年间,这就是我们在慕尼黑做的事情。 +有什么好的方法,既可以宣传开源的精神又不用写代码呢?这里有个点子:“开源食堂open source cooking”。在过去的 8 年间,这就是我们在慕尼黑做的事情。 开源食堂已经是我们常规的开源宣传活动了,因为我们发现开源与烹饪有很多共同点。 ### 协作烹饪 -[慕尼黑开源聚会][1]自2009年7月在[Café Netzwerk][2]创办以来,已经组织了若干次活动,活动一般在星期五的晚上组织。该聚会为开源项目工作者或者开源爱好者们提供了相互认识的方式。我们的信条是:“每四周的星期五属于免费软件(Every fourth Friday for free software)”。当然在一些周末,我们还会举办一些研讨会。那之后,我们很快加入了很多其他的活动,包括白香肠早餐、桑拿与烹饪活动。 +[慕尼黑开源聚会][1]自 2009 年 7 月在 [Café Netzwerk][2] 
创办以来,已经组织了若干次活动,活动一般在星期五的晚上组织。该聚会为开源项目工作者或者开源爱好者们提供了相互认识的方式。我们的信条是:“每四周的星期五属于自由软件Every fourth Friday for free software”。当然在一些周末,我们还会举办一些研讨会。那之后,我们很快加入了很多其他的活动,包括白香肠早餐、桑拿与烹饪活动。 -事实上,第一次开源烹饪聚会举办的有些混乱,但是我们经过这8年来以及15次的组织,已经可以为25-30个与会者提供丰盛的美食了。 +事实上,第一次开源烹饪聚会举办的有些混乱,但是我们经过这 8 年来以及 15 次的活动,已经可以为 25-30 个与会者提供丰盛的美食了。 回头看看这些夜晚,我们愈发发现共同烹饪与开源社区协作之间,有很多相似之处。 -### 烹饪步骤中的开源精神 +### 烹饪步骤中的自由开源精神 这里是几个烹饪与开源精神相同的地方: * 我们乐于合作且朝着一个共同的目标前进 - * 我们成立社区组织 + * 我们成了一个社区 * 由于我们有相同的兴趣与爱好,我们可以更多的了解我们自身与他人,并且可以一同协作 - * 我们也会犯错,但我们会从错误中学习,并为了共同的李医生去分享关于错误的经验,从而让彼此避免再犯相同的错误 + * 我们也会犯错,但我们会从错误中学习,并为了共同的利益去分享关于错误的经验,从而让彼此避免再犯相同的错误 * 每个人都会贡献自己擅长的事情,因为每个人都有自己的一技之长 * 我们会动员其他人去做出贡献并加入到我们之中 * 虽说协作是关键,但难免会有点混乱 * 每个人都会从中收益 - - ### 烹饪中的开源气息 同很多成功的开源聚会一样,开源烹饪也需要一些协作和组织结构。在每次活动之前,我们会组织所有的成员对菜单进行投票,而不单单是直接给每个人分一角披萨,我们希望真正的作出一道美味,迄今为止我们做过日本、墨西哥、匈牙利、印度等地区风味的美食,限于篇幅就不一一列举了。 -就像在生活中,共同烹饪一样需要各个成员之间相互的尊重和理解,所以我们也会试着为素食主义者、食物过敏者、或者对某些事物有偏好的人提供针对性的事物。正式开始烹饪之前,在家预先进行些小规模的测试会非常有帮助(乐趣!) +就像在生活中,共同烹饪同样需要各个成员之间相互的尊重和理解,所以我们也会试着为素食主义者、食物过敏者、或者对某些事物有偏好的人提供针对性的事物。正式开始烹饪之前,在家预先进行些小规模的测试会非常有帮助(和乐趣!) -可扩展性也很重要,在杂货店采购必要的食材很容易就消耗掉3个小时。所以我们使用一些表格工具(自然是 LibreOffice Calc)来做一些所需要的食材以及相应的成本。 +可扩展性也很重要,在杂货店采购必要的食材很容易就消耗掉 3 个小时。所以我们使用一些表格工具(自然是 LibreOffice Calc)来做一些所需要的食材以及相应的成本。 -我们会同志愿者一起,为每次晚餐准备一个“包管理器”,从而及时的制作出菜单并在问题产生的时候寻找一些独到的解决方法。 +我们会同志愿者一起,对于每次晚餐我们都有一个“包维护者”,从而及时的制作出菜单并在问题产生的时候寻找一些独到的解决方法。 虽然不是所有人都是大厨,但是只要给与一些帮助,并比较合理的分配任务和责任,就很容易让每个人都参与其中。某种程度上来说,处理 18kg 的西红柿和 100 个鸡蛋都不会让你觉得是件难事,相信我!唯一的限制是一个烤炉只有四个灶,所以可能是时候对基础设施加大投入了。 -发布有时间要求,当然要求也不那么严格,我们通常会在21:30和01:30之间的相当“灵活”时间内供应主菜,即便如此,这个时间也是硬性的发布规定。 +发布有时间要求,当然要求也不那么严格,我们通常会在 21:30 和 01:30 之间的相当“灵活”时间内供应主菜,即便如此,这个时间也是硬性的发布规定。 -最后,想很多开源项目一样,烹饪文档同样有提升的空间。类似洗碟子这样的扫尾工作同样也有可优化的地方。 +最后,像很多开源项目一样,烹饪文档同样有提升的空间。类似洗碟子这样的扫尾工作同样也有可优化的地方。 ### 未来的一些新功能点 @@ -54,21 +52,18 @@ * 购买和烹饪一个价值 700 欧元的大南瓜,并且 * 找家可以为我们采购提供折扣的商店 - 最后一点,也是开源软件的动机:永远记住,还有一些人们生活在阴影中,他们为没有同等的权限去访问资源而苦恼着。我们如何通过开源的精神去帮助他们呢? 
一想到这点,我便期待这下一次的开源烹饪聚会。如果读了上面的东西让你觉得不够完美,并且想自己运作这样的活动,我们非常乐意你能够借鉴我们的想法,甚至抄袭一个。我们也乐意你能够参与到我们其中,甚至做一些演讲和问答。 -Article originally appeared on [blog.effenberger.org][3]. Reprinted with permission. - -------------------------------------------------------------------------------- via: https://opensource.com/article/18/9/open-source-cooking 作者:[Florian Effenberger][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/sd886393) -校对:[校对者ID](https://github.com/校对者ID) +译者:[sd886393](https://github.com/sd886393) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2dd5f756e62d138abe843d914a3be0ed383e018b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 14:59:32 +0800 Subject: [PATCH 042/736] PUB:20180907 What do open source and cooking have in common.md @sd886393 https://linux.cn/article-10052-1.html --- .../20180907 What do open source and cooking have in common.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/talk => published}/20180907 What do open source and cooking have in common.md (100%) diff --git a/translated/talk/20180907 What do open source and cooking have in common.md b/published/20180907 What do open source and cooking have in common.md similarity index 100% rename from translated/talk/20180907 What do open source and cooking have in common.md rename to published/20180907 What do open source and cooking have in common.md From f1e6337f2faa1d4b7d370ffa77ff197cb13d576e Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 26 Sep 2018 15:19:09 +0800 Subject: [PATCH 043/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Find?= =?UTF-8?q?=20Out=20Which=20Port=20Number=20A=20Process=20Is=20Using=20In?= =?UTF-8?q?=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Port Number A Process Is Using In Linux.md | 278 ++++++++++++++++++ 1 file changed, 278 insertions(+) create mode 
100644 sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md
diff --git a/sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md b/sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md
new file mode 100644
index 0000000000..21b6633730
--- /dev/null
+++ b/sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md
@@ -0,0 +1,278 @@
+How To Find Out Which Port Number A Process Is Using In Linux
+======
+As a Linux administrator, you should know whether the corresponding service is binding/listening on the correct port or not.
+
+This will help you troubleshoot more easily when you are facing port-related issues.
+
+A port is a logical connection that identifies a specific process on Linux. There are two kinds of ports: physical and software ports.
+
+Since the Linux operating system is software, we are going to discuss software ports.
+
+A software port is always associated with an IP address of a host and the relevant protocol type for communication. The port is used to distinguish the application.
+
+Most network-related services have to open up a socket to listen for incoming network requests. The socket is unique for every service.
+
+**Suggested Read :**
+**(#)** [4 Easiest Ways To Find Out Process ID (PID) In Linux][1]
+**(#)** [3 Easy Ways To Kill Or Terminate A Process In Linux][2]
+
+A socket is a combination of IP address, software port and protocol. Port numbers are available for both the TCP and UDP protocols.
+
+The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) use port numbers for communication. A port number is a value from 0 to 65535.
+
+Below are the port assignment categories.
+ + * `0-1023:` Well Known Ports or System Ports + * `1024-49151:` Registered Ports for applications + * `49152-65535:` Dynamic Ports or Private Ports + + + +You can check the details of the reserved ports in the /etc/services file on Linux. + +``` +# less /etc/services +# /etc/services: +# $Id: services,v 1.55 2013/04/14 ovasik Exp $ +# +# Network services, Internet style +# IANA services version: last updated 2013-04-10 +# +# Note that it is presently the policy of IANA to assign a single well-known +# port number for both TCP and UDP; hence, most entries here have two entries +# even if the protocol doesn't support UDP operations. +# Updated from RFC 1700, ``Assigned Numbers'' (October 1994). Not all ports +# are included, only the more common ones. +# +# The latest IANA port assignments can be gotten from +# http://www.iana.org/assignments/port-numbers +# The Well Known Ports are those from 0 through 1023. +# The Registered Ports are those from 1024 through 49151 +# The Dynamic and/or Private Ports are those from 49152 through 65535 +# +# Each line describes one service, and is of the form: +# +# service-name port/protocol [aliases ...] 
[# comment]
+
+tcpmux 1/tcp # TCP port service multiplexer
+tcpmux 1/udp # TCP port service multiplexer
+rje 5/tcp # Remote Job Entry
+rje 5/udp # Remote Job Entry
+echo 7/tcp
+echo 7/udp
+discard 9/tcp sink null
+discard 9/udp sink null
+systat 11/tcp users
+systat 11/udp users
+daytime 13/tcp
+daytime 13/udp
+qotd 17/tcp quote
+qotd 17/udp quote
+msp 18/tcp # message send protocol (historic)
+msp 18/udp # message send protocol (historic)
+chargen 19/tcp ttytst source
+chargen 19/udp ttytst source
+ftp-data 20/tcp
+ftp-data 20/udp
+# 21 is registered to ftp, but also used by fsp
+ftp 21/tcp
+ftp 21/udp fsp fspd
+ssh 22/tcp # The Secure Shell (SSH) Protocol
+ssh 22/udp # The Secure Shell (SSH) Protocol
+telnet 23/tcp
+telnet 23/udp
+# 24 - private mail system
+lmtp 24/tcp # LMTP Mail Delivery
+lmtp 24/udp # LMTP Mail Delivery
+
+```
+
+This can be achieved using the below six methods.
+
+ * `ss:` ss is used to dump socket statistics.
+ * `netstat:` netstat displays a list of open sockets.
+ * `lsof:` lsof – list open files.
+ * `fuser:` fuser – list process IDs of all processes that have one or more files open
+ * `nmap:` nmap – Network exploration tool and security / port scanner
+ * `systemctl:` systemctl – Control the systemd system and service manager
+
+
+
+In this tutorial we are going to find out which port number the SSHD daemon is using.
+
+### Method-1: Using ss Command
+
+ss is used to dump socket statistics. It allows showing information similar to netstat. It can display more TCP and state information than other tools.
+
+It can display stats for all kinds of sockets such as PACKET, TCP, UDP, DCCP, RAW, Unix domain, etc.
+
+```
+# ss -tnlp | grep ssh
+LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3))
+LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4))
+```
+
+Alternatively you can check this with the port number as well.
+ +``` +# ss -tnlp | grep ":22" +LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3)) +LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4)) +``` + +### Method-2: Using netstat Command + +netstat – Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. + +By default, netstat displays a list of open sockets. If you don’t specify any address families, then the active sockets of all configured address families will be printed. This program is obsolete. Replacement for netstat is ss. + +``` +# netstat -tnlp | grep ssh +tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 997/sshd +tcp6 0 0 :::22 :::* LISTEN 997/sshd +``` + +Alternatively you can check this with port number as well. + +``` +# netstat -tnlp | grep ":22" +tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1208/sshd +tcp6 0 0 :::22 :::* LISTEN 1208/sshd +``` + +### Method-3: Using lsof Command + +lsof – list open files. The Linux lsof command lists information about files that are open by processes running on the system. + +``` +# lsof -i -P | grep ssh +COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME +sshd 11584 root 3u IPv4 27625 0t0 TCP *:22 (LISTEN) +sshd 11584 root 4u IPv6 27627 0t0 TCP *:22 (LISTEN) +sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED) +``` + +Alternatively you can check this with port number as well. + +``` +# lsof -i tcp:22 +COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME +sshd 1208 root 3u IPv4 20919 0t0 TCP *:ssh (LISTEN) +sshd 1208 root 4u IPv6 20921 0t0 TCP *:ssh (LISTEN) +sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED) +``` + +### Method-4: Using fuser Command + +The fuser utility shall write to standard output the process IDs of processes running on the local system that have one or more named files open. + +``` +# fuser -v 22/tcp + USER PID ACCESS COMMAND +22/tcp: root 1208 F.... sshd + root 12388 F.... sshd + root 49339 F.... 
sshd
+```
+
+### Method-5: Using nmap Command
+
+Nmap (“Network Mapper”) is an open source tool for network exploration and security auditing. It was designed to rapidly scan large networks, although it works fine against single hosts.
+
+Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.
+
+```
+# nmap -sV -p 22 localhost
+
+Starting Nmap 6.40 ( http://nmap.org ) at 2018-09-23 12:36 IST
+Nmap scan report for localhost (127.0.0.1)
+Host is up (0.000089s latency).
+Other addresses for localhost (not scanned): 127.0.0.1
+PORT STATE SERVICE VERSION
+22/tcp open ssh OpenSSH 7.4 (protocol 2.0)
+
+Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
+Nmap done: 1 IP address (1 host up) scanned in 0.44 seconds
+```
+
+### Method-6: Using systemctl Command
+
+systemctl – Control the systemd system and service manager. This is the replacement for the old SysV init system, and most modern Linux operating systems have adopted systemd.
+
+**Suggested Read :**
+**(#)** [chkservice – A Tool For Managing Systemd Units From Linux Terminal][3]
+**(#)** [How To Check All Running Services In Linux][4]
+
+```
+# systemctl status sshd
+● sshd.service - OpenSSH server daemon
+ Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
+ Active: active (running) since Sun 2018-09-23 02:08:56 EDT; 6h 11min ago
+ Docs: man:sshd(8)
+ man:sshd_config(5)
+ Main PID: 11584 (sshd)
+ CGroup: /system.slice/sshd.service
+ └─11584 /usr/sbin/sshd -D
+
+Sep 23 02:08:56 vps.2daygeek.com systemd[1]: Starting OpenSSH server daemon...
+Sep 23 02:08:56 vps.2daygeek.com sshd[11584]: Server listening on 0.0.0.0 port 22.
+Sep 23 02:08:56 vps.2daygeek.com sshd[11584]: Server listening on :: port 22.
+Sep 23 02:08:56 vps.2daygeek.com systemd[1]: Started OpenSSH server daemon.
+Sep 23 02:09:15 vps.2daygeek.com sshd[11589]: Connection closed by 103.5.134.167 port 49899 [preauth]
+Sep 23 02:09:41 vps.2daygeek.com sshd[11592]: Accepted password for root from 103.5.134.167 port 49902 ssh2
+```
+
+The above output shows the actual listening port of the SSH service only if you started the SSHD service recently. Otherwise it won’t, because the recent log lines shown in the output are refreshed frequently.
+
+```
+# systemctl status sshd
+● sshd.service - OpenSSH server daemon
+ Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
+ Active: active (running) since Thu 2018-09-06 07:40:59 IST; 2 weeks 3 days ago
+ Docs: man:sshd(8)
+ man:sshd_config(5)
+ Main PID: 1208 (sshd)
+ CGroup: /system.slice/sshd.service
+ ├─ 1208 /usr/sbin/sshd -D
+ ├─23951 sshd: [accepted]
+ └─23952 sshd: [net]
+
+Sep 23 12:50:36 vps.2daygeek.com sshd[23909]: Invalid user pi from 95.210.113.142 port 51666
+Sep 23 12:50:36 vps.2daygeek.com sshd[23909]: input_userauth_request: invalid user pi [preauth]
+Sep 23 12:50:37 vps.2daygeek.com sshd[23911]: pam_unix(sshd:auth): check pass; user unknown
+Sep 23 12:50:37 vps.2daygeek.com sshd[23911]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=95.210.113.142
+Sep 23 12:50:37 vps.2daygeek.com sshd[23909]: pam_unix(sshd:auth): check pass; user unknown
+Sep 23 12:50:37 vps.2daygeek.com sshd[23909]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=95.210.113.142
+Sep 23 12:50:39 vps.2daygeek.com sshd[23911]: Failed password for invalid user pi from 95.210.113.142 port 51670 ssh2
+Sep 23 12:50:39 vps.2daygeek.com sshd[23909]: Failed password for invalid user pi from 95.210.113.142 port 51666 ssh2
+Sep 23 12:50:40 vps.2daygeek.com sshd[23911]: Connection closed by 95.210.113.142 port 51670
[preauth]
+Sep 23 12:50:40 vps.2daygeek.com sshd[23909]: Connection closed by 95.210.113.142 port 51666 [preauth]
+```
+
+Most of the time, the above output won’t show the process’s actual port number. In this case, I would suggest checking the details using the below command against the journalctl log:
+
+```
+# journalctl | grep -i "openssh\|sshd"
+Sep 23 02:08:56 vps138235.vps.ovh.ca sshd[997]: Received signal 15; terminating.
+Sep 23 02:08:56 vps138235.vps.ovh.ca systemd[1]: Stopping OpenSSH server daemon...
+Sep 23 02:08:56 vps138235.vps.ovh.ca systemd[1]: Starting OpenSSH server daemon...
+Sep 23 02:08:56 vps138235.vps.ovh.ca sshd[11584]: Server listening on 0.0.0.0 port 22.
+Sep 23 02:08:56 vps138235.vps.ovh.ca sshd[11584]: Server listening on :: port 22.
+Sep 23 02:08:56 vps138235.vps.ovh.ca systemd[1]: Started OpenSSH server daemon.
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-using-in-linux/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/prakash/
+[1]: https://www.2daygeek.com/how-to-check-find-the-process-id-pid-ppid-of-a-running-program-in-linux/
+[2]: https://www.2daygeek.com/kill-terminate-a-process-in-linux-using-kill-pkill-killall-command/
+[3]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/
+[4]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
From 56e04296329b5b4357b9380e1584edc0adbf91a1 Mon Sep 17 00:00:00 2001
From: HankChow <280630620@qq.com>
Date: Wed, 26 Sep 2018 15:26:12 +0800
Subject: [PATCH 044/736] hankchow translating

---
 ...ke The Output Of Ping Command Prettier And Easier To Read.md | 2 ++
 1 file changed, 2
insertions(+) diff --git a/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md b/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md index 39ca57bc43..7ef713eae4 100644 --- a/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md +++ b/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md @@ -1,3 +1,5 @@ +HankChow translating + Make The Output Of Ping Command Prettier And Easier To Read ====== From 757a39980406948b0b921d0398489c9b71f88c42 Mon Sep 17 00:00:00 2001 From: belitex Date: Wed, 26 Sep 2018 17:04:20 +0800 Subject: [PATCH 045/736] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90:?= =?UTF-8?q?=208=20Python=20packages=20that=20will=20simplify=20your=20life?= =?UTF-8?q?=20with=20Django?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...hat will simplify your life with Django.md | 124 ------------------ ...hat will simplify your life with Django.md | 121 +++++++++++++++++ 2 files changed, 121 insertions(+), 124 deletions(-) delete mode 100644 sources/tech/20180920 8 Python packages that will simplify your life with Django.md create mode 100644 translated/tech/20180920 8 Python packages that will simplify your life with Django.md diff --git a/sources/tech/20180920 8 Python packages that will simplify your life with Django.md b/sources/tech/20180920 8 Python packages that will simplify your life with Django.md deleted file mode 100644 index e341a8b0a6..0000000000 --- a/sources/tech/20180920 8 Python packages that will simplify your life with Django.md +++ /dev/null @@ -1,124 +0,0 @@ -belitex 翻译中 -8 Python packages that will simplify your life with Django -====== - -This month's Python column looks at Django packages that will benefit your work, personal, or side projects. 
- -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/water-stone-balance-eight-8.png?itok=1aht_V5V) - -Django developers, we're devoting this month's Python column to packages that will help you. These are our favorite [Django][1] libraries for saving time, cutting down on boilerplate code, and generally simplifying our lives. We've got six packages for Django apps and two for Django's REST Framework, and we're not kidding when we say these packages show up in almost every project we work on. - -But first, see our tips for making the [Django Admin more secure][2] and an article on 5 favorite [open source Django packages][3]. - -### A kitchen sink of useful time-savers: django-extensions - -[Django-extensions][4] is a favorite Django package chock full of helpful tools like these management commands: - - * **shell_plus** starts the Django shell with all your database models already loaded. No more importing from several different apps to test one complex relationship! - * **clean_pyc** removes all .pyc projects from everywhere inside your project directory. - * **create_template_tags** creates a template tag directory structure inside the app you specify. - * **describe_form** displays a form definition for a model, which you can then copy/paste into forms.py. (Note that this produces a regular Django form, not a ModelForm.) - * **notes** displays all comments with stuff like TODO, FIXME, etc. throughout your project. - - - * **TimeStampedModel** : This base class includes the fields **created** and **modified** and a **save()** method that automatically updates these fields appropriately. - * **ActivatorModel** : If your model will need fields like **status** , **activate_date** , and **deactivate_date** , use this base class. It comes with a manager that enables **.active()** and **.inactive()** querysets. 
- * **TitleDescriptionModel** and **TitleSlugDescriptionModel** : These include the **title** and **description** fields, and the latter also includes a **slug** field. The **slug** field will automatically populate based on the **title** field. - - - -Django-extensions also includes useful abstract base classes to use for common patterns in your own models. Inherit from these base classes when you create your models to get their: - -Django-extensions has more features you may find useful in your projects, so take a tour through its [docs][5]! - -### 12-factor-app settings: django-environ - -[Django-environ][6] allows you to use [12-factor app][7] methodology to manage your settings in your Django project. It collects other libraries, including [envparse][8] and [honcho][9]. Once you install django-environ, create an .env file at your project's root. Define in that module any settings variables that may change between environments or should remain secret (like API keys, debug status, and database URLs). - -Then, in your project's settings.py file, import **environ** and set up variables for **environ.PATH()** and **environ.Env()** according to the [example][10]. Access settings variables defined in your .env file with **env('VARIABLE_NAME')**. - -### Creating great management commands: django-click - -[Django-click][11], based on [Click][12] (which we have recommended [before][13]… [twice][14]), helps you write Django management commands. This library doesn't have extensive documentation, but it does have a directory of [test commands][15] in its repository that are pretty useful. 
A basic Hello World command would look like this: - -``` -# app_name.management.commands.hello.py -import djclick as click - -@click.command() -@click.argument('name') -def command(name): -    click.secho(f'Hello, {name}') -``` - -Then in the command line, run: - -``` ->> ./manage.py hello Lacey -Hello, Lacey -``` - -### Handling finite state machines: django-fsm - -[Django-fsm][16] adds support for finite state machines to your Django models. If you run a news website and need articles to process through states like Writing, Editing, and Published, django-fsm can help you define those states and manage the rules and restrictions around moving from one state to another. - -Django-fsm provides an FSMField to use for the model attribute that defines the model instance's state. Then you can use django-fsm's **@transition** decorator to define methods that move the model instance from one state to another and handle any side effects from that transition. - -Although django-fsm is light on documentation, [Workflows (States) in Django][17] is a gist that serves as an excellent introduction to both finite state machines and django-fsm. - -### Contact forms: #django-contact-form - -A contact form is such a standard thing on a website. But don't write all that boilerplate code yourself—set yours up in minutes with [django-contact-form][18]. It comes with an optional spam-filtering contact form class (and a regular, non-filtering class) and a **ContactFormView** base class with methods you can override or customize, and it walks you through the templates you will need to create to make your form work. - -### Registering and authenticating users: django-allauth - -[Django-allauth][19] is an app that provides views, forms, and URLs for registering users, logging them in and out, resetting their passwords, and authenticating users with outside sites like GitHub or Twitter. It supports email-as-username authentication and is extensively documented. 
It can be a little confusing to set up the first time you use it; follow the [installation instructions][20] carefully and read closely when you [customize your settings][21] to make sure you're using all the settings you need to enable a specific feature. - -### Handling user authentication with Django REST Framework: django-rest-auth - -If your Django development includes writing APIs, you're probably using [Django REST Framework][22] (DRF). If you're using DRF, you should check out [django-rest-auth][23], a package that enables endpoints for user registration, login/logout, password reset, and social media authentication (by adding django-allauth, which works well with django-rest-auth). - -### Visualizing a Django REST Framework API: django-rest-swagger - -[Django REST Swagger][24] provides a feature-rich user interface for interacting with your Django REST Framework API. Once you've installed Django REST Swagger and added it to installed apps, add the Swagger view and URL pattern to your urls.py file; the rest is taken care of in the docstrings of your APIs. - -![](https://opensource.com/sites/default/files/uploads/swagger-ui.png) - -The UI for your API will include all your endpoints and available methods broken out by app. It will also list available operations for those endpoints and enable you to interact with the API (adding/deleting/fetching records, for example). It uses the docstrings in your API views to generate documentation for each endpoint, creating a set of API documentation for your project that's useful to you, your frontend developers, and your users. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/django-packages - -作者:[Jeff Triplett][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/laceynwilliams -[1]: https://www.djangoproject.com/ -[2]: https://opensource.com/article/18/1/10-tips-making-django-admin-more-secure -[3]: https://opensource.com/business/15/12/5-favorite-open-source-django-packages -[4]: https://django-extensions.readthedocs.io/en/latest/ -[5]: https://django-extensions.readthedocs.io/ -[6]: https://django-environ.readthedocs.io/en/latest/ -[7]: https://www.12factor.net/ -[8]: https://github.com/rconradharris/envparse -[9]: https://github.com/nickstenning/honcho -[10]: https://django-environ.readthedocs.io/ -[11]: https://github.com/GaretJax/django-click -[12]: http://click.pocoo.org/5/ -[13]: https://opensource.com/article/18/9/python-libraries-side-projects -[14]: https://opensource.com/article/18/5/3-python-command-line-tools -[15]: https://github.com/GaretJax/django-click/tree/master/djclick/test/testprj/testapp/management/commands -[16]: https://github.com/viewflow/django-fsm -[17]: https://gist.github.com/Nagyman/9502133 -[18]: https://django-contact-form.readthedocs.io/en/1.5/ -[19]: https://django-allauth.readthedocs.io/en/latest/ -[20]: https://django-allauth.readthedocs.io/en/latest/installation.html -[21]: https://django-allauth.readthedocs.io/en/latest/configuration.html -[22]: http://www.django-rest-framework.org/ -[23]: https://django-rest-auth.readthedocs.io/ -[24]: https://django-rest-swagger.readthedocs.io/en/latest/ diff --git a/translated/tech/20180920 8 Python packages that will simplify your life with Django.md b/translated/tech/20180920 8 Python packages that will simplify your life with 
Django.md new file mode 100644 index 0000000000..f242007433 --- /dev/null +++ b/translated/tech/20180920 8 Python packages that will simplify your life with Django.md @@ -0,0 +1,121 @@ +简化 Django 开发的八个 Python 包 +====== + +这个月的 Python 专栏将介绍一些 Django 包,它们有益于你的工作,以及你的个人或业余项目。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/water-stone-balance-eight-8.png?itok=1aht_V5V) + +Django 开发者们,在这个月的 Python 专栏中,我们会介绍一些能帮助你们的软件包。这些软件包是我们最喜欢的 [Django][1] 库,能够节省开发时间,减少样板代码,通常来说,这会让我们的生活更加轻松。我们为 Django 应用准备了六个包,为 Django 的 REST 框架准备了两个包。几乎所有我们的项目里,都用到了这些包,真的,不是说笑。 + +不过在继续阅读之前,请先看看我们关于[让 Django 管理后台更安全][2]的几个提示,以及这篇关于 [5 个最受欢迎的开源 Django 包][3] 的文章。 + +### 有用又省时的工具集合:django-extensions + +[Django-extensions][4] 这个 Django 包非常受欢迎,全是有用的工具,比如下面这些管理命令: + + * **shell_plus** 打开 Django 的管理 shell,这个 shell 已经自动导入了所有的数据库模型。在测试复杂的数据关系时,就不需要再从几个不同的应用里做 import 的操作了。 + * **clean_pyc** 删除项目目录下所有位置的 .pyc 文件 + * **create_template_tags** 在指定的应用下,创建模板标签的目录结构。 + * **describe_form** 输出模型的表单定义,可以粘贴到 forms.py 文件中。(需要注意的是,这种方法创建的是普通 Django 表单,而不是模型表单。) + * **notes** 输出你项目里所有带 TODO,FIXME 等标记的注释。 + +Django-extensions 还包括几个有用的抽象基类,在定义模型时,它们能满足常见的模式。当你需要以下模型时,可以继承这些基类: + + + * **TimeStampedModel** : 这个模型的基类包含了 **created** 字段和 **modified** 字段,还有一个 **save()** 方法,在适当的场景下,该方法自动更新 created 和 modified 字段的值。 + * **ActivatorModel** : 如果你的模型需要像 **status**,**activate_date** 和 **deactivate_date** 这样的字段,可以使用这个基类。它还自带了一个启用 **.active()** 和 **.inactive()** 查询集的 manager。 + * **TitleDescriptionModel** 和 **TitleSlugDescriptionModel** : 这两个模型包括了 **title** 和 **description** 字段,其中 description 字段还包括 **slug**,它根据 **title** 字段自动产生。 + +Django-extensions 还有其他更多的功能,也许对你的项目有帮助,所以,去浏览一下它的[文档][5]吧! 
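上面介绍的抽象基类的行为,可以用一段不依赖 Django 的纯 Python 草图来示意(仅为概念演示,并非 django-extensions 的真实实现;类名 TimeStampedSketch 是本文虚构的,字段名 created、modified 沿用 TimeStampedModel 的命名,save() 的确切行为请以官方文档为准):

```python
# 概念示意:用纯 Python 模拟 TimeStampedModel 的 created/modified 字段语义。
# 这不是 django-extensions 的真实代码,真实的基类是一个 Django 抽象模型。
from datetime import datetime, timezone


class TimeStampedSketch:
    """首次 save() 时填充 created,之后每次 save() 都刷新 modified。"""

    def __init__(self):
        self.created = None
        self.modified = None

    def save(self):
        now = datetime.now(timezone.utc)
        if self.created is None:   # 只在第一次保存时写入
            self.created = now
        self.modified = now        # 每次保存都更新


obj = TimeStampedSketch()
obj.save()
first = obj.created
obj.save()
print(obj.created == first)   # created 保持不变,输出 True
```

在真正的 Django 项目中,只需让模型继承 django-extensions 提供的这些基类即可获得同样的字段;以上草图只是帮助理解其语义。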
+ +### 12 因子应用的配置:django-environ + +在 Django 项目的配置方面,[Django-environ][6] 提供了符合 [12 因子应用][7] 方法论的管理方法。它是其他一些库的集合,包括 [envparse][8] 和 [honcho][9] 等。安装了 django-environ 之后,在项目的根目录创建一个 .env 文件,用这个文件去定义那些随环境不同而不同的变量,或者需要保密的变量。(比如 API keys,是否启用 debug,数据库的 URLs 等) + +然后,在项目的 settings.py 中引入 **environ**,并参考[官方文档的例子][10]设置好 **environ.PATH()** 和 **environ.Env()**。就可以通过 **env('VARIABLE_NAME')** 来获取 .env 文件中定义的变量值了。 + +### 创建出色的管理命令:django-click + +[Django-click][11] 是基于 [Click][12] 的(我们[之前推荐过][13]它……[两次][14]),它对编写 Django 管理命令很有帮助。这个库没有很多文档,但是代码仓库中有个存放[测试命令][15]的目录,非常有参考价值。Django-click 基本的 Hello World 命令是这样写的: + +``` +# app_name.management.commands.hello.py +import djclick as click + +@click.command() +@click.argument('name') +def command(name): +    click.secho(f'Hello, {name}') +``` + +在命令行下调用它,这样执行即可: + +``` +>> ./manage.py hello Lacey +Hello, Lacey +``` + +### 处理有限状态机:django-fsm + +[Django-fsm][16] 给 Django 的模型添加了有限状态机的支持。如果你管理一个新闻网站,想用类似于“写作中”、“编辑中”、“已发布”来流转文章的状态,django-fsm 能帮你定义这些状态,还能管理状态变化的规则与限制。 + +Django-fsm 为模型提供了 FSMField 字段,用来定义模型实例的状态。用 django-fsm 的 **@transition** 修饰符,可以定义状态变化的方法,并处理状态变化的任何副作用。 + +虽然 django-fsm 文档很轻量,不过 [Django 中的工作流(状态)][17] 这篇 GitHub Gist 对有限状态机和 django-fsm 做了非常好的介绍。 + +### 联系人表单:django-contact-form + +联系人表单可以说是网站的标配。但是不要自己去写全部的样板代码,用 [django-contact-form][18] 在几分钟内就可以搞定。它带有一个可选的能过滤垃圾邮件的表单类(也有不过滤的普通表单类)和一个 **ContactFormView** 基类,基类的方法可以覆盖或自定义修改。而且它还能引导你完成模板的创建,好让表单正常工作。 + +### 用户注册和认证:django-allauth + +[Django-allauth][19] 是一个 Django 应用,它为用户注册、登录/注销、密码重置,还有第三方用户认证(比如 GitHub 或 Twitter)提供了视图、表单和 URLs,支持邮件地址作为用户名的认证方式,而且有大量的文档记录。第一次用的时候,它的配置可能会让人有点晕头转向;请仔细阅读[安装说明][20],在[自定义你的配置][21]时要专注,确保启用某个功能的所有配置都用对了。 + +### 处理 Django REST 框架的用户认证:django-rest-auth + +如果 Django 开发中涉及到对外提供 API,你很可能用到了 [Django REST Framework][22] (DRF)。如果你在用 DRF,那么你应该试试 [django-rest-auth][23],它提供了用户注册、登录/注销、密码重置和社交媒体认证的 endpoints(是通过添加 django-allauth 的支持来实现的,这两个包协作得很好)。 + +### Django REST 框架的 API 可视化:django-rest-swagger + +[Django REST Swagger][24] 提供了一个功能丰富的用户界面,用来和 Django REST 
框架的 API 交互。你只需要安装 Django REST Swagger,把它添加到 Django 项目的 installed apps 中,然后在 urls.py 中添加 Swagger 的视图和 URL 模式就可以了,剩下的事情交给 API 的 docstring 处理。 + +![](https://opensource.com/sites/default/files/uploads/swagger-ui.png) + +API 的用户界面按照 app 的维度展示了所有 endpoints 和可用方法,并列出了这些 endpoints 的可用操作,而且它提供了和 API 交互的功能(比如添加/删除/获取记录)。django-rest-swagger 从 API 视图中的 docstrings 生成每个 endpoint 的文档,通过这种方法,为你的项目创建了一份 API 文档,这对你,对前端开发人员和用户都很有用。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/django-packages + +作者:[Jeff Triplett][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[belitex](https://github.com/belitex) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/laceynwilliams +[1]: https://www.djangoproject.com/ +[2]: https://opensource.com/article/18/1/10-tips-making-django-admin-more-secure +[3]: https://opensource.com/business/15/12/5-favorite-open-source-django-packages +[4]: https://django-extensions.readthedocs.io/en/latest/ +[5]: https://django-extensions.readthedocs.io/ +[6]: https://django-environ.readthedocs.io/en/latest/ +[7]: https://www.12factor.net/ +[8]: https://github.com/rconradharris/envparse +[9]: https://github.com/nickstenning/honcho +[10]: https://django-environ.readthedocs.io/ +[11]: https://github.com/GaretJax/django-click +[12]: http://click.pocoo.org/5/ +[13]: https://opensource.com/article/18/9/python-libraries-side-projects +[14]: https://opensource.com/article/18/5/3-python-command-line-tools +[15]: https://github.com/GaretJax/django-click/tree/master/djclick/test/testprj/testapp/management/commands +[16]: https://github.com/viewflow/django-fsm +[17]: https://gist.github.com/Nagyman/9502133 +[18]: https://django-contact-form.readthedocs.io/en/1.5/ +[19]: https://django-allauth.readthedocs.io/en/latest/ +[20]: 
https://django-allauth.readthedocs.io/en/latest/installation.html +[21]: https://django-allauth.readthedocs.io/en/latest/configuration.html +[22]: http://www.django-rest-framework.org/ +[23]: https://django-rest-auth.readthedocs.io/ +[24]: https://django-rest-swagger.readthedocs.io/en/latest/ \ No newline at end of file From 50f53143e42ac21f9d481b03c218bbc076c656f1 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Wed, 26 Sep 2018 21:45:48 +0800 Subject: [PATCH 046/736] Translated by qhwdw --- ...Educational Software and Games for Kids.md | 82 ------------------- ...Educational Software and Games for Kids.md | 80 ++++++++++++++++++ 2 files changed, 80 insertions(+), 82 deletions(-) delete mode 100644 sources/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md create mode 100644 translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md diff --git a/sources/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md b/sources/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md deleted file mode 100644 index 66850b2260..0000000000 --- a/sources/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md +++ /dev/null @@ -1,82 +0,0 @@ -Translating by qhwdw -5 of the Best Linux Educational Software and Games for Kids -====== - -![](https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-programs-for-kids-featured.jpg) - -Linux is a very powerful operating system, and that explains why it powers most of the servers on the Internet. Though it may not be the best OS in terms of user friendliness, its diversity is commendable. Everyone has their own need for Linux. Be it for coding, educational purposes or the internet of things (IoT), you’ll always find a suitable Linux distro for every use. To that end, many have dubbed Linux as the OS for future computing. 
- -Because the future belongs to the kids of today, introducing them to Linux is the best way to prepare them for what the future holds. This OS may not have a reputation for popular games such as FIFA or PES; however, it offers the best educational software and games for kids. These are five of the best Linux educational software to keep your kids ahead of the game. - -**Related** : [The Beginner’s Guide to Using a Linux Distro][1] - -### 1. GCompris - -If you’re looking for the best educational software for kids, [GCompris][2] should be your starting point. This software is specifically designed for kids education and is ideal for kids between two and ten years old. As the pinnacle of all Linux educational software suites for children, GCompris offers about 100 activities for kids. It packs everything you want for your kids from reading practice to science, geography, drawing, algebra, quizzes, and more. - -![Linux educational software and games][3] - -GCompris even has activities for helping your kids learn computer peripherals. If your kids are young and you want them to learn alphabets, colors, and shapes, GCompris has programmes for those, too. What’s more, it also comes with helpful games for kids such as chess, tic-tac-toe, memory, and hangman. GCompris is not a Linux-only app. It’s also available for Windows and Android. - -### 2. TuxMath - -Most students consider math a tough subject. You can change that perception by acquainting your kids with mathematical skills through Linux software applications such as [TuxMath][4]. TuxMath is a top-rated educational Math tutorial game for kids. In this game your role is to help Tux the penguin of Linux protect his planet from a rain of mathematical problems. - -![linux-educational-software-tuxmath-1][5] - -By finding the answer, you help Tux save the planet by destroying the asteroids with your laser before they make an impact. The difficulty of the math problems increases with each level you pass. 
This game is ideal for kids, as it can help them rack their brains for solutions. Besides making them good at math, it also helps them improve their mental agility. - -### 3. Sugar on a Stick - -[Sugar on a Stick][6] is a dedicated learning program for kids – a brand new pedagogy that has gained a lot of traction. This program provides your kids with a fully-fledged learning platform where they can gain skills in creating, exploring, discovering and also reflecting on ideas. Just like GCompris, Sugar on a Stick comes with a host of learning resources for kids, including games and puzzles. - -![linux-educational-software-sugar-on-a-stick][7] - -The best thing about Sugar on a Stick is that you can set it up on a USB Drive. All you need is an X86-based PC, then plug in the USB, and boot the distro from it. Sugar on a Stick is a project by Sugar Labs – a non-profit organization that is run by volunteers. - -### 4. KDE Edu Suite - -[KDE Edu Suite][8] is a package of software for different user purposes. With a host of applications from different fields, the KDE community has proven that it isn’t just serious about empowering adults; it also cares about bringing the young generation to speed with everything surrounding them. It comes packed with various applications for kids ranging from science to math, geography, and more. - -![linux-educational-software-kde-1][9] - -The KDE Suite can be used for adult needs based on necessities, as a school teaching software, or as a kid’s leaning app. It offers a huge software package and is free to download. The KDE Edu suite can be installed on most GNU/Linux Distros. - -### 5. Tux Paint - -![linux-educational-software-tux-paint-2][10] - -[Tux Paint][11] is another great Linux educational software for kids. This award-winning drawing program is used in schools around the world to help children nurture the art of drawing. It comes with a clean, easy-to-use interface and fun sound effects that help children use the program. 
There is also an encouraging cartoon mascot that guides kids as they use the program. Tux Paint comes with a variety of drawing tools that help kids unleash their creativity. - -### Summing Up - -Due to the popularity of these educational software for kids, many institutions have embraced these programs as teaching aids in schools and kindergartens. A typical example is [Edubuntu][12], an Ubuntu-derived distro that is widely used by teachers and parents for educating kids. - -Tux Paint is another great example that has grown in popularity over the years and is being used in schools to teach children how to draw. This list is by no means exhaustive. There are hundreds of other Linux educational software and games that can be very useful for your kids. - -If you know of any other great Linux educational software and games for kids, share with us in the comments section below. - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/5-best-linux-software-packages-for-kids/ - -作者:[Kenneth Kimari][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/kennkimari/ -[1]:https://www.maketecheasier.com/beginner-guide-to-using-linux-distro/ (The Beginner’s Guide to Using a Linux Distro) -[2]:http://www.gcompris.net/downloads-en.html -[3]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-gcompris.jpg (Linux educational software and games) -[4]:https://tuxmath.en.uptodown.com/ubuntu -[5]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tuxmath-1.jpg (linux-educational-software-tuxmath-1) -[6]:http://wiki.sugarlabs.org/go/Sugar_on_a_Stick/Downloads 
-[7]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-sugar-on-a-stick.png (linux-educational-software-sugar-on-a-stick) -[8]:https://edu.kde.org/ -[9]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-kde-1.jpg (linux-educational-software-kde-1) -[10]:https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tux-paint-2.jpg (linux-educational-software-tux-paint-2) -[11]:http://www.tuxpaint.org/ -[12]:http://edubuntu.org/ diff --git a/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md b/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md new file mode 100644 index 0000000000..3a1981f0bc --- /dev/null +++ b/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md @@ -0,0 +1,80 @@ +# 5 个给孩子的非常好的 Linux 教育软件和游戏 + +![](https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-programs-for-kids-featured.jpg) + +Linux 是一个非常强大的操作系统,因此因特网上的大多数服务器都使用它。尽管它算不上是对用户友好的最佳操作系统,但它的多元化还是值得称赞的。对于 Linux 来说,每个人都能在它上面找到他们自己的所需。不论你是用它来写代码、还是用于教学或物联网(IoT),你总能找到一个适合你用的 Linux 发行版。为此,许多人认为 Linux 是未来计算的最佳操作系统。 + +未来是属于孩子们的,让孩子们了解 Linux 是他们掌控未来的最佳方式。这个操作系统上或许并没有一些像 FIFA 或 PES 那样的声名赫赫的游戏;但是,它为孩子们提供了一些非常好的教育软件和游戏。这里有五款最好的 Linux 教育软件,可以让你的孩子保持领先。 + +**相关阅读**:[使用一个 Linux 发行版的新手指南][1] + +### 1. GCompris + +如果你正在为你的孩子寻找一款最佳的教育软件,[GCompris][2] 将是你的最好的开端。这款软件专门为 2 到 10 岁的孩子所设计。作为所有的 Linux 教育软件套装的巅峰之作,GCompris 为孩子们提供了大约 100 项活动。它囊括了你期望你的孩子学习的所有内容,从阅读材料到科学、地理、绘画、代数、测验等等。 + +![Linux educational software and games][3] + +GCompris 甚至有一项活动可以帮你的孩子学习计算机的相关知识。如果你的孩子还很小,你希望他去学习字母、颜色和形状,GCompris 也有这方面的相关内容。更重要的是,它也为孩子们准备了一些益智类游戏,比如国际象棋、井字棋、好记性以及猜词游戏。GCompris 并不是一个只能在 Linux 上运行的应用。它也可以运行在 Windows 和 Android 上。 + +### 2. 
TuxMath + +很多学生认为数学是门非常难的课程。你可以通过 Linux 教育软件如 [TuxMath][4] 来让你的孩子了解数学技能,从而来改变这种看法。TuxMath 是为孩子开发的顶级的数学教育辅助游戏。在这个游戏中,你的角色是在如雨点般下降的数学问题中帮助 Linux 企鹅 Tux 来保护它的星球。 + +![linux-educational-software-tuxmath-1][5] + +在它们落下来毁坏 Tux 的星球之前,找到问题的答案,就可以使用你的激光去帮助 Tux 拯救它的星球。数学问题的难度每过一关就会提升一点。这个游戏非常适合孩子,因为它可以让孩子们去开动脑筋解决问题。而且还有助于他们学好数学,以及帮助他们开发智力。 + +### 3. Sugar on a Stick + +[Sugar on a Stick][6] 是献给孩子们的学习程序 —— 一个广受好评的全新教学法。这个程序为你的孩子提供一个成熟的教学平台,在那里,他们可以收获创造、探索、发现和思考方面的技能。和 GCompris 一样,Sugar on a Stick 为孩子们带来了包括游戏和谜题在内的大量学习资源。 + +![linux-educational-software-sugar-on-a-stick][7] + +关于 Sugar on a Stick 最大的一个好处是你可以将它配置在一个 U 盘上。你只要有一台 X86 的 PC,插入那个 U 盘,然后就可以从 U 盘引导这个发行版。Sugar on a Stick 是由 Sugar 实验室提供的一个项目 —— 这个实验室是一个由志愿者运作的非盈利组织。 + +### 4. KDE Edu Suite + +[KDE Edu Suite][8] 是一个用途与众不同的软件包。凭借大量不同领域的应用程序,KDE 社区已经证明,它不仅致力于为成年人赋能,还关心如何让年青一代跟上他们周围的一切。它囊括了一系列孩子们使用的应用程序,从科学到数学、地理等等。 + +![linux-educational-software-kde-1][9] + +KDE Edu 套件既可以按需满足成年人的需要,也能够用作学校的教学软件,或是作为孩子们的学习应用。它提供了大量的可免费下载的软件包。KDE Edu 套件在主流的 GNU/Linux 发行版都能安装。 + +### 5. 
Tux Paint + +![linux-educational-software-tux-paint-2][10] + +[Tux Paint][11] 是给孩子们的另一个非常好的 Linux 教育软件。这个屡获殊荣的绘画软件在世界各地被用于帮助培养孩子们的绘画技能,它有一个简洁的、易于使用的界面和有趣的音效,可以高效地帮助孩子去使用这个程序。它也有一个卡通吉祥物去鼓励孩子们使用这个程序。Tux Paint 中有许多绘画工具,它们可以帮助孩子们放飞他们的创意。 + +### 总结 + +由于这些教育软件深受孩子们的欢迎,许多学校和幼儿园都使用这些程序进行辅助教学。典型的一个例子就是 [Edubuntu][12],它是儿童教育领域中广受老师和家长们欢迎的一个基于 Ubuntu 的发行版。 + +Tux Paint 是另一个非常好的例子,它在这些年越来越流行,它大量地用于学校中教孩子们如何绘画。以上的这个清单并不很详细。还有成百上千的对孩子有益的其它 Linux 教育软件和游戏。 + +如果你还知道给孩子们的其它非常好的 Linux 教育软件和游戏,在下面的评论区分享给我们吧。 + +------ + +via: https://www.maketecheasier.com/5-best-linux-software-packages-for-kids/ + +作者:[Kenneth Kimari][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.maketecheasier.com/author/kennkimari/ +[1]: https://www.maketecheasier.com/beginner-guide-to-using-linux-distro/ "The Beginner’s Guide to Using a Linux Distro" +[2]: http://www.gcompris.net/downloads-en.html +[3]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-gcompris.jpg "Linux educational software and games" +[4]: https://tuxmath.en.uptodown.com/ubuntu +[5]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tuxmath-1.jpg "linux-educational-software-tuxmath-1" +[6]: http://wiki.sugarlabs.org/go/Sugar_on_a_Stick/Downloads +[7]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-sugar-on-a-stick.png "linux-educational-software-sugar-on-a-stick" +[8]: https://edu.kde.org/ +[9]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-kde-1.jpg "linux-educational-software-kde-1" +[10]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tux-paint-2.jpg "linux-educational-software-tux-paint-2" +[11]: http://www.tuxpaint.org/ +[12]: http://edubuntu.org/ \ No newline at end of file From 
249d0c33d4d4ac1b33db6bb8818988ad9016c4fd Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 23:23:39 +0800 Subject: [PATCH 047/736] PRF:20180910 3 open source log aggregation tools.md @heguangzhi --- ...910 3 open source log aggregation tools.md | 76 +++++++++---------- 1 file changed, 34 insertions(+), 42 deletions(-) diff --git a/translated/tech/20180910 3 open source log aggregation tools.md b/translated/tech/20180910 3 open source log aggregation tools.md index a026b47625..0bad973eba 100644 --- a/translated/tech/20180910 3 open source log aggregation tools.md +++ b/translated/tech/20180910 3 open source log aggregation tools.md @@ -1,83 +1,75 @@ - - - -3个开源日志聚合工具 +3 个开源日志聚合工具 ====== -日志聚合系统可以帮助我们故障排除并进行其他的任务。以下是三个主要工具介绍。 +> 日志聚合系统可以帮助我们进行故障排除和其它任务。以下是三个主要工具介绍。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr) - - -指标聚合与日志聚合有何不同?日志不能包括指标吗?日志聚合系统不能做与指标聚合系统相同的事情吗? +指标聚合metrics aggregation日志聚合log aggregation有何不同?日志不能包括指标吗?日志聚合系统不能做与指标聚合系统相同的事情吗? 
这些是我经常听到的问题。我还看到供应商推销他们的日志聚合系统作为所有可观察问题的解决方案。日志聚合是一个有价值的工具,但它通常对时间序列数据的支持不够好。 -时间序列指标聚合系统中几个有价值的功能为规则间隔与专门为时间序列数据定制的存储系统。规则间隔允许用户一次性地导出真实的数据结果。如果要求日志聚合系统定期收集指标数据,它也可以。但是,它的存储系统没有针对指标聚合系统中典型的查询类型进行优化。使用日志聚合工具中的存储系统处理这些查询将花费更多的资源和时间。 +时间序列的指标聚合系统中几个有价值的功能是专门为时间序列数据定制的固定间隔regular interval和存储系统。固定间隔允许用户不断地收集实时的数据结果。如果要求日志聚合系统以固定间隔收集指标数据,它也可以。但是,它的存储系统没有针对指标聚合系统中典型的查询类型进行优化。使用日志聚合工具中的存储系统处理这些查询将花费更多的资源和时间。 -所以,我们知道日志聚合系统可能不适合时间序列数据,但是它有什么好处呢?日志聚合系统是收集事件数据的好地方。这些是非常重要的不规则活动。最好的例子为 web 服务的访问日志。这些都很重要,因为我们想知道什么东西正在访问我们的系统,什么时候访问。另一个例子是应用程序错误记录——因为它不是正常的操作记录,所以在故障排除过程中可能很有价值的。 +所以,我们知道日志聚合系统可能不适合时间序列数据,但是它有什么好处呢?日志聚合系统是收集事件数据的好地方。这些无规律的活动是非常重要的。最好的例子为 web 服务的访问日志,这些很重要,因为我们想知道什么正在访问我们的系统,什么时候访问的。另一个例子是应用程序错误记录 —— 因为它不是正常的操作记录,所以在故障排除过程中可能很有价值的。 -日志记录的一些规则: - - * 包含时间戳 - * 包含 JSON 格式 - * 不记录无关紧要的事件 - * 记录所有应用程序的错误 - * 记录警告错误 - * 日志记录开关 - * 以可读的形式记录信息 - * 不在生产环境中记录信息 - * 不记录任何无法阅读或无反馈的内容 +日志记录的一些规则: + * **须**包含时间戳 + * **须**格式化为 JSON + * **不**记录无关紧要的事件 + * **须**记录所有应用程序的错误 + * **可**记录警告错误 + * **可**开关的日志记录 + * **须**以可读的形式记录信息 + * **不**在生产环境中记录信息 + * **不**记录任何无法阅读或反馈的内容 ### 云的成本 -当研究日志聚合工具时,云可能看起来是一个有吸引力的选择。然而,这可能会带来巨大的成本。当跨数百或数千台主机和应用程序聚合时,日志数据是大量的。在基于云的系统中,数据的接收、存储和检索是昂贵的。 +当研究日志聚合工具时,云服务可能看起来是一个有吸引力的选择。然而,这可能会带来巨大的成本。当跨数百或数千台主机和应用程序聚合时,日志数据是大量的。在基于云的系统中,数据的接收、存储和检索是昂贵的。 -作为一个真实的系统,大约500个节点和几百个应用程序的集合每天产生 200GB 的日志数据。这个系统可能还有改进的空间,但是在许多 SaaS 产品中,即使将它减少一半,每月也要花费将近10,000美元。这通常包括仅保留30天,如果你想查看一年一年的趋势数据,就不可能了。 - -并不是要不使用这些系统,尤其是对于较小的组织它们可能非常有价值的。目的是指出可能会有很大的成本,当这些成本达到时,就可能令人非常的沮丧。本文的其余部分将集中讨论自托管的开源和商业解决方案。 +以一个真实的系统来参考,大约 500 个节点和几百个应用程序的集合每天产生 200GB 的日志数据。这个系统可能还有改进的空间,但是在许多 SaaS 产品中,即使将它减少一半,每月也要花费将近 10000 美元。而这通常仅保留 30 天,如果你想查看一年一年的趋势数据,就不可能了。 +并不是要不使用这些基于云的系统,尤其是对于较小的组织它们可能非常有价值的。这里的目的是指出可能会有很大的成本,当这些成本很高时,就可能令人非常的沮丧。本文的其余部分将集中讨论自托管的开源和商业解决方案。 ### 工具选择 #### ELK -[ELK][1] ,简称 Elasticsearch、Logstash 和 Kibana,是最流行的开源日志聚合工具。它被Netflix、Facebook、微软、LinkedIn 和思科使用。这三个组件都是由 [Elastic][2] 开发和维护的。[Elasticsearch][3] 本质上是一个NoSQL,Lucene 搜索引擎实现的。[Logstash][4] 是一个日志管道系统,可以接收数据,转换数据,并将其加载到像 
Elasticsearch 这样的应用中。[Kibana][5] 是 Elasticsearch 之上的可视化层。 +[ELK][1],即 Elasticsearch、Logstash 和 Kibana 简称,是最流行的开源日志聚合工具。它被 Netflix、Facebook、微软、LinkedIn 和思科使用。这三个组件都是由 [Elastic][2] 开发和维护的。[Elasticsearch][3] 本质上是一个 NoSQL 数据库,以 Lucene 搜索引擎实现的。[Logstash][4] 是一个日志管道系统,可以接收数据,转换数据,并将其加载到像 Elasticsearch 这样的应用中。[Kibana][5] 是 Elasticsearch 之上的可视化层。 -几年前,Beats 被引入。Beats 是数据采集器。它们简化了将数据运送到日志存储的过程。用户不需要了解每种日志的正确语法,而是可以安装一个 Beats 来正确导出 NGINX 日志或代理日志,以便在Elasticsearch 中有效地使用它们。 +几年前,引入了 Beats 。Beats 是数据采集器。它们简化了将数据运送到 Logstash 的过程。用户不需要了解每种日志的正确语法,而是可以安装一个 Beats 来正确导出 NGINX 日志或 Envoy 代理日志,以便在 Elasticsearch 中有效地使用它们。 -安装生产环境级 ELK stack 时,可能会包括其他几个部分,如 [Kafka][6], [Redis][7], and [NGINX][8]。此外,用 Fluentd 替换 Logstash 也很常见,我们将在后面讨论。这个系统操作起来很复杂,这在早期导致很多问题和投诉。目前,这些基本上已经被修复,不过它仍然是一个复杂的系统,如果你使用少部分的功能,建议不要使用它了。 +安装生产环境级 ELK 套件时,可能会包括其他几个部分,如 [Kafka][6]、[Redis][7] 和 [NGINX][8]。此外,用 Fluentd 替换 Logstash 也很常见,我们将在后面讨论。这个系统操作起来很复杂,这在早期导致了很多问题和抱怨。目前,这些问题基本上已经被修复,不过它仍然是一个复杂的系统,如果你使用少部分的功能,建议不要使用它了。 -也就是说,服务是可用的,所以你不必担心。[Logz.io][9] 也可以使用,但是如果你有很多数据,它的标价有点高。当然,你可能没有很多数据,来使用用它。如果你买不起 Logz.io,你可以看看 [AWS Elasticsearch Service][10] (ES) 。ES 是 Amazon Web Services (AWS) 提供的一项服务,它使得 Elasticsearch 很容易快速工作。它还拥有使用 Lambda 和 S3 将所有AWS日志记录到 ES 的工具。这是一个更便宜的选择,但是需要一些管理操作,并有一些功能限制。 +也就是说,有其它可用的服务,所以你不必苦恼于此。可以使用 [Logz.io][9],但是如果你有很多数据,它的标价有点高。当然,你可能规模比较小,没有很多数据。如果你买不起 Logz.io,你可以看看 [AWS Elasticsearch Service][10] (ES) 。ES 是 Amazon Web Services (AWS) 提供的一项服务,它很容易就可以让 Elasticsearch 马上工作起来。它还拥有使用 Lambda 和 S3 将所有AWS 日志记录到 ES 的工具。这是一个更便宜的选择,但是需要一些管理操作,并有一些功能限制。 +ELK 套件的母公司 Elastic [提供][11] 一款更强大的产品,它使用开源核心open core模式,为分析工具和报告提供了额外的选项。它也可以在谷歌云平台或 AWS 上托管。由于这种工具和托管平台的组合提供了比大多数 SaaS 选项更加便宜,这也许是最好的选择,并且很有用。该系统可以有效地取代或提供 [安全信息和事件管理][12](SIEM)系统的功能。 -Elastic [offers][11] 的母公司提供一款更强大的产品,它使用开放核心模型,为分析工具和报告提供了额外的选项。它也可以在谷歌云平台或 AWS 上托管。由于这种工具和托管平台的组合提供了比大多数 SaaS 选项更加便宜,这将是最好的选择并且具有很大的价值。该系统可以有效地取代或提供[security information and event management][12] ( SIEM )系统的功能。 - -ELK 栈通过 Kibana 提供了很好的可视化工具,但是它缺少警报功能。Elastic 在付费的 X-Pack 
插件中提供了提醒功能,但是在开源系统没有内置任何功能。Yelp 已经开发了一种解决这个问题的方法,[ElastAlert][13], 不过还有其他方式。这个额外的软件相当健壮,但是它增加了已经复杂的系统的复杂性。 +ELK 套件通过 Kibana 提供了很好的可视化工具,但是它缺少警报功能。Elastic 在付费的 X-Pack 插件中提供了警报功能,但是在开源系统没有内置任何功能。Yelp 已经开发了一种解决这个问题的方法,[ElastAlert][13],不过还有其他方式。这个额外的软件相当健壮,但是它增加了已经复杂的系统的复杂性。 #### Graylog -[Graylog][14] 最近越来越受欢迎,但它是在2010年 Lennart Koopmann 创建并开发的。两年后,一家公司以同样的名字诞生了。尽管它的使用者越来越多,但仍然远远落后于 ELK 栈。这也意味着它具有较少的社区开发特征,但是它可以使用与 ELK stack 相同的 Beats 。Graylog 由于 Graylog Collector Sidecar 使用 [Go][15] 编写所以在 Go 社区赢得了赞誉。 +[Graylog][14] 最近越来越受欢迎,但它是在 2010 年由 Lennart Koopmann 创建并开发的。两年后,一家公司以同样的名字诞生了。尽管它的使用者越来越多,但仍然远远落后于 ELK 套件。这也意味着它具有较少的社区开发特征,但是它可以使用与 ELK 套件相同的 Beats 。由于 Graylog Collector Sidecar 使用 [Go][15] 编写,所以 Graylog 在 Go 社区赢得了赞誉。 -Graylog 使用 Elasticsearch、[MongoDB][16] 并且 提供 Graylog Server 。这使得它像ELK 栈一样复杂,也许还要复杂一些。然而,Graylog 附带了内置于开源版本中的报警功能,以及其他一些值得注意的功能,如 streaming 、消息重写 和 地理定位。 +Graylog 使用 Elasticsearch、[MongoDB][16] 和底层的 Graylog Server 。这使得它像 ELK 套件一样复杂,也许还要复杂一些。然而,Graylog 附带了内置于开源版本中的报警功能,以及其他一些值得注意的功能,如流、消息重写和地理定位。 -streaming 能允许数据在被处理时被实时路由到特定的 Streams。使用此功能,用户可以在单个Stream 中看到所有数据库错误,在不同的 Stream 中看到 web 服务器错误。当添加新项目或超过阈值时,甚至可以基于这些 Stream 提供警报。延迟可能是日志聚合系统中最大的问题之一,Stream消除了灰色日志中的这一问题。一旦日志进入,它就可以通过 Stream 路由到其他系统,而无需全部处理。 +流功能可以允许数据在被处理时被实时路由到特定的 Stream。使用此功能,用户可以在单个 Stream 中看到所有数据库错误,在另外的 Stream 中看到 web 服务器错误。当添加新项目或超过阈值时,甚至可以基于这些 Stream 提供警报。延迟可能是日志聚合系统中最大的问题之一,Stream 消除了 Graylog 中的这一问题。一旦日志进入,它就可以通过 Stream 路由到其他系统,而无需完全处理好。 -消息重写功能使用开源规则引擎 [Drools][17] 。允许根据用户定义的规则文件评估所有传入的消息,从而可以删除消息(称为黑名单)、添加或删除字段或修改消息。 +消息重写功能使用开源规则引擎 [Drools][17] 。允许根据用户定义的规则文件评估所有传入的消息,从而可以删除消息(称为黑名单)、添加或删除字段或修改消息。 -Graylog 最酷的功能是它的地理定位功能,它支持在地图上绘制 IP 地址。这是一个相当常见的功能,在 Kibana 也可以这样使用,但是它增加了很多价值——特别是如果你想将它用作 SIEM 系统。地理定位功能在系统的开源版本中提供。 +Graylog 最酷的功能或许是它的地理定位功能,它支持在地图上绘制 IP 地址。这是一个相当常见的功能,在 Kibana 也可以这样使用,但是它增加了很多价值 —— 特别是如果你想将它用作 SIEM 系统。地理定位功能在系统的开源版本中提供。 -Graylog 如果你想要的话,它会对开源版本的支持收费。它还为其企业版提供了一个开放的核心模型,提供存档、审计日志记录和其他支持。如果你不需要 Graylog (the company) 支持或托管的,你可以独立使用。 +如果你需要的话,Graylog 
公司会提供对开源版本的收费支持。它还为其企业版提供了一个开源核心模式,提供存档、审计日志记录和其他支持。其它提供支持或托管服务的不太多,如果你不需要 Graylog 公司的,你可以托管。 #### Fluentd -[Fluentd][18] 是 [Treasure Data][19] 开发的,[CNCF][20] 已经将它作为一个孵化项目。它是用 C 和 Ruby 编写的,并由[AWS][21] 和 [Google Cloud][22]推荐。fluentd已经成为许多装置中logstach的常用替代品。它充当本地聚合器,收集所有节点日志并将其发送到中央存储系统。它不是日志聚合系统。 +[Fluentd][18] 是 [Treasure Data][19] 开发的,[CNCF][20] 已经将它作为一个孵化项目。它是用 C 和 Ruby 编写的,并被 [AWS][21] 和 [Google Cloud][22] 所推荐。Fluentd 已经成为许多系统中 logstach 的常用替代品。它可以作为一个本地聚合器,收集所有节点日志并将其发送到中央存储系统。它不是日志聚合系统。 -它使用一个强大的插件系统,提供不同数据源和数据输出的快速和简单的集成功能。因为有超过500个插件可用,所以你的大多数用例都应该包括在内。如果没有,这听起来是一个为开源社区做出贡献的机会。 +它使用一个强大的插件系统,提供不同数据源和数据输出的快速和简单的集成功能。因为有超过 500 个插件可用,所以你的大多数用例都应该包括在内。如果没有,这听起来是一个为开源社区做出贡献的机会。 -Fluentd 由于占用内存少(只有几十兆字节)和高吞吐量特性,是 Kubernetes 环境中的常见选择。在像 [Kubernetes][23] 这样的环境中,每个pod 都有一个 Fluentd sidecar ,内存消耗会随着每个新 pod 的创建而线性增加。在这中情况下,使用 Fluentd 将大大降低你的系统利用率。这对于Java开发的工具来说,这是一个常见的问题,这些工具旨在为每个节点运行一个工具,而内存开销并不是主要问题。 +Fluentd 由于占用内存少(只有几十兆字节)和高吞吐量特性,是 Kubernetes 环境中的常见选择。在像 [Kubernetes][23] 这样的环境中,每个 pod 都有一个 Fluentd 附属件 ,内存消耗会随着每个新 pod 的创建而线性增加。在这种情况下,使用 Fluentd 将大大降低你的系统利用率。这对于 Java 开发的工具来说是一个常见的问题,这些工具旨在为每个节点运行一个工具,而内存开销并不是主要问题。 -------------------------------------------------------------------------------- @@ -87,7 +79,7 @@ via: https://opensource.com/article/18/9/open-source-log-aggregation-tools 作者:[Dan Barker][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 521967e1f7aba7643d064fdab6992fc274458ca6 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 26 Sep 2018 23:24:02 +0800 Subject: [PATCH 048/736] PUB:20180910 3 open source log aggregation tools.md @heguangzhi https://linux.cn/article-10053-1.html --- .../20180910 3 open source log aggregation tools.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180910 3 open 
source log aggregation tools.md (100%) diff --git a/translated/tech/20180910 3 open source log aggregation tools.md b/published/20180910 3 open source log aggregation tools.md similarity index 100% rename from translated/tech/20180910 3 open source log aggregation tools.md rename to published/20180910 3 open source log aggregation tools.md From f4c58714b3d1d30cce80bfa0bda7170cefd9ec07 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 27 Sep 2018 00:08:08 +0800 Subject: [PATCH 049/736] PRF:20180824 Steam Makes it Easier to Play Windows Games on Linux.md @hopefully2333 --- ...t Easier to Play Windows Games on Linux.md | 41 +++++++++++-------- 1 file changed, 23 insertions(+), 18 deletions(-) diff --git a/translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md b/translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md index 75c42b3ab3..a257cee609 100644 --- a/translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md +++ b/translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md @@ -1,47 +1,50 @@ -steam 让我们在 Linux 上玩 Windows 的游戏更加容易 +Steam 让我们在 Linux 上玩 Windows 的游戏更加容易 ====== + ![Steam Wallpaper][1] -总所周知,Linux 游戏库中的游戏只有 Windows 游戏库中的一部分,实际上,许多人甚至都不会考虑将操作系统转换为 Linux,原因很简单,因为他们喜欢的游戏,大多数都不能在这个平台上运行。 +总所周知,[Linux 游戏][2]库中的游戏只有 Windows 游戏库中的一部分,实际上,许多人甚至都不会考虑将操作系统[转换为 Linux][3],原因很简单,因为他们喜欢的游戏,大多数都不能在这个平台上运行。 -在撰写本文时,steam 上已有超过 5000 种游戏可以在 Linux 上运行,而 steam 上的游戏总数已经接近 27000 种了。现在 5000 种游戏可能看起来很多,但还没有达到 27000 种,确实没有。 +在撰写本文时,Steam 上已有超过 5000 种游戏可以在 Linux 上运行,而 Steam 上的游戏总数已经接近 27000 种了。现在 5000 种游戏可能看起来很多,但还没有达到 27000 种,确实没有。 -虽然几乎所有的新的独立游戏都是在 Linux 中推出的,但我们仍然无法在这上面玩很多的 3A 大作。对我而言,虽然这其中有很多游戏我都很希望能有机会玩,但这从来都不是一个非黑即白的问题。因为我主要是玩独立游戏和复古游戏,所以几乎所有我喜欢的游戏都可以在 Linux 系统上运行。 +虽然几乎所有的新的独立游戏indie game都是在 Linux 中推出的,但我们仍然无法在这上面玩很多的 [3A 大作][4]。对我而言,虽然这其中有很多游戏我都很希望能有机会玩,但这从来都不是一个非黑即白的问题。因为我主要是玩独立游戏和[复古游戏][5],所以几乎所有我喜欢的游戏都可以在 Linux 系统上运行。 -### 认识 Proton,Steam 的一次 WINE 分叉。 +### 认识 
Proton,Steam 的一个 WINE 复刻 -现在,这个问题已经成为过去式了,因为本周 Valve 宣布要对 Steam Play 进行一次更新,此次更新会将一个名为 Proton 的分叉版本的 Wine 添加到 Linux 和 Mac 的客户端中。是的,这个工具是开源的,Valve 已经在 GitHub 上开源了源代码,但该功能仍然处于测试阶段,所以你必须使用测试版的 Steam 客户端才能使用这项功能。 +现在,这个问题已经成为过去式了,因为本周 Valve [宣布][6]要对 Steam Play 进行一次更新,此次更新会将一个名为 Proton 的 Wine 复刻版本添加到 Linux 客户端中。是的,这个工具是开源的,Valve 已经在 [GitHub][7] 上开源了源代码,但该功能仍然处于测试阶段,所以你必须使用测试版的 Steam 客户端才能使用这项功能。 -#### 使用 proton ,可以在 Linux 系统上使用 Steam 运行更多的 Windows 上的游戏。 +#### 使用 proton ,可以在 Linux 系统上通过 Steam 运行更多 Windows 游戏 -这对我们这些 Linux 用户来说,实际上意味着什么?简单来说,这意味着我们可以在 Linux 和 Mac 这两种操作系统的电脑上运行全部 27000 种游戏,而无需配置像 PlayOnLinux 或 Lutris 这样的服务。我要告诉你的是,配置这些东西有时候会非常让人头疼。 +这对我们这些 Linux 用户来说,实际上意味着什么?简单来说,这意味着我们可以在 Linux 电脑上运行全部 27000 种游戏,而无需配置像 [PlayOnLinux][8] 或 [Lutris][9] 这样的东西。我要告诉你的是,配置这些东西有时候会非常让人头疼。 -对此更为复杂的答案是,某种原因听起来非常美好。虽然在理论上,你可以用这种方式在 Linux 上玩所有的 Windows 平台上的游戏。但只有一少部分游戏在推出时会正式支持 Linux。这少部分游戏包括 DOOM,最终幻想 VI,铁拳 7,星球大战:前线 2,和其他几个。 +对此更为复杂的答案是,某种原因听起来非常美好。虽然在理论上,你可以用这种方式在 Linux 上玩所有的 Windows 平台上的游戏。但只有一少部分游戏在推出时会正式支持 Linux。这少部分游戏包括 《DOOM》、《最终幻想 VI》、《铁拳 7》、《星球大战:前线 2》,和其他几个。 -#### 你可以在 Linux 上玩所有的 Windows 平台的游戏(理论上) +#### 你可以在 Linux 上玩所有的 Windows 游戏(理论上) -虽然目前该列表只有大约 30 个游戏,你可以点击“为所有游戏使用 Steam play 进行运行”复选框来强制使用 Steam 的 Proton 来安装和运行任意游戏。但你最好不要有太高的期待,它们的稳定性和性能表现不一定有你希望的那么好,所以请把期望值压低一点。 +虽然目前该列表只有大约 30 个游戏,你可以点击“为所有游戏启用 Steam Play”复选框来强制使用 Steam 的 Proton 来安装和运行任意游戏。但你最好不要有太高的期待,它们的稳定性和性能表现不一定有你希望的那么好,所以请把期望值压低一点。 ![Steam Play][10] -#### 体验 Proton,没有我想的那么烂。 +据[这份报告][13],已经有超过一千个游戏可以在 Linux 上玩了。按[此指南][14]来了解如何启用 Steam Play 测试版本。 -例如,我安装了一些中等价格的游戏,使用 Proton 来进行安装。其中一个是上古卷轴 4:湮没,在我玩这个游戏的两个小时里,它只崩溃了一次,而且几乎是紧跟在游戏教程的自动保存点之后。 +#### 体验 Proton,没有我想的那么烂 + +例如,我安装了一些难度适中的游戏,使用 Proton 来进行安装。其中一个是《上古卷轴 4:湮没》,在我玩这个游戏的两个小时里,它只崩溃了一次,而且几乎是紧跟在游戏教程的自动保存点之后。 我有一块英伟达 Gtx 1050 Ti 的显卡。所以我可以使用 1080P 的高配置来玩这个游戏。而且我没有遇到除了这次崩溃之外的任何问题。我唯一真正感到不爽的只有它的帧数没有原本的高。在 90% 的时间里,游戏的帧数都在 60 帧以上,但我知道它的帧数应该能更高。 -我安装和发布的其他所有游戏都运行得很完美,虽然我还没有较长时间地玩过它们中的任何一个。我安装的游戏中包括森林,丧尸围城 4,H1Z1,和刺客信条 2.(你能说我这是喜欢恐怖游戏吗?)。 
+我安装和运行的其他所有游戏都运行得很完美,虽然我还没有较长时间地玩过它们中的任何一个。我安装的游戏中包括《森林》、《丧尸围城 4》和《刺客信条 2》。(你觉得我这是喜欢恐怖游戏吗?) #### 为什么 Steam(仍然)要下注在 Linux 上? 现在,一切都很好,这件事为什么会发生呢?为什么 Valve 要花费时间,金钱和资源来做这样的事?我倾向于认为,他们这样做是因为他们懂得 Linux 社区的价值,但是如果要我老实地说,我不相信我们和它有任何的关系。 -如果我一定要在这上面花钱,我想说 Valve 开发了 Proton,因为他们还没有放弃 Steam 机器。因为 Steam OS 是基于 Linux 的发行版,在这类东西上面投资可以获取最大的利润,Steam OS 上可用的游戏越多,就会有更多的人愿意购买 Steam 的机器。 +如果我一定要在这上面花钱,我想说 Valve 开发了 Proton,因为他们还没有放弃 [Steam Machine][11]。因为 [Steam OS][12] 是基于 Linux 的发行版,在这类东西上面投资可以获取最大的利润,Steam OS 上可用的游戏越多,就会有更多的人愿意购买 Steam Machine。 -可能我是错的,但是我敢打赌啊,我们会在不远的未来看到新一批的 Steam 机器。可能我们会在一年内看到它们,也有可能我们再等五年都见不到,谁知道呢! +可能我是错的,但是我敢打赌啊,我们会在不远的未来看到新一批的 Steam Machine。可能我们会在一年内看到它们,也有可能我们再等五年都见不到,谁知道呢! -无论哪种方式,我所知道的是,我终于能兴奋地从我的 Steam 游戏库里玩游戏了。多年来我通过所有的收藏包,游戏促销,和不定时地买一个贱卖的游戏来以防万一,我想去尝试让它在 Lutris 中运行。 +无论哪种方式,我所知道的是,我终于能兴奋地从我的 Steam 游戏库里玩游戏了。这个游戏库是多年来我通过各种慈善包、促销码和不定时地买的游戏慢慢积累的,只不过是想试试让它在 Lutris 中运行。 #### 为 Linux 上越来越多的游戏而激动? @@ -54,7 +57,7 @@ via: https://itsfoss.com/steam-play-proton/ 作者:[Phillip Prado][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[hopefully2333](https://github.com/hopefully2333) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -71,3 +74,5 @@ via: https://itsfoss.com/steam-play-proton/ [10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/SteamProton.jpg [11]:https://store.steampowered.com/sale/steam_machines [12]:https://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/ +[13]:https://spcr.netlify.com/ +[14]:https://itsfoss.com/steam-play/ From 82182f5de868fa6b1bb92811ed0fe4cd77ada7c4 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 27 Sep 2018 00:08:33 +0800 Subject: [PATCH 050/736] PUB: 20180824 Steam Makes it Easier to Play Windows Games on Linux.md @hopefully2333 https://linux.cn/article-10054-1.html --- ...180824 Steam Makes it Easier to Play Windows Games on Linux.md | 0 1 file changed, 0 
insertions(+), 0 deletions(-) rename {translated/tech => published}/20180824 Steam Makes it Easier to Play Windows Games on Linux.md (100%) diff --git a/translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md b/published/20180824 Steam Makes it Easier to Play Windows Games on Linux.md similarity index 100% rename from translated/tech/20180824 Steam Makes it Easier to Play Windows Games on Linux.md rename to published/20180824 Steam Makes it Easier to Play Windows Games on Linux.md From 53f290e1d854fd8584e6c3b7a96be75cf6a19f9c Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 27 Sep 2018 08:59:31 +0800 Subject: [PATCH 051/736] translated --- .../tech/20180824 5 cool music player apps.md | 110 ------------------ .../tech/20180824 5 cool music player apps.md | 108 +++++++++++++++++ 2 files changed, 108 insertions(+), 110 deletions(-) delete mode 100644 sources/tech/20180824 5 cool music player apps.md create mode 100644 translated/tech/20180824 5 cool music player apps.md diff --git a/sources/tech/20180824 5 cool music player apps.md b/sources/tech/20180824 5 cool music player apps.md deleted file mode 100644 index fbacc8f8b4..0000000000 --- a/sources/tech/20180824 5 cool music player apps.md +++ /dev/null @@ -1,110 +0,0 @@ -translating---geekpi - -5 cool music player apps -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/08/5-cool-music-apps-816x345.jpg) -Do you like music? Then Fedora may have just what you’re looking for. This article introduces different music player apps that run on Fedora. You’re covered whether you have an extensive music library, a small one, or none at all. Here are four graphical application and one terminal-based music player that will have you jamming. - -### Quod Libet - -Quod Libet is a complete manager for your large audio library. If you have an extensive audio library that you would like not just listen to, but also manage, Quod Libet might a be a good choice for you. 
- -![][1] - -Quod Libet can import music from multiple locations on your disk, and allows you to edit tags of the audio files — so everything is under your control. As a bonus, there are various plugins available for anything from a simple equalizer to a [last.fm][2] sync. You can also search and play music directly from [Soundcloud][3]. - -Quod Libet works great on HiDPI screens, and is available as an RPM in Fedora or on [Flathub][4] in case you run [Silverblue][5]. Install it using Gnome Software or the command line: -``` -$ sudo dnf install quodlibet - -``` - -### Audacious - -If you like a simple music player that could even look like the legendary Winamp, Audacious might be a good choice for you. - -![][6] - -Audacious probably won’t manage all your music at once, but it works great if you like to organize your music as files. You can also export and import playlists without reorganizing the music files themselves. - -As a bonus, you can make it look likeWinamp. To make it look the same as on the screenshot above, go to Settings / Appearance, select Winamp Classic Interface at the top, and choose the Refugee skin right below. And Bob’s your uncle! - -Audacious is available as an RPM in Fedora, and can be installed using the Gnome Software app or the following command on the terminal: -``` -$ sudo dnf install audacious - -``` - -### Lollypop - -Lollypop is a music player that provides great integration with GNOME. If you enjoy how GNOME looks, and would like a music player that’s nicely integrated, Lollypop could be for you. - -![][7] - -Apart from nice visual integration with the GNOME Shell, it woks nicely on HiDPI screens, and supports a dark theme. - -As a bonus, Lollypop has an integrated cover art downloader, and a so-called Party Mode (the note button at the top-right corner) that selects and plays music automatically for you. It also integrates with online services such as [last.fm][2] or [libre.fm][8]. 
- -Available as both an RPM in Fedora or a [Flathub][4] for your [Silverblue][5] workstation, install it using the Gnome Software app or using the terminal: -``` -$ sudo dnf install lollypop - -``` - -### Gradio - -What if you don’t own any music, but still like to listen to it? Or you just simply love radio? Then Gradio is here for you. - -![][9] - -Gradio is a simple radio player that allows you to search and play internet radio stations. You can find them by country, language, or simply using search. As a bonus, it’s visually integrated into GNOME Shell, works great with HiDPI screens, and has an option for a dark theme. - -Gradio is available on [Flathub][4] which works with both Fedora Workstation and [Silverblue][5]. Install it using the Gnome Software app. - -### sox - -Do you like using the terminal instead, and listening to some music while you work? You don’t have to leave the terminal thanks to sox. - -![][10] - -sox is a very simple, terminal-based music player. All you need to do is to run a command such as: -``` -$ play file.mp3 - -``` - -…and sox will play it for you. Apart from individual audio files, sox also supports playlists in the m3u format. - -As a bonus, because sox is a terminal-based application, you can run it over ssh. Do you have a home server with speakers attached to it? Or do you want to play music from a different computer? Try using it together with [tmux][11], so you can keep listening even when the session closes. - -sox is available in Fedora as an RPM. 
Install it by running: -``` -$ sudo dnf install sox - -``` - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/5-cool-music-player-apps/ - -作者:[Adam Šamalík][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://fedoramagazine.org/author/asamalik/ -[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-300x217.png -[2]:https://last.fm -[3]:https://soundcloud.com/ -[4]:https://flathub.org/home -[5]:https://teamsilverblue.org/ -[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-300x136.png -[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-300x172.png -[8]:https://libre.fm -[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio.png -[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-300x179.png -[11]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/ diff --git a/translated/tech/20180824 5 cool music player apps.md b/translated/tech/20180824 5 cool music player apps.md new file mode 100644 index 0000000000..fb301ed4dd --- /dev/null +++ b/translated/tech/20180824 5 cool music player apps.md @@ -0,0 +1,108 @@ +5 个很酷的音乐播放器 +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/08/5-cool-music-apps-816x345.jpg) +你喜欢音乐吗?那么 Fedora 中可能有你正在寻找的东西。本文介绍在 Fedora 上运行的不同音乐播放器。无论你有大量的音乐库,还是小型音乐库,或者根本没有音乐库,你都会被覆盖到。这里有四个图形程序和一个基于终端的音乐播放器,可以让你挑选。 + +### Quod Libet + +Quod Libet 是你的大型音频库的管理员。如果你有一个大量的音频库,你不想只听,但也要管理,Quod Libet 可能是一个很好的选择。 + +![][1] + +Quod Libet 可以从磁盘上的多个位置导入音乐,并允许你编辑音频文件的标签 - 因此一切都在你的控制之下。额外地,它还有各种插件可用,从简单的均衡器到 [last.fm][2] 同步。你也可以直接从 [Soundcloud][3] 搜索和播放音乐。 + +Quod Libet 在 HiDPI 屏幕上工作得很好,它有 Fedora 的 RPM 包,如果你运行[Silverblue][5],它在 [Flathub][4] 中也有。使用 Gnome Software 或命令行安装它: +``` +$ sudo dnf install quodlibet + +``` + +### Audacious + 
+如果你喜欢简单的音乐播放器,甚至可能看起来像传说中的 Winamp,Audacious 可能是你的不错选择。 + +![][6] + +Audacious 可能不会立即管理你的所有音乐,但你如果想将音乐组织为文件,它能做得很好。你还可以导出和导入播放列表,而无需重新组织音乐文件本身。 + +额外地,你可以让它看起来像 Winamp。要让它与上面的截图相同,请进入 “Settings/Appearance,”,选择顶部的 “Winamp Classic Interface”,然后选择右下方的 “Refugee” 皮肤。而鲍勃是你的叔叔!这就完成了。 + +Audacious 在 Fedora 中作为 RPM 提供,可以使用 Gnome Software 或在终端运行以下命令安装: +``` +$ sudo dnf install audacious + +``` + +### Lollypop + +Lollypop 是一个音乐播放器,它与 GNOME 集成良好。如果你喜欢 GNOME 的外观,并且想要一个集成良好的音乐播放器,Lollypop 可能适合你。 + +![][7] + +除了与 GNOME Shell 的良好视觉集成之外,它还可以很好地用于 HiDPI 屏幕,并支持黑暗主题。 + +额外地,Lollypop 有一个集成的封面下载器和一个所谓的派对模式(右上角的音符按钮),它可以自动选择和播放音乐。它还集成了 [last.fm][2] 或 [libre.fm][8] 等在线服务。 + +它有 Fedora 的 RPM 也有用于 [Silverblue][5] 工作站的 [Flathub][4],使用 Gnome Software 或终端进行安装: +``` +$ sudo dnf install lollypop + +``` + +### Gradio + +如果你没有任何音乐但仍喜欢听怎么办?或者你只是喜欢收音机?Gradio 就是为你准备的。 + +![][9] + +Gradio 是一个简单的收音机,它允许你搜索和播放网络电台。你可以按国家、语言或直接搜索找到它们。额外地,它可视化地集成到了 GNOME Shell 中,可以与 HiDPI 屏幕配合使用,并且可以选择黑暗主题。 + +可以在 [Flathub][4] 中找到 Gradio,它同时可以运行在 Fedora Workstation 和 [Silverblue][5] 中。使用 Gnome Software 安装它 + +### sox + +你喜欢使用终端在工作时听一些音乐吗?多亏有了 sox,你不必离开终端。 + +![][10] + +sox 是一个非常简单的基于终端的音乐播放器。你需要做的就是运行如下命令: +``` +$ play file.mp3 + +``` + +接着 sox 就会为你播放。除了单独的音频文件外,sox 还支持 m3u 格式的播放列表。 + +额外地,因为 sox 是基于终端的程序,你可以在 ssh 中运行它。你有一个带扬声器的家用服务器吗?或者你想从另一台电脑上播放音乐吗?尝试将它与 [tmux][11] 一起使用,这样即使会话关闭也可以继续听。 + +sox 在 Fedora 中以 RPM 提供。运行下面的命令安装: +``` +$ sudo dnf install sox + +``` + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/5-cool-music-player-apps/ + +作者:[Adam Šamalík][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fedoramagazine.org/author/asamalik/ +[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-300x217.png +[2]:https://last.fm +[3]:https://soundcloud.com/ 
+[4]:https://flathub.org/home
+[5]:https://teamsilverblue.org/
+[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-300x136.png
+[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-300x172.png
+[8]:https://libre.fm
+[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio.png
+[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-300x179.png
+[11]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/

From 424e63e8e49a0bc7d4004d12516c5b98073110a9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 27 Sep 2018 09:05:24 +0800
Subject: [PATCH 052/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Hegemon=20?=
 =?UTF-8?q?=E2=80=93=20A=20Modular=20System=20Monitor=20Application=20Writ?=
 =?UTF-8?q?ten=20In=20Rust?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...tem Monitor Application Written In Rust.md | 78 +++++++++++++++++++
 1 file changed, 78 insertions(+)
 create mode 100644 sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md

diff --git a/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md
new file mode 100644
index 0000000000..14f6a2e947
--- /dev/null
+++ b/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md
@@ -0,0 +1,78 @@
+Hegemon – A Modular System Monitor Application Written In Rust
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png)
+
+When it comes to monitoring running processes in Unix-like systems, the most commonly used applications are **top** and **htop** , which is an enhanced version of top. My personal favorite is htop. However, the developers are releasing a few alternatives to these applications every now and then. One such alternative to the top and htop utilities is **Hegemon**.
It is a modular system monitor application written using the **Rust** programming language.
+
+Concerning the features of Hegemon, we can list the following:
+
+  * Hegemon will monitor the usage of CPU, memory and Swap.
+  * It monitors the system’s temperature and fan speed.
+  * The update interval is adjustable. The default value is 3 seconds.
+  * We can reveal more detailed graphs and additional information by expanding the data streams.
+  * Unit tests
+  * Clean interface
+  * Free and open source.
+
+
+
+### Installing Hegemon
+
+Make sure you have installed **Rust 1.26** or a later version. To install Rust in your Linux distribution, refer to the following guide:
+
+[Install Rust Programming Language In Linux][2]
+
+Also, install the [libsensors][1] library. It is available in the default repositories of most Linux distributions. For example, you can install it on RPM-based systems such as Fedora using the following command:
+
+```
+$ sudo dnf install lm_sensors-devel
+```
+
+On Debian-based systems like Ubuntu and Linux Mint, it can be installed using the command:
+
+```
+$ sudo apt-get install libsensors4-dev
+```
+
+Once you have installed Rust and libsensors, install Hegemon using the command:
+
+```
+$ cargo install hegemon
+```
+
+Once hegemon is installed, start monitoring the running processes in your Linux system using the command:
+
+```
+$ hegemon
+```
+
+Here is the sample output from my Arch Linux desktop.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif)
+
+To exit, press **Q**.
+
+
+Please be mindful that hegemon is still in its early development stage and it is not a complete replacement for the **top** command. There might be bugs and missing features. If you come across any bugs, report them on the project’s GitHub page. The developer is planning to bring more features in the upcoming versions. So, keep an eye on this project.
+
+And, that’s all for now. Hope this helps. More good stuff to come. Stay tuned!
+
+Cheers!
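Since hegemon requires Rust 1.26 or later, it can be handy to check your toolchain before running `cargo install`. The snippet below is only a sketch of the version comparison; the `1.28.0` value is a made-up stand-in for whatever `rustc --version` reports on your machine.

```shell
# Compare an installed rustc version against the 1.26 minimum noted above.
# "1.28.0" is a placeholder for: rustc --version | awk '{print $2}'
ver="1.28.0"
min="1.26.0"
awk -v v="$ver" -v m="$min" 'BEGIN {
    split(v, a, "."); split(m, b, ".")
    if (a[1] > b[1] || (a[1] == b[1] && a[2] >= b[2]))
        print "rustc " v " meets the " m " minimum"
    else
        print "rustc " v " is too old for hegemon"
}'
```

On a real system you would set `ver=$(rustc --version | awk '{print $2}')` instead of the placeholder.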
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://github.com/lm-sensors/lm-sensors +[2]: https://www.ostechnix.com/install-rust-programming-language-in-linux/ From 40f74a4ac5ff341a4b5d656ddbebfeb606897db6 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 27 Sep 2018 09:08:01 +0800 Subject: [PATCH 053/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20CPU=20Power=20Man?= =?UTF-8?q?ager=20=E2=80=93=20Control=20And=20Manage=20CPU=20Frequency=20I?= =?UTF-8?q?n=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ntrol And Manage CPU Frequency In Linux.md | 74 +++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 sources/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md diff --git a/sources/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md b/sources/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md new file mode 100644 index 0000000000..aeffd1f144 --- /dev/null +++ b/sources/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md @@ -0,0 +1,74 @@ +CPU Power Manager – Control And Manage CPU Frequency In Linux +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/Manage-CPU-Frequency-720x340.jpeg) + +If you are a laptop user, you probably know that power management on Linux isn’t really as good as on other OSes. 
While there are tools like **TLP** , [**Laptop Mode Tools** and **powertop**][1] to help reduce power consumption, overall battery life on Linux isn’t as good as on Windows or Mac OS. Another way to reduce power consumption is to limit the frequency of your CPU. While this is something that has always been doable, it generally requires complicated terminal commands, making it inconvenient. But fortunately, there’s a GNOME extension that helps you easily set and manage your CPU’s frequency – **CPU Power Manager**. CPU Power Manager uses the **intel_pstate** frequency scaling driver (supported by almost every Intel CPU) to control and manage CPU frequency in your GNOME desktop.
+
+Another reason to use this extension is to reduce heating in your system. There are many systems out there which can get uncomfortably hot in normal usage. Limiting your CPU’s frequency could reduce heating. It will also decrease the wear and tear on your CPU and other components.
+
+### Installing CPU Power Manager
+
+First, go to the [**extension’s page**][2], and install the extension.
+
+Once the extension has been installed, you’ll get a CPU icon at the right side of the Gnome top bar. Click the icon, and you get an option to install the extension:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-icon.png)
+
+If you click **“Attempt Installation”** , you’ll get a password prompt. The extension needs root privileges to add a policykit rule for controlling CPU frequency. This is what the prompt looks like:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-1.png)
+
+Type in your password and click **“Authenticate”** , and that finishes the installation. The last action adds a policykit file – **mko.cpupower.setcpufreq.policy** – at **/usr/share/polkit-1/actions**.
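For reference, the frequency values such tools display come from the cpufreq sysfs interface exposed by the intel_pstate driver. The sketch below reads it directly; the sysfs path is the standard cpufreq location, but the `2400000` kHz fallback is an invented sample for machines (VMs, containers) where the node is absent.

```shell
# Read cpu0's current frequency from sysfs, falling back to a sample value
# (2400000 kHz is made up) when the cpufreq node is unavailable.
f=/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
if [ -r "$f" ]; then khz=$(cat "$f"); else khz=2400000; fi
# kHz -> GHz, the unit the extension shows in the top bar
awk -v k="$khz" 'BEGIN { printf "cpu0: %.2f GHz\n", k / 1000000 }'
```

The same directory also holds `scaling_min_freq` and `scaling_max_freq`, which correspond to the minimum/maximum limits the extension manipulates.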
+
+After installation is complete, if you click the CPU icon at the top right, you’ll get something like this:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager.png)
+
+### Features
+
+  * **See the current CPU frequency:** Obviously, you can use this window to see the frequency that your CPU is running at.
+  * **Set maximum and minimum frequency:** With this extension, you can set maximum and minimum frequency limits in terms of a percentage of the max frequency. Once these limits are set, the CPU will operate only in this range of frequencies.
+  * **Turn Turbo Boost On and Off:** This is my favorite feature. Most Intel CPUs have a “Turbo Boost” feature, whereby one of the cores of the CPU is boosted past the normal maximum frequency for extra performance. While this can make your system more performant, it also increases power consumption a lot. So if you aren’t doing anything intensive, it’s nice to be able to turn off Turbo Boost and save power. In fact, in my case, I have Turbo Boost turned off most of the time.
+  * **Make Profiles:** You can make profiles with max and min frequencies that you can turn on/off easily instead of fiddling with the max and min frequencies by hand.
+
+
+
+### Preferences
+
+You can also customize the extension via the preferences window:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences.png)
+
+As you can see, you can set whether the CPU frequency is to be displayed, and whether to display it in **MHz** or **GHz**.
+
+You can also edit and create/delete profiles:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences-1.png)
+
+You can set maximum and minimum frequencies, and turbo boost for each profile.
+
+### Conclusion
+
+As I said in the beginning, power management on Linux is not the best, and many people are always looking to eke out a few more minutes from their Linux laptop. If you are one of those, check out this extension.
This is an unconventional method to save power, but it does work. I certainly love this extension, and have been using it for a few months now.
+
+What do you think about this extension? Put your thoughts in the comments below!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/cpu-power-manager-control-and-manage-cpu-frequency-in-linux/
+
+作者:[EDITOR][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/editor/
+[1]: https://www.ostechnix.com/improve-laptop-battery-performance-linux/
+[2]: https://extensions.gnome.org/extension/945/cpu-power-manager/

From 7e099356134a14883aa35081fbda3d25a61b0fea Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 27 Sep 2018 09:08:10 +0800
Subject: [PATCH 054/736] translating

---
 ...led Packages And Restore Them On Freshly Installed Ubuntu.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md b/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md
index d5927effee..c775fd5040 100644
--- a/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md
+++ b/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu
 ======

From 8a1bd55b95dae3a227404ca4b0d9a88966f5a0a7 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 27 Sep 2018 09:11:35 +0800
Subject: [PATCH 055/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Boot?=
 =?UTF-8?q?=20Ubuntu=2018.04=20/=20Debian=209=20Server=20in=20Rescue=20(Si?=
=?UTF-8?q?ngle=20User=20mode)=20/=20Emergency=20Mode?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...cue (Single User mode) - Emergency Mode.md | 88 +++++++++++++++++++
 1 file changed, 88 insertions(+)
 create mode 100644 sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md

diff --git a/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md b/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
new file mode 100644
index 0000000000..ff33e7c175
--- /dev/null
+++ b/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
@@ -0,0 +1,88 @@
+How to Boot Ubuntu 18.04 / Debian 9 Server in Rescue (Single User mode) / Emergency Mode
+======
+Booting a Linux server into single user mode or **rescue mode** is one of the important troubleshooting steps that a Linux admin usually follows while recovering the server from critical conditions. In Ubuntu 18.04 and Debian 9, single user mode is known as rescue mode.
+
+Apart from rescue mode, Linux servers can be booted in **emergency mode**. The main difference between them is that emergency mode loads a minimal environment with a read-only root file system, and it does not enable any network or other services. But rescue mode tries to mount all the local file systems & tries to start some important services, including the network.
+
+In this article we will discuss how we can boot our Ubuntu 18.04 LTS / Debian 9 Server in rescue mode and emergency mode.
+
+#### 
+
+#### Booting Ubuntu 18.04 LTS Server in Single User / Rescue Mode:
+
+Reboot your server, and at the boot loader (GRUB) screen select “ **Ubuntu** “. The bootloader screen will look like below,
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Bootloader-Screen-Ubuntu18-04-Server.jpg)
+
+Press “ **e** ”, then go to the end of the line which starts with the word “ **linux** ” and append “ **systemd.unit=rescue.target** “. Remove the word “ **$vt_handoff** ” if it exists.
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-target-ubuntu18-04.jpg)
+
+Now press Ctrl-x or F10 to boot,
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-mode-ubuntu18-04.jpg)
+
+Press Enter and you will get a shell where all file systems are mounted in read-write mode; do your troubleshooting there. Once you are done with troubleshooting, you can reboot your server using the “ **reboot** ” command.
+
+#### Booting Ubuntu 18.04 LTS Server in emergency mode
+
+Reboot the server, go to the boot loader screen, select “ **Ubuntu** ”, press “ **e** ”, go to the end of the line which starts with the word linux, and append “ **systemd.unit=emergency.target** ”
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergecny-target-ubuntu18-04-server.jpg)
+
+Now press Ctrl-x or F10 to boot in emergency mode; you will get a shell and can do the troubleshooting from there. As discussed earlier, in emergency mode the file systems are mounted in read-only mode and there is no networking,
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg)
+
+Use the below command to mount the root file system in read-write mode,
+
+```
+# mount -o remount,rw /
+
+```
+
+Similarly, you can remount the rest of the file systems in read-write mode.
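The GRUB edit described above is easy to get wrong by hand, so it can help to see the string transformation spelled out. The sketch below is only an illustration of the edit you perform manually in the GRUB editor (it does not touch GRUB itself), and the sample kernel line is hypothetical:

```python
# Illustrative sketch of the manual GRUB edit: drop "$vt_handoff"
# (if present) and append a systemd.unit= argument to the linux line.

def rescue_kernel_line(linux_line, target="rescue.target"):
    """Return the kernel command line edited for rescue/emergency boot."""
    parts = [p for p in linux_line.split() if p != "$vt_handoff"]
    parts.append("systemd.unit=" + target)
    return " ".join(parts)

# Hypothetical line as it might appear in the GRUB editor:
line = "linux /vmlinuz-4.15.0-29-generic root=/dev/mapper/ubuntu--vg-root ro $vt_handoff"
print(rescue_kernel_line(line))
# linux /vmlinuz-4.15.0-29-generic root=/dev/mapper/ubuntu--vg-root ro systemd.unit=rescue.target
```

Passing `target="emergency.target"` produces the emergency-mode variant of the same line.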
+
+#### Booting Debian 9 into Rescue & Emergency Mode
+
+Reboot your Debian 9.x server, go to the GRUB screen, and select “ **Debian GNU/Linux** ”
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Debian9-Grub-Screen.jpg)
+
+Press “ **e** ” and go to the end of the line which starts with the word linux. Append “ **systemd.unit=rescue.target** ” to boot the system in rescue mode, or “ **systemd.unit=emergency.target** ” to boot it in emergency mode.
+
+#### Rescue mode:
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-mode-Debian9.jpg)
+
+Now press Ctrl-x or F10 to boot in rescue mode
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-Mode-Shell-Debian9.jpg)
+
+Press Enter to get the shell, and from there you can start troubleshooting.
+
+#### Emergency Mode:
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-target-grub-debian9.jpg)
+
+Now press Ctrl-x or F10 to boot your system in emergency mode
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg)
+
+Press Enter to get the shell and use the “ **mount -o remount,rw /** ” command to mount the root file system in read-write mode.
+
+**Note:** If a root password is already set on your Ubuntu 18.04 or Debian 9 server, you must enter the root password to get a shell in rescue and emergency mode.
+
+That’s all from this article; please do share your feedback and comments if you liked it.
+ + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/boot-ubuntu-18-04-debian-9-rescue-emergency-mode/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linuxtechi.com/author/pradeep/ From 6b52471d8f5834463e97f3f4c6b5e3680ee2e1c8 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 27 Sep 2018 09:19:52 +0800 Subject: [PATCH 056/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20An=20introduction?= =?UTF-8?q?=20to=20swap=20space=20on=20Linux=20systems?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...oduction to swap space on Linux systems.md | 300 ++++++++++++++++++ 1 file changed, 300 insertions(+) create mode 100644 sources/tech/20180926 An introduction to swap space on Linux systems.md diff --git a/sources/tech/20180926 An introduction to swap space on Linux systems.md b/sources/tech/20180926 An introduction to swap space on Linux systems.md new file mode 100644 index 0000000000..036890ef4b --- /dev/null +++ b/sources/tech/20180926 An introduction to swap space on Linux systems.md @@ -0,0 +1,300 @@ +An introduction to swap space on Linux systems +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh) + +Swap space is a common aspect of computing today, regardless of operating system. Linux uses swap space to increase the amount of virtual memory available to a host. It can use one or more dedicated swap partitions or a swap file on a regular filesystem or logical volume. + +There are two basic types of memory in a typical computer. The first type, random access memory (RAM), is used to store data and programs while they are being actively used by the computer. 
Programs and data cannot be used by the computer unless they are stored in RAM. RAM is volatile memory; that is, the data stored in RAM is lost if the computer is turned off. + +Hard drives are magnetic media used for long-term storage of data and programs. Magnetic media is nonvolatile; the data stored on a disk remains even when power is removed from the computer. The CPU (central processing unit) cannot directly access the programs and data on the hard drive; it must be copied into RAM first, and that is where the CPU can access its programming instructions and the data to be operated on by those instructions. During the boot process, a computer copies specific operating system programs, such as the kernel and init or systemd, and data from the hard drive into RAM, where it is accessed directly by the computer’s processor, the CPU. + +### Swap space + +Swap space is the second type of memory in modern Linux systems. The primary function of swap space is to substitute disk space for RAM memory when real RAM fills up and more space is needed. + +For example, assume you have a computer system with 8GB of RAM. If you start up programs that don’t fill that RAM, everything is fine and no swapping is required. But suppose the spreadsheet you are working on grows when you add more rows, and that, plus everything else that's running, now fills all of RAM. Without swap space available, you would have to stop working on the spreadsheet until you could free up some of your limited RAM by closing down some other programs. + +The kernel uses a memory management program that detects blocks, aka pages, of memory in which the contents have not been used recently. The memory management program swaps enough of these relatively infrequently used pages of memory out to a special partition on the hard drive specifically designated for “paging,” or swapping. This frees up RAM and makes room for more data to be entered into your spreadsheet. 
Those pages of memory swapped out to the hard drive are tracked by the kernel’s memory management code and can be paged back into RAM if they are needed.
+
+The total amount of memory in a Linux computer is the RAM plus swap space and is referred to as virtual memory.
+
+### Types of Linux swap
+
+Linux provides for two types of swap space. By default, most Linux installations create a swap partition, but it is also possible to use a specially configured file as a swap file. A swap partition is just what its name implies—a standard disk partition that is designated as swap space by the `mkswap` command.
+
+A swap file can be used if there is no free disk space in which to create a new swap partition or space in a volume group where a logical volume can be created for swap space. This is just a regular file that is created and preallocated to a specified size. Then the `mkswap` command is run to configure it as swap space. I don’t recommend using a file for swap space unless absolutely necessary.
+
+### Thrashing
+
+Thrashing can occur when total virtual memory, both RAM and swap space, becomes nearly full. The system spends so much time paging blocks of memory between swap space and RAM and back that little time is left for real work. The typical symptoms of this are obvious: The system becomes slow or completely unresponsive, and the hard drive activity light is on almost constantly.
+
+If you can manage to issue a command like `top` that shows CPU load and memory usage, you will see that the CPU load is very high, perhaps as much as 30 to 40 times the number of CPU cores in the system. Another symptom is that both RAM and swap space are almost completely allocated.
+
+After the fact, looking at SAR (system activity report) data can also show these symptoms. I install SAR on every system I work on and use it for post-repair forensic analysis.
+
+### What is the right amount of swap space?
+ +Many years ago, the rule of thumb for the amount of swap space that should be allocated on the hard drive was 2X the amount of RAM installed in the computer (of course, that was when most computers' RAM was measured in KB or MB). So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. This rule took into account the facts that RAM sizes were typically quite small at that time and that allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than actually performing useful work. + +RAM has become an inexpensive commodity and most computers these days have amounts of RAM that extend into tens of gigabytes. Most of my newer computers have at least 8GB of RAM, one has 32GB, and my main workstation has 64GB. My older computers have from 4 to 8 GB of RAM. + +When dealing with computers having huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. The Fedora 28 online Installation Guide, which can be found online at [Fedora Installation Guide][1], defines current thinking about swap space allocation. I have included below some discussion and the table of recommendations from that document. + +The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage. 
+ +_Table 1: Recommended system swap space in Fedora 28 documentation_ + +| **Amount of system RAM** | **Recommended swap space** | **Recommended swap with hibernation** | +|--------------------------|-----------------------------|---------------------------------------| +| less than 2 GB | 2 times the amount of RAM | 3 times the amount of RAM | +| 2 GB - 8 GB | Equal to the amount of RAM | 2 times the amount of RAM | +| 8 GB - 64 GB | 0.5 times the amount of RAM | 1.5 times the amount of RAM | +| more than 64 GB | workload dependent | hibernation not recommended | + +At the border between each range listed above (for example, a system with 2 GB, 8 GB, or 64 GB of system RAM), use discretion with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance. + +Of course, most Linux administrators have their own ideas about the appropriate amount of swap space—as well as pretty much everything else. Table 2, below, contains my recommendations based on my personal experiences in multiple environments. These may not work for you, but as with Table 1, they may help you get started. + +_Table 2: Recommended system swap space per the author_ + +| Amount of RAM | Recommended swap space | +|---------------|------------------------| +| ≤ 2GB | 2X RAM | +| 2GB – 8GB | = RAM | +| >8GB | 8GB | + +One consideration in both tables is that as the amount of RAM increases, beyond a certain point adding more swap space simply leads to thrashing well before the swap space even comes close to being filled. If you have too little virtual memory while following these recommendations, you should add more RAM, if possible, rather than more swap space. As with all recommendations that affect system performance, use what works best for your specific environment. This will take time and effort to experiment and make changes based on the conditions in your Linux environment. 
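For scripting purposes, the rule of thumb in Table 2 reduces to a tiny function. The sketch below is my own illustration of that table, not part of any distribution tooling; border cases (exactly 2GB or 8GB) are a judgment call, as noted above:

```python
def recommended_swap_gb(ram_gb):
    """Recommended swap size (GB) per Table 2: 2X RAM up to 2GB,
    equal to RAM from 2GB to 8GB, capped at 8GB beyond that."""
    if ram_gb <= 2:
        return 2 * ram_gb
    if ram_gb <= 8:
        return ram_gb
    return 8

for ram in (1, 2, 4, 8, 16, 64):
    print("%d GB RAM -> %d GB swap" % (ram, recommended_swap_gb(ram)))
```

Running it prints the recommendation for a few sample RAM sizes; as with the tables themselves, treat the result as a starting point, not a rule.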
+
+#### Adding more swap space to a non-LVM disk environment
+
+Due to changing requirements for swap space on hosts with Linux already installed, it may become necessary to modify the amount of swap space defined for the system. This procedure can be used for any general case where the amount of swap space needs to be increased. It assumes sufficient disk space is available. This procedure also assumes that the disks are partitioned in “raw” EXT4 and swap partitions and do not use logical volume management (LVM).
+
+The basic steps to take are simple:
+
+ 1. Turn off the existing swap space.
+
+ 2. Create a new swap partition of the desired size.
+
+ 3. Reread the partition table.
+
+ 4. Configure the partition as swap space.
+
+ 5. Add the new partition to /etc/fstab.
+
+ 6. Turn on swap.
+
+
+
+
+A reboot should not be necessary.
+
+For safety's sake, before turning off swap, at the very least you should ensure that no applications are running and that no swap space is in use. The `free` or `top` commands can tell you whether swap space is in use. To be even safer, you could revert to run level 1 or single-user mode.
+
+Turn off swap with the following command, which turns off all swap space:
+
+```
+swapoff -a
+
+```
+
+Now display the existing partitions on the hard drive.
+
+```
+fdisk -l
+
+```
+
+This displays the current partition tables on each drive. Identify the current swap partition by number.
+
+Start `fdisk` in interactive mode with the command:
+
+```
+fdisk /dev/
+
+```
+
+For example:
+
+```
+fdisk /dev/sda
+
+```
+
+At this point, `fdisk` is interactive and will operate only on the specified disk drive.
+
+Use the fdisk `p` sub-command to verify that there is enough free space on the disk to create the new swap partition.
The space on the hard drive is shown in terms of 512-byte blocks and starting and ending cylinder numbers, so you may have to do some math to determine the available space between and at the end of allocated partitions. + +Use the `n` sub-command to create a new swap partition. fdisk will ask you the starting cylinder. By default, it chooses the lowest-numbered available cylinder. If you wish to change that, type in the number of the starting cylinder. + +The `fdisk` command now allows you to enter the size of the partitions in a number of formats, including the last cylinder number or the size in bytes, KB or MB. Type in 4000M, which will give about 4GB of space on the new partition (for example), and press Enter. + +Use the `p` sub-command to verify that the partition was created as you specified it. Note that the partition will probably not be exactly what you specified unless you used the ending cylinder number. The `fdisk` command can only allocate disk space in increments on whole cylinders, so your partition may be a little smaller or larger than you specified. If the partition is not what you want, you can delete it and create it again. + +Now it is necessary to specify that the new partition is to be a swap partition. The sub-command `t` allows you to specify the type of partition. So enter `t`, specify the partition number, and when it asks for the hex code partition type, type 82, which is the Linux swap partition type, and press Enter. + +When you are satisfied with the partition you have created, use the `w` sub-command to write the new partition table to the disk. The `fdisk` program will exit and return you to the command prompt after it completes writing the revised partition table. You will probably receive the following message as `fdisk` completes writing the new partition table: + +``` +The partition table has been altered! +Calling ioctl() to re-read partition table. 
+WARNING: Re-reading the partition table failed with error 16: Device or resource busy. +The kernel still uses the old table. +The new table will be used at the next reboot. +Syncing disks. +``` + +At this point, you use the `partprobe` command to force the kernel to re-read the partition table so that it is not necessary to perform a reboot. + +``` +partprobe +``` + +Now use the command `fdisk -l` to list the partitions and the new swap partition should be among those listed. Be sure that the new partition type is “Linux swap”. + +It will be necessary to modify the /etc/fstab file to point to the new swap partition. The existing line may look like this: + +``` +LABEL=SWAP-sdaX   swap        swap    defaults        0 0 + +``` + +where `X` is the partition number. Add a new line that looks similar this, depending upon the location of your new swap partition: + +``` +/dev/sdaY         swap        swap    defaults        0 0 + +``` + +Be sure to use the correct partition number. Now you can perform the final step in creating the swap partition. Use the `mkswap` command to define the partition as a swap partition. + +``` +mkswap /dev/sdaY + +``` + +The final step is to turn swap on using the command: + +``` +swapon -a + +``` + +Your new swap partition is now online along with the previously existing swap partition. You can use the `free` or `top` commands to verify this. + +#### Adding swap to an LVM disk environment + +If your disk setup uses LVM, changing swap space will be fairly easy. Again, this assumes that space is available in the volume group in which the current swap volume is located. By default, the installation procedures for Fedora Linux in an LVM environment create the swap partition as a logical volume. This makes it easy because you can simply increase the size of the swap volume. + +Here are the steps required to increase the amount of swap space in an LVM environment: + + 1. Turn off all swap. + + 2. 
Increase the size of the logical volume designated for swap. + + 3. Configure the resized volume as swap space. + + 4. Turn on swap. + + + + +First, let’s verify that swap exists and is a logical volume using the `lvs` command (list logical volume). + +``` +[root@studentvm1 ~]# lvs +  LV     VG                Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert +  home   fedora_studentvm1 -wi-ao----  2.00g                                                       +  pool00 fedora_studentvm1 twi-aotz--  2.00g               8.17   2.93                             +  root   fedora_studentvm1 Vwi-aotz--  2.00g pool00        8.17                                   +  swap   fedora_studentvm1 -wi-ao----  8.00g                                                       +  tmp    fedora_studentvm1 -wi-ao----  5.00g                                                       +  usr    fedora_studentvm1 -wi-ao---- 15.00g                                                       +  var    fedora_studentvm1 -wi-ao---- 10.00g                                                       +[root@studentvm1 ~]# +``` + +You can see that the current swap size is 8GB. In this case, we want to add 2GB to this swap volume. First, stop existing swap. You may have to terminate running programs if swap space is in use. + +``` +swapoff -a + +``` + +Now increase the size of the logical volume. + +``` +[root@studentvm1 ~]# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap +  Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents). +  Logical volume fedora_studentvm1/swap successfully resized. +[root@studentvm1 ~]# +``` + +Run the `mkswap` command to make this entire 10GB partition into swap space. + +``` +[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap +mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature. 
+Setting up swapspace version 1, size = 10 GiB (10737414144 bytes) +no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a +[root@studentvm1 ~]# +``` + +Turn swap back on. + +``` +[root@studentvm1 ~]# swapon -a +[root@studentvm1 ~]# +``` + +Now verify the new swap space is present with the list block devices command. Again, a reboot is not required. + +``` +[root@studentvm1 ~]# lsblk +NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT +sda                                    8:0    0   60G  0 disk +|-sda1                                 8:1    0    1G  0 part /boot +`-sda2                                 8:2    0   59G  0 part +  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm   +  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   +  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / +  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   +  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm   +  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   +  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / +  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   +  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP] +  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr +  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home +  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var +  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp +sr0                                   11:0    1 1024M  0 rom   +[root@studentvm1 ~]# +``` + +You can also use the `swapon -s` command, or `top`, `free`, or any of several other commands to verify this. 
+ +``` +[root@studentvm1 ~]# free +              total        used        free      shared  buff/cache   available +Mem:        4038808      382404     2754072        4152      902332     3404184 +Swap:      10485756           0    10485756 +[root@studentvm1 ~]# +``` + +Note that the different commands display or require as input the device special file in different forms. There are a number of ways in which specific devices are accessed in the /dev directory. My article, [Managing Devices in Linux][2], includes more information about the /dev directory and its contents. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/swap-space-linux-systems + +作者:[David Both][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/ +[2]: https://opensource.com/article/16/11/managing-devices-linux From 982cca6773a6fdba251cff08e669b1e5bf2fd46c Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 27 Sep 2018 09:22:13 +0800 Subject: [PATCH 057/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=209=20Easiest=20Way?= =?UTF-8?q?s=20To=20Find=20Out=20Process=20ID=20(PID)=20In=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...s To Find Out Process ID (PID) In Linux.md | 208 ++++++++++++++++++ 1 file changed, 208 insertions(+) create mode 100644 sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md diff --git a/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md b/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md new file mode 100644 index 0000000000..ae353bf11f --- /dev/null +++ b/sources/tech/20180925 9 Easiest Ways To Find Out 
Process ID (PID) In Linux.md
@@ -0,0 +1,208 @@
+9 Easiest Ways To Find Out Process ID (PID) In Linux
+======
+Everybody has heard of PIDs, but what exactly is a PID? Why do you need it, and what can you do with it? If you have these questions on your mind, you are in the right place to get all the details.
+
+Mainly, we look up a PID to kill an unresponsive program, much like the Windows Task Manager is used for. Linux GUIs also offer the same feature, but the CLI is an efficient way to perform the kill operation.
+
+### What Is Process ID?
+
+PID stands for process identification number, which is generally used by most operating system kernels, such as Linux, Unix, macOS and Windows. It is a unique identification number that is automatically assigned to each process when it is created in an operating system. A process is a running instance of a program.
+
+**Suggested Read :**
+**(#)** [How To Find Out Which Port Number A Process Is Using In Linux][1]
+**(#)** [3 Easy Ways To Kill Or Terminate A Process In Linux][2]
+
+A new PID is assigned each time a process is created, for every process except init, because init is always the first process on the system and is the ancestor of all other processes. Its PID is 1.
+
+The default maximum value of a PID is `32,768`. You can verify this by running `cat /proc/sys/kernel/pid_max` on your system. On 32-bit systems 32768 is the maximum value, but on 64-bit systems it can be set to any value up to 2^22 (approximately 4 million).
+
+You may ask why we need so many PIDs: because PIDs cannot be reused immediately, which also helps prevent possible errors.
+
+The PID of a running process can be found using the nine methods below: the pidof command, pgrep command, ps command, pstree command, ss command, netstat command, lsof command, fuser command and systemctl command.
+
+Each method is explained with an example below.
+
+ * `pidof:` pidof — find the process ID of a running program.
+ * `pgrep:` pgrep – look up or signal processes based on name and other attributes.
+ * `ps:` ps – report a snapshot of the current processes.
+ * `pstree:` pstree – display a tree of processes.
+ * `ss:` ss is used to dump socket statistics.
+ * `netstat:` netstat – displays a list of open sockets.
+ * `lsof:` lsof – list open files.
+ * `fuser:` fuser – list process IDs of all processes that have one or more files open
+ * `systemctl:` systemctl – Control the systemd system and service manager
+
+
+
+In this tutorial we are going to find the Apache process ID as an example. Make sure to input your own process name instead.
+
+### Method-1 : Using pidof Command
+
+pidof is used to find the process ID of a running program. It prints those IDs to the standard output. To demonstrate this, we are going to find the Apache2 process ID on a Debian 9 (stretch) system.
+
+```
+# pidof apache2
+3754 2594 2365 2364 2363 2362 2361

+```
+
+From the above output you may find it difficult to identify the process ID because it shows all the PIDs (both parent and children) for the process name. Hence we need to find the parent PID (PPID), which is the one we are looking for. Since the output is sorted in descending order, it is the first number. In my case it’s `3754`.
+
+### Method-2 : Using pgrep Command
+
+pgrep looks through the currently running processes and lists the process IDs which match the selection criteria to stdout.
+
+```
+# pgrep apache2
+2361
+2362
+2363
+2364
+2365
+2594
+3754

+```
+
+This is similar to the above output, but it sorts the results in ascending order, which means the parent PID is the last one. In my case it’s `3754`.
+
+**Note:** If a process has more than one process ID, identifying the parent process ID can be tricky when using the pidof & pgrep commands.
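It can help to remember where tools like pidof and pgrep get their data: the `/proc` filesystem. The hedged sketch below is only an illustration of that mechanism, not a replacement for the tools in this article — it walks `/proc` directly and reports each matching PID together with its parent PID (field 4 of `/proc/PID/stat`), which makes the parent/child relationship explicit:

```python
import os

def pids_by_name(name):
    """Return {pid: ppid} for processes whose comm matches name,
    read straight from /proc (Linux only)."""
    result = {}
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # skip non-process entries such as /proc/meminfo
        try:
            with open("/proc/%s/comm" % entry) as f:
                comm = f.read().strip()
            with open("/proc/%s/stat" % entry) as f:
                # /proc/PID/stat is "pid (comm) state ppid ..."; comm may
                # contain spaces, so split after the closing parenthesis.
                ppid = int(f.read().rsplit(")", 1)[1].split()[1])
        except (IOError, OSError):
            continue  # process exited while we were reading
        if comm == name:
            result[int(entry)] = ppid
    return result

# On the example system above, pids_by_name("apache2") would map each
# child PID to the parent PID 3754; the result here depends on your system.
print(pids_by_name("apache2"))
```

A PID whose PPID is not itself in the map (or is 1) is the parent you are looking for.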
+
+### Method-3 : Using pstree Command
+
+pstree shows running processes as a tree. The tree is rooted at either pid, or init if pid is omitted. If a user name is specified, pstree shows all the processes owned by the corresponding user.
+
+pstree visually merges identical branches by putting them in square brackets and prefixing them with the repetition count.
+
+```
+# pstree -p | grep "apache2"
+ |-apache2(3754)-|-apache2(2361)
+ | |-apache2(2362)
+ | |-apache2(2363)
+ | |-apache2(2364)
+ | |-apache2(2365)
+ | `-apache2(2594)

+```
+
+To get the parent process alone, use the following format.
+
+```
+# pstree -p | grep "apache2" | head -1
+ |-apache2(3754)-|-apache2(2361)

+```
+
+The pstree command makes this very simple because it separates the parent and child processes, which is not easy when using the pidof & pgrep commands.
+
+### Method-4 : Using ps Command
+
+ps displays information about a selection of the active processes. It displays the process ID (pid=PID), the terminal associated with the process (tname=TTY), the cumulated CPU time in [DD-]hh:mm:ss format (time=TIME), and the executable name (ucmd=CMD). Output is unsorted by default.
+
+```
+# ps aux | grep "apache2"
+www-data 2361 0.0 0.4 302652 9732 ? S 06:25 0:00 /usr/sbin/apache2 -k start
+www-data 2362 0.0 0.4 302652 9732 ? S 06:25 0:00 /usr/sbin/apache2 -k start
+www-data 2363 0.0 0.4 302652 9732 ? S 06:25 0:00 /usr/sbin/apache2 -k start
+www-data 2364 0.0 0.4 302652 9732 ? S 06:25 0:00 /usr/sbin/apache2 -k start
+www-data 2365 0.0 0.4 302652 8400 ? S 06:25 0:00 /usr/sbin/apache2 -k start
+www-data 2594 0.0 0.4 302652 8400 ? S 06:55 0:00 /usr/sbin/apache2 -k start
+root 3754 0.0 1.4 302580 29324 ? Ss Dec11 0:23 /usr/sbin/apache2 -k start
+root 5648 0.0 0.0 12784 940 pts/0 S+ 21:32 0:00 grep apache2

+```
+
+From the above output we can easily identify the parent process ID (PPID) based on the process start date.
In my case, the apache2 process was started on `Dec11`; that is the parent, and the others are its children. The PID of apache2 is `3754`.
+
+### Method-5: Using ss Command
+
+ss is used to dump socket statistics. It allows showing information similar to netstat. It can display more TCP and state information than other tools.
+
+It can display stats for all kinds of sockets such as PACKET, TCP, UDP, DCCP, RAW, Unix domain, etc.
+
+```
+# ss -tnlp | grep apache2
+LISTEN 0 128 :::80 :::* users:(("apache2",pid=3319,fd=4),("apache2",pid=3318,fd=4),("apache2",pid=3317,fd=4))

+```
+
+### Method-6: Using netstat Command
+
+netstat – Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
+By default, netstat displays a list of open sockets.
+
+If you don’t specify any address families, then the active sockets of all configured address families will be printed. This program is obsolete; its replacement is ss.
+
+```
+# netstat -tnlp | grep apache2
+tcp6 0 0 :::80 :::* LISTEN 3317/apache2

+```
+
+### Method-7: Using lsof Command
+
+lsof – list open files. The Linux lsof command lists information about files that are open by processes running on the system.
+
+```
+# lsof -i -P | grep apache2
+apache2 3317 root 4u IPv6 40518 0t0 TCP *:80 (LISTEN)
+apache2 3318 www-data 4u IPv6 40518 0t0 TCP *:80 (LISTEN)
+apache2 3319 www-data 4u IPv6 40518 0t0 TCP *:80 (LISTEN)

+```
+
+### Method-8: Using fuser Command
+
+The fuser utility shall write to standard output the process IDs of processes running on the local system that have one or more named files open.
+
+```
+# fuser -v 80/tcp
+ USER PID ACCESS COMMAND
+80/tcp: root 3317 F.... apache2
+ www-data 3318 F.... apache2
+ www-data 3319 F.... apache2

+```
+
+### Method-9: Using systemctl Command
+
+systemctl – Control the systemd system and service manager.
This is the replacement for the old SysV init system, and
+most modern Linux operating systems have adopted systemd.
+
+```
+# systemctl status apache2
+● apache2.service - The Apache HTTP Server
+ Loaded: loaded (/lib/systemd/system/apache2.service; disabled; vendor preset: enabled)
+ Drop-In: /lib/systemd/system/apache2.service.d
+ └─apache2-systemd.conf
+ Active: active (running) since Tue 2018-09-25 10:03:28 IST; 3s ago
+ Process: 3294 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
+ Main PID: 3317 (apache2)
+ Tasks: 55 (limit: 4915)
+ Memory: 7.9M
+ CPU: 71ms
+ CGroup: /system.slice/apache2.service
+ ├─3317 /usr/sbin/apache2 -k start
+ ├─3318 /usr/sbin/apache2 -k start
+ └─3319 /usr/sbin/apache2 -k start
+
+Sep 25 10:03:28 ubuntu systemd[1]: Starting The Apache HTTP Server...
+Sep 25 10:03:28 ubuntu systemd[1]: Started The Apache HTTP Server.

+```
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/9-methods-to-check-find-the-process-id-pid-ppid-of-a-running-program-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[1]: https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-using-in-linux/
+[2]: https://www.2daygeek.com/kill-terminate-a-process-in-linux-using-kill-pkill-killall-command/

From cbb1782339f6b596c3188d6e248460475a5c45ce Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 27 Sep 2018 09:24:34 +0800
Subject: [PATCH 058/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20use=20?=
 =?UTF-8?q?the=20Scikit-learn=20Python=20library=20for=20data=20science=20?=
 =?UTF-8?q?projects?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ython library for 
data science projects.md | 258 ++++++++++++++++++
 1 file changed, 258 insertions(+)
 create mode 100644 sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md

diff --git a/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md b/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md
new file mode 100644
index 0000000000..4f5d9aedf6
--- /dev/null
+++ b/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md
@@ -0,0 +1,258 @@
+How to use the Scikit-learn Python library for data science projects
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
+
+The Scikit-learn Python library, initially released in 2007, is commonly used to solve machine learning and data science problems from beginning to end. The versatile library offers an uncluttered, consistent, and efficient API and thorough online documentation.
+
+### What is Scikit-learn?
+
+[Scikit-learn][1] is an open source Python library that has powerful tools for data analysis and data mining. It's available under the BSD license and is built on the following scientific computing libraries:
+
+  * **NumPy**, a library for manipulating multi-dimensional arrays and matrices. It also has an extensive compilation of mathematical functions for performing various calculations.
+  * **SciPy**, an ecosystem consisting of various libraries for completing technical computing tasks.
+  * **Matplotlib**, a library for plotting various charts and graphs.
+
+
+
+Scikit-learn offers an extensive range of built-in algorithms that help you make the most of your data science projects.
+
+Here are the main ways the Scikit-learn library is used.
+
+#### 1. Classification
+
+The [classification][2] tools identify the category associated with provided data. 
For example, they can be used to categorize email messages as either spam or not.
+
+Classification algorithms in Scikit-learn include:
+
+  * Support vector machines (SVMs)
+  * Nearest neighbors
+  * Random forest
+
+
+
+#### 2. Regression
+
+Regression involves creating a model that tries to comprehend the relationship between input and output data. For example, regression tools can be used to understand the behavior of stock prices.
+
+Regression algorithms include:
+
+  * SVMs
+  * Ridge regression
+  * Lasso
+
+
+
+#### 3. Clustering
+
+The Scikit-learn clustering tools are used to automatically group data with the same characteristics into sets. For example, customer data can be segmented based on their localities.
+
+Clustering algorithms include:
+
+  * K-means
+  * Spectral clustering
+  * Mean-shift
+
+
+
+#### 4. Dimensionality reduction
+
+Dimensionality reduction lowers the number of random variables for analysis. For example, to increase the efficiency of visualizations, outlying data may not be considered.
+
+Dimensionality reduction algorithms include:
+
+  * Principal component analysis (PCA)
+  * Feature selection
+  * Non-negative matrix factorization
+
+
+
+#### 5. Model selection
+
+Model selection algorithms offer tools to compare, validate, and select the best parameters and models to use in your data science projects.
+
+Model selection modules that can deliver enhanced accuracy through parameter tuning include:
+
+  * Grid search
+  * Cross-validation
+  * Metrics
+
+
+
+#### 6. Preprocessing
+
+The Scikit-learn preprocessing tools are important in feature extraction and normalization during data analysis. For example, you can use these tools to transform input data—such as text—and apply their features in your analysis.
+
+Preprocessing modules include:
+
+  * Preprocessing
+  * Feature extraction
+
+
+
+### A Scikit-learn library example
+
+Let's use a simple example to illustrate how you can use the Scikit-learn library in your data science projects. 
+
+We'll use the [Iris flower dataset][3], which is incorporated in the Scikit-learn library. The Iris flower dataset contains 150 samples from three flower species:
+
+  * Setosa—labeled 0
+  * Versicolor—labeled 1
+  * Virginica—labeled 2
+
+
+
+The dataset includes the following characteristics of each flower species (in centimeters):
+
+  * Sepal length
+  * Sepal width
+  * Petal length
+  * Petal width
+
+
+
+#### Step 1: Importing the library
+
+Since the Iris dataset is included in the Scikit-learn data science library, we can load it into our workspace as follows:
+
+```
+from sklearn import datasets
+iris = datasets.load_iris()
+```
+
+These commands import the **datasets** module from **sklearn**, then use the **load_iris()** method from **datasets** to load the data into the workspace.
+
+#### Step 2: Getting dataset characteristics
+
+The **datasets** module contains several methods that make it easier to get acquainted with handling data.
+
+In Scikit-learn, a dataset refers to a dictionary-like object that has all the details about the data. The data is stored under the **.data** key, which is an array.
+
+For instance, we can utilize **iris.data** to output information about the Iris flower dataset.
+
+```
+print(iris.data)
+```
+
+Here is the output (the results have been truncated):
+
+```
+[[5.1 3.5 1.4 0.2]
+ [4.9 3.  1.4 0.2]
+ [4.7 3.2 1.3 0.2]
+ [4.6 3.1 1.5 0.2]
+ [5.  3.6 1.4 0.2]
+ [5.4 3.9 1.7 0.4]
+ [4.6 3.4 1.4 0.3]
+ [5.  3.4 1.5 0.2]
+ [4.4 2.9 1.4 0.2]
+ [4.9 3.1 1.5 0.1]
+ [5.4 3.7 1.5 0.2]
+ [4.8 3.4 1.6 0.2]
+ [4.8 3.  1.4 0.1]
+ [4.3 3.  1.1 0.1]
+ [5.8 4.  1.2 0.2]
+ [5.7 4.4 1.5 0.4]
+ [5.4 3.9 1.3 0.4]
+ [5.1 3.5 1.4 0.3]
+```
+
+Let's also use **iris.target** to give us information about the different labels of the flowers. 
+ +``` +print(iris.target) +``` + +Here is the output: + +``` +[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 + 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 + 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 + 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 + 2 2] + +``` + +If we use **iris.target_names** , we'll output an array of the names of the labels found in the dataset. + +``` +print(iris.target_names) +``` + +Here is the result after running the Python code: + +``` +['setosa' 'versicolor' 'virginica'] +``` + +#### Step 3: Visualizing the dataset + +We can use the [box plot][4] to produce a visual depiction of the Iris flower dataset. The box plot illustrates how the data is distributed over the plane through their quartiles. + +Here's how to achieve this: + +``` +import seaborn as sns +box_data = iris.data #variable representing the data array +box_target = iris.target #variable representing the labels array +sns.boxplot(data = box_data,width=0.5,fliersize=5) +sns.set(rc={'figure.figsize':(2,15)}) +``` + +Let's see the result: + +![](https://opensource.com/sites/default/files/uploads/scikit_boxplot.png) + +On the horizontal axis: + + * 0 is sepal length + * 1 is sepal width + * 2 is petal length + * 3 is petal width + + + +The vertical axis is dimensions in centimeters. + +### Wrapping up + +Here is the entire code for this simple Scikit-learn data science tutorial. + +``` +from sklearn import datasets +iris = datasets.load_iris() +print(iris.data) +print(iris.target) +print(iris.target_names) +import seaborn as sns +box_data = iris.data #variable representing the data array +box_target = iris.target #variable representing the labels array +sns.boxplot(data = box_data,width=0.5,fliersize=5) +sns.set(rc={'figure.figsize':(2,15)}) +``` + +Scikit-learn is a versatile Python library you can use to efficiently complete data science projects. 
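
The tutorial above stops at loading and visualizing the data. As a natural next step, one of the classification algorithms listed earlier can be fitted to the same dataset. The following sketch is not part of the original article, and the 70/30 train/test split and the choice of k=3 neighbors are arbitrary illustrative values:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()

# Hold out 30% of the samples so the model is scored on data it has not seen
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42
)

# Fit a k-nearest-neighbors classifier and report accuracy on the held-out set
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```

On this small, well-separated dataset the held-out accuracy usually lands well above 0.9, though the exact figure depends on the split.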
+ +If you want to learn more, check out the tutorials on [LiveEdu][5], such as Andrey Bulezyuk's video on using the Scikit-learn library to create a [machine learning application][6]. + +Do you have any questions or comments? Feel free to share them below. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects + +作者:[Dr.Michael J.Garbade][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/drmjg +[1]: http://scikit-learn.org/stable/index.html +[2]: https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/ +[3]: https://en.wikipedia.org/wiki/Iris_flower_data_set +[4]: https://en.wikipedia.org/wiki/Box_plot +[5]: https://www.liveedu.tv/guides/data-science/ +[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/ From 05902bc1d225b9c9c5d54acd86b2df582a491cc8 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Thu, 27 Sep 2018 09:51:24 +0800 Subject: [PATCH 059/736] translated --- ...ing Command Prettier And Easier To Read.md | 131 ------------------ ...ing Command Prettier And Easier To Read.md | 128 +++++++++++++++++ 2 files changed, 128 insertions(+), 131 deletions(-) delete mode 100644 sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md create mode 100644 translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md diff --git a/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md b/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md deleted file mode 100644 index 7ef713eae4..0000000000 --- 
a/sources/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md +++ /dev/null @@ -1,131 +0,0 @@ -HankChow translating - -Make The Output Of Ping Command Prettier And Easier To Read -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-720x340.png) - -As we all know, the **ping** command is used to check if a target host is reachable or not. Using Ping command, we can send ICMP Echo request to our target host, and verify whether the destination host is up or down. If you use ping command often, I’d like to recommend you to try **“Prettyping”**. Prettyping is just a wrapper for the standard ping tool and makes the output of the ping command prettier, easier to read, colorful and compact. The prettyping runs the standard ping command in the background and parses the output with colors and unicode characters. It is free and open source tool written in **Bash** and **awk** and supports most Unix-like operating systems such as GNU/Linux, FreeBSD and Mac OS X. Prettyping is not only used to make the output of ping command prettier, but also ships with other notable features as listed below. - - * Detects the lost or missing packets and marks them in the output. - * Shows live statistics. The statistics are constantly updated after each response is received, while ping only shows after it ends. - * Smart enough to handle “unknown messages” (like error messages) without messing up the output. - * Avoids printing the repeated messages. - * You can use most common ping parameters with Prettyping. - * Can run as normal user. - * Can be able to redirect the output to a file. - * Requires no installation. Just download the binary, make it executable and run. - * Fast and lightweight. - * And, finally makes the output pretty, colorful and very intuitive. - - - -### Installing Prettyping - -Like I said already, Prettyping does not requires any installation. It is portable application! 
Just download the Prettyping binary file using command: - -``` -$ curl -O https://raw.githubusercontent.com/denilsonsa/prettyping/master/prettyping -``` - -Move the binary file to your $PATH, for example **/usr/local/bin**. - -``` -$ sudo mv prettyping /usr/local/bin -``` - -And, make it executable as like below: - -``` -$ sudo chmod +x /usr/local/bin/prettyping -``` - -It’s that simple. - -### Let us Make The Output Of Ping Command Prettier And Easier To Read - -Once installed, ping any host or IP address and see the ping command output in graphical way. - -``` -$ prettyping ostechnix.com -``` - -Here is the visually displayed ping output: - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-in-action.gif) - -If you run Prettyping without any arguments, it will keep running until you manually stop it by pressing **Ctrl+c**. - -Since Prettyping is just a wrapper to the ping command, you can use most common ping parameters. For instance, you can use **-c** flag to ping a host only a specific number of times, for example **5** : - -``` -$ prettyping -c 5 ostechnix.com -``` - -By default, prettynping displays the output in colored format. Don’t like the colored output? No problem! Use `--nocolor` option. - -``` -$ prettyping --nocolor ostechnix.com -``` - -Similarly, you can disable mult-color support using `--nomulticolor` option: - -``` -$ prettyping --nomulticolor ostechnix.com -``` - -To disable unicode characters, use `--nounicode` option: - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-without-unicode-support.png) - -This can be useful if your terminal does not support **UTF-8**. If you can’t fix the unicode (fonts) in your system, simply pass `--nounicode` option. - -Prettyping can redirect the output to a file as well. The following command will write the output of `prettyping ostechnix.com` command in `ostechnix.txt` file. 
- -``` -$ prettyping ostechnix.com | tee ostechnix.txt -``` - -Prettyping has few more options which helps you to do various tasks, such as, - - * Enable/disable the latency legend. (default value is: enabled) - * Force the output designed to a terminal. (default: auto) - * Use the last “n” pings at the statistics line. (default: 60) - * Override auto-detection of terminal dimensions. - * Override the awk interpreter. (default: awk) - * Override the ping tool. (default: ping) - - - -For more details, view the help section: - -``` -$ prettyping --help -``` - -Even though Prettyping doesn’t add any extra functionality, I personally like the following feature implementations in it: - - * Live statistics – You can see all the live statistics all the time. The standard ping command will only shows the statistics after it ends. - * Compact – You can see a longer timespan at your terminal. - * Prettyping detects missing responses. - - - -If you’re ever looking for a way to visually display the output of the ping command, Prettyping will definitely help. Give it a try, you won’t be disappointed. - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-prettier-and-easier-to-read/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ diff --git a/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md b/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md new file mode 100644 index 0000000000..efca96da23 --- /dev/null +++ b/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md @@ -0,0 +1,128 @@ +如何让 Ping 的输出更简单易读 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-720x340.png) + +众所周知,`ping` 命令可以用来检查目标主机是否可达。使用 `ping` 命令的时候,会发送一个 ICMP Echo 请求,通过目标主机的响应与否来确定目标主机的状态。如果你经常使用 `ping` 命令,你可以尝试一下 `prettyping`。Prettyping 只是将一个标准的 ping 工具增加了一层封装,在运行标准 ping 命令的同时添加了颜色和 unicode 字符解析输出,所以它的输出更漂亮紧凑、清晰易读。它是用 `bash` 和 `awk` 编写的免费开源工具,支持大部分类 Unix 操作系统,包括 GNU/Linux、FreeBSD 和 Mac OS X。Prettyping 除了美化 ping 命令的输出,还有很多值得注意的功能。 + + * 检测丢失的数据包并在输出中标记出来。 + * 显示实时数据。每次收到响应后,都会更新统计数据,而对于普通 ping 命令,只会在执行结束后统计。 + * 能够在输出结果不混乱的前提下灵活处理“未知信息”(例如错误信息)。 + * 能够避免输出重复的信息。 + * 兼容常用的 ping 工具命令参数。 + * 能够由普通用户执行。 + * 可以将输出重定向到文件中。 + * 不需要安装,只需要下载二进制文件,赋予可执行权限即可执行。 + * 快速且轻巧。 + * 输出结果清晰直观。 + + + +### 安装 Prettyping + +如上所述,Prettyping 是一个绿色软件,不需要任何安装,只要使用以下命令下载 Prettyping 二进制文件: + +``` +$ curl -O https://raw.githubusercontent.com/denilsonsa/prettyping/master/prettyping +``` + +将二进制文件放置到 `$PATH`(例如 `/usr/local/bin`)中: + +``` +$ sudo mv prettyping /usr/local/bin +``` + +然后对其赋予可执行权限: + +``` +$ sudo chmod +x /usr/local/bin/prettyping +``` + +就可以使用了。 + +### 让 ping 的输出清晰易读 + +安装完成后,通过 `prettyping` 来 ping 任何主机或 IP 地址,就可以以图形方式查看输出。 + +``` +$ prettyping 
ostechnix.com +``` + +输出效果大概会是这样: + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-in-action.gif) + +如果你不带任何参数执行 `prettyping`,它就会一直运行直到被 ctrl + c 中断。 + +由于 Prettyping 只是一个对普通 ping 命令的封装,所以常用的 ping 参数也是有效的。例如使用 `-c 5` 来指定 ping 一台主机的 5 次: + +``` +$ prettyping -c 5 ostechnix.com +``` + +Prettyping 默认会使用彩色输出,如果你不喜欢彩色的输出,可以加上 `--nocolor` 参数: + +``` +$ prettyping --nocolor ostechnix.com +``` + +同样的,也可以用 `--nomulticolor` 参数禁用多颜色支持: + +``` +$ prettyping --nomulticolor ostechnix.com +``` + +使用 `--nounicode` 参数禁用 unicode 字符: + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-without-unicode-support.png) + +如果你的终端不支持 **UTF-8**,或者无法修复系统中的 unicode 字体,只需要加上 `--nounicode` 参数就能轻松解决。 + +Prettyping 支持将输出的内容重定向到文件中,例如执行以下这个命令会将 `prettyping ostechnix.com` 的输出重定向到 `ostechnix.txt` 中: + +``` +$ prettyping ostechnix.com | tee ostechnix.txt +``` + +Prettyping 还有很多选项帮助你完成各种任务,例如: + + * 启用/禁用延时图例(默认启用) + * 强制按照终端的格式输出(默认自动) + * 在统计数据中统计最后的 n 次 ping(默认 60 次) + * 覆盖对终端尺寸的检测 + * 覆盖 awk 解释器(默认不覆盖) + * 覆盖 ping 工具(默认不覆盖) + + + +查看帮助文档可以了解更多: + +``` +$ prettyping --help +``` + +尽管 prettyping 没有添加任何额外功能,但我个人喜欢它的这些优点: + + * 实时统计 - 可以随时查看所有实时统计信息,标准 `ping` 命令只会在命令执行结束后才显示统计信息。 + * 紧凑的显示 - 可以在终端看到更长的时间跨度。 + * 检测丢失的数据包并显示出来。 + + + +如果你一直在寻找可视化显示 `ping` 命令输出的工具,那么 Prettyping 肯定会有所帮助。尝试一下,你不会失望的。 + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-prettier-and-easier-to-read/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ + From cef6b4d99f7e92f4c82a1b411db32b0072be16fa Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 27 Sep 2018 09:52:20 +0800 Subject: [PATCH 060/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Control=20your=20?= 
=?UTF-8?q?data=20with=20Syncthing:=20An=20open=20source=20synchronization?=
 =?UTF-8?q?=20tool?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ng- An open source synchronization tool.md | 108 ++++++++++++++++++
 1 file changed, 108 insertions(+)
 create mode 100644 sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md

diff --git a/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md b/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md
new file mode 100644
index 0000000000..32be152b4c
--- /dev/null
+++ b/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md
@@ -0,0 +1,108 @@
+Control your data with Syncthing: An open source synchronization tool
+======
+Decide how to store and share your personal information.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
+
+These days, some of our most important possessions—from pictures and videos of family and friends to financial and medical documents—are data. And even as cloud storage services are booming, so are concerns about privacy and lack of control over our personal data. From the PRISM surveillance program to Google [letting app developers scan your personal emails][1], the news is full of reports that should give us all pause regarding the security of our personal information.
+
+[Syncthing][2] can help put your mind at ease. An open source peer-to-peer file synchronization tool that runs on Linux, Windows, Mac, Android, and others (sorry, no iOS), Syncthing uses its own protocol, called [Block Exchange Protocol][3]. In brief, Syncthing lets you synchronize your data across many devices without owning a server. 
+ +### Linux + +In this post, I will explain how to install and synchronize files between a Linux computer and an Android phone. + +Syncthing is readily available for most popular distributions. Fedora 28 includes the latest version. + +To install Syncthing in Fedora, you can either search for it in Software Center or execute the following command: + +``` +sudo dnf install syncthing syncthing-gtk + +``` + +Once it’s installed, open it. You’ll be welcomed by an assistant to help configure Syncthing. Click **Next** until it asks to configure the WebUI. The safest option is to keep the option **Listen on localhost**. That will disable the web interface and keep unauthorized users away. + +![Syncthing in Setup WebUI dialog box][5] + +Syncthing in Setup WebUI dialog box + +Close the dialog. Now that Syncthing is installed, it’s time to share a folder, connect a device, and start syncing. But first, let’s continue with your other client. + +### Android + +Syncthing is available in Google Play and in F-Droid app stores. + +![](https://opensource.com/sites/default/files/uploads/syncthing2.png) + +Once the application is installed, you’ll be welcomed by a wizard. Grant Syncthing permissions to your storage. You might be asked to disable battery optimization for this application. It is safe to do so as we will optimize the app to synchronize only when plugged in and connected to a wireless network. + +Click on the main menu icon and go to **Settings** , then **Run Conditions**. Tick **Always run in** **the background** , **Run only when charging** , and **Run only on wifi**. Now your Android client is ready to exchange files with your devices. + +There are two important concepts to remember in Syncthing: folders and devices. Folders are what you want to share, but you must have a device to share with. Syncthing allows you to share individual folders with different devices. Devices are added by exchanging device IDs. 
A device ID is a unique, cryptographically secure identifier that is created when Syncthing starts for the first time.
+
+### Connecting devices
+
+Now let’s connect your Linux machine and your Android client.
+
+On your Linux computer, open Syncthing, click on the **Settings** icon and click **Show ID**. A QR code will show up.
+
+On your Android mobile, open Syncthing. On the main screen, click the **Devices** tab and press the **+** symbol. In the first field, press the QR code symbol to open the QR scanner.
+
+Point your mobile camera at the computer’s QR code. The **Device ID** field will be populated with your desktop client’s device ID. Give it a friendly name and save. Because adding a device goes two ways, you now need to confirm on the computer client that you want to add the Android mobile. It might take a couple of minutes for your computer client to ask for confirmation. When it does, click **Add**.
+
+![](https://opensource.com/sites/default/files/uploads/syncthing6.png)
+
+In the **New Device** window, you can verify and configure some options about your new device, like the **Device Name** and **Addresses**. If you keep **dynamic**, it will try to auto-discover the device IP, but if you want to force one, you can add it in this field. If you already created a folder (more on this later), you can also share it with this new device.
+
+![](https://opensource.com/sites/default/files/uploads/syncthing7.png)
+
+Your computer and Android are now paired and ready to exchange files. (If you have more than one computer or mobile phone, simply repeat these steps.)
+
+### Sharing folders
+
+Now that the devices you want to sync are already connected, it’s time to share a folder. You can share folders on your computer, and the devices you add to that folder will get a copy. 
+ +To share a folder, go to **Settings** and click **Add Shared Folder** : + +![](https://opensource.com/sites/default/files/uploads/syncthing8.png) + +In the next window, enter the information of the folder you want to share: + +![](https://opensource.com/sites/default/files/uploads/syncthing9.png) + +You can use any label you want. **Folder ID** will be generated randomly and will be used to identify the folder between the clients. In **Path** , click **Browse** and locate the folder you want to share. If you want Syncthing to monitor the folder for changes (such as deletes, new files, etc.), click **Monitor filesystem for changes**. + +Remember, when you share a folder, any change that happens on the other clients will be reflected on every single device. That means that if you share a folder containing pictures with other computers or mobile devices, changes in these other clients will be reflected everywhere. If this is not what you want, you can make your folder “Send Only” so it will send files to the clients, but the other clients’ changes won’t be synced. + +When this is done, go to **Share with Devices** and select the hosts you want to sync with your folder: + +All the devices you select will need to accept the share request; you will get a notification from the devices: + +Just as when you shared the folder, you must configure the new shared folder: + +![](https://opensource.com/sites/default/files/uploads/syncthing12.png) + +Again, here you can define any label, but the ID must match each client. In the folder option, select the destination for the folder and its files. Remember that any change done in this folder will be reflected with every device allowed in the folder. + +These are the steps to connect devices and share folders with Syncthing. It might take a few minutes to start copying, depending on your network settings or if you are not on the same network. + +Syncthing offers many more great features and options. 
Try it—and take control of your data.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/9/take-control-your-data-syncthing
+
+作者:[Michael Zamot][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mzamot
+[1]: https://gizmodo.com/google-says-it-doesnt-go-through-your-inbox-anymore-bu-1827299695
+[2]: https://syncthing.net/
+[3]: https://docs.syncthing.net/specs/bep-v1.html
+[4]: /file/410191
+[5]: https://opensource.com/sites/default/files/uploads/syncthing1.png (Syncthing in Setup WebUI dialog box)

From f068057d362c6ec76f86eeae022fa459306ae2a8 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 27 Sep 2018 09:55:43 +0800
Subject: [PATCH 061/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Instal?=
 =?UTF-8?q?l=20Cinnamon=20Desktop=20on=20Ubuntu?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...w to Install Cinnamon Desktop on Ubuntu.md | 80 +++++++++++++++++++
 1 file changed, 80 insertions(+)
 create mode 100644 sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md

diff --git a/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md b/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md
new file mode 100644
index 0000000000..78ba32e2a2
--- /dev/null
+++ b/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md
@@ -0,0 +1,80 @@
+How to Install Cinnamon Desktop on Ubuntu
+======
+**This tutorial shows you how to install the Cinnamon desktop environment on Ubuntu.**
+
+[Cinnamon][1] is the default desktop environment of [Linux Mint][2]. Unlike the Unity desktop environment in Ubuntu, Cinnamon is a more traditional but elegant-looking desktop environment, with a bottom panel, app menu, etc. 
Many Windows migrants [prefer Linux Mint over Ubuntu][3] because of the Cinnamon desktop and its Windows-resembling user interface.
+
+Now, you don’t need to [install Linux Mint][4] just to try Cinnamon. In this tutorial, I’ll show you **how to install Cinnamon in Ubuntu 18.04, 16.04 and 14.04**.
+
+You should note something before you install the Cinnamon desktop on Ubuntu. Sometimes, installing additional desktop environments leads to conflicts between them. This may result in a broken session, broken applications and features, etc. This is why you should be careful in making this choice.
+
+### How to Install Cinnamon on Ubuntu
+
+![How to install cinnamon desktop on Ubuntu Linux][5]
+
+There used to be a sort-of official PPA from the Cinnamon team for Ubuntu, but it doesn’t exist anymore. Don’t lose heart. There is an unofficial PPA available, and it works perfectly. This PPA provides the latest Cinnamon version.
+
+Open a terminal and use the following commands:
+
+```
+sudo add-apt-repository ppa:embrosyn/cinnamon
+sudo apt update && sudo apt install cinnamon
+
+```
+
+It will download files of around 150 MB in size (if I remember correctly). This also provides you with Nemo (a Nautilus fork) and the Cinnamon Control Center. This bonus stuff gives it a feel closer to Linux Mint.
+
+### Using Cinnamon desktop environment in Ubuntu
+
+Once you have installed Cinnamon, log out of the current session. At the login screen, click on the Ubuntu symbol beside the username:
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Change_Desktop_Environment_Ubuntu.jpeg)
+
+When you do this, it will show you all the desktop environments available for your system. No need to tell you that you have to choose Cinnamon:
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Install_Cinnamon_Ubuntu.jpeg)
+
+Now you should be logged in to Ubuntu with the Cinnamon desktop environment. 
Remember, you can do the same to switch back to Unity. Here is a quick screenshot of what it looked like to run **Cinnamon in Ubuntu**:
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Cinnamon_Ubuntu_1404.jpeg)
+
+It looks just like Linux Mint, doesn’t it? I didn’t find any compatibility issues between Cinnamon and Unity. I switched back and forth between Unity and Cinnamon, and both worked perfectly.
+
+#### Remove Cinnamon from Ubuntu
+
+It is understandable that you might want to uninstall Cinnamon. We will use PPA Purge for this purpose. Let’s install PPA Purge first:
+
+```
+sudo apt-get install ppa-purge
+
+```
+
+Afterward, use the following command to purge the PPA:
+
+```
+sudo ppa-purge ppa:embrosyn/cinnamon
+
+```
+
+In related articles, I suggest you read more about [how to remove PPAs in Linux][6].
+
+I hope this post helps you **install Cinnamon in Ubuntu**. Do share your experience with Cinnamon.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-cinnamon-on-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[1]: http://cinnamon.linuxmint.com/
+[2]: http://www.linuxmint.com/
+[3]: https://itsfoss.com/linux-mint-vs-ubuntu/
+[4]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
+[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/install-cinnamon-ubuntu.png
+[6]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/

From 34d2cd8445f64a60f280c7577f5995cedb1f8ce6 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 27 Sep 2018 10:01:51 +0800
Subject: [PATCH 062/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Distributed=20tra?=
 =?UTF-8?q?cing=20in=20a=20microservices=20world?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...ibuted tracing in a microservices world.md | 113 ++++++++++++++++++
 1 file changed, 113 insertions(+)
 create mode 100644 sources/tech/20180920 Distributed tracing in a microservices world.md

diff --git a/sources/tech/20180920 Distributed tracing in a microservices world.md b/sources/tech/20180920 Distributed tracing in a microservices world.md
new file mode 100644
index 0000000000..1b39a5e30a
--- /dev/null
+++ b/sources/tech/20180920 Distributed tracing in a microservices world.md
@@ -0,0 +1,113 @@
+Distributed tracing in a microservices world
+======
+What is distributed tracing and why is it so important in a microservices environment?
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pixelated-world.png?itok=fHjM6m53)
+
+[Microservices][1] have become the default choice for greenfield applications. After all, according to practitioners, microservices provide the type of decoupling required for a full digital transformation, allowing individual teams to innovate at a far greater speed than ever before.
+
+Microservices are nothing more than regular distributed systems, only at a larger scale. Therefore, they exacerbate the well-known problems that any distributed system faces, such as a lack of visibility into a business transaction across process boundaries.
+
+Given that it's extremely common to have multiple versions of a single service running in production at the same time—be it in an [A/B testing][2] scenario or as part of rolling out a new release following the [Canary release][3] technique—when we account for the fact that we are talking about hundreds of services, it's clear that what we have is chaos. It's almost impossible to map the interdependencies and understand the path of a business transaction across services and their versions. 
+
+### Observability
+
+This chaos ends up being a good thing, as long as we can observe what's going on and diagnose the problems that will eventually occur.
+
+A system is said to be observable when we can understand its state based on the [metrics, logs, and traces][4] it emits. Given that we are talking about distributed systems, knowing the state of a single instance of a single service isn't enough; we need to be able to aggregate the metrics for all instances of a given service, perhaps grouped by version. Metrics solutions like [Prometheus][5] are very popular in tackling this aspect of the observability problem. Similarly, we need logs to be stored in a central location, as it's impossible to analyze the logs from the individual instances of each service. [Logstash][6] is usually applied here, in combination with a backing storage like [Elasticsearch][7]. And finally, we need to get end-to-end traces to understand the path a given transaction has taken. This is where distributed tracing solutions come into play.
+
+### Distributed tracing
+
+In monolithic web applications, logging frameworks provide enough capabilities to do a basic root-cause analysis when something fails. A developer just needs to place log statements in the code. Information such as the "context" (usually the thread) and a timestamp is automatically added to the log entry, making it easier to understand the execution of a given request and correlate the entries. 
+
+```
+Thread-1 2018-09-03T15:52:54+02:00 Request started
+Thread-2 2018-09-03T15:52:55+02:00 Charging credit card x321
+Thread-1 2018-09-03T15:52:55+02:00 Order submitted
+Thread-1 2018-09-03T15:52:56+02:00 Charging credit card x123
+Thread-1 2018-09-03T15:52:57+02:00 Changing order status
+Thread-1 2018-09-03T15:52:58+02:00 Dispatching event to inventory
+Thread-1 2018-09-03T15:52:59+02:00 Request finished
+```
+
+We can safely say that the second log entry above is not related to the other entries, as it's being executed in a different thread.
+
+In microservices architectures, logging alone fails to deliver the complete picture. Is this service the first one in the call chain? And what happened at the inventory service (where we apparently dispatched an event)?
+
+A common strategy for answering these questions is to create an identifier at the very first building block of our transaction and propagate this identifier across all the calls, probably by sending it as an HTTP header whenever a remote call is made.
+
+In a central log collector, we could then see entries like the ones below. Note how we could log the correlation ID (the first column in our example), so we know that the second entry is not related to the other entries.
+
+```
+abc123 Order     2018-09-03T15:52:58+02:00 Dispatching event to inventory
+def456 Order     2018-09-03T15:52:58+02:00 Dispatching event to inventory
+abc123 Inventory 2018-09-03T15:52:59+02:00 Received `order-submitted` event
+abc123 Inventory 2018-09-03T15:53:00+02:00 Checking inventory status
+abc123 Inventory 2018-09-03T15:53:01+02:00 Updating inventory
+abc123 Inventory 2018-09-03T15:53:02+02:00 Preparing order manifest
+```
+
+This technique is one of the concepts at the core of any modern distributed tracing solution, but it's not really new; correlating log entries is decades old, probably as old as "distributed systems" itself. 
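A minimal Python sketch may make the correlation-ID technique more concrete. This is not from the article and is not tied to any framework; the header name `X-Correlation-Id` and the helper names are assumptions, since the article only says the identifier travels in "an HTTP header":

```python
import contextvars
import logging
import uuid

# Hypothetical header name; the article does not prescribe one.
CORRELATION_HEADER = "X-Correlation-Id"

# Holds the correlation ID for the current request context.
_correlation_id = contextvars.ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    """Inject the current correlation ID into every log record."""
    def filter(self, record):
        record.correlation_id = _correlation_id.get()
        return True

def handle_incoming(headers):
    """Service entry point: reuse the caller's ID or mint a new one."""
    cid = headers.get(CORRELATION_HEADER) or uuid.uuid4().hex[:6]
    _correlation_id.set(cid)
    return cid

def outgoing_headers():
    """Attach the ID to remote calls so the next service can log it too."""
    return {CORRELATION_HEADER: _correlation_id.get()}

logger = logging.getLogger("order")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(correlation_id)s %(name)s %(message)s"))
logger.addHandler(handler)
logger.addFilter(CorrelationFilter())
logger.setLevel(logging.INFO)

# First service in the chain: no incoming ID, so a new one is minted,
# stamped on every log entry, and propagated on outgoing calls.
handle_incoming({})
logger.info("Dispatching event to inventory")
```

A downstream service would call `handle_incoming()` with the headers it received, so all of its log entries carry the same ID as the caller's, yielding exactly the kind of correlated entries shown above.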
+
+What sets distributed tracing apart from regular logging is that the data structure that holds tracing data is more specialized, so we can also identify causality. Looking at the log entries above, it's hard to tell if the last step was caused by the previous entry, if they were performed concurrently, or if they share the same caller. Having a dedicated data structure also allows distributed tracing to record not only a message at a single point in time but also the start and end time of a given procedure.
+
+![Trace showing spans][9]
+
+Trace showing spans similar to the logs described above
+
+Most modern distributed tracing tools are inspired by a 2010 [paper about Dapper][11], the distributed tracing solution used at Google. In that paper, the data structure described above was called a span, and you can see nine of them in the image above. This particular "forest" of spans is called a trace and is equivalent to the correlated log entries we've seen before.
+
+The image above is a screenshot of a trace displayed in [Jaeger][12], an open source distributed tracing solution hosted by the [Cloud Native Computing Foundation (CNCF)][13]. It marks each service with a color to make it easier to see the process boundaries. Timing information can be easily visualized, both by looking at the macro timeline at the top of the screen and at the individual spans, giving a sense of how long each span takes and how impactful it is in this particular execution. It's also easy to observe when processes are asynchronous and therefore may outlive the initial request.
+
+As with logging, we need to annotate or instrument our code with the data we want to record. Unlike logging, we record spans instead of messages and do some demarcation to know when the span starts and finishes so we can get accurate timing information. 
As we would probably like to have our business code independent from a specific distributed tracing implementation, we can use an API such as [OpenTracing][14], leaving the decision about the concrete implementation as a packaging or runtime concern. Following is pseudo-Java code showing such demarcation. + +``` +try (Scope scope = tracer.buildSpan("submitOrder").startActive(true)) { +    scope.span().setTag("order-id", "c85b7644b6b5"); +    chargeCreditCard(); +    changeOrderStatus(); +    dispatchEventToInventory(); +} +``` + +Given the nature of the distributed tracing concept, it's clear the code executed "between" our business services can also be part of the trace. For instance, we could [turn on][15] the distributed tracing integration for [Istio][16], a service mesh solution that helps in the communication between microservices, and we'll suddenly have a better picture about the network latency and routing decisions made at this layer. Another example is the work done in the OpenTracing community to provide instrumentation for popular stacks, frameworks, and APIs, such as Java's [JAX-RS][17], [Spring Cloud][18], or [JDBC][19]. This enables us to see how our business code interacts with the rest of the middleware, understand where a potential problem might be happening, and identify the best areas to improve. In fact, today's middleware instrumentation is so rich that it's common to get started with distributed tracing by using only the so-called "framework instrumentation," leaving the business code free from any tracing-related code. + +While a microservices architecture is almost unavoidable nowadays for established companies to innovate faster and for ambitious startups to achieve web scale, it's easy to feel helpless while conducting a root cause analysis when something eventually fails and the right tools aren't available. 
The good news is tools like Prometheus, Logstash, OpenTracing, and Jaeger provide the pieces to bring observability to your application. + +Juraci Paixão Kröhling will present [What are My Microservices Doing?][20] at [Open Source Summit Europe][21], October 22-24 in Edinburgh, Scotland. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/distributed-tracing-microservices-world + +作者:[Juraci Paixão Kröhling][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jpkroehling +[1]: https://en.wikipedia.org/wiki/Microservices +[2]: https://en.wikipedia.org/wiki/A/B_testing +[3]: https://martinfowler.com/bliki/CanaryRelease.html +[4]: https://blog.twitter.com/engineering/en_us/a/2016/observability-at-twitter-technical-overview-part-i.html +[5]: https://prometheus.io/ +[6]: https://github.com/elastic/logstash +[7]: https://github.com/elastic/elasticsearch +[8]: /file/409621 +[9]: https://opensource.com/sites/default/files/uploads/distributed-trace.png (Trace showing spans) +[10]: /sites/default/files/uploads/trace.png +[11]: https://ai.google/research/pubs/pub36356 +[12]: https://www.jaegertracing.io/ +[13]: https://www.cncf.io/ +[14]: http://opentracing.io/ +[15]: https://istio.io/docs/tasks/telemetry/distributed-tracing/ +[16]: https://istio.io/ +[17]: https://github.com/opentracing-contrib/java-jaxrs +[18]: https://github.com/opentracing-contrib/java-spring-cloud +[19]: https://github.com/opentracing-contrib/java-jdbc +[20]: https://osseu18.sched.com/event/FxW3/what-are-my-microservices-doing-juraci-paixao-krohling-red-hat# +[21]: https://osseu18.sched.com/ From 5e8fd9321d044fb0d573b835cfec1124215130ca Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Thu, 27 Sep 2018 10:08:24 
+0800 Subject: [PATCH 063/736] hankchow translating --- ...To Find Out Which Port Number A Process Is Using In Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md b/sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md index 21b6633730..add3ce719e 100644 --- a/sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md +++ b/sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md @@ -1,3 +1,5 @@ +HankChow translating + How To Find Out Which Port Number A Process Is Using In Linux ====== As a Linux administrator, you should know whether the corresponding service is binding/listening with correct port or not. From 5adaffeea5ff72b674b3bfcb52fbd2f59413ff98 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 27 Sep 2018 10:28:38 +0800 Subject: [PATCH 064/736] PRF:20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md @XiatianSummer --- ...cut Every Ubuntu 18.04 User Should Know.md | 97 ++++++++++--------- 1 file changed, 49 insertions(+), 48 deletions(-) diff --git a/translated/tech/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md b/translated/tech/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md index c946493a6c..461e586b2d 100644 --- a/translated/tech/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md +++ b/translated/tech/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md @@ -1,82 +1,83 @@ 每位 Ubuntu 18.04 用户都应该知道的快捷键 ====== + 了解快捷键能够提升您的生产力。这里有一些实用的 Ubuntu 快捷键助您像专业人士一样使用 Ubuntu。 -您可以使用有键盘和鼠标组合的操作系统。 +您可以用键盘和鼠标组合来使用操作系统。 -注意:本文中提到的键盘快捷键适用于 Ubuntu 18.04 GNOME 版。 通常,它们中的大多数(或者全部)也适用于其他的 Ubuntu 版本,但我不能够保证。 +> 注意:本文中提到的键盘快捷键适用于 Ubuntu 18.04 GNOME 版。 通常,它们中的大多数(或者全部)也适用于其他的 Ubuntu 版本,但我不能够保证。 ![Ubuntu keyboard shortcuts][1] ### 实用的 Ubuntu 快捷键 -让我们来看一看 Ubuntu GNOME 必备的快捷键吧!通用的快捷键如 
Ctrl+C(复制),Ctrl+V(粘贴)或者 Ctrl+S(保存)不再赘述。 +让我们来看一看 Ubuntu GNOME 必备的快捷键吧!通用的快捷键如 `Ctrl+C`(复制)、`Ctrl+V`(粘贴)或者 `Ctrl+S`(保存)不再赘述。 -注意:Linux 中的 Super 键即键盘上带有 Windows 图标的键,本文中我使用了大写字母,但这不代表你需要按下 shift 键,比如,T 代表键盘上的‘t’键,而不代表 Shift+t。 +注意:Linux 中的 Super 键即键盘上带有 Windows 图标的键,本文中我使用了大写字母,但这不代表你需要按下 `shift` 键,比如,`T` 代表键盘上的 ‘t’ 键,而不代表 `Shift+t`。 -#### 1\. Super 键:打开活动搜索界面 +#### 1、 Super 键:打开活动搜索界面 -使用 Super 键可以打开活动菜单。如果你只能在 Ubuntu 上使用一个快捷键,那只能是 Super 键。 +使用 `Super` 键可以打开活动菜单。如果你只能在 Ubuntu 上使用一个快捷键,那只能是 `Super` 键。 -想要打开一个应用程序?按下 Super 键然后搜索应用程序。如果搜索的应用程序未安装,它会推荐来自应用中心的应用程序。 +想要打开一个应用程序?按下 `Super` 键然后搜索应用程序。如果搜索的应用程序未安装,它会推荐来自应用中心的应用程序。 -想要看看有哪些正在运行的程序?按下 Super 键,屏幕上就会显示所有正在运行的 GUI 应用程序。 +想要看看有哪些正在运行的程序?按下 `Super` 键,屏幕上就会显示所有正在运行的 GUI 应用程序。 -想要使用工作区吗?只需按下 Super 键,您就可以在屏幕右侧看到工作区选项。 +想要使用工作区吗?只需按下 `Super` 键,您就可以在屏幕右侧看到工作区选项。 -#### 2\. Ctrl+Alt+T:打开 Ubuntu 终端窗口 +#### 2、 Ctrl+Alt+T:打开 Ubuntu 终端窗口 ![Ubuntu Terminal Shortcut][2] *使用 Ctrl+alt+T 来打开终端窗口* -想要打开一个新的终端,您只需使用快捷键 Ctrl+Alt+T。这是我在 Ubuntu 中最喜欢的键盘快捷键。 甚至在我的许多 FOSS 教程中,当需要打开终端窗口是,我都会提到这个快捷键。 +想要打开一个新的终端,您只需使用快捷键 `Ctrl+Alt+T`。这是我在 Ubuntu 中最喜欢的键盘快捷键。 甚至在我的许多 FOSS 教程中,当需要打开终端窗口是,我都会提到这个快捷键。 -#### 3\. Super+L 或 Ctrl——Alt+L:锁屏 +#### 3、 Super+L 或 Ctrl+Alt+L:锁屏 -当您离开电脑时锁定屏幕,是最基本的安全习惯之一。您可以使用 Super + L 快捷键,而不是繁琐地点击屏幕右上角然后选择锁定屏幕选项。 +当您离开电脑时锁定屏幕,是最基本的安全习惯之一。您可以使用 `Super+L` 快捷键,而不是繁琐地点击屏幕右上角然后选择锁定屏幕选项。 -有些系统也会使用 Ctrl+Alt+L 键锁定屏幕。 +有些系统也会使用 `Ctrl+Alt+L` 键锁定屏幕。 -#### 4\. Super+D or Ctrl+Alt+D:显示桌面 +#### 4、 Super+D or Ctrl+Alt+D:显示桌面 -按下 Super + D 可以最小化所有正在运行的应用程序窗口并显示桌面。 +按下 `Super+D` 可以最小化所有正在运行的应用程序窗口并显示桌面。 -再次按 Super + D 将重新打开所有正在运行的应用程序窗口,像之前一样。 +再次按 `Super+D` 将重新打开所有正在运行的应用程序窗口,像之前一样。 -您也可以使用 Ctrl + Alt + D 来实现此目的。 +您也可以使用 `Ctrl+Alt+D` 来实现此目的。 -#### 5\. 
Super+A:显示应用程序菜单 +#### 5、 Super+A:显示应用程序菜单 -您可以通过单击屏幕左下角的 9个点打开 Ubuntu 18.04 GNOME 中的应用程序菜单。 但是一个更快捷的方法是使用 Super + A 快捷键。 +您可以通过单击屏幕左下角的 9 个点打开 Ubuntu 18.04 GNOME 中的应用程序菜单。 但是一个更快捷的方法是使用 `Super+A` 快捷键。 它将显示应用程序菜单,您可以在其中查看或搜索系统上已安装的应用程序。 -您可以使用 Esc 键退出应用程序菜单界面。 +您可以使用 `Esc` 键退出应用程序菜单界面。 -#### 6\. Super+Tab or Alt+Tab:在运行中的应用程序间切换 +#### 6、 Super+Tab 或 Alt+Tab:在运行中的应用程序间切换 -如果您运行的应用程序不止一个,则可以使用 Super + Tab 或 Alt + Tab 快捷键在应用程序之间切换。 +如果您运行的应用程序不止一个,则可以使用 `Super+Tab` 或 `Alt+Tab` 快捷键在应用程序之间切换。 -按住 Super 键同时按下 Tab 键,即可显示应用程序切换器。 按住 Super 的同时,继续点击 Tab 键在应用程序之间进行选择。 当光标在所需的应用程序上时,松开 Super 和 Tab 键。 +按住 `Super` 键同时按下 `Tab` 键,即可显示应用程序切换器。 按住 `Super` 的同时,继续按下 `Tab` 键在应用程序之间进行选择。 当光标在所需的应用程序上时,松开 `Super` 和 `Tab` 键。 -默认情况下,应用程序切换器从左向右移动。 如果要从右向左移动,可使用 Super + Shift + Tab 快捷键。 +默认情况下,应用程序切换器从左向右移动。 如果要从右向左移动,可使用 `Super+Shift+Tab` 快捷键。 -在这里您也可以用 Alt 键代替 Super 键。 +在这里您也可以用 `Alt` 键代替 `Super` 键。 -提示:如果有多个应用程序实例,您可以使用 Super + \` 快捷键在这些实例之间切换。 +> 提示:如果有多个应用程序实例,您可以使用 Super+` 快捷键在这些实例之间切换。 -#### 7\. Super+Arrow keys: 移动窗口位置 +#### 7、 Super+箭头:移动窗口位置 -这个快捷键也适用于 Windows 系统。 使用应用程序时,按下 Super 和左箭头键,应用程序将贴合屏幕的左边缘,占用屏幕的左半边。 +这个快捷键也适用于 Windows 系统。 使用应用程序时,按下 `Super+左箭头`,应用程序将贴合屏幕的左边缘,占用屏幕的左半边。 -同样,按下 Super 和右箭头键会使应用程序贴合右边缘。 +同样,按下 `Super+右箭头`会使应用程序贴合右边缘。 -按下 Super 和上箭头键将最大化应用程序窗口,超级和下箭头将使应用程序恢复到其正常的大小。 +按下 `Super+上箭头`将最大化应用程序窗口,`Super+下箭头`将使应用程序恢复到其正常的大小。 -#### 8\. Super+M: 切换到通知栏 +#### 8、 Super+M:切换到通知栏 GNOME 中有一个通知栏,您可以在其中查看系统和应用程序活动的通知,这里也有一个日历。 @@ -84,19 +85,19 @@ GNOME 中有一个通知栏,您可以在其中查看系统和应用程序活 *通知栏* -使用 Super + M 快捷键,您可以打开此通知栏。 如果再次按这些键,将关闭打开的通知托盘。 +使用 `Super+M` 快捷键,您可以打开此通知栏。 如果再次按这些键,将关闭打开的通知托盘。 -使用 Super+V 也可实现相同的功能。 +使用 `Super+V` 也可实现相同的功能。 -#### 9\. Super+Space:切换输入法(用于多语言设置) +#### 9、 Super+空格:切换输入法(用于多语言设置) 如果您使用多种语言,可能您的系统上安装了多个输入法。 例如,我需要在 Ubuntu 上同时使用[印地语] [4]和英语,所以我安装了印地语(梵文)输入法以及默认的英语输入法。 -如果您也使用多语言设置,则可以使用 Super + Space 快捷键快速更改输入法。 +如果您也使用多语言设置,则可以使用 `Super+空格` 快捷键快速更改输入法。 -#### 10\. 
Alt+F2:运行控制台 +#### 10、 Alt+F2:运行控制台 -这适用于高级用户。 如果要运行快速命令,而不是打开终端并在其中运行命令,则可以使用 Alt + F2 运行控制台。 +这适用于高级用户。 如果要运行快速命令,而不是打开终端并在其中运行命令,则可以使用 `Alt+F2` 运行控制台。 ![Alt+F2 to run commands in Ubuntu][5] @@ -104,31 +105,31 @@ GNOME 中有一个通知栏,您可以在其中查看系统和应用程序活 当您使用只能在终端运行的应用程序时,这尤其有用。 -#### 11\. Ctrl+Q:关闭应用程序窗口 +#### 11、 Ctrl+Q:关闭应用程序窗口 -如果您有正在运行的应用程序,可以使用 Ctrl + Q 快捷键关闭应用程序窗口。您也可以使用 Ctrl + W 来实现此目的。 +如果您有正在运行的应用程序,可以使用 `Ctrl+Q` 快捷键关闭应用程序窗口。您也可以使用 `Ctrl+W` 来实现此目的。 -Alt + F4 是关闭应用程序窗口更“通用”的快捷方式。 +`Alt+F4` 是关闭应用程序窗口更“通用”的快捷方式。 它不适用于一些应用程序,如 Ubuntu 中的默认终端。 -#### 12\. Ctrl+Alt+arrow:切换工作区 +#### 12、 Ctrl+Alt+箭头:切换工作区 ![Workspace switching][6] *切换工作区* -如果您是使用工作区的重度用户,可以使用 Ctrl + Alt + 上箭头和 Ctrl + Alt + 下箭头键在工作区之间切换。 +如果您是使用工作区的重度用户,可以使用 `Ctrl+Alt+上箭头`和 `Ctrl+Alt+下箭头`在工作区之间切换。 -#### 13\. Ctrl+Alt+Del:注销 +#### 13、 Ctrl+Alt+Del:注销 -不会!在 Linux 中使用著名的快捷键 Ctrl+Alt+Del 并不会像在 Windows 中一样打开任务管理器(除非您使用自定义快捷键)。 +不!在 Linux 中使用著名的快捷键 `Ctrl+Alt+Del` 并不会像在 Windows 中一样打开任务管理器(除非您使用自定义快捷键)。 ![Log Out Ubuntu][7] *注销* -在普通的 GNOME 桌面环境中,您可以使用 Ctrl + Alt + Del 键打开关机菜单,但 Ubuntu 并不总是遵循此规范,因此当您在 Ubuntu 中使用 Ctrl + Alt + Del 键时,它会打开注销菜单。 +在普通的 GNOME 桌面环境中,您可以使用 `Ctrl+Alt+Del` 键打开关机菜单,但 Ubuntu 并不总是遵循此规范,因此当您在 Ubuntu 中使用 `Ctrl+Alt+Del` 键时,它会打开注销菜单。 ### 在 Ubuntu 中使用自定义键盘快捷键 @@ -142,7 +143,7 @@ Alt + F4 是关闭应用程序窗口更“通用”的快捷方式。 ### Ubuntu 中你最喜欢的键盘快捷键是什么? 
-快捷键永无止境。如果需要,你可以看一看所有可能的 [GNOME 快捷键][9],看其中有没有你需要用到的快捷键。 +快捷键无穷无尽。如果需要,你可以看一看所有可能的 [GNOME 快捷键][9],看其中有没有你需要用到的快捷键。 您可以学习使用您经常使用应用程序的快捷键,这是很有必要的。例如,我使用 Kazam 进行[屏幕录制][10],键盘快捷键帮助我方便地暂停和开始录像。 @@ -155,7 +156,7 @@ via: https://itsfoss.com/ubuntu-shortcuts/ 作者:[Abhishek Prakash][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[XiatianSummer](https://github.com/XiatianSummer) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0df4be75d82336f5d52527caeed0bb6e0048a64e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 27 Sep 2018 10:29:05 +0800 Subject: [PATCH 065/736] PUB: 20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md @XiatianSummer https://linux.cn/article-10055-1.html --- ...10 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md (100%) diff --git a/translated/tech/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md b/published/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md similarity index 100% rename from translated/tech/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md rename to published/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md From 5adec56c31704b87574b79172f08a84067a47903 Mon Sep 17 00:00:00 2001 From: qhwdw <33189910+qhwdw@users.noreply.github.com> Date: Thu, 27 Sep 2018 11:02:04 +0800 Subject: [PATCH 066/736] Translating by qhwdw (#10386) --- ...jarne Stroustrup warns of dangerous future plans for his C.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md b/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans 
for his C.md index 04644aebb2..2f9a6636e7 100644 --- a/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md +++ b/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md @@ -1,3 +1,4 @@ +Translating by qhwdw What's all the C Plus Fuss? Bjarne Stroustrup warns of dangerous future plans for his C++ ====== From cd46ac6cdcad976bf41515ada5417effc6f328e8 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 27 Sep 2018 11:08:55 +0800 Subject: [PATCH 067/736] Update 20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md --- ...stall and Use Wireshark on Debian and Ubuntu 16.04_17.10.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md index ce2ed658a4..05e441207f 100644 --- a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md +++ b/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md @@ -1,3 +1,6 @@ +Translating by MjSeven + + How to Install and Use Wireshark on Debian 9 / Ubuntu 16.04 / 17.10 ============================================================ From c9f0bbad6476b5f3338af072f5dfe2ce0da1960b Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 27 Sep 2018 15:25:29 +0800 Subject: [PATCH 068/736] Delete 20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md --- ...eshark on Debian and Ubuntu 16.04_17.10.md | 185 ------------------ 1 file changed, 185 deletions(-) delete mode 100644 sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md diff --git a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 
16.04_17.10.md b/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md deleted file mode 100644 index 05e441207f..0000000000 --- a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md +++ /dev/null @@ -1,185 +0,0 @@ -Translating by MjSeven - - -How to Install and Use Wireshark on Debian 9 / Ubuntu 16.04 / 17.10 -============================================================ - -by [Pradeep Kumar][1] · Published November 29, 2017 · Updated November 29, 2017 - - [![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2] - -Wireshark is free and open source, cross platform, GUI based Network packet analyzer that is available for Linux, Windows, MacOS, Solaris etc. It captures network packets in real time & presents them in human readable format. Wireshark allows us to monitor the network packets up to microscopic level. Wireshark also has a command line utility called ‘tshark‘ that performs the same functions as Wireshark but through terminal & not through GUI. - -Wireshark can be used for network troubleshooting, analyzing, software & communication protocol development & also for education purposed. Wireshark uses a library called ‘pcap‘ for capturing the network packets. - -Wireshark comes with a lot of features & some those features are; - -* Support for a hundreds of protocols for inspection, - -* Ability to capture packets in real time & save them for later offline analysis, - -* A number of filters to analyzing data, - -* Data captured can be compressed & uncompressed on the fly, - -* Various file formats for data analysis supported, output can also be saved to XML, CSV, plain text formats, - -* data can be captured from a number of interfaces like ethernet, wifi, bluetooth, USB, Frame relay , token rings etc. 
- -In this article, we will discuss how to install Wireshark on Ubuntu/Debain machines & will also learn to use Wireshark for capturing network packets. - -#### Installation of Wireshark on Ubuntu 16.04 / 17.10 - -Wireshark is available with default Ubuntu repositories & can be simply installed using the following command. But there might be chances that you will not get the latest version of wireshark. - -``` -linuxtechi@nixworld:~$ sudo apt-get update -linuxtechi@nixworld:~$ sudo apt-get install wireshark -y -``` - -So to install latest version of wireshark we have to enable or configure official wireshark repository. - -Use the beneath commands one after the another to configure repository and to install latest version of Wireshark utility - -``` -linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable -linuxtechi@nixworld:~$ sudo apt-get update -linuxtechi@nixworld:~$ sudo apt-get install wireshark -y -``` - -Once the Wireshark is installed execute the below command so that non-root users can capture live packets of interfaces, - -``` -linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap -``` - -#### Installation of Wireshark on Debian 9 - -Wireshark package and its dependencies are already present in the default debian 9 repositories, so to install latest and stable version of Wireshark on Debian 9, use the following command: - -``` -linuxtechi@nixhome:~$ sudo apt-get update -linuxtechi@nixhome:~$ sudo apt-get install wireshark -y -``` - -During the installation, it will prompt us to configure dumpcap for non-superusers, - -Select ‘yes’ and then hit enter. - - [![Configure-Wireshark-Debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9-1024x542.jpg)][3] - -Once the Installation is completed, execute the below command so that non-root users can also capture the live packets of the interfaces. 
- -``` -linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap -``` - -We can also use the latest source package to install the wireshark on Ubuntu/Debain & many other Linux distributions. - -#### Installing Wireshark using source code on Debian / Ubuntu Systems - -Firstly download the latest source package (which is 2.4.2 at the time for writing this article), use the following command, - -``` -linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz -``` - -Next extract the package & enter into the extracted directory, - -``` -linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp -linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2 -``` - -Now we will compile the code with the following commands, - -``` -linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install -linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make -``` - -Lastly install the compiled packages to install Wireshark on the system, - -``` -linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install -linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig -``` - -Upon installation a separate group for Wireshark will also be created, we will now add our user to the group so that it can work with wireshark otherwise you might get ‘permission denied‘ error when starting wireshark. 
- -To add the user to the wireshark group, execute the following command, - -``` -linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi -``` - -Now we can start wireshark either from GUI Menu or from terminal with this command, - -``` -linuxtechi@nixhome:~$ wireshark -``` - -#### Access Wireshark on Debian 9 System - - [![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4] - -Click on Wireshark icon - - [![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5] - -#### Access Wireshark on Ubuntu 16.04 / 17.10 - - [![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6] - -Click on Wireshark icon - - [![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7] - -#### Capturing and Analyzing packets - -Once the wireshark has been started, we should be presented with the wireshark window, example is shown above for Ubuntu and Debian system. - - [![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8] - -All these are the interfaces from where we can capture the network packets. Based on the interfaces you have on your system, this screen might be different for you. - -We are selecting ‘enp0s3’ for capturing the network traffic for that inteface. 
After selecting the inteface, network packets for all the devices on our network start to populate (refer to screenshot below) - - [![Capturing-Packet-from-enp0s3-Ubuntu-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark-1024x727.jpg)][9] - -First time we see this screen we might get overwhelmed by the data that is presented in this screen & might have thought how to sort out this data but worry not, one the best features of Wireshark is its filters. - -We can sort/filter out the data based on IP address, Port number, can also used source & destination filters, packet size etc & can also combine 2 or more filters together to create more comprehensive searches. We can either write our filters in ‘Apply a Display Filter‘ tab , or we can also select one of already created rules. To select pre-built filter, click on ‘flag‘ icon , next to ‘Apply a Display Filter‘ tab, - - [![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10] - -We can also filter data based on the color coding, By default, light purple is TCP traffic, light blue is UDP traffic, and black identifies packets with errors , to see what these codes mean, click View -> Coloring Rules, also we can change these codes. - - [![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11] - -After we have the results that we need, we can then click on any of the captured packets to get more details about that packet, this will show all the data about that network packet. - -Wireshark is an extremely powerful tool takes some time to getting used to & make a command over it, this tutorial will help you get started. Please feel free to drop in your queries or suggestions in the comment box below. 
- --------------------------------------------------------------------------------- - -via: https://www.linuxtechi.com - -作者:[Pradeep Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxtechi.com/author/pradeep/ -[1]:https://www.linuxtechi.com/author/pradeep/ -[2]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg -[3]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg -[4]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg -[5]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg -[6]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg -[7]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg -[8]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg -[9]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg -[10]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg -[11]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg From 0962a592e420179802547cd37967a1878fd8f032 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 27 Sep 2018 15:25:50 +0800 Subject: [PATCH 069/736] Create 20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md --- ...eshark on Debian and Ubuntu 16.04_17.10.md | 183 ++++++++++++++++++ 1 file changed, 183 insertions(+) create mode 100644 translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md diff --git a/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/translated/tech/20171129 How to Install and Use Wireshark on 
Debian and Ubuntu 16.04_17.10.md new file mode 100644 index 0000000000..0bcbe0d3e5 --- /dev/null +++ b/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md @@ -0,0 +1,183 @@ +在 Debian 9 / Ubuntu 16.04 / 17.10 中如何安装并使用 Wireshark +====== + +作者 [Pradeep Kumar][1],首发于 2017 年 11 月 29 日,更新于 2017 年 11 月 29 日 + +[![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2] + +Wireshark 是免费的,开源的,跨平台的基于 GUI 的网络数据包分析器,可用于 Linux, Windows, MacOS, Solaris 等。它可以实时捕获网络数据包,并以人性化的格式呈现。Wireshark 允许我们将网络数据包的监控深入到微观层面。Wireshark 还有一个名为 `tshark` 的命令行实用程序,它与 Wireshark 执行相同的功能,但它是通过终端而不是 GUI。 + +Wireshark 可用于网络故障排除,分析,软件和通信协议开发以及用于教育目的。Wireshark 使用 `pcap` 库来捕获网络数据包。 + +Wireshark 具有许多功能: + +* 支持数百项协议检查 + +* 能够实时捕获数据包并保存,以便以后进行离线分析 + +* 许多用于分析数据的过滤器 + +* 捕获的数据可以即时压缩和解压缩 + +* 支持各种文件格式的数据分析,输出也可以保存为 XML, CSV 和纯文本格式 + +* 数据可以从以太网,wifi,蓝牙,USB,帧中继,令牌环等多个接口中捕获 + +在本文中,我们将讨论如何在 Ubuntu/Debian 上安装 Wireshark,并将学习如何使用 Wireshark 捕获网络数据包。 + +#### 在 Ubuntu 16.04 / 17.10 上安装 Wireshark + +Wireshark 在 Ubuntu 默认仓库中可用,只需使用以下命令即可安装。但有可能得不到最新版本的 wireshark。 + +``` +linuxtechi@nixworld:~$ sudo apt-get update +linuxtechi@nixworld:~$ sudo apt-get install wireshark -y +``` + +因此,要安装最新版本的 wireshark,我们必须启用或配置官方 wireshark 仓库。 + +使用下面的命令来配置仓库并安装最新版本的 wireshark 实用程序。 + +``` +linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable +linuxtechi@nixworld:~$ sudo apt-get update +linuxtechi@nixworld:~$ sudo apt-get install wireshark -y +``` + +一旦安装了 wireshark,执行以下命令,以便非 root 用户也可以捕获接口的实时数据包。 + +``` +linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap +``` + +#### 在 Debian 9 上安装 Wireshark + +Wireshark 包及其依赖项已存在于 debian 9 的默认仓库中,因此要在 Debian 9 上安装最新且稳定版本的 Wireshark,请使用以下命令: + +``` +linuxtechi@nixhome:~$ sudo apt-get update +linuxtechi@nixhome:~$ sudo apt-get install wireshark -y +``` + +在安装过程中,它会提示我们为非超级用户配置 dumpcap, + +选择 `yes` 并回车。 +
+[![Configure-Wireshark-Debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9-1024x542.jpg)][3] + +安装完成后,执行以下命令,以便非 root 用户也可以捕获接口的实时数据包。 + +``` +linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap +``` + +我们还可以使用最新的源代码包在 Ubuntu/Debian 和其它 Linux 发行版上安装 wireshark。 + +#### 在 Debian / Ubuntu 系统上使用源代码安装 Wireshark + +首先下载最新的源代码包(写这篇文章时它的最新版本是 2.4.2),使用以下命令: + +``` +linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz +``` + +然后解压缩包,进入解压缩的目录: + +``` +linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp +linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2 +``` + +现在我们使用以下命令编译代码: + +``` +linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install +linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make +``` + +最后安装已编译的软件包以便在系统上安装 Wireshark: + +``` +linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install +linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig +``` + +在安装后,它将创建一个单独的 Wireshark 组,我们现在将我们的用户添加到组中,以便它可以与 Wireshark 一起使用,否则在启动 wireshark 时可能会出现 `permission denied(权限被拒绝)`错误。 + +要将用户添加到 wireshark 组,执行以下命令: + +``` +linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi +``` + +现在我们可以使用以下命令从 GUI 菜单或终端启动 wireshark: + +``` +linuxtechi@nixhome:~$ wireshark +``` + +#### 在 Debian 9 系统上使用 Wireshark + +[![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4] + +点击 Wireshark 图标 + +[![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5] + +#### 在 Ubuntu 16.04 / 17.10 上使用 Wireshark + +[![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6] + +点击 Wireshark 图标 + +[![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7] + +#### 捕获并分析数据包 + +一旦 wireshark 启动,我们就会看到 wireshark 窗口,上面有 Ubuntu 和 Debian 系统的示例。 + 
+[![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8] + +所有这些都是我们可以捕获网络数据包的接口。根据你系统上的界面,此屏幕可能与你的不同。 + +我们选择 `enp0s3` 来捕获该接口的网络流量。选择接口后,在我们网络上所有设备的网络数据包开始填充(参考下面的屏幕截图): + +[![Capturing-Packet-from-enp0s3-Ubuntu-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark-1024x727.jpg)][9] + +第一次看到这个屏幕,我们可能会被这个屏幕上显示的数据所淹没,并且可能已经想过如何整理这些数据,但不用担心,Wireshark 的最佳功能之一就是它的过滤器。 + +我们可以根据 IP 地址,端口号,也可以使用来源和目标过滤器,数据包大小等对数据进行排序和过滤,也可以将两个或多个过滤器组合在一起以创建更全面的搜索。我们也可以在 `Apply a Display Filter(应用显示过滤器)`选项卡中编写过滤规则,也可以选择已创建的规则。要选择之前构建的过滤器,请单击 `Apply a Display Filter(应用显示过滤器)`选项卡旁边的 `flag` 图标。 + +[![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10] + +我们还可以根据颜色编码过滤数据,默认情况下,浅紫色是 TCP 流量,浅蓝色是 UDP 流量,黑色标识有错误的数据包,看看这些编码是什么意思,点击 `View -> Coloring Rules`,我们也可以改变这些编码。 + +[![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11] + +在我们得到我们需要的结果之后,我们可以点击任何捕获的数据包以获得有关该数据包的更多详细信息,这将显示该网络数据包的所有数据。 + +Wireshark 是一个非常强大的工具,需要一些时间来习惯并对其进行命令操作,本教程将帮助你入门。请随时在下面的评论框中提出你的疑问或建议。 + + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com + +作者:[Pradeep Kumar][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linuxtechi.com/author/pradeep/ +[1]:https://www.linuxtechi.com/author/pradeep/ +[2]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg +[3]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg +[4]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg +[5]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg 
+[6]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg +[7]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg +[8]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg +[9]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg +[10]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg +[11]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg From c1072e3ab6e5cab92e016228e0b937ffa77f63d0 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 27 Sep 2018 15:52:30 +0800 Subject: [PATCH 070/736] =?UTF-8?q?=20=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?= =?UTF-8?q?=20(#10388)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Delete 20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md * Create 20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md --- ...eshark on Debian and Ubuntu 16.04_17.10.md | 185 ------------------ ...eshark on Debian and Ubuntu 16.04_17.10.md | 183 +++++++++++++++++ 2 files changed, 183 insertions(+), 185 deletions(-) delete mode 100644 sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md create mode 100644 translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md diff --git a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md deleted file mode 100644 index 05e441207f..0000000000 --- a/sources/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md +++ /dev/null @@ -1,185 +0,0 @@ -Translating by MjSeven - - -How to Install and Use Wireshark on Debian 9 / Ubuntu 16.04 / 17.10 
-============================================================ - -by [Pradeep Kumar][1] · Published November 29, 2017 · Updated November 29, 2017 - - [![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2] - -Wireshark is free and open source, cross platform, GUI based Network packet analyzer that is available for Linux, Windows, MacOS, Solaris etc. It captures network packets in real time & presents them in human readable format. Wireshark allows us to monitor the network packets up to microscopic level. Wireshark also has a command line utility called ‘tshark‘ that performs the same functions as Wireshark but through terminal & not through GUI. - -Wireshark can be used for network troubleshooting, analyzing, software & communication protocol development & also for education purposed. Wireshark uses a library called ‘pcap‘ for capturing the network packets. - -Wireshark comes with a lot of features & some those features are; - -* Support for a hundreds of protocols for inspection, - -* Ability to capture packets in real time & save them for later offline analysis, - -* A number of filters to analyzing data, - -* Data captured can be compressed & uncompressed on the fly, - -* Various file formats for data analysis supported, output can also be saved to XML, CSV, plain text formats, - -* data can be captured from a number of interfaces like ethernet, wifi, bluetooth, USB, Frame relay , token rings etc. - -In this article, we will discuss how to install Wireshark on Ubuntu/Debain machines & will also learn to use Wireshark for capturing network packets. - -#### Installation of Wireshark on Ubuntu 16.04 / 17.10 - -Wireshark is available with default Ubuntu repositories & can be simply installed using the following command. But there might be chances that you will not get the latest version of wireshark. 
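Before deciding whether the PPA is needed, it can help to see which Wireshark version the currently configured repositories would install. A small illustrative sketch (assumes an apt-based system and degrades gracefully elsewhere; nothing here is from the original article):

```shell
#!/bin/sh
# Show which Wireshark version the configured repositories would install.
# Illustrative sketch: assumes an apt-based system, falls back gracefully otherwise.
if command -v apt-cache >/dev/null 2>&1; then
    candidate=$(apt-cache policy wireshark 2>/dev/null | awk '/Candidate:/ {print $2}')
    msg="Candidate wireshark version: ${candidate:-none found}"
else
    msg="apt-cache not available on this system"
fi
printf '%s\n' "$msg"
```

If the candidate version is recent enough for your needs, the PPA step below can be skipped.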
- -``` -linuxtechi@nixworld:~$ sudo apt-get update -linuxtechi@nixworld:~$ sudo apt-get install wireshark -y -``` - -So to install latest version of wireshark we have to enable or configure official wireshark repository. - -Use the beneath commands one after the another to configure repository and to install latest version of Wireshark utility - -``` -linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable -linuxtechi@nixworld:~$ sudo apt-get update -linuxtechi@nixworld:~$ sudo apt-get install wireshark -y -``` - -Once the Wireshark is installed execute the below command so that non-root users can capture live packets of interfaces, - -``` -linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap -``` - -#### Installation of Wireshark on Debian 9 - -Wireshark package and its dependencies are already present in the default debian 9 repositories, so to install latest and stable version of Wireshark on Debian 9, use the following command: - -``` -linuxtechi@nixhome:~$ sudo apt-get update -linuxtechi@nixhome:~$ sudo apt-get install wireshark -y -``` - -During the installation, it will prompt us to configure dumpcap for non-superusers, - -Select ‘yes’ and then hit enter. - - [![Configure-Wireshark-Debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9-1024x542.jpg)][3] - -Once the Installation is completed, execute the below command so that non-root users can also capture the live packets of the interfaces. - -``` -linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap -``` - -We can also use the latest source package to install the wireshark on Ubuntu/Debain & many other Linux distributions. 
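After granting capabilities to `dumpcap` with `setcap`, it may be worth verifying that they were actually applied. A hedged sketch (assumes the `getcap` tool from libcap and the usual `/usr/bin/dumpcap` path; both are assumptions, not part of the original article):

```shell
#!/bin/sh
# Verify which capabilities are set on dumpcap (illustrative; tool and path assumed).
if command -v getcap >/dev/null 2>&1 && [ -e /usr/bin/dumpcap ]; then
    caps=$(getcap /usr/bin/dumpcap)
    result="${caps:-no capabilities set on /usr/bin/dumpcap}"
else
    result="getcap or dumpcap not present on this system"
fi
printf '%s\n' "$result"
```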
- -#### Installing Wireshark using source code on Debian / Ubuntu Systems - -Firstly download the latest source package (which is 2.4.2 at the time for writing this article), use the following command, - -``` -linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz -``` - -Next extract the package & enter into the extracted directory, - -``` -linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp -linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2 -``` - -Now we will compile the code with the following commands, - -``` -linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install -linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make -``` - -Lastly install the compiled packages to install Wireshark on the system, - -``` -linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install -linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig -``` - -Upon installation a separate group for Wireshark will also be created, we will now add our user to the group so that it can work with wireshark otherwise you might get ‘permission denied‘ error when starting wireshark. 
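One way to confirm that the group change took effect — group membership is only re-read at login, so a fresh session or `newgrp wireshark` may be needed first — can be sketched as follows (illustrative; assumes the standard `id` utility):

```shell
#!/bin/sh
# Check whether the current user is already in the wireshark group.
user=$(id -un)
if id -nG "$user" | tr ' ' '\n' | grep -qx wireshark; then
    status="$user is in the wireshark group"
else
    status="$user is NOT in the wireshark group yet (try: newgrp wireshark, or re-login)"
fi
printf '%s\n' "$status"
```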
- -To add the user to the wireshark group, execute the following command, - -``` -linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi -``` - -Now we can start wireshark either from GUI Menu or from terminal with this command, - -``` -linuxtechi@nixhome:~$ wireshark -``` - -#### Access Wireshark on Debian 9 System - - [![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4] - -Click on Wireshark icon - - [![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5] - -#### Access Wireshark on Ubuntu 16.04 / 17.10 - - [![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6] - -Click on Wireshark icon - - [![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7] - -#### Capturing and Analyzing packets - -Once the wireshark has been started, we should be presented with the wireshark window, example is shown above for Ubuntu and Debian system. - - [![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8] - -All these are the interfaces from where we can capture the network packets. Based on the interfaces you have on your system, this screen might be different for you. - -We are selecting ‘enp0s3’ for capturing the network traffic for that inteface. 
After selecting the inteface, network packets for all the devices on our network start to populate (refer to screenshot below) - - [![Capturing-Packet-from-enp0s3-Ubuntu-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark-1024x727.jpg)][9] - -First time we see this screen we might get overwhelmed by the data that is presented in this screen & might have thought how to sort out this data but worry not, one the best features of Wireshark is its filters. - -We can sort/filter out the data based on IP address, Port number, can also used source & destination filters, packet size etc & can also combine 2 or more filters together to create more comprehensive searches. We can either write our filters in ‘Apply a Display Filter‘ tab , or we can also select one of already created rules. To select pre-built filter, click on ‘flag‘ icon , next to ‘Apply a Display Filter‘ tab, - - [![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10] - -We can also filter data based on the color coding, By default, light purple is TCP traffic, light blue is UDP traffic, and black identifies packets with errors , to see what these codes mean, click View -> Coloring Rules, also we can change these codes. - - [![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11] - -After we have the results that we need, we can then click on any of the captured packets to get more details about that packet, this will show all the data about that network packet. - -Wireshark is an extremely powerful tool takes some time to getting used to & make a command over it, this tutorial will help you get started. Please feel free to drop in your queries or suggestions in the comment box below. 
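The same display filters also work in the `tshark` command-line utility mentioned at the start of the article, which is handy on headless servers. A hedged sketch (the interface name `enp0s3`, the packet count, and capture privileges are all assumptions; the script falls back gracefully when tshark is absent):

```shell
#!/bin/sh
# Capture 10 packets on enp0s3 and keep only TCP port 80 traffic —
# the CLI equivalent of typing "tcp.port == 80" in the display-filter bar.
# Illustrative only: interface name and privileges are assumptions.
if command -v tshark >/dev/null 2>&1; then
    out=$(tshark -i enp0s3 -c 10 -Y 'tcp.port == 80' 2>&1)
    rc=$?
    [ -n "$out" ] || out="no output from tshark (exit code $rc)"
else
    out="tshark is not installed on this system"
fi
printf '%s\n' "$out"
```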
- --------------------------------------------------------------------------------- - -via: https://www.linuxtechi.com - -作者:[Pradeep Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxtechi.com/author/pradeep/ -[1]:https://www.linuxtechi.com/author/pradeep/ -[2]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg -[3]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg -[4]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg -[5]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg -[6]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg -[7]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg -[8]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg -[9]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg -[10]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg -[11]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg diff --git a/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md new file mode 100644 index 0000000000..0bcbe0d3e5 --- /dev/null +++ b/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md @@ -0,0 +1,183 @@ +在 Debian 9 / Ubuntu 16.04 / 17.10 中如何安装并使用 Wireshark +====== + +作者 [Pradeep Kumar][1],首发于 2017 年 11 月 29 日,更新于 2017 年 11 月 29 日 + +[![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2] + 
+Wireshark 是免费的,开源的,跨平台的基于 GUI 的网络数据包分析器,可用于 Linux, Windows, MacOS, Solaris 等。它可以实时捕获网络数据包,并以人性化的格式呈现。Wireshark 允许我们将网络数据包的监控深入到微观层面。Wireshark 还有一个名为 `tshark` 的命令行实用程序,它与 Wireshark 执行相同的功能,但它是通过终端而不是 GUI。 + +Wireshark 可用于网络故障排除,分析,软件和通信协议开发以及用于教育目的。Wireshark 使用 `pcap` 库来捕获网络数据包。 + +Wireshark 具有许多功能: + +* 支持数百项协议检查 + +* 能够实时捕获数据包并保存,以便以后进行离线分析 + +* 许多用于分析数据的过滤器 + +* 捕获的数据可以即时压缩和解压缩 + +* 支持各种文件格式的数据分析,输出也可以保存为 XML, CSV 和纯文本格式 + +* 数据可以从以太网,wifi,蓝牙,USB,帧中继,令牌环等多个接口中捕获 + +在本文中,我们将讨论如何在 Ubuntu/Debian 上安装 Wireshark,并将学习如何使用 Wireshark 捕获网络数据包。 + +#### 在 Ubuntu 16.04 / 17.10 上安装 Wireshark + +Wireshark 在 Ubuntu 默认仓库中可用,只需使用以下命令即可安装。但有可能得不到最新版本的 wireshark。 + +``` +linuxtechi@nixworld:~$ sudo apt-get update +linuxtechi@nixworld:~$ sudo apt-get install wireshark -y +``` + +因此,要安装最新版本的 wireshark,我们必须启用或配置官方 wireshark 仓库。 + +使用下面的命令来配置仓库并安装最新版本的 wireshark 实用程序。 + +``` +linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable +linuxtechi@nixworld:~$ sudo apt-get update +linuxtechi@nixworld:~$ sudo apt-get install wireshark -y +``` + +一旦安装了 wireshark,执行以下命令,以便非 root 用户也可以捕获接口的实时数据包。 + +``` +linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap +``` + +#### 在 Debian 9 上安装 Wireshark + +Wireshark 包及其依赖项已存在于 debian 9 的默认仓库中,因此要在 Debian 9 上安装最新且稳定版本的 Wireshark,请使用以下命令: + +``` +linuxtechi@nixhome:~$ sudo apt-get update +linuxtechi@nixhome:~$ sudo apt-get install wireshark -y +``` + +在安装过程中,它会提示我们为非超级用户配置 dumpcap, + +选择 `yes` 并回车。 + +[![Configure-Wireshark-Debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9-1024x542.jpg)][3] + +安装完成后,执行以下命令,以便非 root 用户也可以捕获接口的实时数据包。 + +``` +linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap +``` + +我们还可以使用最新的源代码包在 Ubuntu/Debian 和其它 Linux 发行版上安装 wireshark。 + +#### 在 Debian / Ubuntu 系统上使用源代码安装 Wireshark + +首先下载最新的源代码包(写这篇文章时它的最新版本是 2.4.2),使用以下命令: + +``` +linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz +``` + 
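Before compiling a tarball downloaded like this, it can be prudent to check its integrity. A minimal illustrative sketch (no real hash is hard-coded here — the expected value should come from the Wireshark download page; the filename simply follows the article):

```shell
#!/bin/sh
# Print the SHA-256 of the downloaded source tarball so it can be compared
# against the checksum published on the Wireshark download page.
tarball=wireshark-2.4.2.tar.xz
if [ -f "$tarball" ] && command -v sha256sum >/dev/null 2>&1; then
    summary=$(sha256sum "$tarball")
else
    summary="$tarball not downloaded yet (or sha256sum unavailable)"
fi
printf '%s\n' "$summary"
```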
+然后解压缩包,进入解压缩的目录: + +``` +linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp +linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2 +``` + +现在我们使用以下命令编译代码: + +``` +linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install +linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make +``` + +最后安装已编译的软件包以便在系统上安装 Wireshark: + +``` +linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install +linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig +``` + +在安装后,它将创建一个单独的 Wireshark 组,我们现在将我们的用户添加到组中,以便它可以与 Wireshark 一起使用,否则在启动 wireshark 时可能会出现 `permission denied(权限被拒绝)`错误。 + +要将用户添加到 wireshark 组,执行以下命令: + +``` +linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi +``` + +现在我们可以使用以下命令从 GUI 菜单或终端启动 wireshark: + +``` +linuxtechi@nixhome:~$ wireshark +``` + +#### 在 Debian 9 系统上使用 Wireshark + +[![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4] + +点击 Wireshark 图标 + +[![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5] + +#### 在 Ubuntu 16.04 / 17.10 上使用 Wireshark + +[![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6] + +点击 Wireshark 图标 + +[![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7] + +#### 捕获并分析数据包 + +一旦 wireshark 启动,我们就会看到 wireshark 窗口,上面有 Ubuntu 和 Debian 系统的示例。 + +[![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8] + +所有这些都是我们可以捕获网络数据包的接口。根据你系统上的界面,此屏幕可能与你的不同。 + +我们选择 `enp0s3` 来捕获该接口的网络流量。选择接口后,在我们网络上所有设备的网络数据包开始填充(参考下面的屏幕截图): + +[![Capturing-Packet-from-enp0s3-Ubuntu-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark-1024x727.jpg)][9] + +第一次看到这个屏幕,我们可能会被这个屏幕上显示的数据所淹没,并且可能已经想过如何整理这些数据,但不用担心,Wireshark 的最佳功能之一就是它的过滤器。 + +我们可以根据 IP 
地址,端口号,也可以使用来源和目标过滤器,数据包大小等对数据进行排序和过滤,也可以将两个或多个过滤器组合在一起以创建更全面的搜索。我们也可以在 `Apply a Display Filter(应用显示过滤器)`选项卡中编写过滤规则,也可以选择已创建的规则。要选择之前构建的过滤器,请单击 `Apply a Display Filter(应用显示过滤器)`选项卡旁边的 `flag` 图标。 + +[![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10] + +我们还可以根据颜色编码过滤数据,默认情况下,浅紫色是 TCP 流量,浅蓝色是 UDP 流量,黑色标识有错误的数据包,看看这些编码是什么意思,点击 `View -> Coloring Rules`,我们也可以改变这些编码。 + +[![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11] + +在我们得到我们需要的结果之后,我们可以点击任何捕获的数据包以获得有关该数据包的更多详细信息,这将显示该网络数据包的所有数据。 + +Wireshark 是一个非常强大的工具,需要一些时间来习惯并对其进行命令操作,本教程将帮助你入门。请随时在下面的评论框中提出你的疑问或建议。 + + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com + +作者:[Pradeep Kumar][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linuxtechi.com/author/pradeep/ +[1]:https://www.linuxtechi.com/author/pradeep/ +[2]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg +[3]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg +[4]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg +[5]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg +[6]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg +[7]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg +[8]:https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg +[9]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg +[10]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg 
+[11]:https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg From 5d353a7da0cbc1fc83a5784e69b4c32bdc5ece96 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 27 Sep 2018 16:15:49 +0800 Subject: [PATCH 071/736] Update 20180816 An introduction to the Django Python web app framework.md --- ...6 An introduction to the Django Python web app framework.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20180816 An introduction to the Django Python web app framework.md b/sources/tech/20180816 An introduction to the Django Python web app framework.md index 21ab9d21ae..ab7dba9526 100644 --- a/sources/tech/20180816 An introduction to the Django Python web app framework.md +++ b/sources/tech/20180816 An introduction to the Django Python web app framework.md @@ -1,3 +1,6 @@ +Translating by MjSeven + + An introduction to the Django Python web app framework ====== From c53b2655fc156299aec7867641f7500f9863ab69 Mon Sep 17 00:00:00 2001 From: BayarHwasaii <20964952+bayar199468@users.noreply.github.com> Date: Thu, 27 Sep 2018 20:45:47 +0800 Subject: [PATCH 072/736] 7 Best eBook Readers for Linux (#10390) --- sources/tech/20171012 7 Best eBook Readers for Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171012 7 Best eBook Readers for Linux.md b/sources/tech/20171012 7 Best eBook Readers for Linux.md index 5198f1bdc0..128c667d88 100644 --- a/sources/tech/20171012 7 Best eBook Readers for Linux.md +++ b/sources/tech/20171012 7 Best eBook Readers for Linux.md @@ -1,3 +1,4 @@ +Translating by bayar199468 7 Best eBook Readers for Linux ====== **Brief:** In this article, we are covering some of the best ebook readers for Linux. These apps give a better reading experience and some will even help in managing your ebooks. 
From 4dc49729c51ef9ef2e0cf59df3c82e8a5ed05586 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 27 Sep 2018 21:05:05 +0800 Subject: [PATCH 073/736] Update 20180816 An introduction to the Django Python web app framework.md (#10389) * Delete 20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md * Create 20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md * Update 20180816 An introduction to the Django Python web app framework.md --- ...6 An introduction to the Django Python web app framework.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20180816 An introduction to the Django Python web app framework.md b/sources/tech/20180816 An introduction to the Django Python web app framework.md index 21ab9d21ae..ab7dba9526 100644 --- a/sources/tech/20180816 An introduction to the Django Python web app framework.md +++ b/sources/tech/20180816 An introduction to the Django Python web app framework.md @@ -1,3 +1,6 @@ +Translating by MjSeven + + An introduction to the Django Python web app framework ====== From 75f16370efdde027a5c4753cd3a397e6dcf1f987 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 27 Sep 2018 22:03:12 +0800 Subject: [PATCH 074/736] PRF:20180806 Anatomy of a Linux DNS Lookup - Part IV.md @pinewall --- ...Anatomy of a Linux DNS Lookup - Part IV.md | 40 +++++++++---------- 1 file changed, 18 insertions(+), 22 deletions(-) diff --git a/translated/tech/20180806 Anatomy of a Linux DNS Lookup - Part IV.md b/translated/tech/20180806 Anatomy of a Linux DNS Lookup - Part IV.md index 9f3dd93437..e2f893b210 100644 --- a/translated/tech/20180806 Anatomy of a Linux DNS Lookup - Part IV.md +++ b/translated/tech/20180806 Anatomy of a Linux DNS Lookup - Part IV.md @@ -16,12 +16,8 @@ Linux DNS 查询剖析(第四部分) 在第四部分中,我将介绍容器如何完成 DNS 查询。你想的没错,也不是那么简单。 -* * * - ### 1) Docker 和 DNS -============================================================ - 在 [Linux DNS 
查询剖析(第三部分)][3] 中,我们介绍了 `dnsmasq`,其工作方式如下:将 DNS 查询指向到 localhost 地址 `127.0.0.1`,同时启动一个进程监听 `53` 端口并处理查询请求。 在按上述方式配置 DNS 的主机上,如果运行了一个 Docker 容器,容器内的 `/etc/resolv.conf` 文件会是怎样的呢? @@ -72,29 +68,29 @@ google.com.             112     IN      A       172.217.23.14 在这个问题上,Docker 的解决方案是忽略所有可能的复杂情况,即无论主机中使用什么 DNS 服务器,容器内都使用 Google 的 DNS 服务器 `8.8.8.8` 和 `8.8.4.4` 完成 DNS 查询。 - _我的经历:在 2013 年,我遇到了使用 Docker 以来的第一个问题,与 Docker 的这种 DNS 解决方案密切相关。我们公司的网络屏蔽了 `8.8.8.8` 和 `8.8.4.4`,导致容器无法解析域名。_ +_我的经历:在 2013 年,我遇到了使用 Docker 以来的第一个问题,与 Docker 的这种 DNS 解决方案密切相关。我们公司的网络屏蔽了 `8.8.8.8` 和 `8.8.4.4`,导致容器无法解析域名。_ -这就是 Docker 容器的情况,但对于包括 Kubernetes 在内的容器 _编排引擎orchestrators_,情况又有些不同。 +这就是 Docker 容器的情况,但对于包括 Kubernetes 在内的容器 编排引擎orchestrators,情况又有些不同。 ### 2) Kubernetes 和 DNS -在 Kubernetes 中,最小部署单元是 `pod`;`pod` 是一组相互协作的容器,共享 IP 地址(和其它资源)。 +在 Kubernetes 中,最小部署单元是 pod;它是一组相互协作的容器,共享 IP 地址(和其它资源)。 Kubernetes 面临的一个额外的挑战是,将 Kubernetes 服务请求(例如,`myservice.kubernetes.io`)通过对应的解析器resolver,转发到具体服务地址对应的内网地址private network。这里提到的服务地址被称为归属于“集群域cluster domain”。集群域可由管理员配置,根据配置可以是 `cluster.local` 或 `myorg.badger` 等。 -在 Kubernetes 中,你可以为 `pod` 指定如下四种 `pod` 内 DNS 查询的方式。 +在 Kubernetes 中,你可以为 pod 指定如下四种 pod 内 DNS 查询的方式。 -* Default +**Default** -在这种(名称容易让人误解)的方式中,`pod` 与其所在的主机采用相同的 DNS 查询路径,与前面介绍的主机 DNS 查询一致。我们说这种方式的名称容易让人误解,因为该方式并不是默认选项!`ClusterFirst` 才是默认选项。 +在这种(名称容易让人误解)的方式中,pod 与其所在的主机采用相同的 DNS 查询路径,与前面介绍的主机 DNS 查询一致。我们说这种方式的名称容易让人误解,因为该方式并不是默认选项!`ClusterFirst` 才是默认选项。 如果你希望覆盖 `/etc/resolv.conf` 中的条目,你可以添加到 `kubelet` 的配置中。 -* ClusterFirst +**ClusterFirst** 在 `ClusterFirst` 方式中,遇到 DNS 查询请求会做有选择的转发。根据配置的不同,有以下两种方式: -第一种方式配置相对古老但更简明,即采用一个规则:如果请求的域名不是集群域的子域,那么将其转发到 `pod` 所在的主机。 +第一种方式配置相对古老但更简明,即采用一个规则:如果请求的域名不是集群域的子域,那么将其转发到 pod 所在的主机。 第二种方式相对新一些,你可以在内部 DNS 中配置选择性转发。 @@ -115,27 +111,27 @@ data: 在 `stubDomains` 条目中,可以为特定域名指定特定的 DNS 服务器;而 `upstreamNameservers` 条目则给出,待查询域名不是集群域子域情况下用到的 DNS 服务器。 -这是通过在一个 `pod` 中运行我们熟知的 `dnsmasq` 实现的。 +这是通过在一个 pod 中运行我们熟知的 `dnsmasq` 实现的。 
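A quick way to see which of these DNS policies a pod actually ended up with is to inspect its `resolv.conf` from the inside. An illustrative sketch (assumes a working `kubectl` context; the pod name and busybox image are examples, and the script falls back gracefully when no cluster is reachable):

```shell
#!/bin/sh
# Inspect the DNS configuration a freshly created pod actually receives.
# Illustrative: assumes kubectl is configured; degrades gracefully otherwise.
if command -v kubectl >/dev/null 2>&1; then
    conf=$(kubectl run dns-probe --image=busybox --restart=Never --rm -i \
        -- cat /etc/resolv.conf 2>&1) || conf="kubectl run failed: $conf"
else
    conf="kubectl is not available on this system"
fi
printf '%s\n' "$conf"
```

With the default `ClusterFirst` policy, the nameserver shown should be the cluster DNS service address rather than the host's resolver.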
![kubedns](https://zwischenzugs.files.wordpress.com/2018/08/kubedns.png?w=525) 剩下两种选项都比较小众: -* ClusterFirstWithHostNet +**ClusterFirstWithHostNet** -适用于 `pod` 使用主机网络的情况,例如绕开 Docker 网络配置,直接使用与 `pod` 对应主机相同的网络。 +适用于 pod 使用主机网络的情况,例如绕开 Docker 网络配置,直接使用与 pod 对应主机相同的网络。 -* None +**None** `None` 意味着不改变 DNS,但强制要求你在 `pod` 规范文件specification的 `dnsConfig` 条目中指定 DNS 配置。 ### CoreDNS 即将到来 -除了上面提到的那些,一旦 `CoreDNS` 取代Kubernetes 中的 `kube-dns`,情况还会发生变化。`CoreDNS` 相比 `kube-dns` 具有可配置性更高、效率更高等优势。 +除了上面提到的那些,一旦 `CoreDNS` 取代 Kubernetes 中的 `kube-dns`,情况还会发生变化。`CoreDNS` 相比 `kube-dns` 具有可配置性更高、效率更高等优势。 如果想了解更多,参考[这里][5]。 -如果你对 OpenShift 的网络感兴趣,我曾写过一篇[文章][6]可供你参考。但文章中 OpenShift 的版本是 `3.6`,可能有些过时。 +如果你对 OpenShift 的网络感兴趣,我曾写过一篇[文章][6]可供你参考。但文章中 OpenShift 的版本是 3.6,可能有些过时。 ### 第四部分总结 @@ -152,14 +148,14 @@ via: https://zwischenzugs.com/2018/08/06/anatomy-of-a-linux-dns-lookup-part-iv/ 作者:[zwischenzugs][a] 译者:[pinewall](https://github.com/pinewall) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://zwischenzugs.com/ -[1]:https://zwischenzugs.com/2018/06/08/anatomy-of-a-linux-dns-lookup-part-i/ -[2]:https://zwischenzugs.com/2018/06/18/anatomy-of-a-linux-dns-lookup-part-ii/ -[3]:https://zwischenzugs.com/2018/07/06/anatomy-of-a-linux-dns-lookup-part-iii/ +[1]:https://linux.cn/article-9943-1.html +[2]:https://linux.cn/article-9949-1.html +[3]:https://linux.cn/article-9972-1.html [4]:https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#impacts-on-pods [5]:https://coredns.io/ [6]:https://zwischenzugs.com/2017/10/21/openshift-3-6-dns-in-pictures/ From c12499182602eb52d877518d3b323dbf4c4ef4cb Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 27 Sep 2018 22:03:39 +0800 Subject: [PATCH 075/736] PUB:20180806 Anatomy of a Linux DNS Lookup - Part IV.md @pinewall https://linux.cn/article-10056-1.html --- .../20180806 Anatomy of a Linux DNS Lookup - Part IV.md | 
0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180806 Anatomy of a Linux DNS Lookup - Part IV.md (100%) diff --git a/translated/tech/20180806 Anatomy of a Linux DNS Lookup - Part IV.md b/published/20180806 Anatomy of a Linux DNS Lookup - Part IV.md similarity index 100% rename from translated/tech/20180806 Anatomy of a Linux DNS Lookup - Part IV.md rename to published/20180806 Anatomy of a Linux DNS Lookup - Part IV.md From 06dfdbd0cc6eb2802e08ffe92c9a855a7fc02bf2 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 27 Sep 2018 23:40:31 +0800 Subject: [PATCH 076/736] PRF:20180201 Here are some amazing advantages of Go that you dont hear much about.md @imquanquan --- ...ges of Go that you dont hear much about.md | 104 +++++++----------- 1 file changed, 41 insertions(+), 63 deletions(-) diff --git a/translated/tech/20180201 Here are some amazing advantages of Go that you dont hear much about.md b/translated/tech/20180201 Here are some amazing advantages of Go that you dont hear much about.md index 954d800b25..c1e3c36951 100644 --- a/translated/tech/20180201 Here are some amazing advantages of Go that you dont hear much about.md +++ b/translated/tech/20180201 Here are some amazing advantages of Go that you dont hear much about.md @@ -1,65 +1,58 @@ 你没听说过的 Go 语言惊人优点 -============================================================ +========= ![](https://cdn-images-1.medium.com/max/2000/1*NDXd5I87VZG0Z74N7dog0g.png) -来自 [https://github.com/ashleymcnamara/gophers][1] 的图稿 +*来自 [https://github.com/ashleymcnamara/gophers][1] 的图稿* -在这篇文章中,我将讨论为什么你需要尝试一下 Go,以及应该从哪里学起。 +在这篇文章中,我将讨论为什么你需要尝试一下 Go 语言,以及应该从哪里学起。 -Golang 是可能是最近几年里你经常听人说起的编程语言。尽管它在 2009 年已经发布,但它最近才开始流行起来。 +Go 语言是可能是最近几年里你经常听人说起的编程语言。尽管它在 2009 年已经发布了,但它最近才开始流行起来。 ![](https://cdn-images-1.medium.com/max/2000/1*cQ8QzhCPiFXqk_oQdUk_zw.png) -根据 Google 趋势,Golang 语言非常流行。 +*根据 Google 趋势,Go 语言非常流行。* -这篇文章不会讨论一些你经常看到的 Golang 的主要特性。 +这篇文章不会讨论一些你经常看到的 Go 语言的主要特性。 
-相反,我想向您介绍一些相当小众但仍然很重要的功能。在您决定尝试Go后,您才会知道这些功能。 +相反,我想向您介绍一些相当小众但仍然很重要的功能。只有在您决定尝试 Go 语言后,您才会知道这些功能。 这些都是表面上没有体现出来的惊人特性,但它们可以为您节省数周或数月的工作量。而且这些特性还可以使软件开发更加愉快。 -阅读本文不需要任何语言经验,所以不比担心 Golang 对你来说是新的事物。如果你想了解更多,可以看看我在底部列出的一些额外的链接,。 +阅读本文不需要任何语言经验,所以不必担心你还不了解 Go 语言。如果你想了解更多,可以看看我在底部列出的一些额外的链接。 我们将讨论以下主题: * GoDoc - * 静态代码分析 - * 内置的测试和分析框架 - * 竞争条件检测 - * 学习曲线 - -* 反射(Reflection) - -* Opinionatedness(专制独裁的 Go) - +* 反射 +* Opinionatedness * 文化 请注意,这个列表不遵循任何特定顺序来讨论。 ### GoDoc -Golang 非常重视代码中的文档,简洁也是如此。 +Go 语言非常重视代码中的文档,所以也很简洁。 -[GoDoc][4] 是一个静态代码分析工具,可以直接从代码中创建漂亮的文档页面。GoDoc 的一个显着特点是它不使用任何其他的语言,如 JavaDoc,PHPDoc 或 JSDoc 来注释代码中的结构,只需要用英语。 +[GoDoc][4] 是一个静态代码分析工具,可以直接从代码中创建漂亮的文档页面。GoDoc 的一个显著特点是它不使用任何其他的语言,如 JavaDoc、PHPDoc 或 JSDoc 来注释代码中的结构,只需要用英语。 -它使用从代码中获取的尽可能多的信息来概述、构造和格式化文档。它有多而全的功能,比如:交叉引用,代码示例以及一个指向版本控制系统仓库的链接。 +它使用从代码中获取的尽可能多的信息来概述、构造和格式化文档。它有多而全的功能,比如:交叉引用、代码示例,并直接链接到你的版本控制系统仓库。 -而你需要做的只有添加一些好的,像 `// MyFunc transforms Foo into Bar` 这样子的注释,而这些注释也会反映在的文档中。你甚至可以添加一些通过网络接口或者在本地可以实际运行的 [代码示例][5]。 +而你需要做的只有添加一些像 `// MyFunc transforms Foo into Bar` 这样子的老牌注释,而这些注释也会反映在的文档中。你甚至可以添加一些通过网络界面或者在本地可以实际运行的 [代码示例][5]。 -GoDoc 是 Go 的唯一文档引擎,供整个社区使用。这意味着用 Go 编写的每个库或应用程序都具有相同的文档格式。从长远来看,它可以帮你在浏览这些文档时节省大量时间。 +GoDoc 是 Go 的唯一文档引擎,整个社区都在使用。这意味着用 Go 编写的每个库或应用程序都具有相同的文档格式。从长远来看,它可以帮你在浏览这些文档时节省大量时间。 例如,这是我最近一个小项目的 GoDoc 页面:[pullkee — GoDoc][6]。 ### 静态代码分析 -Go 严重依赖于静态代码分析。例子包括 godoc 文档,gofmt 代码格式化,golint 代码风格统一,等等。 +Go 严重依赖于静态代码分析。例如用于文档的 [godoc][7],用于代码格式化的 [gofmt][8],用于代码风格的 [golint][9],等等。 -其中有很多甚至全部包含在类似 [gometalinter][10] 的项目中,这些将它们全部组合成一个实用程序。 +它们是如此之多,甚至有一个总揽了它们的项目 [gometalinter][10] ,将它们组合成了单一的实用程序。 这些工具通常作为独立的命令行应用程序实现,并可轻松与任何编码环境集成。 @@ -67,21 +60,21 @@ Go 严重依赖于静态代码分析。例子包括 godoc 文档,gofmt 代码 创建自己的分析器非常简单,因为 Go 有专门的内置包来解析和加工 Go 源码。 -你可以从这个链接中了解到更多相关内容: [GothamGo Kickoff Meetup: Go Static Analysis Tools by Alan Donovan][11]. 
+你可以从这个链接中了解到更多相关内容: [GothamGo Kickoff Meetup: Alan Donovan 的 Go 静态分析工具][11]。 ### 内置的测试和分析框架 -您是否曾尝试为一个从头开始的 Javascript 项目选择测试框架?如果是这样,你可能会明白经历这种分析瘫痪的斗争。您可能也意识到您没有使用其中 80% 的框架。 +您是否曾尝试为一个从头开始的 JavaScript 项目选择测试框架?如果是这样,你或许会理解经历这种过度分析analysis paralysis的痛苦。您可能也意识到您没有使用其中 80% 的框架。 一旦您需要进行一些可靠的分析,问题就会重复出现。 -Go 附带内置测试工具,旨在简化和提高效率。它为您提供了最简单的 API,并做出最小的假设。您可以将它用于不同类型的测试,分析,甚至可以提供可执行代码示例。 +Go 附带内置测试工具,旨在简化和提高效率。它为您提供了最简单的 API,并做出最小的假设。您可以将它用于不同类型的测试、分析,甚至可以提供可执行代码示例。 -它可以开箱即用地生成持续集成友好的输出,而且它的用法很简单,只需运行 `go test`。当然,它还支持高级功能,如并行运行测试,跳过标记代码,以及其他更多功能。 +它可以开箱即用地生成便于持续集成的输出,而且它的用法很简单,只需运行 `go test`。当然,它还支持高级功能,如并行运行测试,跳过标记代码,以及其他更多功能。 ### 竞争条件检测 -您可能已经了解了 Goroutines,它们在 Go 中用于实现并发代码执行。如果你未曾了解过,[这里][12]有一个非常简短的解释。 +您可能已经听说了 Goroutine,它们在 Go 中用于实现并发代码执行。如果你未曾了解过,[这里][12]有一个非常简短的解释。 无论具体技术如何,复杂应用中的并发编程都不容易,部分原因在于竞争条件的可能性。 @@ -93,13 +86,13 @@ Go 附带内置测试工具,旨在简化和提高效率。它为您提供了 ### 学习曲线 -您可以在一个晚上学习所有 Go 的语言功能。我是认真的。当然,还有标准库,以及不同,更具体领域的最佳实践。但是两个小时就足以让你自信地编写一个简单的 HTTP 服务器或命令行应用程序。 +您可以在一个晚上学习**所有**的 Go 语言功能。我是认真的。当然,还有标准库,以及不同的,更具体领域的最佳实践。但是两个小时就足以让你自信地编写一个简单的 HTTP 服务器或命令行应用程序。 -Golang 拥有[出色的文档][14],大部分高级主题已经在博客上进行了介绍:[The Go Programming Language Blog][15]。 +Go 语言拥有[出色的文档][14],大部分高级主题已经在他们的博客上进行了介绍:[Go 编程语言博客][15]。 -比起 Java(以及 Java 家族的语言),Javascript,Ruby,Python 甚至 PHP,你可以更轻松地把 Go 语言带到你的团队中。由于环境易于设置,您的团队在完成第一个生产代码之前需要进行的投资要小得多。 +比起 Java(以及 Java 家族的语言)、Javascript、Ruby、Python 甚至 PHP,你可以更轻松地把 Go 语言带到你的团队中。由于环境易于设置,您的团队在完成第一个生产代码之前需要进行的投资要小得多。 -### 反射(Reflection) +### 反射 代码反射本质上是一种隐藏在编译器下并访问有关语言结构的各种元信息的能力,例如变量或函数。 @@ -107,19 +100,19 @@ Golang 拥有[出色的文档][14],大部分高级主题已经在博客上进 此外,Go [没有实现一个名为泛型的概念][16],这使得以抽象方式处理多种类型更具挑战性。然而,由于泛型带来的复杂程度,许多人认为不实现泛型对语言实际上是有益的。我完全同意。 -根据 Go 的理念(这是一个单独的主题),您应该努力不要过度设计您的解决方案。这也适用于动态类型编程。尽可能坚持使用静态类型,并在确切知道要处理的类型时使用接口(interfaces)。接口在 Go 中非常强大且无处不在。 +根据 Go 的理念(这是一个单独的主题),您应该努力不要过度设计您的解决方案。这也适用于动态类型编程。尽可能坚持使用静态类型,并在确切知道要处理的类型时使用接口interface。接口在 Go 中非常强大且无处不在。 -但是,仍然存在一些情况,你无法知道你处理的数据类型。一个很好的例子是 JSON。您可以在应用程序中来回转换所有类型的数据。字符串,缓冲区,各种数字,嵌套结构等。 +但是,仍然存在一些情况,你无法知道你处理的数据类型。一个很好的例子是 
JSON。您可以在应用程序中来回转换所有类型的数据。字符串、缓冲区、各种数字、嵌套结构等。 -为了解决这个问题,您需要一个工具来检查运行时的数据并根据其类型和结构采取不同行为。反射(Reflect)可以帮到你。Go 拥有一流的反射包,使您的代码能够像 Javascript 这样的语言一样动态。 +为了解决这个问题,您需要一个工具来检查运行时的数据并根据其类型和结构采取不同行为。反射Reflect可以帮到你。Go 拥有一流的反射包,使您的代码能够像 Javascript 这样的语言一样动态。 -一个重要的警告是知道你使用它所带来的代价 - 并且只有知道在没有更简单的方法时才使用它。 +一个重要的警告是知道你使用它所带来的代价 —— 并且只有知道在没有更简单的方法时才使用它。 你可以在这里阅读更多相关信息: [反射的法则 — Go 博客][18]. 您还可以在此处阅读 JSON 包源码中的一些实际代码: [src/encoding/json/encode.go — Source Code][19] -### Opinionatedness +### Opinionatedness(专制独裁的 Go) 顺便问一下,有这样一个单词吗? @@ -127,9 +120,9 @@ Golang 拥有[出色的文档][14],大部分高级主题已经在博客上进 这有时候基本上让我卡住了。我需要花时间思考这些事情而不是编写代码并满足用户。 -首先,我应该注意到我完全可以得到这些惯例的来源,它总是来源于你或者你的团队。无论如何,即使是一群经验丰富的 Javascript 开发人员也可以轻松地发现自己拥有完全不同的工具和范例的大部分经验,以实现相同的结果。 +首先,我应该注意到我完全知道这些惯例的来源,它总是来源于你或者你的团队。无论如何,即使是一群经验丰富的 Javascript 开发人员也很容易发现他们在实现相同的结果时,而大部分的经验却是在完全不同的工具和范例上。 -这导致整个团队中分析的瘫痪,并且使得个体之间更难以相互协作。 +这导致整个团队中出现过度分析,并且使得个体之间更难以相互协作。 嗯,Go 是不同的。即使您对如何构建和维护代码有很多强烈的意见,例如:如何命名,要遵循哪些结构模式,如何更好地实现并发。但你只有一个每个人都遵循的风格指南。你只有一个内置在基本工具链中的测试框架。 @@ -141,11 +134,11 @@ Golang 拥有[出色的文档][14],大部分高级主题已经在博客上进 人们说,每当你学习一门新的口语时,你也会沉浸在说这种语言的人的某些文化中。因此,您学习的语言越多,您可能会有更多的变化。 -编程语言也是如此。无论您将来如何应用新的编程语言,它总能给的带来新的编程视角或某些特别的技术。 +编程语言也是如此。无论您将来如何应用新的编程语言,它总能给你带来新的编程视角或某些特别的技术。 -无论是函数式编程,模式匹配(pattern matching)还是原型继承(prototypal inheritance)。一旦你学会了它们,你就可以随身携带这些编程思想,这扩展了你作为软件开发人员所拥有的问题解决工具集。它们也改变了你阅读高质量代码的方式。 +无论是函数式编程,模式匹配pattern matching还是原型继承prototypal inheritance。一旦你学会了它们,你就可以随身携带这些编程思想,这扩展了你作为软件开发人员所拥有的问题解决工具集。它们也改变了你阅读高质量代码的方式。 -而 Go 在方面有一项了不起的财富。Go 文化的主要支柱是保持简单,脚踏实地的代码,而不会产生许多冗余的抽象概念,并将可维护性放在首位。大部分时间花费在代码的编写工作上,而不是在修补工具和环境或者选择不同的实现方式上,这也是 Go文化的一部分。 +而 Go 在这方面有一项了不起的财富。Go 文化的主要支柱是保持简单,脚踏实地的代码,而不会产生许多冗余的抽象概念,并将可维护性放在首位。大部分时间花费在代码的编写工作上,而不是在修补工具和环境或者选择不同的实现方式上,这也是 Go 文化的一部分。 Go 文化也可以总结为:“应当只用一种方法去做一件事”。 @@ -161,12 +154,11 @@ Go 文化也可以总结为:“应当只用一种方法去做一件事”。 这不是 Go 的所有惊人的优点的完整列表,只是一些被人低估的特性。 -请尝试一下从 [Go 之旅(A Tour of Go)][20]来开始学习 Go,这将是一个令人惊叹的开始。 +请尝试一下从 [Go 之旅][20] 来开始学习 Go,这将是一个令人惊叹的开始。 如果您想了解有关 Go 的优点的更多信息,可以查看以下链接: * [你为什么要学习 Go? 
- Keval Patel][2] - * [告别Node.js - TJ Holowaychuk][3] 并在评论中分享您的阅读感悟! @@ -175,30 +167,16 @@ Go 文化也可以总结为:“应当只用一种方法去做一件事”。 不断为您的工作寻找最好的工具! -* * * - -If you like this article, please consider following me for more, and clicking on those funny green little hands right below this text for sharing. 👏👏👏 - -Check out my [Github][21] and follow me on [Twitter][22]! - --------------------------------------------------------------------------------- - -作者简介: - -Software Engineer and Traveler. Coding for fun. Javascript enthusiast. Tinkering with Golang. A lot into SOA and Docker. Architect at Velvica. - ------------- - - +------------------------------------------------------- via: https://medium.freecodecamp.org/here-are-some-amazing-advantages-of-go-that-you-dont-hear-much-about-1af99de3b23a 作者:[Kirill Rogovoy][a] -译者:[译者ID](https://github.com/imquanquan) -校对:[校对者ID](https://github.com/校对者ID) +译者:[imquanquan](https://github.com/imquanquan) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]: +[a]:https://twitter.com/krogovoy [1]:https://github.com/ashleymcnamara/gophers [2]:https://medium.com/@kevalpatel2106/why-should-you-learn-go-f607681fad65 [3]:https://medium.com/@tjholowaychuk/farewell-node-js-4ba9e7f3e52b From 402ec00da9d48f2bd1d549dc6585ccb306fbf083 Mon Sep 17 00:00:00 2001 From: chenliang Date: Fri, 28 Sep 2018 01:40:48 +0800 Subject: [PATCH 077/736] translated by Flowsnow --- ...20180528 What is behavior-driven Python.md | 308 ------------------ ...20180528 What is behavior-driven Python.md | 237 ++++++++++++++ 2 files changed, 237 insertions(+), 308 deletions(-) delete mode 100644 sources/tech/20180528 What is behavior-driven Python.md create mode 100644 translated/tech/20180528 What is behavior-driven Python.md diff --git a/sources/tech/20180528 What is behavior-driven Python.md b/sources/tech/20180528 What is behavior-driven Python.md deleted file mode 100644 index 
100e0b0313..0000000000 --- a/sources/tech/20180528 What is behavior-driven Python.md +++ /dev/null @@ -1,308 +0,0 @@ -translating by Flowsnow -What is behavior-driven Python? -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk) -Have you heard about [behavior-driven development][1] (BDD) and wondered what all the buzz is about? Maybe you've caught team members talking in "gherkin" and felt left out of the conversation. Or perhaps you're a Pythonista looking for a better way to test your code. Whatever the circumstance, learning about BDD can help you and your team achieve better collaboration and test automation, and Python's `behave` framework is a great place to start. - -### What is BDD? - - * Submitting forms on a website - * Searching for desired results - * Saving a document - * Making REST API calls - * Running command-line interface commands - - - -In software, a behavior is how a feature operates within a well-defined scenario of inputs, actions, and outcomes. Products can exhibit countless behaviors, such as: - -Defining a product's features based on its behaviors makes it easier to describe them, develop them, and test them. This is the heart of BDD: making behaviors the focal point of software development. Behaviors are defined early in development using a [specification by example][2] language. One of the most common behavior spec languages is [Gherkin][3], the Given-When-Then scenario format from the [Cucumber][4] project. Behavior specs are basically plain-language descriptions of how a behavior works, with a little bit of formal structure for consistency and focus. Test frameworks can easily automate these behavior specs by "gluing" step texts to code implementations. 
- -Below is an example of a behavior spec written in Gherkin: -``` -Scenario: Basic DuckDuckGo Search - -  Given the DuckDuckGo home page is displayed - -  When the user searches for "panda" - -  Then results are shown for "panda" - -``` - -At a quick glance, the behavior is intuitive to understand. Except for a few keywords, the language is freeform. The scenario is concise yet meaningful. A real-world example illustrates the behavior. Steps declaratively indicate what should happen—without getting bogged down in the details of how. - -The [main benefits of BDD][5] are good collaboration and automation. Everyone can contribute to behavior development, not just programmers. Expected behaviors are defined and understood from the beginning of the process. Tests can be automated together with the features they cover. Each test covers a singular, unique behavior in order to avoid duplication. And, finally, existing steps can be reused by new behavior specs, creating a snowball effect. - -### Python's behave framework - -`behave` is one of the most popular BDD frameworks in Python. It is very similar to other Gherkin-based Cucumber frameworks despite not holding the official Cucumber designation. `behave` has two primary layers: - - 1. Behavior specs written in Gherkin `.feature` files - 2. Step definitions and hooks written in Python modules that implement Gherkin steps - - - -As shown in the example above, Gherkin scenarios use a three-part format: - - 1. Given some initial state - 2. When an action is taken - 3. Then verify the outcome - - - -Each step is "glued" by decorator to a Python function when `behave` runs tests. - -### Installation - -As a prerequisite, make sure you have Python and `pip` installed on your machine. I strongly recommend using Python 3. (I also recommend using [`pipenv`][6], but the following example commands use the more basic `pip`.) 
- -Only one package is required for `behave`: -``` -pip install behave - -``` - -Other packages may also be useful, such as: -``` -pip install requests    # for REST API calls - -pip install selenium    # for Web browser interactions - -``` - -The [behavior-driven-Python][7] project on GitHub contains the examples used in this article. - -### Gherkin features - -The Gherkin syntax that `behave` uses is practically compliant with the official Cucumber Gherkin standard. A `.feature` file has Feature sections, which in turn have Scenario sections with Given-When-Then steps. Below is an example: -``` -Feature: Cucumber Basket - -  As a gardener, - -  I want to carry many cucumbers in a basket, - -  So that I don’t drop them all. - -  - -  @cucumber-basket - -  Scenario: Add and remove cucumbers - -    Given the basket is empty - -    When "4" cucumbers are added to the basket - -    And "6" more cucumbers are added to the basket - -    But "3" cucumbers are removed from the basket - -    Then the basket contains "7" cucumbers - -``` - -There are a few important things to note here: - - * Both the Feature and Scenario sections have [short, descriptive titles][8]. - * The lines immediately following the Feature title are comments ignored by `behave`. It is a good practice to put the user story there. - * Scenarios and Features can have tags (notice the `@cucumber-basket` mark) for hooks and filtering (explained below). - * Steps follow a [strict Given-When-Then order][9]. - * Additional steps can be added for any type using `And` and `But`. - * Steps can be parametrized with inputs—notice the values in double quotes. 
- - - -Scenarios can also be written as templates with multiple input combinations by using a Scenario Outline: -``` -Feature: Cucumber Basket - - - -  @cucumber-basket - -  Scenario Outline: Add cucumbers - -    Given the basket has “” cucumbers - -    When "" cucumbers are added to the basket - -    Then the basket contains "" cucumbers - - - -    Examples: Cucumber Counts - -      | initial | more | total | - -      |    0    |   1  |   1   | - -      |    1    |   2  |   3   | - -      |    5    |   4  |   9   | - -``` - -Scenario Outlines always have an Examples table, in which the first row gives column titles and each subsequent row gives an input combo. The row values are substituted wherever a column title appears in a step surrounded by angle brackets. In the example above, the scenario will be run three times because there are three rows of input combos. Scenario Outlines are a great way to avoid duplicate scenarios. - -There are other elements of the Gherkin language, but these are the main mechanics. To learn more, read the Automation Panda articles [Gherkin by Example][10] and [Writing Good Gherkin][11]. - -### Python mechanics - -Every Gherkin step must be "glued" to a step definition, a Python function that provides the implementation. Each function has a step type decorator with the matching string. It also receives a shared context and any step parameters. Feature files must be placed in a directory named `features/`, while step definition modules must be placed in a directory named `features/steps/`. Any feature file can use step definitions from any module—they do not need to have the same names. Below is an example Python module with step definitions for the cucumber basket features. 
-``` -from behave import * - -from cucumbers.basket import CucumberBasket - - - -@given('the basket has "{initial:d}" cucumbers') - -def step_impl(context, initial): - -    context.basket = CucumberBasket(initial_count=initial) - - - -@when('"{some:d}" cucumbers are added to the basket') - -def step_impl(context, some): - -    context.basket.add(some) - - - -@then('the basket contains "{total:d}" cucumbers') - -def step_impl(context, total): - -    assert context.basket.count == total - -``` - -Three [step matchers][12] are available: `parse`, `cfparse`, and `re`. The default and simplest marcher is `parse`, which is shown in the example above. Notice how parametrized values are parsed and passed into the functions as input arguments. A common best practice is to put double quotes around parameters in steps. - -Each step definition function also receives a [context][13] variable that holds data specific to the current scenario being run, such as `feature`, `scenario`, and `tags` fields. Custom fields may be added, too, to share data between steps. Always use context to share data—never use global variables! - -`behave` also supports [hooks][14] to handle automation concerns outside of Gherkin steps. A hook is a function that will be run before or after a step, scenario, feature, or whole test suite. Hooks are reminiscent of [aspect-oriented programming][15]. They should be placed in a special `environment.py` file under the `features/` directory. Hook functions can check the current scenario's tags, as well, so logic can be selectively applied. The example below shows how to use hooks to set up and tear down a Selenium WebDriver instance for any scenario tagged as `@web`. 
-``` -from selenium import webdriver - - - -def before_scenario(context, scenario): - -    if 'web' in context.tags: - -        context.browser = webdriver.Firefox() - -        context.browser.implicitly_wait(10) - - - -def after_scenario(context, scenario): - -    if 'web' in context.tags: - -        context.browser.quit() - -``` - -Note: Setup and cleanup can also be done with [fixtures][16] in `behave`. - -To offer an idea of what a `behave` project should look like, here's the example project's directory structure: - -![](https://opensource.com/sites/default/files/uploads/behave_dir_layout.png) - -Any Python packages and custom modules can be used with `behave`. Use good design patterns to build a scalable test automation solution. Step definition code should be concise. - -### Running tests - -To run tests from the command line, change to the project's root directory and run the `behave` command. Use the `–help` option to see all available options. - -Below are a few common use cases: -``` -# run all tests - -behave - - - -# run the scenarios in a feature file - -behave features/web.feature - - - -# run all tests that have the @duckduckgo tag - -behave --tags @duckduckgo - - - -# run all tests that do not have the @unit tag - -behave --tags ~@unit - - - -# run all tests that have @basket and either @add or @remove - -behave --tags @basket --tags @add,@remove - -``` - -For convenience, options may be saved in [config][17] files. - -### Other options - -`behave` is not the only BDD test framework in Python. Other good frameworks include: - - * `pytest-bdd` , a plugin for `pytest``behave`, it uses Gherkin feature files and step definition modules, but it also leverages all the features and plugins of `pytest`. For example, it can run Gherkin scenarios in parallel using `pytest-xdist`. BDD and non-BDD tests can also be executed together with the same filters. `pytest-bdd` also offers a more flexible directory layout. 
- - * `radish` is a "Gherkin-plus" framework—it adds Scenario Loops and Preconditions to the standard Gherkin language, which makes it more friendly to programmers. It also offers rich command line options like `behave`. - - * `lettuce` is an older BDD framework very similar to `behave`, with minor differences in framework mechanics. However, GitHub shows little recent activity in the project (as of May 2018). - - - -Any of these frameworks would be good choices. - -Also, remember that Python test frameworks can be used for any black box testing, even for non-Python products! BDD frameworks are great for web and service testing because their tests are declarative, and Python is a [great language for test automation][18]. - -This article is based on the author's [PyCon Cleveland 2018][19] talk, [Behavior-Driven Python][20]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/behavior-driven-python - -作者:[Andrew Knight][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/andylpk247 -[1]:https://automationpanda.com/bdd/ -[2]:https://en.wikipedia.org/wiki/Specification_by_example -[3]:https://automationpanda.com/2017/01/26/bdd-101-the-gherkin-language/ -[4]:https://cucumber.io/ -[5]:https://automationpanda.com/2017/02/13/12-awesome-benefits-of-bdd/ -[6]:https://docs.pipenv.org/ -[7]:https://github.com/AndyLPK247/behavior-driven-python -[8]:https://automationpanda.com/2018/01/31/good-gherkin-scenario-titles/ -[9]:https://automationpanda.com/2018/02/03/are-gherkin-scenarios-with-multiple-when-then-pairs-okay/ -[10]:https://automationpanda.com/2017/01/27/bdd-101-gherkin-by-example/ -[11]:https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/ 
-[12]:http://behave.readthedocs.io/en/latest/api.html#step-parameters
-[13]:http://behave.readthedocs.io/en/latest/api.html#detecting-that-user-code-overwrites-behave-context-attributes
-[14]:http://behave.readthedocs.io/en/latest/api.html#environment-file-functions
-[15]:https://en.wikipedia.org/wiki/Aspect-oriented_programming
-[16]:http://behave.readthedocs.io/en/latest/api.html#fixtures
-[17]:http://behave.readthedocs.io/en/latest/behave.html#configuration-files
-[18]:https://automationpanda.com/2017/01/21/the-best-programming-language-for-test-automation/
-[19]:https://us.pycon.org/2018/
-[20]:https://us.pycon.org/2018/schedule/presentation/87/
diff --git a/translated/tech/20180528 What is behavior-driven Python.md b/translated/tech/20180528 What is behavior-driven Python.md
new file mode 100644
index 0000000000..16edc3c802
--- /dev/null
+++ b/translated/tech/20180528 What is behavior-driven Python.md
@@ -0,0 +1,237 @@
+什么是行为驱动的Python?
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
+
+您是否听说过[行为驱动开发][1](BDD),并想知道它为什么如此受关注? 也许你已经发现团队成员在用“gherkin”交谈,并感到被排除在外无法参与其中。 或许你是一个Python爱好者,正在寻找更好的方法来测试你的代码。 无论在什么情况下,了解BDD都可以帮助您和您的团队实现更好的协作和测试自动化,而Python的`behave`框架是一个很好的起点。
+
+### 什么是BDD?
+
+在软件中,行为是指功能在明确定义的输入、动作和结果的场景中是如何运转的。 产品可以表现出无数的行为,例如:
+
+  * 在网站上提交表单
+  * 搜索想要的结果
+  * 保存文档
+  * 进行REST API调用
+  * 运行命令行界面命令
+
+根据产品的行为定义产品的功能可以更容易地描述产品,开发产品并对其进行测试。 这是BDD的核心:使行为成为软件开发的焦点。 在开发早期使用[示例规范][2]的语言来定义行为。 最常见的行为规范语言之一是[Gherkin][3],即[Cucumber][4]项目中的Given-When-Then场景格式。 行为规范基本上是对行为如何工作的简单语言描述,并带有少量保证一致性和聚焦的正式结构。 通过将步骤文本“粘合”到代码实现,测试框架可以轻松地自动化这些行为规范。
+
+下面是用Gherkin编写的行为规范的示例:
+
+```
+Scenario: Basic DuckDuckGo Search
+  Given the DuckDuckGo home page is displayed
+  When the user searches for "panda"
+  Then results are shown for "panda"
+```
+
+快速浏览一下,行为是直观易懂的。 除少数关键字外,该语言为自由格式。 场景简洁而有意义。 一个真实的例子说明了这种行为。 步骤以声明的方式表明应该发生什么——而不会陷入“如何实现”的细节中。
+
+[BDD的主要优点][5]是良好的协作和自动化。 每个人都可以为行为开发做出贡献,而不仅仅是程序员。 从流程开始就定义并理解预期的行为。 测试可以与它们涵盖的功能一起自动化。 每个测试都包含一个单一的、独特的行为,以避免重复。 最后,现有的步骤可以通过新的行为规范重用,从而产生雪球效果。
+
+### Python的behave框架
+
+`behave`是Python中最流行的BDD框架之一。 它与其他基于Gherkin的Cucumber框架非常相似,尽管没有得到官方的Cucumber定名。 `behave`有两个主要层:
+
+1. 用Gherkin的`.feature`文件编写的行为规范
+2. 用Python模块编写的步骤定义和钩子,用于实现Gherkin步骤
+
+如上例所示,Gherkin场景有三部分格式:
+
+1. 鉴于一些初始状态
+2. 当行为发生时
+3. 然后验证结果
+
+当`behave`运行测试时,每个步骤由装饰器“粘合”到Python函数。
+
+### 安装
+
+作为先决条件,请确保在你的计算机上安装了Python和`pip`。 我强烈建议使用Python 3。(我还建议使用[`pipenv`][6],但以下示例命令使用更基本的`pip`。)
+
+`behave`框架只需要一个包:
+
+```
+pip install behave
+```
+
+其他包也可能有用,例如:
+
+```
+pip install requests    # 用于调用REST API
+pip install selenium    # 用于web浏览器交互
+```
+
+GitHub上的[behavior-driven-Python][7]项目包含本文中使用的示例。
+
+### Gherkin特点
+
+`behave`框架使用的Gherkin语法实际上是符合官方的Cucumber Gherkin标准的。 `.feature`文件包含功能(Feature)部分,而Feature部分又包含具有Given-When-Then步骤的场景(Scenario)部分。 以下是一个例子:
+
+```
+Feature: Cucumber Basket
+  As a gardener,
+  I want to carry many cucumbers in a basket,
+  So that I don’t drop them all.
+
+  @cucumber-basket
+  Scenario: Add and remove cucumbers
+    Given the basket is empty
+    When "4" cucumbers are added to the basket
+    And "6" more cucumbers are added to the basket
+    But "3" cucumbers are removed from the basket
+    Then the basket contains "7" cucumbers
+```
+
+这里有一些重要的事情需要注意:
+
+- Feature和Scenario部分都有[简短的描述性标题][8]。
+- 紧跟在Feature标题后面的行是会被`behave`框架忽略掉的注释。将用户故事放在那里是一种很好的做法。
+- Scenarios和Features可以有标签(注意`@cucumber-basket`标记)用于钩子和过滤(如下所述)。
+- 步骤都遵循[严格的Given-When-Then顺序][9]。
+- 使用`And`和`But`可以为任何类型添加附加步骤。
+- 可以使用输入对步骤进行参数化——注意双引号里的值。
+
+通过使用场景大纲(Scenario Outline),场景也可以写为具有多个输入组合的模板:
+
+```
+Feature: Cucumber Basket
+
+  @cucumber-basket
+  Scenario Outline: Add cucumbers
+    Given the basket has "<initial>" cucumbers
+    When "<more>" cucumbers are added to the basket
+    Then the basket contains "<total>" cucumbers
+
+    Examples: Cucumber Counts
+      | initial | more | total |
+      |    0    |   1  |   1   |
+      |    1    |   2  |   3   |
+      |    5    |   4  |   9   |
+```
+
+场景大纲总是有一个Examples表,其中第一行给出列标题,后续每一行给出一个输入组合。 只要列标题出现在步骤中由尖括号括起的位置,该行的值就会被替换进去。 在上面的示例中,场景将运行三次,因为有三行输入组合。 场景大纲是避免重复场景的好方法。
+
+Gherkin语言还有其他元素,但这些是主要的机制。 想了解更多信息,请阅读Automation Panda这个网站的文章[Gherkin by Example][10]和[Writing Good Gherkin][11]。
+
+### Python机制
+
+每个Gherkin步骤必须“粘合”到步骤定义,即提供了实现的Python函数。 每个函数都有一个带有匹配字符串的步骤类型装饰器。 它还接收共享的上下文和任何步骤参数。 功能文件必须放在名为`features/`的目录中,而步骤定义模块必须放在名为`features/steps/`的目录中。 任何功能文件都可以使用任何模块中的步骤定义——它们不需要具有相同的名称。 下面是一个示例Python模块,其中包含cucumber basket功能的步骤定义。
+
+```
+from behave import *
+from cucumbers.basket import CucumberBasket
+
+@given('the basket has "{initial:d}" cucumbers')
+def step_impl(context, initial):
+    context.basket = CucumberBasket(initial_count=initial)
+
+@when('"{some:d}" cucumbers are added to the basket')
+def step_impl(context, some):
+    context.basket.add(some)
+
+@then('the basket contains "{total:d}" cucumbers')
+def step_impl(context, total):
+    assert context.basket.count == total
+```
+
+可以使用三个[步骤匹配器][12]:`parse`、`cfparse`和`re`。默认和最简单的匹配器是`parse`,如上例所示。注意如何解析参数化值并将其作为输入参数传递给函数。一个常见的最佳实践是在步骤中给参数加双引号。
+
+每个步骤定义函数还接收一个[上下文][13]变量,该变量保存当前正在运行的场景的数据,例如`feature`、`scenario`和`tags`字段。也可以添加自定义字段,用于在步骤之间共享数据。始终使用上下文来共享数据——永远不要使用全局变量!
+
+`behave`框架还支持[钩子][14]来处理Gherkin步骤之外的自动化问题。钩子是一个将在步骤、场景、功能或整个测试套件之前或之后运行的函数。钩子让人联想到[面向切面编程][15]。它们应放在`features/`目录下的特殊`environment.py`文件中。钩子函数也可以检查当前场景的标签,因此可以有选择地应用逻辑。下面的示例显示了如何使用钩子为标记为`@web`的任何场景创建和销毁一个Selenium WebDriver实例。
+
+```
+from selenium import webdriver
+
+def before_scenario(context, scenario):
+    if 'web' in context.tags:
+        context.browser = webdriver.Firefox()
+        context.browser.implicitly_wait(10)
+
+def after_scenario(context, scenario):
+    if 'web' in context.tags:
+        context.browser.quit()
+```
+
+注意:也可以使用[fixtures][16]进行构建和清理。
+
+要了解一个`behave`项目应该是什么样子,这里是示例项目的目录结构:
+
+![](https://opensource.com/sites/default/files/uploads/behave_dir_layout.png)
+
+任何Python包和自定义模块都可以与`behave`框架一起使用。 使用良好的设计模式构建可扩展的测试自动化解决方案。步骤定义代码应简明扼要。
+
+### 运行测试
+
+要从命令行运行测试,请切换到项目的根目录并运行`behave`命令。 使用`--help`选项查看所有可用选项。
+
+以下是一些常见用例:
+
+```
+# run all tests
+behave
+
+# run the scenarios in a feature file
+behave features/web.feature
+
+# run all tests that have the @duckduckgo tag
+behave --tags @duckduckgo
+
+# run all tests that do not have the @unit tag
+behave --tags ~@unit
+
+# run all tests that have @basket and either @add or @remove
+behave --tags @basket --tags @add,@remove
+```
+
+为方便起见,选项可以保存在[config][17]文件中。
+
+### 其他选择
+
+`behave`不是Python中唯一的BDD测试框架。其他好的框架包括:
+
+- `pytest-bdd`,`pytest`的插件,和`behave`一样,它使用Gherkin功能文件和步骤定义模块,但它也利用了`pytest`的所有功能和插件。例如,它可以使用`pytest-xdist`并行运行Gherkin场景。 BDD和非BDD测试也可以与相同的过滤器一起执行。 `pytest-bdd`还提供更灵活的目录布局。
+- `radish`是一个“Gherkin增强版”框架——它将场景循环和前提条件添加到标准的Gherkin语言中,这使得它对程序员更友好。它还提供丰富的命令行选项,如`behave`。
+- `lettuce`是一种较旧的BDD框架,与`behave`非常相似,仅在框架机制方面存在细微差别。然而,GitHub显示该项目最近的活动很少(截至2018年5月)。
+
+任何这些框架都是不错的选择。
+
+另外,请记住,Python测试框架可用于任何黑盒测试,即使对于非Python产品也是如此!
BDD框架非常适合Web和服务测试,因为它们的测试是声明性的,而Python是一种[很好的测试自动化语言][18]。 + +本文基于作者的[PyCon Cleveland 2018][19]演讲,[行为驱动的Python][20]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/5/behavior-driven-python + +作者:[Andrew Knight][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/andylpk247 +[1]:https://automationpanda.com/bdd/ +[2]:https://en.wikipedia.org/wiki/Specification_by_example +[3]:https://automationpanda.com/2017/01/26/bdd-101-the-gherkin-language/ +[4]:https://cucumber.io/ +[5]:https://automationpanda.com/2017/02/13/12-awesome-benefits-of-bdd/ +[6]:https://docs.pipenv.org/ +[7]:https://github.com/AndyLPK247/behavior-driven-python +[8]:https://automationpanda.com/2018/01/31/good-gherkin-scenario-titles/ +[9]:https://automationpanda.com/2018/02/03/are-gherkin-scenarios-with-multiple-when-then-pairs-okay/ +[10]:https://automationpanda.com/2017/01/27/bdd-101-gherkin-by-example/ +[11]:https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/ +[12]:http://behave.readthedocs.io/en/latest/api.html#step-parameters +[13]:http://behave.readthedocs.io/en/latest/api.html#detecting-that-user-code-overwrites-behave-context-attributes +[14]:http://behave.readthedocs.io/en/latest/api.html#environment-file-functions +[15]:https://en.wikipedia.org/wiki/Aspect-oriented_programming +[16]:http://behave.readthedocs.io/en/latest/api.html#fixtures +[17]:http://behave.readthedocs.io/en/latest/behave.html#configuration-files +[18]:https://automationpanda.com/2017/01/21/the-best-programming-language-for-test-automation/ +[19]:https://us.pycon.org/2018/ +[20]:https://us.pycon.org/2018/schedule/presentation/87/ From 4c3c8a118a85f48fd89b02d7255f65e983c9c58e Mon Sep 17 00:00:00 2001 From: chenliang Date: 
Fri, 28 Sep 2018 01:56:52 +0800 Subject: [PATCH 078/736] translating by Flowsnow --- ...the Scikit-learn Python library for data science projects.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md b/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md index 4f5d9aedf6..e8b108720e 100644 --- a/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md +++ b/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md @@ -1,3 +1,5 @@ +translating by Flowsnow + How to use the Scikit-learn Python library for data science projects ====== From fd13a34ddec92ab34bdad8c6ea9ae79af15dd761 Mon Sep 17 00:00:00 2001 From: chenliang Date: Fri, 28 Sep 2018 01:58:27 +0800 Subject: [PATCH 079/736] translating by Flowsnow --- sources/tech/20180912 How to build rpm packages.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180912 How to build rpm packages.md b/sources/tech/20180912 How to build rpm packages.md index 97b630707d..3d14a8c797 100644 --- a/sources/tech/20180912 How to build rpm packages.md +++ b/sources/tech/20180912 How to build rpm packages.md @@ -1,3 +1,5 @@ +translating by Flowsnow + How to build rpm packages ====== From 43fb88f670b73fa0f23aa2baebf07e065dff7240 Mon Sep 17 00:00:00 2001 From: chenliang Date: Fri, 28 Sep 2018 01:59:59 +0800 Subject: [PATCH 080/736] translating by Flowsnow --- ...180924 A Simple, Beautiful And Cross-platform Podcast App.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md b/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md index ae9f91b548..628a805144 100644 --- a/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md +++ b/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast 
App.md @@ -1,3 +1,5 @@ +translating by Flowsnow + A Simple, Beautiful And Cross-platform Podcast App ====== From b94aa078bc68a4c585ed334b34350f75acf91217 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 28 Sep 2018 09:03:13 +0800 Subject: [PATCH 081/736] PRF:20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md @geekpi --- ...ce APT Package Manager To Use IPv4 In Ubuntu 16.04.md | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md b/translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md index 02bc39addc..36c159ac16 100644 --- a/translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md +++ b/translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md @@ -3,7 +3,7 @@ ![](https://www.ostechnix.com/wp-content/uploads/2018/09/ipv4-720x340.png) -**APT**, 是 **A** dvanced **P** ackage **T** ool 的缩写,是基于 Debian 的系统的默认包管理器。我们可以使用 APT 安装、更新、升级和删除应用程序。最近,我一直遇到一个奇怪的错误。每当我尝试更新我的 Ubuntu 16.04 时,我都会收到此错误 - **“0% [Connecting to in.archive.ubuntu.com (2001:67c:1560:8001::14)]”** ,同时更新流程会卡住很长时间。我的网络连接没问题,我可以 ping 通所有网站,包括 Ubuntu 官方网站。在搜索了一番谷歌后,我意识到 Ubuntu 镜像有时无法通过 IPv6 访问。在我强制将 APT 包管理器在更新系统时使用 IPv4 代替 IPv6 访问 Ubuntu 镜像后,此问题得以解决。如果你遇到过此错误,可以按照以下说明解决。 +**APT**, 是 **A** dvanced **P** ackage **T** ool 的缩写,是基于 Debian 的系统的默认包管理器。我们可以使用 APT 安装、更新、升级和删除应用程序。最近,我一直遇到一个奇怪的错误。每当我尝试更新我的 Ubuntu 16.04 时,我都会收到此错误 - **“0% [Connecting to in.archive.ubuntu.com (2001:67c:1560:8001::14)]”** ,同时更新流程会卡住很长时间。我的网络连接没问题,我可以 ping 通所有网站,包括 Ubuntu 官方网站。在搜索了一番谷歌后,我意识到 Ubuntu 镜像站点有时无法通过 IPv6 访问。在我强制将 APT 包管理器在更新系统时使用 IPv4 代替 IPv6 访问 Ubuntu 镜像站点后,此问题得以解决。如果你遇到过此错误,可以按照以下说明解决。 ### 强制 APT 包管理器在 Ubuntu 16.04 中使用 IPv4 @@ -11,13 +11,12 @@ ``` $ sudo apt-get -o Acquire::ForceIPv4=true update - $ sudo apt-get -o Acquire::ForceIPv4=true upgrade ``` 瞧!这次更新很快就完成了。 -你还可以使用以下命令在 
**/etc/apt/apt.conf.d/99force-ipv4** 中添加以下行,以便将来对所有 **apt-get** 事务保持持久性: +你还可以使用以下命令在 `/etc/apt/apt.conf.d/99force-ipv4` 中添加以下行,以便将来对所有 `apt-get` 事务保持持久性: ``` $ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4 @@ -25,7 +24,7 @@ $ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4 **免责声明:** -我不知道最近是否有人遇到这个问题,但我今天在我的 Ubuntu 16.04 LTS 虚拟机中遇到了至少四五次这样的错误,我按照上面的说法解决了这个问题。我不确定这是推荐的解决方案。请浏览 Ubuntu 论坛来确保此方法合法。由于我只是一个 VM,我只将它用于测试和学习目的,我不介意这种方法的真实性。请自行承担使用风险。 +我不知道最近是否有人遇到这个问题,但我今天在我的 Ubuntu 16.04 LTS 虚拟机中遇到了至少四、五次这样的错误,我按照上面的说法解决了这个问题。我不确定这是推荐的解决方案。请浏览 Ubuntu 论坛来确保此方法合法。由于我只是一个 VM,我只将它用于测试和学习目的,我不介意这种方法的真实性。请自行承担使用风险。 希望这有帮助。还有更多的好东西。敬请关注! @@ -40,7 +39,7 @@ via: https://www.ostechnix.com/how-to-force-apt-package-manager-to-use-ipv4-in-u 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 62d627d9242e78e075a7f944ed2cee3a9d0275d3 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 28 Sep 2018 09:03:34 +0800 Subject: [PATCH 082/736] PUB:20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md @geekpi https://linux.cn/article-10058-1.html --- ...ow To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md (100%) diff --git a/translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md b/published/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md similarity index 100% rename from translated/tech/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md rename to published/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md From 
79cd02e2946a8790db3140cf5a4c9389e538b384 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 28 Sep 2018 09:18:13 +0800 Subject: [PATCH 083/736] translated --- ...turned an error code 1- Error in Ubuntu.md | 118 ------------------ ...turned an error code 1- Error in Ubuntu.md | 116 +++++++++++++++++ 2 files changed, 116 insertions(+), 118 deletions(-) delete mode 100644 sources/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md create mode 100644 translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md diff --git a/sources/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md b/sources/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md deleted file mode 100644 index 0200dfffdb..0000000000 --- a/sources/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md +++ /dev/null @@ -1,118 +0,0 @@ -translating---geekpi - -[Solved] “sub process usr bin dpkg returned an error code 1″ Error in Ubuntu -====== -If you are encountering “sub process usr bin dpkg returned an error code 1” while installing software on Ubuntu Linux, here is how you can fix it. - -One of the common issue in Ubuntu and other Debian based distribution is the broken packages. You try to update the system or install a new package and you encounter an error like ‘Sub-process /usr/bin/dpkg returned an error code’. - -That’s what happened to me the other day. I was trying to install a radio application in Ubuntu when it threw me this error: -``` -Unpacking python-gst-1.0 (1.6.2-1build1) ... -Selecting previously unselected package radiotray. -Preparing to unpack .../radiotray_0.7.3-5ubuntu1_all.deb ... -Unpacking radiotray (0.7.3-5ubuntu1) ... -Processing triggers for man-db (2.7.5-1) ... -Processing triggers for desktop-file-utils (0.22-1ubuntu5.2) ... 
-Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) ... -Rebuilding /usr/share/applications/bamf-2.index... -Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ... -Processing triggers for mime-support (3.59ubuntu1) ... -Setting up polar-bookshelf (1.0.0-beta56) ... -ln: failed to create symbolic link '/usr/local/bin/polar-bookshelf': No such file or directory -dpkg: error processing package polar-bookshelf (--configure): -subprocess installed post-installation script returned error exit status 1 -Setting up python-appindicator (12.10.1+16.04.20170215-0ubuntu1) ... -Setting up python-gst-1.0 (1.6.2-1build1) ... -Setting up radiotray (0.7.3-5ubuntu1) ... -Errors were encountered while processing: -polar-bookshelf -E: Sub-process /usr/bin/dpkg returned an error code (1) - -``` - -The last three lines are of the utmost importance here. -``` -Errors were encountered while processing: -polar-bookshelf -E: Sub-process /usr/bin/dpkg returned an error code (1) - -``` - -It tells me that the package polar-bookshelf is causing and issue. This might be crucial to how you fix this error here. - -### Fixing Sub-process /usr/bin/dpkg returned an error code (1) - -![Fix update errors in Ubuntu Linux][1] - -Let’s try to fix this broken error package. I’ll show several methods that you can try one by one. The initial ones are easy to use and simply no-brainers. - -You should try to run sudo apt update and then try to install a new package or upgrade after trying each of the methods discussed here. - -#### Method 1: Reconfigure Package Database - -The first method you can try is to reconfigure the package database. Probably the database got corrupted while installing a package. Reconfiguring often fixes the problem. -``` -sudo dpkg --configure -a - -``` - -#### Method 2: Use force install - -If a package installation was interrupted previously, you may try to do a force install. 
-``` -sudo apt-get install -f - -``` - -#### Method 3: Try removing the troublesome package - -If it’s not an issue for you, you may try to remove the package manually. Please don’t do it for Linux Kernels (packages starting with linux-). -``` -sudo apt remove - -``` - -#### Method 4: Remove post info files of the troublesome package - -This should be your last resort. You can try removing the files associated to the package in question from /var/lib/dpkg/info. - -**You need to know a little about basic Linux commands to figure out what’s happening and how can you use the same with your problem.** - -In my case, I had an issue with polar-bookshelf. So I looked for the files associated with it: -``` -ls -l /var/lib/dpkg/info | grep -i polar-bookshelf --rw-r--r-- 1 root root 2324811 Aug 14 19:29 polar-bookshelf.list --rw-r--r-- 1 root root 2822824 Aug 10 04:28 polar-bookshelf.md5sums --rwxr-xr-x 1 root root 113 Aug 10 04:28 polar-bookshelf.postinst --rwxr-xr-x 1 root root 84 Aug 10 04:28 polar-bookshelf.postrm - -``` - -Now all I needed to do was to remove these files: -``` -sudo mv /var/lib/dpkg/info/polar-bookshelf.* /tmp - -``` - -Use the sudo apt update and then you should be able to install software as usual. - -#### Which method worked for you (if it worked)? - -I hope this quick article helps you in fixing the ‘E: Sub-process /usr/bin/dpkg returned an error code (1)’ error. - -If it did work for you, which method was it? Did you manage to fix this error with some other method? If yes, please share that to help others with this issue. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/dpkg-returned-an-error-code-1/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/fix-common-update-errors-ubuntu.jpeg diff --git a/translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md b/translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md new file mode 100644 index 0000000000..96eecf8936 --- /dev/null +++ b/translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md @@ -0,0 +1,116 @@ +[已解决] Ubuntu 中的 “sub process usr bin dpkg returned an error code 1” 错误 +====== +如果你在 Ubuntu Linux 上安装软件时遇到 “sub process usr bin dpkg returned an error code 1”,请按照以下步骤进行修复。 + +Ubuntu 和其他基于 Debian 的发行版中的一个常见问题是已经损坏的包。你尝试更新系统或安装新软件包时遇到类似 “Sub-process /usr/bin/dpkg returned an error code” 的错误。 + +这就是前几天发生在我身上的事。我试图在 Ubuntu 中安装一个电台程序时,它给我了这个错误: +``` +Unpacking python-gst-1.0 (1.6.2-1build1) ... +Selecting previously unselected package radiotray. +Preparing to unpack .../radiotray_0.7.3-5ubuntu1_all.deb ... +Unpacking radiotray (0.7.3-5ubuntu1) ... +Processing triggers for man-db (2.7.5-1) ... +Processing triggers for desktop-file-utils (0.22-1ubuntu5.2) ... +Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) ... +Rebuilding /usr/share/applications/bamf-2.index... +Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ... +Processing triggers for mime-support (3.59ubuntu1) ... +Setting up polar-bookshelf (1.0.0-beta56) ... 
+ln: failed to create symbolic link '/usr/local/bin/polar-bookshelf': No such file or directory
+dpkg: error processing package polar-bookshelf (--configure):
+subprocess installed post-installation script returned error exit status 1
+Setting up python-appindicator (12.10.1+16.04.20170215-0ubuntu1) ...
+Setting up python-gst-1.0 (1.6.2-1build1) ...
+Setting up radiotray (0.7.3-5ubuntu1) ...
+Errors were encountered while processing:
+polar-bookshelf
+E: Sub-process /usr/bin/dpkg returned an error code (1)
+
+```
+
+这里最后三行非常重要。
+```
+Errors were encountered while processing:
+polar-bookshelf
+E: Sub-process /usr/bin/dpkg returned an error code (1)
+
+```
+
+它告诉我是 polar-bookshelf 包引发了问题。这一点对于你接下来如何修复这个错误可能至关重要。
+
+### 修复 Sub-process /usr/bin/dpkg returned an error code (1)
+
+![Fix update errors in Ubuntu Linux][1]
+
+让我们尝试修复这个导致错误的损坏软件包。我将展示几种方法,你可以逐一尝试。最初的几种方法易于使用,几乎不用动脑子。
+
+在尝试这里讨论的每种方法之后,你都应该运行 sudo apt update,然后再尝试安装新的软件包或进行升级。
+
+#### 方法 1:重新配置包数据库
+
+你可以尝试的第一种方法是重新配置包数据库。数据库可能在安装包时损坏了。重新配置通常可以解决问题。
+```
+sudo dpkg --configure -a
+
+```
+
+#### 方法 2:强制安装
+
+如果是之前中断安装的包,你可以尝试强制安装。
+```
+sudo apt-get install -f
+
+```
+
+#### 方法 3:尝试删除有问题的包
+
+如果这不是你的问题,你可以尝试手动删除包。请不要对 Linux 内核相关的包(以 linux- 开头的软件包)执行此操作。
+```
+sudo apt remove
+
+```
+
+#### 方法 4:删除有问题的包的信息文件
+
+这应该是你最后的选择。你可以尝试从 /var/lib/dpkg/info 中删除与相关软件包关联的文件。
+
+**你需要了解一些基本的 Linux 命令,才能弄清楚这里发生了什么,以及如何将同样的做法用在你自己的问题上。**
+
+就我而言,问题出在 polar-bookshelf 上。所以我查找了与之关联的文件:
+```
+ls -l /var/lib/dpkg/info | grep -i polar-bookshelf
+-rw-r--r-- 1 root root 2324811 Aug 14 19:29 polar-bookshelf.list
+-rw-r--r-- 1 root root 2822824 Aug 10 04:28 polar-bookshelf.md5sums
+-rwxr-xr-x 1 root root 113 Aug 10 04:28 polar-bookshelf.postinst
+-rwxr-xr-x 1 root root 84 Aug 10 04:28 polar-bookshelf.postrm
+
+```
+
+现在我需要做的就是删除这些文件:
+```
+sudo mv /var/lib/dpkg/info/polar-bookshelf.* /tmp
+
+```
+
+运行 sudo apt update,接着你应该就能像往常一样安装软件了。
+
+#### 哪种方法适合你(如果有效)?
+
+我希望这篇快速文章可以帮助你修复 “E: Sub-process /usr/bin/dpkg returned an error code (1)” 的错误。
+
+如果它对你有用,是哪种方法?你是否设法使用其他方法修复此错误?如果是,请分享一下以帮助其他人解决此问题。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/dpkg-returned-an-error-code-1/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/fix-common-update-errors-ubuntu.jpeg

From 1afa4c167954c07ba878ae995aafa53adb7aed58 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Fri, 28 Sep 2018 09:21:05 +0800
Subject: [PATCH 084/736] translating

---
 ...on - A Modular System Monitor Application Written In Rust.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md
index 14f6a2e947..a75c1f3e9a 100644
--- a/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md
+++ b/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 Hegemon – A Modular System Monitor Application Written In Rust
 ======

From ee3c73bc72e472820cc6442b5b3b79643a4b6a2a Mon Sep 17 00:00:00 2001
From: HankChow <280630620@qq.com>
Date: Fri, 28 Sep 2018 10:18:40 +0800
Subject: [PATCH 085/736] translated

---
 ...Port Number A Process Is Using In Linux.md | 102 ++++++++----------
 1 file changed, 46 insertions(+), 56 deletions(-)
 rename {sources => translated}/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md (61%)

diff --git a/sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md 
b/translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md
similarity index 61%
rename from sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md
rename to translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md
index add3ce719e..ed3402e0fa 100644
--- a/sources/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md
+++ b/translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md
@@ -1,36 +1,24 @@
-HankChow translating
-
-How To Find Out Which Port Number A Process Is Using In Linux
+如何在 Linux 中查看进程占用的端口号
 ======
-As a Linux administrator, you should know whether the corresponding service is binding/listening with correct port or not.
+对于 Linux 系统管理员来说,清楚某个服务是否正确地绑定或监听某个端口,是至关重要的。如果你需要处理端口相关的问题,这篇文章可能会对你有用。

-This will help you to easily troubleshoot further when you are facing port related issues.
+端口是 Linux 系统上特定进程之间逻辑连接的标识,包括物理端口和软件端口。由于 Linux 操作系统是一个软件,因此本文只讨论软件端口。软件端口始终与主机的 IP 地址和相关的通信协议相关联,因此端口常用于区分应用程序。大部分涉及到网络的服务都必须打开一个套接字来监听传入的网络请求,而每个服务都使用一个独立的套接字。

-A port is a logical connection that identifies a specific process on Linux. There are two kind of port are available like, physical and software.
+**推荐阅读:**
+**(#)** [在 Linux 上查看进程 ID 的 4 种方法][1]
+**(#)** [在 Linux 上终止进程的 3 种方法][2]

-Since Linux operating system is a software hence, we are going to discuss about software port.
+套接字是和 IP 地址、软件端口和协议结合起来使用的,而端口号对传输控制协议(Transmission Control Protocol, TCP)和用户数据报协议(User Datagram Protocol, UDP)协议都适用,TCP 和 UDP 都可以使用 0 到 65535 之间的端口号进行通信。

-Software port is always associated with an IP address of a host and the relevant protocol type for communication. The port is used to distinguish the application.
+以下是端口分配类别:

-Most of the network related services have to open up a socket to listen incoming network requests. Socket is unique for every service. 
- -**Suggested Read :** -**(#)** [4 Easiest Ways To Find Out Process ID (PID) In Linux][1] -**(#)** [3 Easy Ways To Kill Or Terminate A Process In Linux][2] - -Socket is combination of IP address, software Port and protocol. The port numbers area available for both TCP and UDP protocol. - -The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) use port numbers for communication. It is a value from 0 to 65535. - -Below are port assignments categories. - - * `0-1023:` Well Known Ports or System Ports - * `1024-49151:` Registered Ports for applications - * `49152-65535:` Dynamic Ports or Private Ports + * `0-1023:` 常用端口和系统端口 + * `1024-49151:` 软件的注册端口 + * `49152-65535:` 动态端口或私有端口 -You can check the details of the reserved ports in the /etc/services file on Linux. +在 Linux 上的 `/etc/services` 文件可以查看到更多关于保留端口的信息。 ``` # less /etc/services @@ -89,24 +77,25 @@ lmtp 24/udp # LMTP Mail Delivery ``` -This can be achieved using the below six methods. +可以使用以下六种方法查看端口信息。 - * `ss:` ss is used to dump socket statistics. - * `netstat:` netstat is displays a list of open sockets. - * `lsof:` lsof – list open files. - * `fuser:` fuser – list process IDs of all processes that have one or more files open - * `nmap:` nmap – Network exploration tool and security / port scanner - * `systemctl:` systemctl – Control the systemd system and service manager + * `ss:` ss 可以用于转储套接字统计信息。 + * `netstat:` netstat 可以显示打开的套接字列表。 + * `lsof:` lsof 可以列出打开的文件。 + * `fuser:` fuser 可以列出那些打开了文件的进程的进程 ID。 + * `nmap:` nmap 是网络检测工具和端口扫描程序。 + * `systemctl:` systemctl 是 systemd 系统的控制管理器和服务管理器。 -In this tutorial we are going to find out which port number the SSHD daemon is using. +以下我们将找出 `sshd` 守护进程所使用的端口号。 -### Method-1: Using ss Command +### 方法1:使用 ss 命令 -ss is used to dump socket statistics. It allows showing information similar to netstat. It can display more TCP and state informations than other tools. 
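The three port ranges listed above are easy to encode in a small shell helper for scripts; a quick sketch (the `port_class` function name is purely illustrative):

```shell
# Classify a port number into the ranges described above.
port_class() {
    p=$1
    if [ "$p" -ge 0 ] && [ "$p" -le 1023 ]; then
        echo "well-known/system"
    elif [ "$p" -le 49151 ]; then
        echo "registered"
    elif [ "$p" -le 65535 ]; then
        echo "dynamic/private"
    else
        echo "invalid port: $p" >&2
        return 1
    fi
}

port_class 22     # → well-known/system
port_class 8080   # → registered
port_class 51666  # → dynamic/private
```

For example, the `sshd` port 22 examined throughout this article falls in the well-known/system range.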
+`ss` 一般用于转储套接字统计信息。它能够输出类似于 `netstat` 输出的信息,但它可以比其它工具显示更多的 TCP 信息和状态信息。 + +它还可以显示所有类型的套接字统计信息,包括 PACKET、TCP、UDP、DCCP、RAW、Unix 域等。 -It can display stats for all kind of sockets such as PACKET, TCP, UDP, DCCP, RAW, Unix domain, etc. ``` # ss -tnlp | grep ssh @@ -114,7 +103,7 @@ LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3)) LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4)) ``` -Alternatively you can check this with port number as well. +也可以使用端口号来检查。 ``` # ss -tnlp | grep ":22" @@ -122,11 +111,11 @@ LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3)) LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4)) ``` -### Method-2: Using netstat Command +### 方法2:使用 netstat 命令 -netstat – Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. +`netstat` 能够显示网络连接、路由表、接口统计信息、伪装连接以及多播成员。 -By default, netstat displays a list of open sockets. If you don’t specify any address families, then the active sockets of all configured address families will be printed. This program is obsolete. Replacement for netstat is ss. +默认情况下,`netstat` 会列出打开的套接字。如果不指定任何地址族,则会显示所有已配置地址族的活动套接字。但 `netstat` 已经过时了,一般会使用 `ss` 来替代。 ``` # netstat -tnlp | grep ssh @@ -134,7 +123,7 @@ tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 997/sshd tcp6 0 0 :::22 :::* LISTEN 997/sshd ``` -Alternatively you can check this with port number as well. +也可以使用端口号来检查。 ``` # netstat -tnlp | grep ":22" @@ -142,9 +131,9 @@ tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1208/sshd tcp6 0 0 :::22 :::* LISTEN 1208/sshd ``` -### Method-3: Using lsof Command +### 方法3:使用 lsof 命令 -lsof – list open files. The Linux lsof command lists information about files that are open by processes running on the system. 
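When scripting against `ss`/`netstat` listings like the ones above, the port number can be split off the local-address column with `awk`. A minimal sketch, run against hard-coded sample lines so it works without a live socket (`extract_port` is an illustrative name):

```shell
# Extract the port from the local-address column ($4) of `ss -tnlp` output.
# Splitting on ":" and taking the last field handles both "*:22" and ":::22".
extract_port() {
    awk '{ n = split($4, parts, ":"); print parts[n] }'
}

echo 'LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3))' | extract_port   # → 22
echo 'LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4))' | extract_port # → 22
```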
+`lsof` 能够列出打开的文件,并列出系统上被进程打开的文件的相关信息。 ``` # lsof -i -P | grep ssh @@ -154,7 +143,7 @@ sshd 11584 root 4u IPv6 27627 0t0 TCP *:22 (LISTEN) sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED) ``` -Alternatively you can check this with port number as well. +也可以使用端口号来检查。 ``` # lsof -i tcp:22 @@ -164,9 +153,9 @@ sshd 1208 root 4u IPv6 20921 0t0 TCP *:ssh (LISTEN) sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED) ``` -### Method-4: Using fuser Command +### 方法4:使用 fuser 命令 -The fuser utility shall write to standard output the process IDs of processes running on the local system that have one or more named files open. +`fuser` 工具会将本地系统上打开了文件的进程的进程 ID 显示在标准输出中。 ``` # fuser -v 22/tcp @@ -176,11 +165,11 @@ The fuser utility shall write to standard output the process IDs of processes ru root 49339 F.... sshd ``` -### Method-5: Using nmap Command +### 方法5:使用 nmap 命令 -Nmap (“Network Mapper”) is an open source tool for network exploration and security auditing. It was designed to rapidly scan large networks, although it works fine against single hosts. +`nmap`(“Network Mapper”)是一款用于网络检测和安全审计的开源工具。它最初用于对大型网络进行快速扫描,但它对于单个主机的扫描也有很好的表现。 -Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. +`nmap` 使用原始 IP 数据包来确定网络上可用的主机,这些主机的服务(包括应用程序名称和版本)、主机运行的操作系统(包括操作系统版本等信息)、正在使用的数据包过滤器或防火墙的类型,以及很多其它信息。 ``` # nmap -sV -p 22 localhost @@ -196,13 +185,13 @@ Service detection performed. Please report any incorrect results at http://nmap. Nmap done: 1 IP address (1 host up) scanned in 0.44 seconds ``` -### Method-6: Using systemctl Command +### 方法6:使用 systemctl 命令 -systemctl – Control the systemd system and service manager. 
This is the replacement of old SysV init system management and most of the modern Linux operating systems were adapted systemd. +`systemctl` 是 systemd 系统的控制管理器和服务管理器。它取代了旧的 SysV init 系统管理,目前大多数现代 Linux 操作系统都采用了 systemd。 -**Suggested Read :** -**(#)** [chkservice – A Tool For Managing Systemd Units From Linux Terminal][3] -**(#)** [How To Check All Running Services In Linux][4] +**推荐阅读:** +**(#)** [chkservice – Linux 终端上的 systemd 单元管理工具][3] +**(#)** [如何查看 Linux 系统上正在运行的服务][4] ``` # systemctl status sshd @@ -223,7 +212,7 @@ Sep 23 02:09:15 vps.2daygeek.com sshd[11589]: Connection closed by 103.5.134.167 Sep 23 02:09:41 vps.2daygeek.com sshd[11592]: Accepted password for root from 103.5.134.167 port 49902 ssh2 ``` -The above out will be showing the actual listening port of SSH service when you start the SSHD service recently. Otherwise it won’t because it updates recent logs in the output frequently. +以上输出的内容显示了最近一次启动 `sshd` 服务时 `ssh` 服务的监听端口。但它不会将最新日志更新到输出中。 ``` # systemctl status sshd @@ -250,7 +239,7 @@ Sep 23 12:50:40 vps.2daygeek.com sshd[23911]: Connection closed by 95.210.113.14 Sep 23 12:50:40 vps.2daygeek.com sshd[23909]: Connection closed by 95.210.113.142 port 51666 [preauth] ``` -Most of the time the above output won’t shows the process actual port number. in this case i would suggest you to check the details using the below command from the journalctl log file. 
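The journal output can also be reduced to just the "Server listening ... port N" entries with a short pipeline; a sketch using inlined sample log lines in the style shown in this article (the helper name is illustrative):

```shell
# Pull the listening-port mentions out of journalctl-style sshd lines.
listening_ports() {
    grep 'Server listening' | grep -o 'port [0-9]*' | sort -u
}

listening_ports <<'EOF'
Sep 23 12:50:36 vps.2daygeek.com sshd[1208]: Server listening on 0.0.0.0 port 22.
Sep 23 12:50:36 vps.2daygeek.com sshd[1208]: Server listening on :: port 22.
EOF
```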
+大部分情况下,以上的输出不会显示进程的实际端口号。这时更建议使用以下这个 `journalctl` 命令检查日志文件中的详细信息。 ``` # journalctl | grep -i "openssh\|sshd" @@ -268,7 +257,7 @@ via: https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-usi 作者:[Prakash Subramanian][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) +译者:[HankChow](https://github.com/HankChow) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -278,3 +267,4 @@ via: https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-usi [2]: https://www.2daygeek.com/kill-terminate-a-process-in-linux-using-kill-pkill-killall-command/ [3]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/ [4]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/ + From f5e554b0f5d92d9c174e868d701f69aee955e2b8 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 28 Sep 2018 10:19:05 +0800 Subject: [PATCH 086/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=203=20open=20source?= =?UTF-8?q?=20distributed=20tracing=20tools?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...3 open source distributed tracing tools.md | 88 +++++++++++++++++++ 1 file changed, 88 insertions(+) create mode 100644 sources/tech/20180926 3 open source distributed tracing tools.md diff --git a/sources/tech/20180926 3 open source distributed tracing tools.md b/sources/tech/20180926 3 open source distributed tracing tools.md new file mode 100644 index 0000000000..197fd9450e --- /dev/null +++ b/sources/tech/20180926 3 open source distributed tracing tools.md @@ -0,0 +1,88 @@ +3 open source distributed tracing tools +====== + +Find performance issues quickly with these tools, which provide a graphical view of what's happening across complex software systems. 
+ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8) + +Distributed tracing systems enable users to track a request through a software system that is distributed across multiple applications, services, and databases as well as intermediaries like proxies. This allows for a deeper understanding of what is happening within the software system. These systems produce graphical representations that show how much time the request took on each step and list each known step. + +A user reviewing this content can determine where the system is experiencing latencies or blockages. Instead of testing the system like a binary search tree when requests start failing, operators and developers can see exactly where the issues begin. This can also reveal where performance changes might be occurring from deployment to deployment. It’s always better to catch regressions automatically by alerting to the anomalous behavior than to have your customers tell you. + +How does this tracing thing work? Well, each request gets a special ID that’s usually injected into the headers. This ID uniquely identifies that transaction. This transaction is normally called a trace. The trace is the overall abstract idea of the entire transaction. Each trace is made up of spans. These spans are the actual work being performed, like a service call or a database request. Each span also has a unique ID. Spans can create subsequent spans called child spans, and child spans can have multiple parents. + +Once a transaction (or trace) has run its course, it can be searched in a presentation layer. There are several tools in this space that we’ll discuss later, but the picture below shows [Jaeger][1] from my [Istio walkthrough][2]. It shows multiple spans of a single trace. The power of this is immediately clear as you can better understand the transaction’s story at a glance. 
+ +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png) + +This demo uses Istio’s built-in OpenTracing implementation, so I can get tracing without even modifying my application. It also uses Jaeger, which is OpenTracing-compatible. + +So what is OpenTracing? Let’s find out. + +### OpenTracing API + +[OpenTracing][3] is a spec that grew out of [Zipkin][4] to provide cross-platform compatibility. It offers a vendor-neutral API for adding tracing to applications and delivering that data into distributed tracing systems. A library written for the OpenTracing spec can be used with any system that is OpenTracing-compliant. Zipkin, Jaeger, and Appdash are examples of open source tools that have adopted the open standard, but even proprietary tools like [Datadog][5] and [Instana][6] are adopting it. This is expected to continue as OpenTracing reaches ubiquitous status. + +### OpenCensus + +Okay, we have OpenTracing, but what is this [OpenCensus][7] thing that keeps popping up in my searches? Is it a competing standard, something completely different, or something complementary? + +The answer depends on who you ask. I will do my best to explain the difference (as I understand it): OpenCensus takes a more holistic or all-inclusive approach. OpenTracing is focused on establishing an open API and spec and not on open implementations for each language and tracing system. OpenCensus provides not only the specification but also the language implementations and wire protocol. It also goes beyond tracing by including additional metrics that are normally outside the scope of distributed tracing systems. + +OpenCensus allows viewing data on the host where the application is running, but it also has a pluggable exporter system for exporting data to central aggregators. The current exporters produced by the OpenCensus team include Zipkin, Prometheus, Jaeger, Stackdriver, Datadog, and SignalFx, but anyone can create an exporter. 
+ +From my perspective, there’s a lot of overlap. One isn’t necessarily better than the other, but it’s important to know what each does and doesn’t do. OpenTracing is primarily a spec, with others doing the implementation and opinionation. OpenCensus provides a holistic approach for the local component with more opinionation but still requires other systems for remote aggregation. + +### Tool options + +#### Zipkin + +Zipkin was one of the first systems of this kind. It was developed by Twitter based on the [Google Dapper paper][8] about the internal system Google uses. Zipkin was written using Java, and it can use Cassandra or ElasticSearch as a scalable backend. Most companies should be satisfied with one of those options. The lowest supported Java version is Java 6. It also uses the [Thrift][9] binary communication protocol, which is popular in the Twitter stack and is hosted as an Apache project. + +The system consists of reporters (clients), collectors, a query service, and a web UI. Zipkin is meant to be safe in production by transmitting only a trace ID within the context of a transaction to inform receivers that a trace is in process. The data collected in each reporter is then transmitted asynchronously to the collectors. The collectors store these spans in the database, and the web UI presents this data to the end user in a consumable format. The delivery of data to the collectors can occur in three different methods: HTTP, Kafka, and Scribe. + +The [Zipkin community][10] has also created [Brave][11], a Java client implementation compatible with Zipkin. It has no dependencies, so it won’t drag your projects down or clutter them with libraries that are incompatible with your corporate standards. There are many other implementations, and Zipkin is compatible with the OpenTracing standard, so these implementations should also work with other distributed tracing systems. 
The popular Spring framework has a component called [Spring Cloud Sleuth][12] that is compatible with Zipkin. + +#### Jaeger + +[Jaeger][1] is a newer project from Uber Technologies that the [CNCF][13] has since adopted as an Incubating project. It is written in Golang, so you don’t have to worry about having dependencies installed on the host or any overhead of interpreters or language virtual machines. Similar to Zipkin, Jaeger also supports Cassandra and ElasticSearch as scalable storage backends. Jaeger is also fully compatible with the OpenTracing standard. + +Jaeger’s architecture is similar to Zipkin, with clients (reporters), collectors, a query service, and a web UI, but it also has an agent on each host that locally aggregates the data. The agent receives data over a UDP connection, which it batches and sends to a collector. The collector receives that data in the form of the [Thrift][14] protocol and stores that data in Cassandra or ElasticSearch. The query service can access the data store directly and provide that information to the web UI. + +By default, a user won’t get all the traces from the Jaeger clients. The system samples 0.1% (1 in 1,000) of traces that pass through each client. Keeping and transmitting all traces would be a bit overwhelming to most systems. However, this can be increased or decreased by configuring the agents, which the client consults with for its configuration. This sampling isn’t completely random, though, and it’s getting better. Jaeger uses probabilistic sampling, which tries to make an educated guess at whether a new trace should be sampled or not. [Adaptive sampling is on its roadmap][15], which will improve the sampling algorithm by adding additional context for making decisions. + +#### Appdash + +[Appdash][16] is a distributed tracing system written in Golang, like Jaeger. It was created by [Sourcegraph][17] based on Google’s Dapper and Twitter’s Zipkin. 
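Jaeger's real sampler is probabilistic, but the default 1-in-1,000 rate is easy to picture with a deterministic toy version keyed on a numeric trace ID — illustrative only, not Jaeger's actual algorithm:

```shell
# Toy head-based sampler: keep one trace in 1000, decided from a numeric id.
should_sample() { [ $(( $1 % 1000 )) -eq 0 ]; }

kept=0
for id in $(seq 1 5000); do
    if should_sample "$id"; then kept=$((kept + 1)); fi
done
echo "kept $kept of 5000 traces"   # → kept 5 of 5000 traces
```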
Similar to Jaeger and Zipkin, Appdash supports the OpenTracing standard; this was a later addition and requires a component that is different from the default component. This adds risk and complexity. + +At a high level, Appdash’s architecture consists mostly of three components: a client, a local collector, and a remote collector. There’s not a lot of documentation, so this description comes from testing the system and reviewing the code. The client in Appdash gets added to your code. Appdash provides Python, Golang, and Ruby implementations, but OpenTracing libraries can be used with Appdash’s OpenTracing implementation. The client collects the spans and sends them to the local collector. The local collector then sends the data to a centralized Appdash server running its own local collector, which is the remote collector for all other nodes in the system. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/distributed-tracing-tools + +作者:[Dan Barker][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/barkerd427 +[1]: https://www.jaegertracing.io/ +[2]: https://www.youtube.com/watch?v=T8BbeqZ0Rls +[3]: http://opentracing.io/ +[4]: https://zipkin.io/ +[5]: https://www.datadoghq.com/ +[6]: https://www.instana.com/ +[7]: https://opencensus.io/ +[8]: https://research.google.com/archive/papers/dapper-2010-1.pdf +[9]: https://thrift.apache.org/ +[10]: https://zipkin.io/pages/community.html +[11]: https://github.com/openzipkin/brave +[12]: https://cloud.spring.io/spring-cloud-sleuth/ +[13]: https://www.cncf.io/ +[14]: https://en.wikipedia.org/wiki/Apache_Thrift +[15]: https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling +[16]: https://github.com/sourcegraph/appdash +[17]: 
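Whatever the tool, the building blocks described at the start are the same: a trace ID shared by every span, plus per-span IDs with parent links. A toy shell illustration of minting such IDs (64-bit hex values, a common choice in these systems; all variable names here are made up):

```shell
# Mint a random 64-bit identifier in hex, the shape many tracers use.
new_id() { od -An -N8 -tx1 /dev/urandom | tr -d ' \n'; }

trace_id=$(new_id)    # shared by every span in the transaction
root_span=$(new_id)   # e.g. the incoming request
child_span=$(new_id)  # e.g. a database call; records its parent's id

printf 'trace=%s span=%s parent=%s\n' "$trace_id" "$child_span" "$root_span"
```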
https://about.sourcegraph.com/ From 32a36c59a19c92a6ccada27c495b06bbb6839419 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 28 Sep 2018 10:19:56 +0800 Subject: [PATCH 087/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Use=20?= =?UTF-8?q?RAR=20files=20in=20Ubuntu=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...27 How to Use RAR files in Ubuntu Linux.md | 97 +++++++++++++++++++ 1 file changed, 97 insertions(+) create mode 100644 sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md diff --git a/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md b/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md new file mode 100644 index 0000000000..63b03182b4 --- /dev/null +++ b/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md @@ -0,0 +1,97 @@ +How to Use RAR files in Ubuntu Linux +====== +[RAR][1] is a quite good archive file format. But, it isn’t the best when you’ve got 7-zip offering great compression ratios and Zip files being easily supported across multiple platforms by default. It is one of the most popular archive formats, but, [Ubuntu][2]‘s archive manager does not support extracting RAR files nor does it let you create RAR files. + +Fret not, we have a solution for you. To enable the support to extract RAR files, you need to install **UNRAR** – which is a freeware by [RARLAB][3]. And, to create and manage RAR files, you need to install **RAR** – which is available as a trial. + +![RAR files in Ubuntu Linux][4] + +### Extracting RAR Files + +Unless you have it installed, extracting RAR files will show you an error “Extraction not performed“. 
Here’s how it should look like ([ **Ubuntu 18.04**][5]): + +![Error in RAR extraction in Ubuntu][6] + +If you want to resolve the error and easily be able to extract RAR files, follow the instructions below to install unrar: + +**- >** Launch the terminal and type in: + +``` + sudo apt-get install unrar + +``` + +-> After installing unrar, you may choose to type in “ **unrar** ” (without the inverted commas) to know more about its usage and how to use RAR files with the help of it. + +The most common usage would obviously be extracting the RAR file you have. So, **you can either perform a right-click on the file and proceed to extract it** from there or you can do it via the terminal with the help of this command: + +``` +unrar x FileName.rar + +``` + +You can see that in action here: + +![Using unrar in Ubuntu][7] + +If the file isn’t present in the Home directory, then you have to navigate to the target folder by using the “ **cd** ” command. For instance, if you have the archive in the Music directory, simply type in “ **cd Music** ” to navigate to the location and then extract the RAR file. + +### Creating & Managing RAR files + +![Using rar archive in Ubuntu Linux][8] + +UNRAR does not let you create RAR files. So, you need to install the RAR command-line tool to be able to create RAR archives. + +To do that, you need to type in the following command: + +``` +sudo apt-get install rar + +``` + +Here, we will help you create a RAR file. In order to do that, follow the command syntax below: + +``` +rar a ArchiveName File_1 File_2 Dir_1 Dir_2 + +``` + +When you type a command in this format, it will add every item inside the directory to the archive. In either case, if you want specific files, just mention the exact name/path. + +By default, the RAR files reside in **HOME** directory. + +In the same way, you can update/manage the RAR files. 
Just type in a command using the following syntax: + +``` +rar u ArchiveName Filename + +``` + +To get the list of commands for the RAR tool, just type “ **rar** ” in the terminal. + +### Wrapping Up + +Now that you’ve known how to use RAR files on Ubuntu, will you prefer using it over 7-zip, Zip, or Tar.xz? + +Let us know your thoughts in the comments below. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/use-rar-ubuntu-linux/ + +作者:[Ankush Das][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[1]: https://www.rarlab.com/rar_file.htm +[2]: https://www.ubuntu.com/ +[3]: https://www.rarlab.com/ +[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/rar-ubuntu-linux.png +[5]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/ +[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/extract-rar-error.jpg +[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/unrar-rar-extraction.jpg +[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/rar-update-create.jpg From 6236c9211504e0b8d90f22862c077d5656932514 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 28 Sep 2018 10:23:09 +0800 Subject: [PATCH 088/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Find?= =?UTF-8?q?=20And=20Delete=20Duplicate=20Files=20In=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ind And Delete Duplicate Files In Linux.md | 441 ++++++++++++++++++ 1 file changed, 441 insertions(+) create mode 100644 sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md diff --git a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md 
b/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md
new file mode 100644
index 0000000000..e3a0a9d561
--- /dev/null
+++ b/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md
@@ -0,0 +1,441 @@
+How To Find And Delete Duplicate Files In Linux
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/Find-And-Delete-Duplicate-Files-720x340.png)
+
+I always back up configuration files, or any old files, to somewhere on my hard disk before editing or modifying them, so that I can restore them from the backup if I accidentally do something wrong. The problem is that I forget to clean those files up, and after a while my hard disk fills with duplicates. I either feel too lazy to clean up the old files or am afraid I might delete something important. If you're anything like me and are overwhelmed by multiple copies of the same files in different backup directories, you can find and delete duplicate files on Unix-like operating systems using the tools described below.
+
+**A word of caution:**
+
+Please be careful while deleting duplicate files. If you're not careful, it can lead to [**accidental data loss**][1]. I advise you to pay extra attention while using these tools.
+
+### Find And Delete Duplicate Files In Linux
+
+For the purpose of this guide, I am going to discuss three utilities, namely:
+
+ 1. Rdfind
+ 2. Fdupes
+ 3. FSlint
+
+
+
+These three utilities are free, open source, and work on most Unix-like operating systems.
+
+##### 1. Rdfind
+
+**Rdfind**, which stands for **r**edundant **d**ata **find**, is a free and open source utility to find duplicate files across and/or within directories and sub-directories. It compares files based on their content, not their file names. Rdfind uses a **ranking** algorithm to classify original and duplicate files.
If you have two or more identical files, Rdfind is smart enough to work out which is the original and to treat the rest as duplicates. Once it has found the duplicates, it reports them to you, and you can decide either to delete them or to replace them with [**hard links** or **symbolic (soft) links**][2].
+
+**Installing Rdfind**
+
+Rdfind is available in the [**AUR**][3]. So, you can install it on Arch-based systems using any AUR helper program, such as [**Yay**][4], as shown below.
+
+```
+$ yay -S rdfind
+
+```
+
+On Debian, Ubuntu, Linux Mint:
+
+```
+$ sudo apt-get install rdfind
+
+```
+
+On Fedora:
+
+```
+$ sudo dnf install rdfind
+
+```
+
+On RHEL, CentOS:
+
+```
+$ sudo yum install epel-release
+
+$ sudo yum install rdfind
+
+```
+
+**Usage**
+
+Once installed, simply run the rdfind command along with the path of the directory you want to scan for duplicate files.
+
+```
+$ rdfind ~/Downloads
+
+```
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/rdfind-1.png)
+
+As you can see in the above screenshot, the rdfind command scans the ~/Downloads directory and saves the results in a file named **results.txt** in the current working directory. The results.txt file lists the names of the possible duplicate files.
+
+```
+$ cat results.txt
+# Automatically generated
+# duptype id depth size device inode priority name
+DUPTYPE_FIRST_OCCURRENCE 1469 8 9 2050 15864884 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test5.regex
+DUPTYPE_WITHIN_SAME_TREE -1469 8 9 2050 15864886 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test6.regex
+[...]
+DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf
+DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf
+# end of file
+
+```
+
+By reviewing the results.txt file, you can easily spot the duplicates and remove them manually if you want to.
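Before removing anything by hand, it can help to pull just the duplicate paths out of results.txt into a review list. The following is a small sketch based on the default column layout shown above (the path is the last column and within-tree duplicates are tagged `DUPTYPE_WITHIN_SAME_TREE`); the sample results.txt is created here only for demonstration, and paths containing spaces would need more careful parsing:

```shell
#!/bin/sh
# Build a review list of duplicate paths from an rdfind results.txt.
# The results.txt written below is sample data for illustration only.
cat > results.txt <<'EOF'
# Automatically generated
# duptype id depth size device inode priority name
DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf
DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf
# end of file
EOF

# Keep only lines flagged as duplicates and print their last column.
awk '$1 == "DUPTYPE_WITHIN_SAME_TREE" { print $NF }' results.txt > dupes-to-review.txt
cat dupes-to-review.txt   # prints /home/sk/Downloads/Hyperledger.pdf
```

Reviewing such a list first is a cheap safeguard before running any destructive option.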
+
+Also, you can use the **-dryrun** option to find all duplicates in a given directory without changing anything, and print a summary in your Terminal:
+
+```
+$ rdfind -dryrun true ~/Downloads
+
+```
+
+Once you have found the duplicates, you can replace them with either hard links or symlinks.
+
+To replace all duplicates with hard links, run:
+
+```
+$ rdfind -makehardlinks true ~/Downloads
+
+```
+
+To replace all duplicates with symlinks/soft links, run:
+
+```
+$ rdfind -makesymlinks true ~/Downloads
+
+```
+
+You may have some empty files in a directory and want to ignore them. If so, use the **-ignoreempty** option like below.
+
+```
+$ rdfind -ignoreempty true ~/Downloads
+
+```
+
+If you don't want the old files anymore, just delete the duplicate files instead of replacing them with hard or soft links.
+
+To delete all duplicates, simply run:
+
+```
+$ rdfind -deleteduplicates true ~/Downloads
+
+```
+
+If you do not want to ignore empty files, and want to delete them along with all the other duplicates, run:
+
+```
+$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads
+
+```
+
+For more details, refer to the help section:
+
+```
+$ rdfind --help
+
+```
+
+And, the manual pages:
+
+```
+$ man rdfind
+
+```
+
+##### 2. Fdupes
+
+**Fdupes** is yet another command line utility to identify and remove duplicate files within the specified directories and their sub-directories. It is a free, open source utility written in the **C** programming language. Fdupes identifies duplicates by comparing file sizes, then partial MD5 signatures, then full MD5 signatures, and finally performing a byte-by-byte comparison for verification.
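The heart of that strategy, identical content producing identical checksums, can be illustrated with nothing but coreutils. This is only a sketch of the idea (it creates its own throwaway files), not a substitute for Fdupes' size pre-filter and final byte-by-byte verification:

```shell
#!/bin/sh
# Illustrate the checksum idea behind duplicate finders using plain
# coreutils: hash every file, sort by hash, and print only groups of
# files whose hashes match.
demo=$(mktemp -d)
printf 'hello\n' > "$demo/a.txt"
printf 'hello\n' > "$demo/b.txt"   # same content as a.txt
printf 'world\n' > "$demo/c.txt"   # unique content

find "$demo" -type f -exec md5sum {} + \
    | sort \
    | uniq -w32 --all-repeated=separate   # -w32: compare only the hash

rm -rf "$demo"
```

Only a.txt and b.txt are printed; c.txt has no partner, so it never appears.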
+
+Similar to the Rdfind utility, Fdupes comes with quite a handful of options to perform operations such as:
+
+  * Recursively search for duplicate files in directories and sub-directories
+  * Exclude empty files and hidden files from consideration
+  * Show the size of the duplicates
+  * Delete duplicates immediately as they are encountered
+  * Exclude files with different owner/group or permission bits as duplicates
+  * And a lot more.
+
+
+
+**Installing Fdupes**
+
+Fdupes is available in the default repositories of most Linux distributions.
+
+On Arch Linux and its variants, like Antergos and Manjaro Linux, install it using Pacman like below.
+
+```
+$ sudo pacman -S fdupes
+
+```
+
+On Debian, Ubuntu, Linux Mint:
+
+```
+$ sudo apt-get install fdupes
+
+```
+
+On Fedora:
+
+```
+$ sudo dnf install fdupes
+
+```
+
+On RHEL, CentOS:
+
+```
+$ sudo yum install epel-release
+
+$ sudo yum install fdupes
+
+```
+
+**Usage**
+
+Fdupes usage is pretty simple. Just run the following command to find the duplicate files in a directory, for example **~/Downloads**.
+
+```
+$ fdupes ~/Downloads
+
+```
+
+Sample output from my system:
+
+```
+/home/sk/Downloads/Hyperledger.pdf
+/home/sk/Downloads/Hyperledger(1).pdf
+
+```
+
+As you can see, I have a duplicate file in the **/home/sk/Downloads/** directory. This shows duplicates from the parent directory only. To view duplicates from the sub-directories as well, just use the **-r** option like below.
+
+```
+$ fdupes -r ~/Downloads
+
+```
+
+Now you will see duplicates from the **/home/sk/Downloads/** directory and its sub-directories as well.
+
+Fdupes can also find duplicates in multiple directories at once.
+
+```
+$ fdupes ~/Downloads ~/Documents/ostechnix
+
+```
+
+You can even search multiple directories, scanning one of them recursively, like below:
+
+```
+$ fdupes ~/Downloads -r ~/Documents/ostechnix
+
+```
+
+The above command searches for duplicates in the "~/Downloads" directory and in the "~/Documents/ostechnix" directory and its sub-directories.
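Whichever directories you scan, it's wise to confirm that a suspected pair really is identical byte for byte before deleting anything by hand; `cmp` does exactly that, which is the same kind of final check Fdupes performs internally. The files in this sketch are created only for demonstration:

```shell
#!/bin/sh
# Confirm two suspected duplicates byte by byte before deleting one.
# The two files here are created just for demonstration.
a=$(mktemp); b=$(mktemp)
printf 'identical content\n' > "$a"
printf 'identical content\n' > "$b"

if cmp -s "$a" "$b"; then          # -s: silent, exit status only
    echo "files are identical, safe to keep just one"
else
    echo "files differ, do not delete either"
fi
rm -f "$a" "$b"
```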
+
+Sometimes, you might want to know the size of the duplicates in a directory. If so, use the **-S** option like below.
+
+```
+$ fdupes -S ~/Downloads
+403635 bytes each:
+/home/sk/Downloads/Hyperledger.pdf
+/home/sk/Downloads/Hyperledger(1).pdf
+
+```
+
+Similarly, to view the size of the duplicates in the parent and child directories, use the **-Sr** option.
+
+We can exclude empty and hidden files from consideration using the **-n** and **-A** options respectively.
+
+```
+$ fdupes -n ~/Downloads
+
+$ fdupes -A ~/Downloads
+
+```
+
+The first command excludes zero-length files from consideration, and the latter excludes hidden files, while searching for duplicates in the specified directory.
+
+To summarize duplicate file information, use the **-m** option.
+
+```
+$ fdupes -m ~/Downloads
+1 duplicate files (in 1 sets), occupying 403.6 kilobytes
+
+```
+
+To delete all duplicates, use the **-d** option.
+
+```
+$ fdupes -d ~/Downloads
+
+```
+
+Sample output:
+
+```
+[1] /home/sk/Downloads/Hyperledger Fabric Installation.pdf
+[2] /home/sk/Downloads/Hyperledger Fabric Installation(1).pdf
+
+Set 1 of 1, preserve files [1 - 2, all]:
+
+```
+
+This command prompts you for which files to preserve and deletes all the other duplicates. Just enter the number of the file you want to keep, and the remaining files in that set will be deleted. Pay close attention while using this option; you might delete original files if you're not careful.
+
+If you want to preserve the first file in each set of duplicates and delete the others without being prompted each time, use the **-dN** option (not recommended).
+
+```
+$ fdupes -dN ~/Downloads
+
+```
+
+To delete duplicates as they are encountered, use the **-I** flag.
+
+```
+$ fdupes -I ~/Downloads
+
+```
+
+For more details about Fdupes, view the help section and man pages.
+
+```
+$ fdupes --help
+
+$ man fdupes
+
+```
+
+##### 3.
FSlint
+
+**FSlint** is yet another duplicate file finder utility that I use from time to time to get rid of unnecessary duplicate files and free up disk space on my Linux system. Unlike the other two utilities, FSlint has both GUI and CLI modes, so it is a more user-friendly tool for newbies. FSlint finds not just duplicates, but also bad symlinks, bad names, temp files, bad user IDs, empty directories, non-stripped binaries, and more.
+
+**Installing FSlint**
+
+FSlint is available in the [**AUR**][5], so you can install it using any AUR helper.
+
+```
+$ yay -S fslint
+
+```
+
+On Debian, Ubuntu, Linux Mint:
+
+```
+$ sudo apt-get install fslint
+
+```
+
+On Fedora:
+
+```
+$ sudo dnf install fslint
+
+```
+
+On RHEL, CentOS:
+
+```
+$ sudo yum install epel-release
+
+$ sudo yum install fslint
+
+```
+
+Once it is installed, launch it from the menu or application launcher.
+
+This is how the FSlint GUI looks:
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-1.png)
+
+As you can see, the interface of FSlint is user-friendly and self-explanatory. In the **Search path** tab, add the path of the directory you want to scan, then click the **Find** button in the lower left corner to find the duplicates. Check the recurse option to search for duplicates recursively in directories and sub-directories. FSlint will quickly scan the given directory and list the duplicates.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/fslint-2.png)
+
+From the list, choose the duplicates you want to clean and apply one of the given actions, such as Save, Delete, Merge, or Symlink.
+
+In the **Advanced search parameters** tab, you can specify paths to exclude while searching for duplicates.
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-3.png)
+
+**FSlint command line options**
+
+FSlint provides a collection of the following CLI utilities to find duplicates in your filesystem:
+
+  * **findup** — find DUPlicate files
+  * **findnl** — find Name Lint (problems with filenames)
+  * **findu8** — find filenames with invalid utf8 encoding
+  * **findbl** — find Bad Links (various problems with symlinks)
+  * **findsn** — find Same Name (problems with clashing names)
+  * **finded** — find Empty Directories
+  * **findid** — find files with dead user IDs
+  * **findns** — find Non Stripped executables
+  * **findrs** — find Redundant Whitespace in files
+  * **findtf** — find Temporary Files
+  * **findul** — find possibly Unused Libraries
+  * **zipdir** — Reclaim wasted space in ext2 directory entries
+
+
+
+All of these utilities are available under the **/usr/share/fslint/fslint/** directory.
+
+For example, to find duplicates in a given directory, do:
+
+```
+$ /usr/share/fslint/fslint/findup ~/Downloads/
+
+```
+
+Similarly, to find empty directories, the command would be:
+
+```
+$ /usr/share/fslint/fslint/finded ~/Downloads/
+
+```
+
+To get more details on each utility, for example **findup**, run:
+
+```
+$ /usr/share/fslint/fslint/findup --help
+
+```
+
+For more details about FSlint, refer to the help section and man pages.
+
+```
+$ /usr/share/fslint/fslint/fslint --help
+
+$ man fslint
+
+```
+
+##### Conclusion
+
+You now know about three tools to find and delete unwanted duplicate files in Linux. Among these three, I most often use Rdfind. That doesn't mean the other two utilities are not efficient; I'm just happy with Rdfind so far. Well, it's your turn: which is your favorite tool, and why? Let us know in the comments section below.
+
+And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/ +[2]: https://www.ostechnix.com/explaining-soft-link-and-hard-link-in-linux-with-examples/ +[3]: https://aur.archlinux.org/packages/rdfind/ +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[5]: https://aur.archlinux.org/packages/fslint/ From 88fed60cb1ab8521b77a2fb5d963671754d0bfd2 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Fri, 28 Sep 2018 11:15:07 +0800 Subject: [PATCH 089/736] hankchow translating --- sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md b/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md index 63b03182b4..da0b9c8fad 100644 --- a/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md +++ b/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md @@ -1,3 +1,5 @@ +HankChow translating + How to Use RAR files in Ubuntu Linux ====== [RAR][1] is a quite good archive file format. But, it isn’t the best when you’ve got 7-zip offering great compression ratios and Zip files being easily supported across multiple platforms by default. It is one of the most popular archive formats, but, [Ubuntu][2]‘s archive manager does not support extracting RAR files nor does it let you create RAR files. 
From 5d56926e1c5e24d05e7d82a456044695b8696a3c Mon Sep 17 00:00:00 2001 From: belitex Date: Fri, 28 Sep 2018 11:59:10 +0800 Subject: [PATCH 090/736] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E7=94=B3=E9=A2=86:?= =?UTF-8?q?=203=20open=20source=20distributed=20tracing=20tools?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20180926 3 open source distributed tracing tools.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180926 3 open source distributed tracing tools.md b/sources/tech/20180926 3 open source distributed tracing tools.md index 197fd9450e..9879302d38 100644 --- a/sources/tech/20180926 3 open source distributed tracing tools.md +++ b/sources/tech/20180926 3 open source distributed tracing tools.md @@ -1,3 +1,5 @@ +translating by belitex + 3 open source distributed tracing tools ====== From 4ff32a9bf088cbed5ec2ae22efcf0dffb101f1c5 Mon Sep 17 00:00:00 2001 From: heguangzhi <7731226@qq.com> Date: Fri, 28 Sep 2018 12:12:01 +0800 Subject: [PATCH 091/736] heguangzhi translating (#10406) --- .../20180926 An introduction to swap space on Linux systems.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180926 An introduction to swap space on Linux systems.md b/sources/tech/20180926 An introduction to swap space on Linux systems.md index 036890ef4b..da50208533 100644 --- a/sources/tech/20180926 An introduction to swap space on Linux systems.md +++ b/sources/tech/20180926 An introduction to swap space on Linux systems.md @@ -1,3 +1,5 @@ +heguangzhi Translating + An introduction to swap space on Linux systems ====== From 63c573af0dda2a31b20506f9224ff5b7b697892f Mon Sep 17 00:00:00 2001 From: Hank Chow <280630620@qq.com> Date: Fri, 28 Sep 2018 12:21:40 +0800 Subject: [PATCH 092/736] translated (#10407) --- ...27 How to Use RAR files in Ubuntu Linux.md | 99 ------------------- ...27 How to Use RAR files in Ubuntu Linux.md | 98 ++++++++++++++++++ 2 files changed, 98 insertions(+), 99 
deletions(-) delete mode 100644 sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md create mode 100644 translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md diff --git a/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md b/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md deleted file mode 100644 index da0b9c8fad..0000000000 --- a/sources/tech/20180927 How to Use RAR files in Ubuntu Linux.md +++ /dev/null @@ -1,99 +0,0 @@ -HankChow translating - -How to Use RAR files in Ubuntu Linux -====== -[RAR][1] is a quite good archive file format. But, it isn’t the best when you’ve got 7-zip offering great compression ratios and Zip files being easily supported across multiple platforms by default. It is one of the most popular archive formats, but, [Ubuntu][2]‘s archive manager does not support extracting RAR files nor does it let you create RAR files. - -Fret not, we have a solution for you. To enable the support to extract RAR files, you need to install **UNRAR** – which is a freeware by [RARLAB][3]. And, to create and manage RAR files, you need to install **RAR** – which is available as a trial. - -![RAR files in Ubuntu Linux][4] - -### Extracting RAR Files - -Unless you have it installed, extracting RAR files will show you an error “Extraction not performed“. Here’s how it should look like ([ **Ubuntu 18.04**][5]): - -![Error in RAR extraction in Ubuntu][6] - -If you want to resolve the error and easily be able to extract RAR files, follow the instructions below to install unrar: - -**- >** Launch the terminal and type in: - -``` - sudo apt-get install unrar - -``` - --> After installing unrar, you may choose to type in “ **unrar** ” (without the inverted commas) to know more about its usage and how to use RAR files with the help of it. - -The most common usage would obviously be extracting the RAR file you have. 
So, **you can either perform a right-click on the file and proceed to extract it** from there or you can do it via the terminal with the help of this command: - -``` -unrar x FileName.rar - -``` - -You can see that in action here: - -![Using unrar in Ubuntu][7] - -If the file isn’t present in the Home directory, then you have to navigate to the target folder by using the “ **cd** ” command. For instance, if you have the archive in the Music directory, simply type in “ **cd Music** ” to navigate to the location and then extract the RAR file. - -### Creating & Managing RAR files - -![Using rar archive in Ubuntu Linux][8] - -UNRAR does not let you create RAR files. So, you need to install the RAR command-line tool to be able to create RAR archives. - -To do that, you need to type in the following command: - -``` -sudo apt-get install rar - -``` - -Here, we will help you create a RAR file. In order to do that, follow the command syntax below: - -``` -rar a ArchiveName File_1 File_2 Dir_1 Dir_2 - -``` - -When you type a command in this format, it will add every item inside the directory to the archive. In either case, if you want specific files, just mention the exact name/path. - -By default, the RAR files reside in **HOME** directory. - -In the same way, you can update/manage the RAR files. Just type in a command using the following syntax: - -``` -rar u ArchiveName Filename - -``` - -To get the list of commands for the RAR tool, just type “ **rar** ” in the terminal. - -### Wrapping Up - -Now that you’ve known how to use RAR files on Ubuntu, will you prefer using it over 7-zip, Zip, or Tar.xz? - -Let us know your thoughts in the comments below. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/use-rar-ubuntu-linux/ - -作者:[Ankush Das][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[1]: https://www.rarlab.com/rar_file.htm -[2]: https://www.ubuntu.com/ -[3]: https://www.rarlab.com/ -[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/rar-ubuntu-linux.png -[5]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/ -[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/extract-rar-error.jpg -[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/unrar-rar-extraction.jpg -[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/rar-update-create.jpg diff --git a/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md b/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md new file mode 100644 index 0000000000..3521b21a8a --- /dev/null +++ b/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md @@ -0,0 +1,98 @@ +如何在 Ubuntu Linux 中使用 RAR 文件 +====== +[RAR][1] 是一种非常好的归档文件格式。但相比之下 7-zip 能提供了更好的压缩率,并且默认情况下还可以在多个平台上轻松支持 Zip 文件。不过 RAR 仍然是最流行的归档格式之一。然而 [Ubuntu][2] 自带的归档管理器却不支持提取 RAR 文件,也不允许创建 RAR 文件。 + +方法总比问题多。只要安装 `unrar` 这款由 [RARLAB][3] 提供的免费软件,就能在 Ubuntu 上支持提取RAR文件了。你也可以试安装 `rar` 来创建和管理 RAR 文件。 + +![RAR files in Ubuntu Linux][4] + +### 提取 RAR 文件 + +在未安装 unrar 的情况下,提取 RAR 文件会报出“未能提取”错误,就像下面这样(以 [Ubuntu 18.04][5] 为例): + +![Error in RAR extraction in Ubuntu][6] + +如果要解决这个错误并提取 RAR 文件,请按照以下步骤安装 unrar: + +打开终端并输入: + +``` + sudo apt-get install unrar + +``` + +安装 unrar 后,直接输入 `unrar` 就可以看到它的用法以及如何使用这个工具处理 RAR 文件。 + +最常用到的功能是提取 RAR 文件。因此,可以**通过右键单击 RAR 文件并执行提取**,也可以借助此以下命令通过终端执行操作: + +``` +unrar x FileName.rar + +``` + +结果类似以下这样: 
+ +![Using unrar in Ubuntu][7] + +如果家目录中不存在对应的文件,就必须使用 `cd` 命令移动到目标目录下。例如 RAR 文件如果在 `Music` 目录下,只需要使用 `cd Music` 就可以移动到相应的目录,然后提取 RAR 文件。 + +### 创建和管理 RAR 文件 + +![Using rar archive in Ubuntu Linux][8] + +`unrar` 不允许创建 RAR 文件。因此还需要安装 `rar` 命令行工具才能创建 RAR 文件。 + +要创建 RAR 文件,首先需要通过以下命令安装 rar: + +``` +sudo apt-get install rar + +``` + +按照下面的命令语法创建 RAR 文件: + +``` +rar a ArchiveName File_1 File_2 Dir_1 Dir_2 + +``` + +按照这个格式输入命令时,它会将目录中的每个文件添加到 RAR 文件中。如果需要某一个特定的文件,就要指定文件确切的名称或路径。 + +默认情况下,RAR 文件会放置在**家目录**中。 + +以类似的方式,可以更新或管理 RAR 文件。同样是使用以下的命令语法: + +``` +rar u ArchiveName Filename + +``` + +在终端输入 `rar` 就可以列出 RAR 工具的相关命令。 + +### 总结 + +现在你已经知道如何在 Ubuntu 上管理 RAR 文件了,你会更喜欢使用 7-zip、Zip 或 Tar.xz 吗? + +欢迎在评论区中评论。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/use-rar-ubuntu-linux/ + +作者:[Ankush Das][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[1]: https://www.rarlab.com/rar_file.htm +[2]: https://www.ubuntu.com/ +[3]: https://www.rarlab.com/ +[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/rar-ubuntu-linux.png +[5]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/ +[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/extract-rar-error.jpg +[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/unrar-rar-extraction.jpg +[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/rar-update-create.jpg + From e3ce1dac16b501714ca365eb7be59acef51ecd7b Mon Sep 17 00:00:00 2001 From: BeliteX <43316924+belitex@users.noreply.github.com> Date: Fri, 28 Sep 2018 12:22:11 +0800 Subject: [PATCH 093/736] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E7=94=B3=E9=A2=86:?= 
=?UTF-8?q?=203=20open=20source=20distributed=20tracing=20tools=20(#10408)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20180926 3 open source distributed tracing tools.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180926 3 open source distributed tracing tools.md b/sources/tech/20180926 3 open source distributed tracing tools.md index 197fd9450e..9879302d38 100644 --- a/sources/tech/20180926 3 open source distributed tracing tools.md +++ b/sources/tech/20180926 3 open source distributed tracing tools.md @@ -1,3 +1,5 @@ +translating by belitex + 3 open source distributed tracing tools ====== From 142afa3d8a493aa0ee6c9b804f44c0ef7c87aa4f Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Fri, 28 Sep 2018 13:05:26 +0800 Subject: [PATCH 094/736] Delete 20180516 Manipulating Directories in Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 删除原文 --- ...80516 Manipulating Directories in Linux.md | 183 ------------------ 1 file changed, 183 deletions(-) delete mode 100644 sources/tech/20180516 Manipulating Directories in Linux.md diff --git a/sources/tech/20180516 Manipulating Directories in Linux.md b/sources/tech/20180516 Manipulating Directories in Linux.md deleted file mode 100644 index 4cc8ca4ea1..0000000000 --- a/sources/tech/20180516 Manipulating Directories in Linux.md +++ /dev/null @@ -1,183 +0,0 @@ -Translating by way-ww -Manipulating Directories in Linux -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/branches-238379_1920_0.jpg?itok=2PlNpsVu) - -If you are new to this series (and to Linux), [take a look at our first installment][1]. In that article, we worked our way through the tree-like structure of the Linux filesystem, or more precisely, the File Hierarchy Standard. 
I recommend reading through it to make sure you understand what you can and cannot safely touch. Because this time around, I’ll show how to get all touchy-feely with your directories. - -### Making Directories - -Let's get creative before getting destructive, though. To begin, open a terminal window and use `mkdir` to create a new directory like this: -``` -mkdir - -``` - -If you just put the directory name, the directory will appear hanging off the directory you are currently in. If you just opened a terminal, that will be your home directory. In a case like this, we say the directory will be created _relative_ to your current position: -``` -$ pwd #This tells you where you are now -- see our first tutorial -/home/ -$ mkdir newdirectory #Creates /home//newdirectory - -``` - -(Note that you do not have to type the text following the `#`. Text following the pound symbol `#` is considered a comment and is used to explain what is going on. It is ignored by the shell). - -You can create a directory within an existing directory hanging off your current location by specifying it in the command line: -``` -mkdir Documents/Letters - -``` - -Will create the _Letters_ directory within the _Documents_ directory. - -You can also create a directory above where you are by using `..` in the path. Say you move into the _Documents/Letters/_ directory you just created and you want to create a _Documents/Memos/_ directory. You can do: -``` -cd Documents/Letters # Move into your recently created Letters/ directory -mkdir ../Memos - -``` - -Again, all of the above is done relative to you current position. This is called using a _relative path_. 
- -You can also use an _absolute path_ to directories: This means telling `mkdir` where to put your directory in relation to the root (`/`) directory: -``` -mkdir /home//Documents/Letters - -``` - -Change `` to your user name in the command above and it will be equivalent to executing `mkdir Documents/Letters` from your home directory, except that it will work from wherever you are located in the directory tree. - -As a side note, regardless of whether you use a relative or an absolute path, if the command is successful, `mkdir` will create the directory silently, without any apparent feedback whatsoever. Only if there is some sort of trouble will `mkdir` print some feedback after you hit _[Enter]_. - -As with most other command-line tools, `mkdir` comes with several interesting options. The `-p` option is particularly useful, as it lets you create directories within directories within directories, even if none exist. To create, for example, a directory for letters to your Mom within _Documents/_ , you could do: -``` -mkdir -p Documents/Letters/Family/Mom - -``` - -And `mkdir` will create the whole branch of directories above _Mom/_ and also the directory _Mom/_ for you, regardless of whether any of the parent directories existed before you issued the command. - -You can also create several folders all at once by putting them one after another, separated by spaces: -``` -mkdir Letters Memos Reports - -``` - -will create the directories _Letters/_ , _Memos/_ and _Reports_ under the current directory. - -### In space nobody can hear you scream - -... Which brings us to the tricky question of spaces in directory names. Can you use spaces in directory names? Yes, you can. Is it advised you use spaces? No, absolutely not. Spaces make everything more complicated and, potentially, dangerous. - -Say you want to create a directory called _letters mom/_. If you didn't know any better, you could type: -``` -mkdir letters mom - -``` - -But this is WRONG! WRONG! WRONG! 
As we saw above, this will create two directories, _letters/_ and _mom/_ , but not _letters mom/_. - -Agreed that this is a minor annoyance: all you have to do is delete the two directories and start over. No big deal. - -But, wait! Deleting directories is where things get dangerous. Imagine you did create _letters mom/_ using a graphical tool, like, say [Dolphin][2] or [Nautilus][3]. If you suddenly decide to delete _letters mom/_ from a terminal, and you have another directory just called _letters/_ under the same directory, and said directory is full of important documents, and you tried this: -``` -rmdir letters mom - -``` - -You would risk removing _letters/_. I say "risk" because fortunately `rmdir`, the instruction used to remove directories, has a built-in safeguard and will warn you if you try to delete a non-empty directory. - -However, this: -``` -rm -Rf letters mom - -``` - -(and this is a pretty standard way of getting rid of directories and their contents) will completely obliterate _letters/_ and will never even tell you what just happened. - -The `rm` command is used to delete files and directories. When you use it with the options `-R` (delete _recursively_ ) and `-f` ( _force_ deletion), it will burrow down into a directory and its subdirectories, deleting all the files they contain, then deleting the subdirectories themselves, then it will delete all the files in the top directory and then the directory itself. - -`rm -Rf` is an instruction you must handle with extreme care. - -My advice is, instead of spaces, use underscores (`_`), but if you still insist on spaces, there are two ways of getting them to work. You can use single or double quotes (`'` or `"`) like so: -``` -mkdir 'letters mom' -mkdir "letters dad" - -``` - -Or, you can _escape_ the spaces. Some characters have a special meaning for the shell. Spaces, as you have seen, are used to separate options and arguments on the command line. 
"Separating options and arguments" falls under the category of "special meaning". When you want the shell to ignore the special meaning of a character, you need to _escape_ it and to escape a character, you put a backslash (`\`) in front of it: -``` -mkdir letters\ mom -mkdir letter\ dad - -``` - -There are other special characters that would need escaping, like the apostrophe or single quote (`'`), double quotes (`"`), and the ampersand (`&`): -``` -mkdir mom\ \&\ dad\'s\ letters - -``` - -I know what you're thinking: If the backslash has a special meaning (to wit, telling the shell it has to escape the next character), that makes it a special character, too. Then, how would you escape the escape character which is `\`? - -Turns out, the exact way you escape any other special character: -``` -mkdir special\\characters - -``` - -will produce a directory called _special\characters_. - -Confusing? Of course. That's why you should avoid using special characters, including spaces, in directory names. - -For the record, here is a list of special characters you can refer to just in case. - -### Things to Remember - - * Use `mkdir ` to create a new directory. - * Use `rmdir ` to delete a directory (only works if it is empty). - * Use `rm -Rf ` to annihilate a directory -- use with extreme caution. - * Use a relative path to create directories relative to your current directory: `mkdir newdir.`. - * Use an absolute path to create directories relative to the root directory (`/`): `mkdir /home//newdir` - * Use `..` to create a directory in the directory above the current directory: `mkdir ../newdir` - * You can create several directories all in one go by separating them with spaces on the command line: `mkdir onedir twodir threedir` - * You can mix and mash relative and absolute paths when creating several directories simultaneously: `mkdir onedir twodir /home//threedir` - * Using spaces and special characters in directory names guarantees plenty of headaches and heartburn. 
Don't do it. - - - -For more information, you can look up the manuals of `mkdir`, `rmdir` and `rm`: -``` -man mkdir -man rmdir -man rm - -``` - -To exit the man pages, press _[q]_. - -### Next Time - -In the next installment, you'll learn about creating, modifying, and erasing files, as well as everything you need to know about permissions and privileges. See you then! - -Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux - -作者:[Paul Brown][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/bro66 -[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/4/linux-filesystem-explained -[2]:https://userbase.kde.org/Dolphin -[3]:https://projects-old.gnome.org/nautilus/screenshots.html -[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From fc6b2678f34df234d219dcbc4d400beceea7c25c Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Fri, 28 Sep 2018 13:10:34 +0800 Subject: [PATCH 095/736] Create 20180516 Manipulating Directories in Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 --- ...80516 Manipulating Directories in Linux.md | 166 ++++++++++++++++++ 1 file changed, 166 insertions(+) create mode 100644 translated/tech/20180516 Manipulating Directories in Linux.md diff --git a/translated/tech/20180516 Manipulating Directories in Linux.md b/translated/tech/20180516 Manipulating Directories in Linux.md new file mode 100644 index 0000000000..9e62973064 --- /dev/null +++ b/translated/tech/20180516 Manipulating 
Directories in Linux.md @@ -0,0 +1,166 @@ +在Linux上操作目录 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/branches-238379_1920_0.jpg?itok=2PlNpsVu) + +如果你不熟悉本系列(以及Linux),[请查看我们的第一部分][1]。在那篇文章中,我们通过Linux文件系统的树状结构,或者更确切地说以文件层次结构标准工作。我建议你仔细阅读,确保你理解自己能安全的做哪些操作。因为这一次,我将向你展示目录操作的魅力。 + +### 新建目录 + +在操作变得具有破坏性之前,让我们发挥创意创造。首先,打开一个终端窗口并使用命令mkdir创建一个新目录,如下所示: +``` +mkdir + +``` +如果你只输入了目录名称,该目录将显示在您当前所在目录中。如果你刚刚打开一个终端,你当前位置为你的家目录。下面这个例子,我们展示了将要创建的目录与你当前所处位置的关系: +``` +$ pwd #This tells you where you are now -- see our first tutorial +/home/ +$ mkdir newdirectory #Creates /home//newdirectory + +``` +(注 你不用输入#后面的文本。#后面的文本为注释内容,用于解释发生了什么。它会被shell忽略,不会被执行). + +你可以在当前位置中已经存在的某个目录下创建新的目录,方法是在命令行中指定它: +``` +mkdir Documents/Letters + +``` +这将在Documents目录中创建Letters目录。 + +你还可以在路径中使用..在当前目录的上一级目录中创建目录。假设你进入刚刚创建的Documents/Letters/目录,并且想要创建Documents/Memos/目录。你可以这样做: +``` +cd Documents/Letters # Move into your recently created Letters/ directory +mkdir ../Memos + +``` +同样,以上所有内容都是相对于你当前的位置做的。这就是使用了相对路径。 +你还可以使用目录的绝对路径:这意味着告诉mkdir命令将目录放在和根目录(/)有关的位置: +``` +mkdir /home//Documents/Letters + +``` +在上面的命令中将更改为你的用户名,这相当于从你的主目录执行mkdir Documents / Letters,通过使用绝对路径你可以在目录树中的任何位置完成这项工作。 + +无论你使用相对路径还是绝对路径,只要命令成功执行,mkdir将静默的创建新目录,而没有任何明显的反馈。只有当遇到某种问题时,mkdir才会在你敲下[Enter]后打印一些反馈。 + +与大多数其他命令行工具一样,mkdir提供了几个有趣的选项。 -p选项特别有用,因为它允许你创建嵌套目录,即使目录不存在也可以。例如,要在Documents /中创建一个目录存放写给妈妈的信,你可以这样做: +``` +mkdir -p Documents/Letters/Family/Mom + +``` +And `mkdir` will create the whole branch of directories above _Mom/_ and also the directory _Mom/_ for you, regardless of whether any of the parent directories existed before you issued the command. + +你也可以用空格来分隔目录名,来同时创建几个目录: +``` +mkdir Letters Memos Reports + +``` +这将在当前目录下创建目录Letters,Memos和Reports。 + +### 目录名中可怕的空格 + +... 
这带来了目录名称中关于空格的棘手问题。你能在目录名中使用空格吗?是的你可以。那么建议你使用空格吗?不,绝对不是。空格使一切变得更加复杂,并且可能是危险的操作。 + +假设您要创建一个名为letters mom的目录。如果你不知道如何更好处理,你可能会输入: +``` +mkdir letters mom + +``` +但这是错误的!错误的!错误的!正如我们在上面介绍的,这将创建两个目录letters和mom,而不是一个目录letters mom。 + +得承认这是一个小麻烦:你所要做的就是删除这两个目录并重新开始,这没什么大不了。 + +可是等等!删除目录可是个危险的操作。想象一下,你确实使用图形工具[Dolphin][2]或[Nautilus][3]创建了目录letters mom。如果你突然决定从终端删除目录letters mom,并且您在同一目录下有另一个名为letters的目录,并且该目录中包含重要的文档,结果你为了删除错误的目录尝试了以下操作: +``` +rmdir letters mom + +``` +你将会有风险删除目录letters。这里说“风险”,是因为幸运的是rmdir这条用于删除目录的指令,有一个内置的安全措施,如果你试图删除一个非空目录时,它会发出警告。 + +但是,下面这个: +``` +rm -Rf letters mom + +``` +(注 这是删除目录及其内容的一种非常标准的方式)将完全删除letters目录,甚至永远不会告诉你刚刚发生了什么。 + +rm命令用于删除文件和目录。当你将它与选项-R(递归删除)和-f(强制删除)一起使用时,它会深入到目录及其子目录中,删除它们包含的所有文件,然后删除子目录本身,然后它将删除所有顶层目录中的文件,再然后是删除目录本身。 + +`rm -Rf` 是你必须非常小心处理的命令。 + +我的建议是,你可以使用下划线来代替空格,但如果你仍然坚持使用空格,有两种方法可以使它们起作用。您可以使用单引号或双引号,如下所示: +``` +mkdir 'letters mom' +mkdir "letters dad" + +``` +或者,你可以转义空格。有些字符对shell有特殊意义。正如你所见,空格用于在命令行上分隔选项和参数。 “分离选项和参数”属于“特殊含义”范畴。当你想让shell忽略一个字符的特殊含义时,你需要转义,你可以在它前面放一个反斜杠(\)如: +``` +mkdir letters\ mom +mkdir letter\ dad + +``` +还有其他特殊字符需要转义,如撇号或单引号('),双引号(“)和&符号(&): +``` +mkdir mom\ \&\ dad\'s\ letters + +``` +我知道你在想什么:如果反斜杠有一个特殊的含义(即告诉shell它必须转义下一个字符),这也使它成为一个特殊的字符。然后,你将如何转义转义字符(\)? + +事实证明,你转义任何其他特殊字符都是同样的方式: +``` +mkdir special\\characters + +``` +这将生成一个名为special\characters的目录。 + +感觉困惑?当然。这就是为什么你应该避免在目录名中使用特殊字符,包括空格。 + +以防误操作你可以参考下面这个记录特殊字符的列表。 + +### 总结 + + * 使用 `mkdir ` 创建新目录。 + * 使用 `rmdir ` 删除目录(仅在目录为空时才有效)。 + * 使用 `rm -Rf ` 来完全删除目录及其内容 - 请务必谨慎使用。 + * 使用相对路径创建相对于当前目录的目录: `mkdir newdir.`. + * 使用绝对路径创建相对于根目录(`/`)的目录: `mkdir /home//newdir` + * 使用 `..` 在当前目录的上级目录中创建目录: `mkdir ../newdir` + * 你可以通过在命令行上使用空格分隔目录名来创建多个目录: `mkdir onedir twodir threedir` + * 同时创建多个目录时,你可以混合使用相对路径和绝对路径: `mkdir onedir twodir /home//threedir` + * 在目录名称中使用空格和特殊字符真的会让你很头疼,你最好不要那样做。 + + + +有关更多信息,您可以查看`mkdir`, `rmdir` 和 `rm`的手册: +``` +man mkdir +man rmdir +man rm + +``` +要退出手册页,请按键盘[q]键。 + +### 下次预告 + +在下一部分中,你将学习如何创建,修改和删除文件,以及你需要了解的有关权限和特权的所有信息! 
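The quoting and escaping rules walked through in the section above are quick to verify in a scratch directory. This is only a sketch, and the directory names (including `letters sis`, which is not from the article) are illustrative.

```shell
# Each mkdir below receives ONE argument, despite the embedded space
work=$(mktemp -d)
cd "$work"
mkdir 'letters mom'         # single quotes
mkdir "letters dad"         # double quotes
mkdir letters\ sis          # backslash-escaped space
mkdir special\\characters   # escaped backslash: the directory is named special\characters
ls -1                       # four entries, one per mkdir
```

Drop the quotes or backslashes and each of the first three commands would instead create two separate directories.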
+ +通过Linux Foundation和edX免费提供的["Introduction to Linux" ][4]课程了解有关Linux的更多信息。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux + +作者:[Paul Brown][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[way-ww](https://github.com/way-ww) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/bro66 +[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/4/linux-filesystem-explained +[2]:https://userbase.kde.org/Dolphin +[3]:https://projects-old.gnome.org/nautilus/screenshots.html +[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From 0e1bfd595bcaf47f2ba897d9cb05216bd17b94ce Mon Sep 17 00:00:00 2001 From: qhwdw Date: Fri, 28 Sep 2018 14:28:42 +0800 Subject: [PATCH 096/736] Translated by qhwdw --- ...rns of dangerous future plans for his C.md | 155 ----------------- ...rns of dangerous future plans for his C.md | 163 ++++++++++++++++++ 2 files changed, 163 insertions(+), 155 deletions(-) delete mode 100644 sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md create mode 100644 translated/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md diff --git a/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md b/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md deleted file mode 100644 index 2f9a6636e7..0000000000 --- a/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md +++ /dev/null @@ -1,155 +0,0 @@ -Translating by qhwdw -What's all the C Plus Fuss? 
Bjarne Stroustrup warns of dangerous future plans for his C++ -====== - -![](https://regmedia.co.uk/2018/06/15/shutterstock_38621860.jpg?x=442&y=293&crop=1) - -**Interview** Earlier this year, Bjarne Stroustrup, creator of C++, managing director in the technology division of Morgan Stanley, and a visiting professor of computer science at Columbia University in the US, wrote [a letter][1] inviting those overseeing the evolution of the programming language to “Remember the Vasa!” - -Easy for a Dane to understand no doubt, but perhaps more of a stretch for those with a few gaps in their knowledge of 17th century Scandinavian history. The Vasa was a Swedish warship, commissioned by King Gustavus Adolphus. It was the most powerful warship in the Baltic Sea from its maiden voyage on the August 10, 1628, until a few minutes later when it sank. - -The formidable Vasa suffered from a design flaw: it was top-heavy, so much so that it was [undone by a gust of wind][2]. By invoking the memory of the capsized ship, Stroustrup served up a cautionary tale about the risks facing C++ as more and more features get added to the language. - -Quite a few such features have been suggested. Stroustrup cited 43 proposals in his letter. He contends those participating in the evolution of the ISO standard language, a group known as [WG21][3], are working to advance the language but not together. - -In his letter, he wrote: - ->Individually, many proposals make sense. Together they are insanity to the point of endangering the future of C++. - -He makes clear that he doesn’t interpret the fate of the Vasa to mean that incremental improvements spell doom. Rather, he takes it as a lesson to build a solid foundation, to learn from experience and to test thoroughly. - -With the recent conclusion of the C++ Standardization Committee Meeting in Rapperswil, Switzerland, earlier this month, Stroustrup addressed a few questions put to him by _The Register_ about what's next for the language. 
(The most recent version is C++17, which arrived last year; the next version C++20 is under development and expected in 2020.) - -**_Register:_ In your note, Remember the Vasa!, you wrote:** - ->The foundation begun in C++11 is not yet complete, and C++17 did little to make our foundation more solid, regular, and complete. Instead, it added significant surface complexity and increased the number of features people need to learn. C++ could crumble under the weight of these – mostly not quite fully-baked – proposals. We should not spend most our time creating increasingly complicated facilities for experts, such as ourselves. - -**Is C++ too challenging for newcomers, and if so, what features do you believe would make the language more accessible?** - -_**Stroustrup:**_ Some parts of C++ are too challenging for newcomers. - -On the other hand, there are parts of C++ that makes it far more accessible to newcomers than C or 1990s C++. The difficulty is to get the larger community to focus on those parts and help beginners and casual C++ users to avoid the parts that are there to support implementers of advanced libraries. - -I recommend the [C++ Core Guidelines][4] as an aide for that. - -Also, my “A Tour of C++” can help people get on the right track with modern C++ without getting lost in 1990s complexities or ensnarled by modern facilities meant for expert use. The second edition of “A Tour of C++” covering C++17 and parts of C++20 is on its way to the stores. - -I and others have taught C++ to 1st year university students with no previous programming experience in 3 months. It can be done as long as you don’t try to dig into every obscure corner of the language and focus on modern C++. - -“Making simple things simple” is a long-term goal of mine. Consider the C++11 range-for loop: -``` -for (int& x : v) ++x; // increment each element of the container v - -``` - -where v can be just about any container. 
In C and C-style C++, that might look like this: -``` -for (int i=0; i 分开来看,许多提议都很有道理。但将它们综合到一起,这些提议是很愚蠢的,将危害 C++ 的未来。 + +他明确表示,不希望 C++ 重蹈瓦萨号的覆辙,这种渐近式的改进将敲响 C++ 的丧钟。相反,应该吸取瓦萨号的教训,构建一个坚实的基础,吸取经验教训,并做彻底的测试。 + +在瑞士拉普斯威尔(Rapperswill)召开的 C++ 标准化委员会会议之后,本月早些时候,Stroustrup 接受了_《The Register》_ 的采访,回答了有关 C++ 语言下一步发展方向方面的几个问题。(最新版是 C++17,它去年刚发布;下一个版本是 C++20,它正在开发中,预计于 2020 年发布。) + +**Register:在你的信件《想想瓦萨号!》中,你写道:** + +> 在 C++11 开始基础不再完整,而 C++17 中在使基础更加稳固、规范和完整方面几乎没有改善。相反地,却增加了重要接口的复杂度,让人们需要学习的特性数量越来越多。C++ 可能在这种提议的重压之下崩溃 —— 这些提议大多数都不成熟。我们不应该花费大量的时间为专家级用户们(比如我们自己)去创建越来越复杂的东西。~~(还要考虑普通用户的学习曲线,越复杂的东西越不易普及。)~~ + +**对新人来说,C++ 很难吗?如果是这样,你认为怎样的特性让新人更易理解?** + +**Stroustrup:**C++ 的有些东西对于新人来说确实很难。 + +换句话说,C++ 中有些东西对于新人来说,比起 C 或上世纪九十年代的 C++ 更容易理解了。而难点是让大型社区专注于这些部分,并且帮助新手和普通 C++ 用户去规避那些对高级库实现提供支持的部分。 + +我建议使用 [C++ 核心准则][4] 作为实现上述目标的一个辅助。 + +此外,我的 “C++ 教程” 也可以帮助人们在使用现代 C++ 时走上正确的方向,而不会迷失在自上世纪九十年代以来的复杂性中,或困惑于只有专家级的用户才能理解的东西中。第二版的 “C++ 教程” 涵盖了 C++17 和部分 C++20 的内容,这本书即将要出版了。 + +我和其他人给没有编程经验的大一新生教过 C++,只要你不去深挖编程语言的每个晦涩难懂的角落,把注意力集中到 C++ 中最主流的部分,在三个月内新可以学会 C++。 + +“让简单的东西保持简单” 是我长期追求的目标。比如 C++11 的 `range-for` 循环: + +``` +for (int& x : v) ++x; // increment each element of the container v + +``` + +`v` 的位置可以是任何容器。在 C 和 C 风格的 C++ 中,它可能看到的是这样: + +``` +for (int i=0; i Date: Fri, 28 Sep 2018 15:04:25 +0800 Subject: [PATCH 097/736] dianbanjiu translating --- .../tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md b/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md index 78ba32e2a2..6c6e54934b 100644 --- a/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md +++ b/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md @@ -1,4 +1,4 @@ -How to Install Cinnamon Desktop on Ubuntu +dianbanjiu Tranting How to Install Cinnamon Desktop on Ubuntu ====== **This tutorial shows you how to install Cinnamon desktop environment on 
Ubuntu.** From 9b9f18a2bca51c27c04e6be6d89524c9841c662e Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Fri, 28 Sep 2018 15:34:12 +0800 Subject: [PATCH 098/736] Update 20180917 4 scanning tools for the Linux desktop.md request to translate --- sources/tech/20180917 4 scanning tools for the Linux desktop.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180917 4 scanning tools for the Linux desktop.md b/sources/tech/20180917 4 scanning tools for the Linux desktop.md index a239c87768..7da24d3a90 100644 --- a/sources/tech/20180917 4 scanning tools for the Linux desktop.md +++ b/sources/tech/20180917 4 scanning tools for the Linux desktop.md @@ -1,3 +1,5 @@ +Translating by way-ww + 4 scanning tools for the Linux desktop ====== Go paperless by driving your scanner with one of these open source applications. From 3913f43fd58a9a2e81d43f32d65d031ba3902367 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 28 Sep 2018 15:52:15 +0800 Subject: [PATCH 099/736] =?UTF-8?q?=E8=B6=85=E6=9C=9F=E5=9B=9E=E6=94=B6?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @leowang @ynmlml @FelixYFZ @runningwater @aiwhj @jessie-pang @mandeler @HankChow @auk7f7 @heart4lor @amwps290 @bestony @leemeans @wyxplus @stephenxs --- .../20180123 Moving to Linux from dated Windows machines.md | 1 - sources/talk/20180127 Write Dumb Code.md | 2 -- ...stions DevOps job candidates should be prepared to answer.md | 2 +- sources/talk/20180410 Microservices Explained.md | 1 - ... organizing your open source project-s workflow on GitHub.md | 1 - ... 
Why moving all your workloads to the cloud is a bad idea.md | 2 -- sources/tech/20171111 A CEOs Guide to Emacs.md | 2 +- .../20180215 Build a bikesharing app with Redis and Python.md | 2 -- sources/tech/20180518 How to Manage Fonts in Linux.md | 2 -- .../20180531 How to Build an Amazon Echo with Raspberry Pi.md | 2 -- ...ete Sed Command Guide [Explained with Practical Examples].md | 1 - ...ecoming a senior developer 9 experiences you ll encounter.md | 2 -- .../tech/20180723 Setting Up a Timer with systemd in Linux.md | 1 - ...ing a network attached storage device with a Raspberry Pi.md | 1 - ...80814 Top Linux developers- recommended programming books.md | 2 -- 15 files changed, 2 insertions(+), 22 deletions(-) diff --git a/sources/talk/20180123 Moving to Linux from dated Windows machines.md b/sources/talk/20180123 Moving to Linux from dated Windows machines.md index 74bf66df68..6acd6e53f2 100644 --- a/sources/talk/20180123 Moving to Linux from dated Windows machines.md +++ b/sources/talk/20180123 Moving to Linux from dated Windows machines.md @@ -1,4 +1,3 @@ -translating by leowang Moving to Linux from dated Windows machines ====== diff --git a/sources/talk/20180127 Write Dumb Code.md b/sources/talk/20180127 Write Dumb Code.md index 505e8198df..acc647b0e5 100644 --- a/sources/talk/20180127 Write Dumb Code.md +++ b/sources/talk/20180127 Write Dumb Code.md @@ -1,5 +1,3 @@ -translating by ynmlml - Write Dumb Code ====== The best way you can contribute to an open source project is to remove lines of code from it. We should endeavor to write code that a novice programmer can easily understand without explanation or that a maintainer can understand without significant time investment. 
diff --git a/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md b/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md index b862ce311e..da43855266 100644 --- a/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md +++ b/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md @@ -1,4 +1,4 @@ -Translating by FelixYFZ 20 questions DevOps job candidates should be prepared to answer +20 questions DevOps job candidates should be prepared to answer ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hire-job-career.png?itok=SrZo0QJ3) diff --git a/sources/talk/20180410 Microservices Explained.md b/sources/talk/20180410 Microservices Explained.md index 5a2151a00b..1d7e946a12 100644 --- a/sources/talk/20180410 Microservices Explained.md +++ b/sources/talk/20180410 Microservices Explained.md @@ -1,4 +1,3 @@ -(translating by runningwater) Microservices Explained ====== diff --git a/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md b/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md index 1f9b80cd13..29e4ea2f48 100644 --- a/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md +++ b/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md @@ -1,4 +1,3 @@ -translating by aiwhj 3 tips for organizing your open source project's workflow on GitHub ====== diff --git a/sources/talk/20180724 Why moving all your workloads to the cloud is a bad idea.md b/sources/talk/20180724 Why moving all your workloads to the cloud is a bad idea.md index c4a6162068..1d97805178 100644 --- a/sources/talk/20180724 Why moving all your workloads to the cloud is a bad idea.md +++ b/sources/talk/20180724 Why moving all your workloads to the cloud is a bad idea.md @@ 
-1,5 +1,3 @@ -Translating by jessie-pang - Why moving all your workloads to the cloud is a bad idea ====== diff --git a/sources/tech/20171111 A CEOs Guide to Emacs.md b/sources/tech/20171111 A CEOs Guide to Emacs.md index ad75b856f0..a694d07917 100644 --- a/sources/tech/20171111 A CEOs Guide to Emacs.md +++ b/sources/tech/20171111 A CEOs Guide to Emacs.md @@ -1,4 +1,4 @@ -# mandeler translating A CEO's Guide to Emacs +A CEO's Guide to Emacs ============================================================ Years—no, decades—ago, I lived in Emacs. I wrote code and documents, managed email and calendar, and shelled all in the editor/OS. I was quite happy. Years went by and I moved to newer, shinier things. As a result, I forgot how to do tasks as basic as efficiently navigating files without a mouse. About three months ago, noticing just how much of my time was spent switching between applications and computers, I decided to give Emacs another try. It was a good decision for several reasons that will be covered in this post. Covered too are `.emacs` and Dropbox tips so that you can set up a good, movable environment. 
diff --git a/sources/tech/20180215 Build a bikesharing app with Redis and Python.md b/sources/tech/20180215 Build a bikesharing app with Redis and Python.md index 67ddd07730..06e4c6949a 100644 --- a/sources/tech/20180215 Build a bikesharing app with Redis and Python.md +++ b/sources/tech/20180215 Build a bikesharing app with Redis and Python.md @@ -1,5 +1,3 @@ -hankchow translating - Build a bikesharing app with Redis and Python ====== diff --git a/sources/tech/20180518 How to Manage Fonts in Linux.md b/sources/tech/20180518 How to Manage Fonts in Linux.md index 12b450c778..0faca7fa17 100644 --- a/sources/tech/20180518 How to Manage Fonts in Linux.md +++ b/sources/tech/20180518 How to Manage Fonts in Linux.md @@ -1,5 +1,3 @@ -translating by Auk7F7 - How to Manage Fonts in Linux ====== diff --git a/sources/tech/20180531 How to Build an Amazon Echo with Raspberry Pi.md b/sources/tech/20180531 How to Build an Amazon Echo with Raspberry Pi.md index 0bf792f769..a5d4767706 100644 --- a/sources/tech/20180531 How to Build an Amazon Echo with Raspberry Pi.md +++ b/sources/tech/20180531 How to Build an Amazon Echo with Raspberry Pi.md @@ -1,5 +1,3 @@ -heart4lor translating - How to Build an Amazon Echo with Raspberry Pi ====== diff --git a/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md index a1d721ae3c..e548213483 100644 --- a/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md +++ b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md @@ -1,4 +1,3 @@ -translating by amwps290 Complete Sed Command Guide [Explained with Practical Examples] ====== In a previous article, I showed the [basic usage of Sed][1], the stream editor, on a practical use case. Today, be prepared to gain more insight about Sed as we will take an in-depth tour of the sed execution model. 
This will be also an opportunity to make an exhaustive review of all Sed commands and to dive into their details and subtleties. So, if you are ready, launch a terminal, [download the test files][2] and sit comfortably before your keyboard: we will start our exploration right now! diff --git a/sources/tech/20180711 Becoming a senior developer 9 experiences you ll encounter.md b/sources/tech/20180711 Becoming a senior developer 9 experiences you ll encounter.md index 1ca6e94ef3..7ff8e59007 100644 --- a/sources/tech/20180711 Becoming a senior developer 9 experiences you ll encounter.md +++ b/sources/tech/20180711 Becoming a senior developer 9 experiences you ll encounter.md @@ -1,5 +1,3 @@ -bestony is translating - Becoming a senior developer: 9 experiences you'll encounter ============================================================ diff --git a/sources/tech/20180723 Setting Up a Timer with systemd in Linux.md b/sources/tech/20180723 Setting Up a Timer with systemd in Linux.md index a6295faefe..27841dec61 100644 --- a/sources/tech/20180723 Setting Up a Timer with systemd in Linux.md +++ b/sources/tech/20180723 Setting Up a Timer with systemd in Linux.md @@ -1,4 +1,3 @@ -Translating by leemeans Setting Up a Timer with systemd in Linux ====== diff --git a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md b/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md index 4083023ca4..3144efd4ee 100644 --- a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md +++ b/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md @@ -1,4 +1,3 @@ -translating by wyxplus Building a network attached storage device with a Raspberry Pi ====== diff --git a/sources/tech/20180814 Top Linux developers- recommended programming books.md b/sources/tech/20180814 Top Linux developers- recommended programming books.md index f93ba6cadc..d9337ed319 100644 --- 
a/sources/tech/20180814 Top Linux developers- recommended programming books.md +++ b/sources/tech/20180814 Top Linux developers- recommended programming books.md @@ -1,5 +1,3 @@ -translated by stephenxs - Top Linux developers' recommended programming books ====== Without question, Linux was created by brilliant programmers who employed good computer science knowledge. Let the Linux programmers whose names you know share the books that got them started and the technology references they recommend for today's developers. How many of them have you read? From 65f974e58f7b5ed7038e198e22dcf809d695e83f Mon Sep 17 00:00:00 2001 From: jayli Date: Fri, 28 Sep 2018 17:38:47 +0800 Subject: [PATCH 100/736] moria translating --- ...80925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md b/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md index ae353bf11f..f542b15808 100644 --- a/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md +++ b/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md @@ -1,3 +1,4 @@ +【moria(knuth.fan at gmail.com)翻译中】 9 Easiest Ways To Find Out Process ID (PID) In Linux ====== Everybody knows about PID, Exactly what is PID? Why you want PID? What are you going to do using PID? Are you having the same questions on your mind? If so, you are in the right place to get all the details. 
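The PID article excerpt above stops at the teaser, so here is a small, hedged illustration using standard tools (common ways to find a PID, not necessarily the nine methods the article itself goes on to list): the shell's `$$` parameter, `ps`, and Linux's `/proc`.

```shell
# $$ is the special shell parameter that expands to the current shell's PID
mypid=$$
echo "current shell PID: $mypid"

# ps -p confirms the PID is live and names its command (guarded in case ps is absent)
command -v ps >/dev/null && ps -o pid=,comm= -p "$mypid"

# On Linux, every live PID also appears as a directory under /proc
[ -d "/proc/$mypid" ] && echo "PID $mypid is running"
```

`$$` and `ps` are POSIX; the `/proc` check is Linux-specific.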
From 3c2fbb3eeea2f3047c97719869a227f829415536 Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Fri, 28 Sep 2018 18:10:16 +0800 Subject: [PATCH 101/736] translated --- ...w to Install Cinnamon Desktop on Ubuntu.md | 80 ------------------- ...w to Install Cinnamon Desktop on Ubuntu.md | 77 ++++++++++++++++++ 2 files changed, 77 insertions(+), 80 deletions(-) delete mode 100644 sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md create mode 100644 translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md diff --git a/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md b/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md deleted file mode 100644 index 6c6e54934b..0000000000 --- a/sources/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md +++ /dev/null @@ -1,80 +0,0 @@ -dianbanjiu Tranting How to Install Cinnamon Desktop on Ubuntu -====== -**This tutorial shows you how to install Cinnamon desktop environment on Ubuntu.** - -[Cinnamon][1] is the default desktop environment of [Linux Mint][2]. Unlike Unity desktop environment in Ubuntu, Cinnamon is more traditional but elegant looking desktop environment with the bottom panel and app menu etc. Many Windows migrants [prefer Linux Mint over Ubuntu][3] because of Cinnamon desktop and its Windows-resembling user interface. - -Now, you don’t need to [install Linux Mint][4] just for trying Cinnamon. In this tutorial, I’ll show you **how to install Cinnamon in Ubuntu 18.04, 16.04 and 14.04**. - -You should note something before you install Cinnamon desktop on Ubuntu. Sometimes, installing additional desktop environments leads to conflict between the desktop environments. This may result in a broken session, broken applications and features etc. This is why you should be careful in making this choice. 
- -### How to Install Cinnamon on Ubuntu - -![How to install cinnamon desktop on Ubuntu Linux][5] - -There used to be a-sort-of official PPA from Cinnamon team for Ubuntu but it doesn’t exist anymore. Don’t lose heart. There is an unofficial PPA available and it works perfectly. This PPA consists of the latest Cinnamon version. - -Open a terminal and use the following commands: - -``` -sudo add-apt-repository ppa:embrosyn/cinnamon -sudo apt update && sudo apt install cinnamon - -``` - -It will download files of around 150 MB in size (if I remember correctly). This also provides you with Nemo (Nautilus fork) and Cinnamon Control Center. This bonus stuff gives a closer feel of Linux Mint. - -### Using Cinnamon desktop environment in Ubuntu - -Once you have installed Cinnamon, log out of the current session. At the login screen, click on the Ubuntu symbol beside the username: - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Change_Desktop_Environment_Ubuntu.jpeg) - -When you do this, it will give you all the desktop environments available for your system. No need to tell you that you have to choose Cinnamon: - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Install_Cinnamon_Ubuntu.jpeg) - -Now you should be logged in to Ubuntu with Cinnamon desktop environment. Remember, you can do the same to switch back to Unity. Here is a quick screenshot of what it looked like to run **Cinnamon in Ubuntu** : - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Cinnamon_Ubuntu_1404.jpeg) - -Looks completely like Linux Mint, isn’t it? I didn’t find any compatibility issue between Cinnamon and Unity. I switched back and forth between Unity and Cinnamon and both worked perfectly. - -#### Remove Cinnamon from Ubuntu - -It is understandable that you might want to uninstall Cinnamon. We will use PPA Purge for this purpose. 
Let’s install PPA Purge first: - -``` -sudo apt-get install ppa-purge - -``` - -Afterward, use the following command to purge the PPA: - -``` -sudo ppa-purge ppa:embrosyn/cinnamon - -``` - -In related articles, I suggest you to read more about [how to remove PPA in Linux][6]. - -I hope this post helps you to **install Cinnamon in Ubuntu**. Do share your experience with Cinnamon. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/install-cinnamon-on-ubuntu/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: http://cinnamon.linuxmint.com/ -[2]: http://www.linuxmint.com/ -[3]: https://itsfoss.com/linux-mint-vs-ubuntu/ -[4]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ -[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/install-cinnamon-ubuntu.png -[6]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/ diff --git a/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md b/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md new file mode 100644 index 0000000000..087cd3cea1 --- /dev/null +++ b/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md @@ -0,0 +1,77 @@ +# 如何在 Ubuntu 上安装 Cinnamon 桌面环境 + +**这篇教程将会为你展示如何在 Ubuntu 上安装 Cinnamon 桌面环境** + +[Cinnamon][1]是 [Linux Mint][2] 的默认桌面环境。不同于 Ubuntu 的 Unity 桌面环境,Cinnamon 通过底部面板和应用菜单等查看桌面信息的方式更加传统和优雅。由于 Cinnamon 桌面以及它类 Windows 的用户界面,许多桌面用户[相较于 Ubuntu 更喜欢 Linux Mint][3]。 + +现在你无需[安装 Linux Mint][4] 就能够体验到 Cinnamon了。在这篇教程,我将会展示给你 **如何在 Ubuntu 18.04,16.04 和 14.04 上安装 Cinnamon**。 + +在 Ubuntu 上安装 Cinnamon 之前,有一些事情需要你注意。有时候,安装的额外桌面环境可能会与你当前的桌面环境有冲突。可能导致会话,应用程序或功能等的崩溃。这就是为什么你需要在做这个决定时谨慎一点的原因。 + +![如何在 Ubuntu 上安装 Cinnamon 桌面环境][5] + 
+### 在 Ubuntu 上安装 Cinnamon + +过去曾有一个 Cinnamon 团队为 Ubuntu 提供的半官方 PPA,但现在已经失效了。不过不用担心,还有一个非官方的 PPA,而且它运行得很完美。这个 PPA 里包含了最新的 Cinnamon 版本。 + +打开终端,使用下面的命令: + +``` +sudo add-apt-repository ppa:embrosyn/cinnamon +sudo apt update && sudo apt install cinnamon + +``` + +下载的大小大概是 150 MB(如果我没记错的话)。它还会一并安装 Nemo(Cinnamon 的文件管理器,基于 Nautilus)和 Cinnamon 控制中心。这些东西提供了一个更加接近于 Linux Mint 的感觉。 + +### 在 Ubuntu 上使用 Cinnamon 桌面环境 + +Cinnamon安装完成后,退出当前会话,在登陆界面,点击用户名旁边的 Ubuntu 符号: + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Change_Desktop_Environment_Ubuntu.jpeg) + +之后,它将会显示所有系统可用的桌面环境。选择 Cinnamon。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Install_Cinnamon_Ubuntu.jpeg) + +现在你应该已经登陆到有着 Cinnamon 桌面环境的 Ubuntu 中了。你还可以通过同样的方式再回到 Unity 桌面。这里有一张以 Cinnaon 做为桌面环境的 Ubuntu 桌面截图。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Cinnamon_Ubuntu_1404.jpeg) + +看起来是不是像极了 Linux Mint。此外,我并没有发现任何有关 Cinnamon 和 Unity 的兼容性问题。在 Unity 和 Cinnamon 之间来回切换,它们也依旧工作得很完美。 + +#### 从 Ubuntu 卸载 Cinnamon + +如果你想卸载 Cinnamon,可以使用 PPA Purge 来完成。首先安装 PPA Purge: + +``` +sudo apt-get install ppa-purge + +``` + +安装完成之后,使用下面的命令去移除 PPA: + +``` +sudo ppa-purge ppa:embrosyn/cinnamon + +``` + +更多的信息,我建议你去阅读 [如何从 Linux 移除 PPA][6] 这篇文章。 + +我希望这篇文章能够帮助你在 Ubuntu 上安装 Cinnamon。也可以分享一下你使用 Cinnamon 的经验。 + +------ + +via: https://itsfoss.com/install-cinnamon-on-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: http://cinnamon.linuxmint.com/ +[2]: http://www.linuxmint.com/ +[3]: https://itsfoss.com/linux-mint-vs-ubuntu/ +[4]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ +[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/install-cinnamon-ubuntu.png +[6]: 
https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/ From 40c52f820375f2f58a66d13f0e80d4c8e29752af Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Fri, 28 Sep 2018 18:15:57 +0800 Subject: [PATCH 102/736] translated --- .../tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md b/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md index 087cd3cea1..a9f0690ff7 100644 --- a/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md +++ b/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md @@ -31,7 +31,7 @@ Cinnamon安装完成后,退出当前会话,在登陆界面,点击用户名 ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Install_Cinnamon_Ubuntu.jpeg) -现在你应该已经登陆到有着 Cinnamon 桌面环境的 Ubuntu 中了。你还可以通过同样的方式再回到 Unity 桌面。这里有一张以 Cinnaon 做为桌面环境的 Ubuntu 桌面截图。 +现在你应该已经登陆到有着 Cinnamon 桌面环境的 Ubuntu 中了。你还可以通过同样的方式再回到 Unity 桌面。这里有一张以 Cinnamon 做为桌面环境的 Ubuntu 桌面截图。 ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Cinnamon_Ubuntu_1404.jpeg) From 0a74c03586c9197656acba758bc8febe4fdfc3c1 Mon Sep 17 00:00:00 2001 From: zhousiyu325 Date: Fri, 28 Sep 2018 19:34:29 +0800 Subject: [PATCH 103/736] translate 20140412 My Lisp Experiences and the Development of GNU Emacs.md --- ...40412 My Lisp Experiences and the Development of GNU Emacs.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md b/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md index 7be913c3bf..9bed3b36c1 100644 --- a/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md +++ b/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md @@ -1,3 +1,4 @@ +Translating by zhousiyu325 My Lisp Experiences and the Development of GNU Emacs ====== From 2aa738ca23201466e14a2946aa375fa3a91ae24e 
Mon Sep 17 00:00:00 2001 From: heguangzhi <7731226@qq.com> Date: Fri, 28 Sep 2018 20:25:42 +0800 Subject: [PATCH 104/736] Translate 20180920 (#10405) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * heguangzhi translating Linux firewalls: What you need to know about iptables and firewalld * 翻译 * The firewall * Configure HTTP access using firewalld * Configure a locked-down customer kiosk using iptables * The script * Configuring iptables to load on system boot * translated Linux 防火墙: 关于 iptables 和 firewalld,你需要知道些什么 * translated Linux 防火墙: 关于 iptables 和 firewalld,你需要知道些什么 --- ...ed to know about iptables and firewalld.md | 170 ----------------- ...ed to know about iptables and firewalld.md | 178 ++++++++++++++++++ 2 files changed, 178 insertions(+), 170 deletions(-) delete mode 100644 sources/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md create mode 100644 translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md diff --git a/sources/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md b/sources/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md deleted file mode 100644 index 98e38a02cd..0000000000 --- a/sources/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md +++ /dev/null @@ -1,170 +0,0 @@ -heguangzhi translating - -Linux firewalls: What you need to know about iptables and firewalld -====== -Here's how to use the iptables and firewalld tools to manage Linux firewall connectivity rules. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab) -This article is excerpted from my book, [Linux in Action][1], and a second Manning project that’s yet to be released. - -### The firewall - -A firewall is a set of rules. 
When a data packet moves into or out of a protected network space, its contents (in particular, information about its origin, target, and the protocol it plans to use) are tested against the firewall rules to see if it should be allowed through. Here’s a simple example: - -![firewall filtering request][3] - -A firewall can filter requests based on protocol or target-based rules. - -On the one hand, [iptables][4] is a tool for managing firewall rules on a Linux machine. - -On the other hand, [firewalld][5] is also a tool for managing firewall rules on a Linux machine. - -You got a problem with that? And would it spoil your day if I told you that there was another tool out there, called [nftables][6]? - -OK, I’ll admit that the whole thing does smell a bit funny, so let me explain. It all starts with Netfilter, which controls access to and from the network stack at the Linux kernel module level. For decades, the primary command-line tool for managing Netfilter hooks was the iptables ruleset. - -Because the syntax needed to invoke those rules could come across as a bit arcane, various user-friendly implementations like [ufw][7] and firewalld were introduced as higher-level Netfilter interpreters. Ufw and firewalld are, however, primarily designed to solve the kinds of problems faced by stand-alone computers. Building full-sized network solutions will often require the extra muscle of iptables or, since 2014, its replacement, nftables (through the nft command line tool). - -iptables hasn’t gone anywhere and is still widely used. In fact, you should expect to run into iptables-protected networks in your work as an admin for many years to come. But nftables, by adding on to the classic Netfilter toolset, has brought some important new functionality. - -From here on, I’ll show by example how firewalld and iptables solve simple connectivity problems. 
- -### Configure HTTP access using firewalld - -As you might have guessed from its name, firewalld is part of the [systemd][8] family. Firewalld can be installed on Debian/Ubuntu machines, but it’s there by default on Red Hat and CentOS. If you’ve got a web server like Apache running on your machine, you can confirm that the firewall is working by browsing to your server’s web root. If the site is unreachable, then firewalld is doing its job. - -You’ll use the `firewall-cmd` tool to manage firewalld settings from the command line. Adding the `–state` argument returns the current firewall status: - -``` -# firewall-cmd --state -running -``` - -By default, firewalld will be active and will reject all incoming traffic with a couple of exceptions, like SSH. That means your website won’t be getting too many visitors, which will certainly save you a lot of data transfer costs. As that’s probably not what you had in mind for your web server, though, you’ll want to open the HTTP and HTTPS ports that by convention are designated as 80 and 443, respectively. firewalld offers two ways to do that. One is through the `–add-port` argument that references the port number directly along with the network protocol it’ll use (TCP in this case). The `–permanent` argument tells firewalld to load this rule each time the server boots: - -``` -# firewall-cmd --permanent --add-port=80/tcp -# firewall-cmd --permanent --add-port=443/tcp -``` - -The `–reload` argument will apply those rules to the current session: - -``` -# firewall-cmd --reload -``` - -Curious as to the current settings on your firewall? Run `–list-services`: - -``` -# firewall-cmd --list-services -dhcpv6-client http https ssh -``` - -Assuming you’ve added browser access as described earlier, the HTTP, HTTPS, and SSH ports should now all be open—along with `dhcpv6-client`, which allows Linux to request an IPv6 IP address from a local DHCP server. 
- -### Configure a locked-down customer kiosk using iptables - -I’m sure you’ve seen kiosks—they’re the tablets, touchscreens, and ATM-like PCs in a box that airports, libraries, and business leave lying around, inviting customers and passersby to browse content. The thing about most kiosks is that you don’t usually want users to make themselves at home and treat them like their own devices. They’re not generally meant for browsing, viewing YouTube videos, or launching denial-of-service attacks against the Pentagon. So to make sure they’re not misused, you need to lock them down. - -One way is to apply some kind of kiosk mode, whether it’s through clever use of a Linux display manager or at the browser level. But to make sure you’ve got all the holes plugged, you’ll probably also want to add some hard network controls through a firewall. In the following section, I'll describe how I would do it using iptables. - -There are two important things to remember about using iptables: The order you give your rules is critical, and by themselves, iptables rules won’t survive a reboot. I’ll address those here one at a time. - -### The kiosk project - -To illustrate all this, let’s imagine we work for a store that’s part of a larger chain called BigMart. They’ve been around for decades; in fact, our imaginary grandparents probably grew up shopping there. But these days, the guys at BigMart corporate headquarters are probably just counting the hours before Amazon drives them under for good. - -Nevertheless, BigMart’s IT department is doing its best, and they’ve just sent you some WiFi-ready kiosk devices that you’re expected to install at strategic locations throughout your store. The idea is that they’ll display a web browser logged into the BigMart.com products pages, allowing them to look up merchandise features, aisle location, and stock levels. The kiosks will also need access to bigmart-data.com, where many of the images and video media are stored. 
- -Besides those, you’ll want to permit updates and, whenever necessary, package downloads. Finally, you’ll want to permit inbound SSH access only from your local workstation, and block everyone else. The figure below illustrates how it will all work: - -![kiosk traffic flow ip tables][10] - -The kiosk traffic flow being controlled by iptables. - -### The script - -Here’s how that will all fit into a Bash script: - -``` -#!/bin/bash -iptables -A OUTPUT -p tcp -d bigmart.com -j ACCEPT -iptables -A OUTPUT -p tcp -d bigmart-data.com -j ACCEPT -iptables -A OUTPUT -p tcp -d ubuntu.com -j ACCEPT -iptables -A OUTPUT -p tcp -d ca.archive.ubuntu.com -j ACCEPT -iptables -A OUTPUT -p tcp --dport 80 -j DROP -iptables -A OUTPUT -p tcp --dport 443 -j DROP -iptables -A INPUT -p tcp -s 10.0.3.1 --dport 22 -j ACCEPT -iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 22 -j DROP -``` - -The basic anatomy of our rules starts with `-A`, telling iptables that we want to add the following rule. `OUTPUT` means that this rule should become part of the OUTPUT chain. `-p` indicates that this rule will apply only to packets using the TCP protocol, where, as `-d` tells us, the destination is [bigmart.com][11]. The `-j` flag points to `ACCEPT` as the action to take when a packet matches the rule. In this first rule, that action is to permit, or accept, the request. But further down, you can see requests that will be dropped, or denied. - -Remember that order matters. And that’s because iptables will run a request past each of its rules, but only until it gets a match. So an outgoing browser request for, say, [youtube.com][12] will pass the first four rules, but when it gets to either the `–dport 80` or `–dport 443` rule—depending on whether it’s an HTTP or HTTPS request—it’ll be dropped. iptables won’t bother checking any further because that was a match. - -On the other hand, a system request to ubuntu.com for a software upgrade will get through when it hits its appropriate rule. 
What we’re doing here, obviously, is permitting outgoing HTTP or HTTPS requests to only our BigMart or Ubuntu destinations and no others. - -The final two rules will deal with incoming SSH requests. They won’t already have been denied by the two previous drop rules since they don’t use ports 80 or 443, but 22. In this case, login requests from my workstation will be accepted but requests for anywhere else will be dropped. This is important: Make sure the IP address you use for your port 22 rule matches the address of the machine you’re using to log in—if you don’t do that, you’ll be instantly locked out. It's no big deal, of course, because the way it’s currently configured, you could simply reboot the server and the iptables rules will all be dropped. If you’re using an LXC container as your server and logging on from your LXC host, then use the IP address your host uses to connect to the container, not its public address. - -You’ll need to remember to update this rule if my machine’s IP ever changes; otherwise, you’ll be locked out. - -Playing along at home (hopefully on a throwaway VM of some sort)? Great. Create your own script. Now I can save the script, use `chmod` to make it executable, and run it as `sudo`. Don’t worry about that `bigmart-data.com not found` error—of course it’s not found; it doesn’t exist. - -``` -chmod +X scriptname.sh -sudo ./scriptname.sh -``` - -You can test your firewall from the command line using `cURL`. Requesting ubuntu.com works, but [manning.com][13] fails. - -``` -curl ubuntu.com -curl manning.com -``` - -### Configuring iptables to load on system boot - -Now, how do I get these rules to automatically load each time the kiosk boots? The first step is to save the current rules to a .rules file using the `iptables-save` tool. That’ll create a file in the root directory containing a list of the rules. 
The pipe, followed by the tee command, is necessary to apply my `sudo` authority to the second part of the string: the actual saving of a file to the otherwise restricted root directory. - -I can then tell the system to run a related tool called `iptables-restore` every time it boots. A regular cron job of the kind we saw in the previous module won’t help because they’re run at set times, but we have no idea when our computer might decide to crash and reboot. - -There are lots of ways to handle this problem. Here’s one: - -On my Linux machine, I’ll install a program called [anacron][14] that will give us a file in the /etc/ directory called anacrontab. I’ll edit the file and add this `iptables-restore` command, telling it to load the current values of that .rules file into iptables each day (when necessary) one minute after a boot. I’ll give the job an identifier (`iptables-restore`) and then add the command itself. Since you’re playing along with me at home, you should test all this out by rebooting your system. - -``` -sudo iptables-save | sudo tee /root/my.active.firewall.rules -sudo apt install anacron -sudo nano /etc/anacrontab -1 1 iptables-restore iptables-restore < /root/my.active.firewall.rules - -``` - -I hope these practical examples have illustrated how to use iptables and firewalld for managing connectivity issues on Linux-based firewalls. 
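The boot-time restore recipe above needs root to apply for real. The sketch below — an editor's illustration, not from the article — stages the same anacrontab line in a temporary directory where it can be inspected without touching `/etc`:

```shell
# Stage the anacrontab fragment from the article in a throwaway directory.
# The rules path matches the one used in the text.
stage=$(mktemp -d)
cat > "$stage/anacrontab.fragment" <<'EOF'
1 1 iptables-restore iptables-restore < /root/my.active.firewall.rules
EOF
cat "$stage/anacrontab.fragment"
```

On Debian and Ubuntu, the `iptables-persistent` package offers a packaged alternative to this hand-rolled anacron approach: it saves rules to `/etc/iptables/` and restores them at boot via a service.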
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/linux-iptables-firewalld - -作者:[David Clinton][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/remyd -[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource -[2]: /file/409116 -[3]: https://opensource.com/sites/default/files/uploads/iptables1.jpg (firewall filtering request) -[4]: https://en.wikipedia.org/wiki/Iptables -[5]: https://firewalld.org/ -[6]: https://wiki.nftables.org/wiki-nftables/index.php/Main_Page -[7]: https://en.wikipedia.org/wiki/Uncomplicated_Firewall -[8]: https://en.wikipedia.org/wiki/Systemd -[9]: /file/409121 -[10]: https://opensource.com/sites/default/files/uploads/iptables2.jpg (kiosk traffic flow ip tables) -[11]: http://bigmart.com/ -[12]: http://youtube.com/ -[13]: http://manning.com/ -[14]: https://sourceforge.net/projects/anacron/ diff --git a/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md b/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md new file mode 100644 index 0000000000..c3ecb7b1d3 --- /dev/null +++ b/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md @@ -0,0 +1,178 @@ +Linux 防火墙: 关于 iptables 和 firewalld,你需要知道些什么 +====== + +以下是如何使用 iptables 和 firewalld 工具来管理 Linux 防火墙规则。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab) +这篇文章摘自我的书[Linux in Action][1],第二 Manning project 尚未发布。 + +### 防火墙 + + +防火墙是一组规则。当数据包进出受保护的网络时,进出内容(特别是关于其来源、目标和使用的协议等信息)会根据防火墙规则进行检测,以确定是否允许其通过。下面是一个简单的例子: + + +![防火墙过滤请求] [3] + +防火墙可以根据协议或基于目标的规则过滤请求。 + +一方面, 
[iptables][4] 是 Linux 机器上管理防火墙规则的工具。 + +另一方面,[firewalld][5]也是 Linux 机器上管理防火墙规则的工具。 + +你有什么问题吗?如果我告诉你还有另外一种工具,叫做 [nftables][6],这会不会糟蹋你的一天呢? + +好吧,我承认整件事确实有点好笑,所以让我解释一下。这一切都要从 Netfilter 说起,它在 Linux 内核模块层面控制着对网络栈的访问。几十年来,管理 Netfilter 钩子的主要命令行工具是 iptables 规则集。 + +因为调用这些规则所需的语法看起来有点晦涩难懂,所以各种用户友好的实现方式,如 [ufw][7] 和 firewalld 被引入,作为更高级别的 Netfilter 解释器。然而,ufw 和 firewalld 主要是为解决独立计算机面临的各种问题而设计的。构建全功能的网络解决方案通常需要 iptables,或者从 2014 年起,它的替代品 nftables(通过 nft 命令行工具)。 + + +iptables 没有消失,仍然被广泛使用着。事实上,作为一名管理员,在未来的许多年里,你应该还会在工作中遇到由 iptables 保护的网络。但是 nftables 在经典 Netfilter 工具集的基础上,带来了一些重要的新功能。 + + +从现在开始,我将通过示例展示 firewalld 和 iptables 如何解决简单的连接问题。 + +### 使用 firewalld 配置 HTTP 访问 + +正如你能从它的名字中猜到的,firewalld 是 [systemd][8] 家族的一部分。Firewalld 可以安装在 Debian/Ubuntu 机器上,不过,它在 Red Hat 和 CentOS 上是默认安装的。如果您的计算机上运行着像 Apache 这样的 web 服务器,您可以通过浏览服务器的 web 根目录来确认防火墙是否正在工作。如果网站不可访问,那么 firewalld 正在工作。 + +你可以使用 `firewall-cmd` 工具从命令行管理 firewalld 设置。添加 `--state` 参数将返回当前防火墙的状态: + +``` +# firewall-cmd --state +running +``` + +默认情况下,firewalld 将处于运行状态,并将拒绝所有传入流量,但有几个例外,如 SSH。这意味着你的网站不会有太多的访问者,这无疑会为你节省大量的数据传输成本。然而,这大概不是你搭建 web 服务器的本意,所以你会希望打开 HTTP 和 HTTPS 端口,按照惯例,这两个端口分别被指定为 80 和 443。firewalld 提供了两种方法来实现这个功能。一个是通过 `--add-port` 参数,该参数直接引用端口号及其将使用的网络协议(在本例中为 TCP)。另一个是通过 `--permanent` 参数,它告诉 firewalld 在每次服务器启动时加载此规则: + + +``` +# firewall-cmd --permanent --add-port=80/tcp +# firewall-cmd --permanent --add-port=443/tcp +``` + +`--reload` 参数将这些规则应用于当前会话: + +``` +# firewall-cmd --reload +``` + +要查看当前防火墙的设置,运行 `--list-services`: + +``` +# firewall-cmd --list-services +dhcpv6-client http https ssh +``` + +假设您已经如前所述添加了浏览器访问,那么 HTTP、HTTPS 和 SSH 端口现在都应该是开放的——还有 `dhcpv6-client`,它允许 Linux 从本地 DHCP 服务器请求 IPv6 地址。 + +### 使用 iptables 配置锁定的客户信息亭 + +我相信你已经看到过信息亭——它们是放在机场、图书馆和商务场所的平板电脑、触摸屏和类似 ATM 的电脑,邀请顾客和路人浏览内容。大多数信息亭的问题在于,你通常不希望用户随心所欲,把它们当成自己的设备来用。它们通常不是用来浏览、观看 YouTube 视频或对五角大楼发起拒绝服务攻击的。因此,为了确保它们没有被滥用,你需要锁定它们。 + + +一种方法是应用某种信息亭模式,无论是巧妙利用 Linux 显示管理器,还是在浏览器层面实现。但是为了确保你已经堵塞了所有的漏洞,你可能还想通过防火墙添加一些硬性的网络控制。在下一节中,我将讲解如何使用 iptables 来完成。 + 
+关于使用 iptables,有两件重要的事情需要记住:你排列规则的顺序非常关键;而且 iptables 规则本身在重启后不会被保留。下面我会逐一说明。 + +### 信息亭项目 + +为了说明这一切,让我们想象一下,我们为一家名为 BigMart 的大型连锁商店工作。它们已经存在了几十年;事实上,我们想象中的祖父母可能是在那里购物并长大的。但是这些天,BigMart 公司总部的人可能只是在数着亚马逊将他们永远赶下去的时间。 + +尽管如此,BigMart 的 IT 部门正在尽他们最大努力提供解决方案,他们向你发放了一些具有 WiFi 功能的信息亭设备,要你安放在整个商店的战略位置上。其想法是,它们会显示一个登录到 BigMart.com 产品页面的 web 浏览器,供顾客查找商品特征、过道位置和库存水平。信息亭还需要访问 bigmart-data.com,那里储存着许多图像和视频媒体信息。 + +除此之外,您还需要允许下载软件包更新。最后,您还希望只允许从本地工作站访问 SSH,并阻止其他人登录。下图说明了它将如何工作: + +![信息亭流量与 iptables][10] + +信息亭业务流由 iptables 控制。 + +### 脚本 + +以下是 Bash 脚本内容: + +``` +#!/bin/bash +iptables -A OUTPUT -p tcp -d bigmart.com -j ACCEPT +iptables -A OUTPUT -p tcp -d bigmart-data.com -j ACCEPT +iptables -A OUTPUT -p tcp -d ubuntu.com -j ACCEPT +iptables -A OUTPUT -p tcp -d ca.archive.ubuntu.com -j ACCEPT +iptables -A OUTPUT -p tcp --dport 80 -j DROP +iptables -A OUTPUT -p tcp --dport 443 -j DROP +iptables -A INPUT -p tcp -s 10.0.3.1 --dport 22 -j ACCEPT +iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 22 -j DROP +``` + +我们从基本规则 `-A` 开始分析,它告诉 iptables 我们要添加一条规则。`OUTPUT` 意味着这条规则应该成为 OUTPUT 链的一部分。`-p` 表示该规则仅适用于使用 TCP 协议的数据包,而正如 `-d` 告诉我们的,其目的地址是 [bigmart.com][11]。`-j` 标志指明了当数据包匹配规则时要采取的动作,这里是 `ACCEPT`。在第一条规则中,这个动作是允许或者说接受该请求。但在后面的规则里,你可以看到有些请求会被丢弃或者说拒绝。 + +请记住,规则的顺序很重要。iptables 会拿请求依次与每条规则比对,但只比对到第一条匹配成功的规则为止。一个向外发出的浏览器请求,比如访问 [youtube.com][12],会顺利通过前四条规则,但到达 `--dport 80` 或 `--dport 443` 规则时——取决于它是 HTTP 还是 HTTPS 请求——就会被丢弃。iptables 不会再继续检查,因为已经匹配成功了。 + +另一方面,向 ubuntu.com 发出软件升级的系统请求,只要符合其适当的规则,就会通过。显然,我们在这里做的是,只允许向我们的 BigMart 或 Ubuntu 发送 HTTP 或 HTTPS 请求,而不允许向其他目的地发送。 + +最后两条规则将处理 SSH 请求。因为它不使用 80 或 443 端口,而是使用 22 端口,所以之前的两条丢弃规则不会拒绝它。在这种情况下,来自我的工作站的登录请求将被接受,而来自其他任何地方的请求将被丢弃。这一点很重要:确保用于 22 端口规则的 IP 地址与您用来登录的机器的地址相匹配——如果不这样做,将立即被锁在外面。当然,这没什么大不了的,因为按照目前的配置方式,只需重启服务器,iptables 规则就会全部丢失。如果使用 LXC 容器作为服务器并从 LXC 主机登录,则使用主机连接容器所用的 IP 地址,而不是其公共地址。 + +如果机器的 IP 发生变化,请记住更新这条规则;否则,你会被拒绝访问。 + +想在家跟着做(最好是在某台一次性的虚拟机上)?很好。创建你自己的脚本。现在我可以保存脚本,使用 `chmod` 使其可执行,并以 `sudo` 运行它。不要担心那个 `bigmart-data.com 没找到` 的错误——当然找不到;它并不存在。 + +``` +chmod +x scriptname.sh +sudo ./scriptname.sh +``` + +你可以使用 `cURL` 命令行测试防火墙。请求 ubuntu.com 奏效,但请求 
[manning.com][13]是失败的。 + +``` +curl ubuntu.com +curl manning.com +``` + +### 配置 iptables 以在系统启动时加载 + +现在,我如何让这些规则在每次信息亭启动时自动加载?第一步是使用 `iptables-save` 工具把当前规则保存到一个 .rules 文件中。这会在 root 用户的家目录下创建一个包含规则列表的文件。管道后面跟着的 tee 命令,是为了把我的 `sudo` 权限应用到这串命令的第二部分:即把文件真正写入到受限的 /root 目录中。 + +然后我可以告诉系统每次启动时运行一个相关的工具,叫做 `iptables-restore`。我们在上一模块中看到的那种常规 cron 作业帮不上忙,因为它们只在设定的时间运行,而我们无法预知计算机什么时候会崩溃并重启。 + +有许多方法来处理这个问题。这里有一个: + + +在我的 Linux 机器上,我将安装一个名为 [anacron][14] 的程序,该程序会在 /etc/ 目录中为我们提供一个名为 anacrontab 的文件。我将编辑该文件并加入这条 `iptables-restore` 命令,让它在系统引导完成一分钟后,每天(必要时)把那个 .rules 文件的当前内容加载到 iptables 中。我会给这个作业一个标识符(`iptables-restore`),然后添加命令本身。如果你在家跟着我一起做,你应该重启系统来测试一下。 + +``` +sudo iptables-save | sudo tee /root/my.active.firewall.rules +sudo apt install anacron +sudo nano /etc/anacrontab +1 1 iptables-restore iptables-restore < /root/my.active.firewall.rules + +``` + +我希望这些实际例子已经说明了如何使用 iptables 和 firewalld 来管理基于 Linux 的防火墙上的连接问题。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/linux-iptables-firewalld + +作者:[David Clinton][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/remyd +[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource +[2]: /file/409116 +[3]: https://opensource.com/sites/default/files/uploads/iptables1.jpg (firewall filtering request) +[4]: https://en.wikipedia.org/wiki/Iptables +[5]: https://firewalld.org/ +[6]: https://wiki.nftables.org/wiki-nftables/index.php/Main_Page +[7]: https://en.wikipedia.org/wiki/Uncomplicated_Firewall +[8]: https://en.wikipedia.org/wiki/Systemd +[9]: /file/409121 +[10]: https://opensource.com/sites/default/files/uploads/iptables2.jpg (kiosk traffic flow ip tables) +[11]: http://bigmart.com/ +[12]: http://youtube.com/ +[13]: http://manning.com/ 
+[14]: https://sourceforge.net/projects/anacron/ From b5c72f4681d1e28fd08151b00351a4b78764bf01 Mon Sep 17 00:00:00 2001 From: FelixYFZ <33593534+FelixYFZ@users.noreply.github.com> Date: Fri, 28 Sep 2018 20:36:00 +0800 Subject: [PATCH 105/736] Update 20180308 20 questions DevOps job candidates should be prepared to answer.md (#10416) --- ...stions DevOps job candidates should be prepared to answer.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md b/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md index da43855266..beb6f372b9 100644 --- a/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md +++ b/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md @@ -1,4 +1,4 @@ -20 questions DevOps job candidates should be prepared to answer +20 questions DevOps job candidates should be prepared to answer Translating by FelixYFZ ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hire-job-career.png?itok=SrZo0QJ3) From 7e26d77971579542e5662cda40742cafd28b8a75 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=AB=A0=E5=86=9B?= Date: Fri, 28 Sep 2018 20:41:52 +0800 Subject: [PATCH 106/736] adopt article - How to Find And delete duplicate file in linux --- .../20180927 How To Find And Delete Duplicate Files In Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md b/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md index e3a0a9d561..dd45caa589 100644 --- a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md +++ b/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md @@ -1,3 +1,5 @@ +translating by singledo + How To Find And Delete Duplicate Files In Linux ====== From 
6dd5d1372fdbfdb4c71cdbc0ed33cbae26829a30 Mon Sep 17 00:00:00 2001 From: zhousiyu325 Date: Fri, 28 Sep 2018 21:16:29 +0800 Subject: [PATCH 107/736] there exist a chinese article for it , so i plan to change one --- ...40412 My Lisp Experiences and the Development of GNU Emacs.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md b/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md index 9bed3b36c1..7be913c3bf 100644 --- a/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md +++ b/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md @@ -1,4 +1,3 @@ -Translating by zhousiyu325 My Lisp Experiences and the Development of GNU Emacs ====== From 9adba22715d59ad2fe4110b29cc7d2022961263d Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 29 Sep 2018 08:50:43 +0800 Subject: [PATCH 108/736] translated --- ...estore Them On Freshly Installed Ubuntu.md | 109 ------------------ ...estore Them On Freshly Installed Ubuntu.md | 107 +++++++++++++++++ 2 files changed, 107 insertions(+), 109 deletions(-) delete mode 100644 sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md create mode 100644 translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md diff --git a/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md b/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md deleted file mode 100644 index c775fd5040..0000000000 --- a/sources/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md +++ /dev/null @@ -1,109 +0,0 @@ -translating---geekpi - -Backup Installed Packages And Restore Them On Freshly Installed Ubuntu -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/apt-clone-720x340.png) - -Installing the same set of 
packages on multiple Ubuntu systems is time consuming and boring task. You don’t want to spend your time to install the same packages over and over on multiple systems. When it comes to install packages on similar architecture Ubuntu systems, there are many methods available to make this task easier. You could simply migrate your old Ubuntu system’s applications, settings and data to a newly installed system with a couple mouse clicks using [**Aptik**][1]. Or, you can take the [**backup entire list of installed packages**][2] using your package manager (Eg. APT), and install them later on a freshly installed system. Today, I learned that there is also yet another dedicated utility available to do this job. Say hello to **apt-clone** , a simple tool that lets you to create a list of installed packages for Debian/Ubuntu systems that can be restored on freshly installed systems or containers or into a directory. - -Apt-clone will help you on situations where you want to, - - * Install consistent applications across multiple systems running with similar Ubuntu (and derivatives) OS. - * Install same set of packages on multiple systems often. - * Backup the entire list of installed applications and restore them on demand wherever and whenever necessary. - - - -In this brief guide, we will be discussing how to install and use Apt-clone on Debian-based systems. I tested this utility on Ubuntu 18.04 LTS system, however it should work on all Debian and Ubuntu-based systems. - -### Backup Installed Packages And Restore Them Later On Freshly Installed Ubuntu System - -Apt-clone is available in the default repositories. To install it, just enter the following command from the Terminal: - -``` -$ sudo apt install apt-clone -``` - -Once installed, simply create the list of installed packages and save them in any location of your choice. 
- -``` -$ mkdir ~/mypackages - -$ sudo apt-clone clone ~/mypackages -``` - -The above command saved all installed packages in my Ubuntu system in a file named **apt-clone-state-ubuntuserver.tar.gz** under **~/mypackages** directory. - -To view the details of the backup file, run: - -``` -$ apt-clone info mypackages/apt-clone-state-ubuntuserver.tar.gz -Hostname: ubuntuserver -Arch: amd64 -Distro: bionic -Meta: -Installed: 516 pkgs (33 automatic) -Date: Sat Sep 15 10:23:05 2018 -``` - -As you can see, I have 516 packages in total in my Ubuntu server. - -Now, copy this file on your USB or external drive and go to any other system that want to install the same set of packages. Or you can also transfer the backup file to the system on the network and install the packages by using the following command: - -``` -$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz -``` - -Please be mindful that this command will overwrite your existing **/etc/apt/sources.list** and will install/remove packages. You have been warned! Also, just make sure the destination system is on same arch and same OS. For example, if the source system is running with 18.04 LTS 64bit, the destination system must also has the same. - -If you don’t want to restore packages on the system, you can simply use `--destination /some/location` option to debootstrap the clone into this directory. - -``` -$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz --destination ~/oldubuntu -``` - -In this case, the above command will restore the packages in a folder named **~/oldubuntu**. - -For more details, refer help section: - -``` -$ apt-clone -h -``` - -Or, man pages: - -``` -$ man apt-clone -``` - -**Suggested read:** - -+ [Systemback – Restore Ubuntu Desktop and Server to previous state][3] -+ [Cronopete – An Apple’s Time Machine Clone For Linux][4] - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! 
- - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/backup-installed-packages-and-restore-them-on-freshly-installed-ubuntu-system/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/how-to-migrate-system-settings-and-data-from-an-old-system-to-a-newly-installed-ubuntu-system/ -[2]: https://www.ostechnix.com/create-list-installed-packages-install-later-list-centos-ubuntu/#comment-12598 - -[3]: https://www.ostechnix.com/systemback-restore-ubuntu-desktop-and-server-to-previous-state/ - -[4]: https://www.ostechnix.com/cronopete-apples-time-machine-clone-linux/ diff --git a/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md b/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md new file mode 100644 index 0000000000..1b21607ee9 --- /dev/null +++ b/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md @@ -0,0 +1,107 @@ +备份安装包并在全新安装的 Ubuntu 上恢复它们 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/apt-clone-720x340.png) + +在多个 Ubuntu 系统上安装同一组软件包是一项耗时且无聊的任务。你不会想花时间在多个系统上反复安装相同的软件包。在类似架构的 Ubuntu 系统上安装软件包时,有许多方法可以使这项任务更容易。你可以方便地通过 [**Aptik**][1] 并点击几次鼠标将以前的 Ubuntu 系统的应用程序、设置和数据迁移到新安装的系统中。或者,你可以使用软件包管理器(例如 APT)获取[**备份的已安装软件包的完整列表**][2],然后在新安装的系统上安装它们。今天,我了解到还有另一个专用工具可以完成这项工作。来看一下 **apt-clone**,这是一个简单的工具,可以让你为 Debian/Ubuntu 系统创建一个已安装的软件包列表,这些软件包可以在新安装的系统或容器上或目录中恢复。 + +Apt-clone 会帮助你处理你想要的情况, + + * 在运行类似 Ubuntu(及衍生版)的多个系统上安装一致的应用程序。 +  * 经常在多个系统上安装相同的软件包。 +  * 备份已安装的应用程序的完整列表,并在需要时随时随地恢复它们。 + + + +在本简要指南中,我们将讨论如何在基于 Debian 的系统上安装和使用 Apt-clone。我在 Ubuntu 18.04 LTS 上测试了这个程序,但它应该适用于所有基于 Debian 和 Ubuntu 的系统。 + 
+### 备份已安装的软件包并在新安装的 Ubuntu 上恢复它们 + +Apt-clone 在默认仓库中有。要安装它,只需在终端输入以下命令: + +``` +$ sudo apt install apt-clone +``` + +安装后,只需创建已安装软件包的列表,并将其保存在你选择的任何位置。 + +``` +$ mkdir ~/mypackages + +$ sudo apt-clone clone ~/mypackages +``` + +上面的命令将我的 Ubuntu 中所有已安装的软件包保存在 **~/mypackages** 目录下名为 **apt-clone-state-ubuntuserver.tar.gz** 的文件中。 + +要查看备份文件的详细信息,请运行: + +``` +$ apt-clone info mypackages/apt-clone-state-ubuntuserver.tar.gz +Hostname: ubuntuserver +Arch: amd64 +Distro: bionic +Meta: +Installed: 516 pkgs (33 automatic) +Date: Sat Sep 15 10:23:05 2018 +``` + +如你所见,我的 Ubuntu 服务器总共有 516 个包。 + +现在,将此文件复制到 USB 或外部驱动器上,并转至要安装同一套软件包的任何其他系统。或者,你也可以将备份文件传输到网络上的系统,并使用以下命令安装软件包: + +``` +$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz +``` + +请注意,此命令将覆盖你现有的 **/etc/apt/sources.list** 并将安装/删除软件包。警告过你了!此外,只需确保目标系统是相同的架构和操作系统。例如,如果源系统是 18.04 LTS 64位,那么目标系统必须也是相同的。 + +如果你不想在系统上恢复软件包,可以使用 `--destination /some/location` 选项将克隆复制到这个文件夹中。 + +``` +$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz --destination ~/oldubuntu +``` + +在此例中,上面的命令将软件包恢复到 **~/oldubuntu** 中。 + +有关详细信息,请参阅帮助部分: + +``` +$ apt-clone -h +``` + +或者手册页: + +``` +$ man apt-clone +``` + +**建议阅读:** + ++ [Systemback - 将 Ubuntu 桌面版和服务器版恢复到以前的状态][3] ++ [Cronopete - Linux 下的苹果时间机器][4] + +就是这些了。希望这个有用。还有更多好东西。敬请期待! + +干杯! 
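上面的流程还可以补充一个小技巧:apt-clone 生成的备份其实就是一个普通的 tar.gz 归档,在把它拷到 U 盘或传到其它系统之前,可以先用 tar 确认归档完整可读。下面是一个可以直接运行的示意脚本(注意:这里用临时生成的占位归档代替真正的 apt-clone-state-*.tar.gz 文件,归档内的文件名也只是示意):

```shell
#!/bin/sh
# 示意脚本:在传输备份文件之前,用 tar 检查归档是否完整可读。
# 这里临时生成一个占位归档来代替真正的 apt-clone 备份文件。
set -eu
work="$(mktemp -d)"
mkdir -p "$work/mypackages"
printf 'placeholder package list\n' > "$work/installed.pkgs"
tar -czf "$work/mypackages/apt-clone-state-demo.tar.gz" -C "$work" installed.pkgs
# 拷贝或传输之前,先列出归档内容确认其完整性:
tar -tzf "$work/mypackages/apt-clone-state-demo.tar.gz"
```

对真正的备份文件,把路径换成 ~/mypackages/ 下的 apt-clone-state 文件即可;如果 tar 报错,说明备份文件已损坏,应重新执行 apt-clone clone。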
+ + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/backup-installed-packages-and-restore-them-on-freshly-installed-ubuntu-system/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/how-to-migrate-system-settings-and-data-from-an-old-system-to-a-newly-installed-ubuntu-system/ +[2]: https://www.ostechnix.com/create-list-installed-packages-install-later-list-centos-ubuntu/#comment-12598 + +[3]: https://www.ostechnix.com/systemback-restore-ubuntu-desktop-and-server-to-previous-state/ + +[4]: https://www.ostechnix.com/cronopete-apples-time-machine-clone-linux/ From dd493f766bc1d9f1cd860d50de41705608caddb3 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 29 Sep 2018 09:00:12 +0800 Subject: [PATCH 109/736] translating --- ...Clinews - Read News And Latest Headlines From Commandline.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md index 24ae89f461..b7082ea141 100644 --- a/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md +++ b/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md @@ -1,3 +1,5 @@ +translating----geekpi + Clinews – Read News And Latest Headlines From Commandline ====== From e565ab6f6dc99ea7d32b0e37d17b3282e11c8c0a Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 29 Sep 2018 09:54:59 +0800 Subject: [PATCH 110/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=205=20cool=20tiling?= =?UTF-8?q?=20window=20managers?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- 
.../20180927 5 cool tiling window managers.md | 87 +++++++++++++++++++ 1 file changed, 87 insertions(+) create mode 100644 sources/tech/20180927 5 cool tiling window managers.md diff --git a/sources/tech/20180927 5 cool tiling window managers.md b/sources/tech/20180927 5 cool tiling window managers.md new file mode 100644 index 0000000000..f687918c65 --- /dev/null +++ b/sources/tech/20180927 5 cool tiling window managers.md @@ -0,0 +1,87 @@ +5 cool tiling window managers +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/tilingwindowmanagers-816x345.jpg) +The Linux desktop ecosystem offers multiple window managers (WMs). Some are developed as part of a desktop environment. Others are meant to be used as standalone applications. This is the case for tiling WMs, which offer a more lightweight, customized environment. This article presents five such tiling WMs for you to try out. + +### i3 + +[i3][1] is one of the most popular tiling window managers. Like most other such WMs, i3 focuses on low resource consumption and customizability by the user. + +You can refer to [this previous article in the Magazine][2] to get started with i3 installation details and how to configure it. + +### sway + +[sway][3] is a tiling Wayland compositor. It has the advantage of compatibility with an existing i3 configuration, so you can use it to replace i3 and use Wayland as the display protocol. + +You can use dnf to install sway from the Fedora repository: + +``` +$ sudo dnf install sway +``` + +If you want to migrate from i3 to sway, there’s a small [migration guide][4] available. + +### Qtile + +[Qtile][5] is another tiling manager that also happens to be written in Python. By default, you configure Qtile in a Python script located under ~/.config/qtile/config.py. When this script is not available, Qtile uses a default [configuration][6]. + +One of the benefits of Qtile being in Python is that you can write scripts to control the WM.
For example, the following script prints the screen details: + +``` +> from libqtile.command import Client +> c = Client() +> print(c.screen.info) +{'index': 0, 'width': 1920, 'height': 1006, 'x': 0, 'y': 0} +``` + +To install Qtile on Fedora, use the following command: + +``` +$ sudo dnf install qtile +``` + +### dwm + +The [dwm][7] window manager focuses more on being lightweight. One goal of the project is to keep dwm minimal and small. For example, the entire code base never exceeded 2000 lines of code. On the other hand, dwm isn’t as easy to customize and configure. Indeed, the only way to change dwm’s default configuration is to [edit the source code and recompile the application][8]. + +If you want to try the default configuration, you can install dwm in Fedora using dnf: + +``` +$ sudo dnf install dwm +``` + +For those who want to change their dwm configuration, the dwm-user package is available in Fedora. This package automatically recompiles dwm using the configuration stored in the user home directory at ~/.dwm/config.h. + +### awesome + +[awesome][9] originally started as a fork of dwm, to provide configuration of the WM using an external configuration file. The configuration is done via Lua scripts, which allow you to write scripts to automate tasks or create widgets.
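Since awesome’s configuration lives in a Lua script, the usual first step before customizing it is to copy the system-wide rc.lua into your own home directory and edit that copy. Below is a side-effect-free sketch of that step; the conventional /etc/xdg/awesome/rc.lua location is only simulated here with scratch directories, so treat the exact system path as an assumption that may vary by packaging:

```shell
#!/bin/sh
# Sketch: seed a per-user awesome configuration from the system-wide rc.lua.
# Scratch directories stand in for $HOME and /etc/xdg/awesome, so running
# this touches nothing real on your system.
set -eu
HOME="$(mktemp -d)"
sysdir="$(mktemp -d)"   # stands in for /etc/xdg/awesome
printf '%s\n' '-- awesome window manager configuration' > "$sysdir/rc.lua"
mkdir -p "$HOME/.config/awesome"
cp "$sysdir/rc.lua" "$HOME/.config/awesome/rc.lua"
echo "edit $HOME/.config/awesome/rc.lua to customize awesome"
```

On a real installation you would copy from the packaged rc.lua instead of the simulated one, and awesome picks the user copy up on its next restart.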
+ +You can check out awesome on Fedora by installing it like this: + +``` +$ sudo dnf install awesome +``` + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/5-cool-tiling-window-managers/ + +作者:[Clément Verna][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org +[1]: https://i3wm.org/ +[2]: https://fedoramagazine.org/getting-started-i3-window-manager/ +[3]: https://swaywm.org/ +[4]: https://github.com/swaywm/sway/wiki/i3-Migration-Guide +[5]: http://www.qtile.org/ +[6]: https://github.com/qtile/qtile/blob/develop/libqtile/resources/default_config.py +[7]: https://dwm.suckless.org/ +[8]: https://dwm.suckless.org/customisation/ +[9]: https://awesomewm.org/ From 1017866948187e299f8713c55e2c5798cf7995fe Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 29 Sep 2018 10:00:01 +0800 Subject: [PATCH 111/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20Free=20And=20?= =?UTF-8?q?Secure=20Online=20PDF=20Conversion=20Suite?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
And Secure Online PDF Conversion Suite.md | 111 ++++++++++++++++++ 1 file changed, 111 insertions(+) create mode 100644 sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md diff --git a/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md b/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md new file mode 100644 index 0000000000..afb66e43ee --- /dev/null +++ b/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md @@ -0,0 +1,111 @@ +A Free And Secure Online PDF Conversion Suite +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-720x340.jpg) + +We are always in search of a better and more efficient solution that can make our lives more convenient. That is why, when you are working with PDF documents, you need a fast and reliable tool that you can use in every situation. Therefore, we wanted to introduce you to the **EasyPDF** Online PDF Suite for every occasion. The promise behind this tool is that it can make your PDF management easier, and we tested it to check that claim. + +But first, here are the most important things you need to know about EasyPDF: + + * EasyPDF is a free and anonymous online PDF conversion suite. + * Convert PDF to Word, Excel, PowerPoint, AutoCAD, JPG, GIF and Text. + * Create PDF from Word, PowerPoint, JPG, Excel files and many other formats. + * Manipulate PDFs with PDF Merge, Split and Compress. + * OCR conversion of scanned PDFs and images. + * Upload files from your device or the Cloud (Google Drive and DropBox). + * Available on Windows, Linux, Mac, and smartphones via any browser. + * Multiple languages supported. + + + +### EasyPDF User Interface + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-interface.png) + +One of the first things that catches your eye is the sleek user interface, which gives the tool a clean and functional environment in which you can work comfortably.
The whole experience is even better because there are no ads on the website at all. + +All different types of conversions have their own dedicated menu with a simple box to add files, so you don’t have to wonder about what you need to do. + +Most websites aren’t optimized to work well and run smoothly on mobile phones, but EasyPDF is an exception to that rule. It opens almost instantly on a smartphone and is easy to navigate. You can also add it as a shortcut on your home screen from the **three dots menu** on the Chrome app. + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-fs8.png) + +### Functionality + +Apart from looking nice, EasyPDF is pretty straightforward to use. You **don’t need to register** or leave an **email** to use the tool. It is completely anonymous. Additionally, it doesn’t put any limitations on the number or size of files for conversion. No installation required either! Cool, yeah? + +You choose a desired conversion format, for example, PDF to Word. Select the PDF file you want to convert. You can upload a file from the device either by drag & drop or by selecting the file from a folder. There is also an option to upload a document from [**Google Drive**][1] or [**Dropbox**][2]. + +After you choose the file, press the Convert button to start the conversion process. You won’t have to wait long for your file because the conversion will finish within a minute. If you have some more files to convert, remember to download the file before you proceed further. If you don’t download the document first, you will lose it. + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF1.png) + +For a different type of conversion, return to the homepage.
+ +The currently available types of conversions are: + + * **PDF to Word** – Convert PDF documents to Word documents + + * **PDF to PowerPoint** – Convert PDF documents to PowerPoint presentations + + * **PDF to Excel** – Convert PDF documents to Excel documents + + * **PDF Creation** – Create PDF documents from any type of file (E.g text, doc, odt) + + * **Word to PDF** – Convert Word documents to PDF documents + + * **JPG to PDF** – Convert JPG images to PDF documents + + * **PDF to AutoCAD** – Convert PDF documents to .dwg format (DWG is the native format for CAD packages) + + * **PDF to Text** – Convert PDF documents to Text documents + + * **PDF Split** – Split PDF files into multiple parts + + * **PDF Merge** – Merge multiple PDF files into one + + * **PDF Compress** – Compress PDF documents + + * **PDF to JPG** – Convert PDF documents to JPG images + + * **PDF to PNG** – Convert PDF documents to PNG images + + * **PDF to GIF** – Convert PDF documents to GIF files + + * **OCR Online** – Convert scanned paper documents to editable files (E.g Word, Excel, Text) + + + +Want to give it a try? Great! Click the following link and start converting! + +[![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-online-pdf.png)](https://easypdf.com/) + +### Conclusion + +EasyPDF lives up to its name and enables easier PDF management. As far as I tested the EasyPDF service, it offers its conversion features out of the box, completely **FREE!** It is fast, secure and reliable. You will find the quality of services most satisfying without having to pay anything or leave personal data like an email address. Give it a try and, who knows, maybe you will find your new favorite PDF tool. + +And, that’s all for now. More good stuff to come. Stay tuned! + +Cheers!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ +[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ From 12d0e9e9e4d3e15fbe72bfc94fd02048ea22dabb Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 10:49:16 +0800 Subject: [PATCH 112/736] PRF:20180730 7 Python libraries for more maintainable code.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @HankChow 翻译的不错 --- ...on libraries for more maintainable code.md | 68 ++++++++++--------- 1 file changed, 36 insertions(+), 32 deletions(-) diff --git a/translated/tech/20180730 7 Python libraries for more maintainable code.md b/translated/tech/20180730 7 Python libraries for more maintainable code.md index de08df9304..051ad79f2c 100644 --- a/translated/tech/20180730 7 Python libraries for more maintainable code.md +++ b/translated/tech/20180730 7 Python libraries for more maintainable code.md @@ -1,80 +1,84 @@ -这 7 个 Python 库让你写出更易维护的代码 +让 Python 代码更易维护的七种武器 ====== +> 检查你的代码的质量,通过这些外部库使其更易维护。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A) > 可读性很重要。 -> — [Python 之禅(The Zen of Python)][1], Tim Peters +> — [Python 之禅][1]The Zen of Python,Tim Peters -尽管很多项目一开始的时候就有可读性和编码标准的要求,但随着项目进入“维护模式”,这些要求都会变得虎头蛇尾。然而,在代码库中保持一致的代码风格和测试标准能够显著减轻维护的压力,也能确保新的开发者能够快速了解项目的情况,同时能更好地保持应用程序的运行良好。 
+随着软件项目进入“维护模式”,对可读性和编码标准的要求很容易落空(甚至从一开始就没有建立过那些标准)。然而,在代码库中保持一致的代码风格和测试标准能够显著减轻维护的压力,也能确保新的开发者能够快速了解项目的情况,同时能更好地全程保持应用程序的质量。 + +使用外部库来检查代码的质量不失为保护项目未来可维护性的一个好方法。以下会推荐一些我们最喜爱的[检查代码][2](包括检查 PEP 8 和其它代码风格错误)的库,用它们来强制保持代码风格一致,并确保在项目成熟时有一个可接受的测试覆盖率。 ### 检查你的代码风格 -使用外部库来检查代码运行情况不失为保护项目未来可维护性的一个好方法。以下会推荐一些我们最喜爱的[检查代码][2](包括检查 PEP 8 和其它代码风格错误)的库,用它们来强制保持代码风格一致,并确保在项目成熟时有一个可接受的测试覆盖率。 +[PEP 8][3] 是 Python 代码风格规范,它规定了类似行长度、缩进、多行表达式、变量命名约定等内容。尽管你的团队自身可能也会有稍微不同于 PEP 8 的代码风格规范,但任何代码风格规范的目标都是在代码库中强制实施一致的标准,使代码的可读性更强、更易于维护。下面三个库就可以用来帮助你美化代码。 -[PEP 8][3]是Python代码风格规范,规定了行长度,缩进,多行表达式、变量命名约定等内容。尽管你的团队自身可能也会有不同于 PEP 8 的代码风格规范,但任何代码风格规范的目标都是在代码库中强制实施一致的标准,使代码的可读性更强、更易于维护。下面三个库就可以用来帮助你美化代码。 +#### 1、 Pylint -#### 1\. Pylint +[Pylint][4] 是一个检查违反 PEP 8 规范和常见错误的库。它在一些流行的[编辑器和 IDE][5] 中都有集成,也可以单独从命令行运行。 -[Pylint][4] 是一个检查违反 PEP 8 规范和常见错误的库。它在一些流行的编辑器和 IDE 中都有集成,也可以单独从命令行运行。 - -执行 `pip install pylint`安装 Pylint 。然后运行 `pylint [options] path/to/dir` 或者 `pylint [options] path/to/module.py` 就可以在命令行中使用 Pylint,它会向控制台输出代码中违反规范和出现错误的地方。 +执行 `pip install pylint` 安装 Pylint 。然后运行 `pylint [options] path/to/dir` 或者 `pylint [options] path/to/module.py` 就可以在命令行中使用 Pylint,它会向控制台输出代码中违反规范和出现错误的地方。 你还可以使用 `pylintrc` [配置文件][6]来自定义 Pylint 对哪些代码错误进行检查。 -#### 2\. Flake8 +#### 2、 Flake8 -对 [Flake8][7] 的描述是“将 PEP 8、Pyflakes(类似 Pylint)、McCabe(代码复杂性检查器)、第三方插件整合到一起,以检查 Python 代码风格和质量的一个 Python 工具”。 +[Flake8][7] 是“将 PEP 8、Pyflakes(类似 Pylint)、McCabe(代码复杂性检查器)和第三方插件整合到一起,以检查 Python 代码风格和质量的一个 Python 工具”。 执行 `pip install flake8` 安装 flake8 ,然后执行 `flake8 [options] path/to/dir` 或者 `flake8 [options] path/to/module.py` 可以查看报出的错误和警告。 和 Pylint 类似,Flake8 允许通过[配置文件][8]来自定义检查的内容。它有非常清晰的文档,包括一些有用的[提交钩子][9],可以将自动检查代码纳入到开发工作流程之中。 -Flake8 也允许集成到一些流行的编辑器和 IDE 当中,但在文档中并没有详细说明。要将 Flake8 集成到喜欢的编辑器或 IDE 中,可以搜索插件(例如 [Sublime Text 的 Flake8 插件][10])。 +Flake8 也可以集成到一些流行的编辑器和 IDE 当中,但在文档中并没有详细说明。要将 Flake8 集成到喜欢的编辑器或 IDE 中,可以搜索插件(例如 [Sublime Text 的 Flake8 插件][10])。 -#### 3\. 
Isort +#### 3、 Isort -[Isort][11] 这个库能将你在项目中导入的库按字母顺序,并将其[正确划分为不同部分][12](例如标准库、第三方库,自建的库等)。这样提高了代码的可读性,并且可以在导入的库较多的时候轻松找到各个库。 +[Isort][11] 这个库能将你在项目中导入的库按字母顺序排序,并将其[正确划分为不同部分][12](例如标准库、第三方库、自建的库等)。这样提高了代码的可读性,并且可以在导入的库较多的时候轻松找到各个库。 -执行 `pip install isort` 安装 isort,然后执行 `isort path/to/module.py` 就可以运行了。文档中还提供了更多的配置项,例如通过配置 `.isort.cfg` 文件来决定 isort 如何处理一个库的多行导入。 +执行 `pip install isort` 安装 isort,然后执行 `isort path/to/module.py` 就可以运行了。[文档][13]中还提供了更多的配置项,例如通过[配置][14] `.isort.cfg` 文件来决定 isort 如何处理一个库的多行导入。 和 Flake8、Pylint 一样,isort 也提供了将其与流行的[编辑器和 IDE][15] 集成的插件。 -### 共享代码风格 +### 分享你的代码风格 -每次文件发生变动之后都用命令行手动检查代码是一件痛苦的事,你可能也不太喜欢通过运行 IDE 中某个插件来实现这个功能。同样地,你的同事可能会用不同的代码检查方式,也许他们的编辑器中也没有安装插件,甚至自己可能也不会严格检查代码和按照警告来更正代码。总之,你共享的代码库将会逐渐地变得混乱且难以阅读。 +每次文件发生变动之后都用命令行手动检查代码是一件痛苦的事,你可能也不太喜欢通过运行 IDE 中某个插件来实现这个功能。同样地,你的同事可能会用不同的代码检查方式,也许他们的编辑器中也没有那种插件,甚至你自己可能也不会严格检查代码和按照警告来更正代码。总之,你分享出来的代码库将会逐渐地变得混乱且难以阅读。 -一个很好的解决方案是使用一个库,自动将代码按照 PEP 8 规范进行格式化。我们推荐的三个库都有不同的自定义级别来控制如何格式化代码。其中有一些设置较为特殊,例如 Pylint 和 Flake8 ,你需要先行测试,看看是否有你无法忍受蛋有不能修改的默认配置。 +一个很好的解决方案是使用一个库,自动将代码按照 PEP 8 规范进行格式化。我们推荐的三个库都有不同的自定义级别来控制如何格式化代码。其中有一些设置较为特殊,例如 Pylint 和 Flake8 ,你需要先行测试,看看是否有你无法忍受但又不能修改的默认配置。 -#### 4\. Autopep8 +#### 4、 Autopep8 -[Autopep8][16] 可以自动格式化指定的模块中的代码,包括重新缩进行,修复缩进,删除多余的空格,并重构常见的比较错误(例如布尔值和 `None` 值)。你可以查看文档中完整的[更正列表][17]。 +[Autopep8][16] 可以自动格式化指定的模块中的代码,包括重新缩进行、修复缩进、删除多余的空格,并重构常见的比较错误(例如布尔值和 `None` 值)。你可以查看文档中完整的[更正列表][17]。 -运行 `pip install --upgrade autopep8` 安装 autopep8。然后执行 `autopep8 --in-place --aggressive --aggressive ` 就可以重新格式化你的代码。`aggressive` 标记的数量表示 auotopep8 在代码风格控制上有多少控制权。在这里可以详细了解 [aggressive][18] 选项。 +运行 `pip install --upgrade autopep8` 安装 Autopep8。然后执行 `autopep8 --in-place --aggressive --aggressive ` 就可以重新格式化你的代码。`aggressive` 选项的数量表示 Auotopep8 在代码风格控制上有多少控制权。在这里可以详细了解 [aggressive][18] 选项。 -#### 5\. 
Yapf +#### 5、 Yapf -[Yapf][19] 是另一种有自己的[配置项][20]列表的重新格式化代码的工具。它与 autopep8 的不同之处在于它不仅会指出代码中违反 PEP 8 规范的地方,还会对没有违反 PEP 8 但代码风格不一致的地方重新格式化,旨在令代码的可读性更强。 +[Yapf][19] 是另一种有自己的[配置项][20]列表的重新格式化代码的工具。它与 Autopep8 的不同之处在于它不仅会指出代码中违反 PEP 8 规范的地方,还会对没有违反 PEP 8 但代码风格不一致的地方重新格式化,旨在令代码的可读性更强。 -执行`pip install yapf` 安装 Yapf,然后执行 `yapf [options] path/to/dir` 或 `yapf [options] path/to/module.py` 可以对代码重新格式化。 +执行 `pip install yapf` 安装 Yapf,然后执行 `yapf [options] path/to/dir` 或 `yapf [options] path/to/module.py` 可以对代码重新格式化。[定制选项][20]的完整列表在这里。 -#### 6\. Black +#### 6、 Black -[Black][21] 在代码检查工具当中算是比较新的一个。它与 autopep8 和 Yapf 类似,但限制较多,没有太多的自定义选项。这样的好处是你不需要去决定使用怎么样的代码风格,让 black 来给你做决定就好。你可以在这里查阅 black 的[自定义选项][22]以及[如何在配置文件中对其进行设置][23]。 +[Black][21] 在代码检查工具当中算是比较新的一个。它与 Autopep8 和 Yapf 类似,但限制较多,没有太多的自定义选项。这样的好处是你不需要去决定使用怎么样的代码风格,让 Black 来给你做决定就好。你可以在这里查阅 Black [有限的自定义选项][22]以及[如何在配置文件中对其进行设置][23]。 -Black 依赖于 Python 3.6+,但它可以格式化用 Python 2 编写的代码。执行 `pip install black` 安装 black,然后执行 `black path/to/dir` 或 `black path/to/module.py` 就可以使用 black 优化你的代码。 +Black 依赖于 Python 3.6+,但它可以格式化用 Python 2 编写的代码。执行 `pip install black` 安装 Black,然后执行 `black path/to/dir` 或 `black path/to/module.py` 就可以使用 Black 优化你的代码。 ### 检查你的测试覆盖率 -如果你正在进行测试工作,你需要确保提交到代码库的新代码都已经测试通过,并且不会降低测试覆盖率。虽然测试覆盖率不是衡量测试有效性和充分性的唯一指标,但它是确保项目遵循基本测试标准的一种方法。对于计算测试覆盖率,我们推荐使用 Coverage 这个库。 +如果你正在进行编写测试,你需要确保提交到代码库的新代码都已经测试通过,并且不会降低测试覆盖率。虽然测试覆盖率不是衡量测试有效性和充分性的唯一指标,但它是确保项目遵循基本测试标准的一种方法。对于计算测试覆盖率,我们推荐使用 Coverage 这个库。 -#### 7\. 
Coverage +#### 7、 Coverage -[Coverage][24] 有数种显示测试覆盖率的方式,包括将结果输出到控制台或 HTML 页面,并指出哪些具体哪些地方没有被覆盖到。你可以通过配置文件自定义 Coverage 检查的内容,让你更方便使用。 +[Coverage][24] 有数种显示测试覆盖率的方式,包括将结果输出到控制台或 HTML 页面,并指出哪些具体哪些地方没有被覆盖到。你可以通过[配置文件][25]自定义 Coverage 检查的内容,让你更方便使用。 执行 `pip install coverage` 安装 Converage 。然后执行 `coverage [path/to/module.py] [args]` 可以运行程序并查看输出结果。如果要查看哪些代码行没有被覆盖,执行 `coverage report -m` 即可。 -持续集成(Continuous integration, CI)是在合并和部署代码之前自动检查代码风格错误和测试覆盖率最小值的过程。很多免费或付费的工具都可以用于执行这项工作,具体的过程不在本文中赘述,但 CI 过程是令代码更易读和更易维护的重要步骤,关于这一部分可以参考 [Travis CI][26] 和 [Jenkins][27]。 +### 持续集成工具 + +持续集成Continuous integration(CI)是在合并和部署代码之前自动检查代码风格错误和测试覆盖率最小值的过程。很多免费或付费的工具都可以用于执行这项工作,具体的过程不在本文中赘述,但 CI 过程是令代码更易读和更易维护的重要步骤,关于这一部分可以参考 [Travis CI][26] 和 [Jenkins][27]。 以上这些只是用于检查 Python 代码的各种工具中的其中几个。如果你有其它喜爱的工具,欢迎在评论中分享。 @@ -85,7 +89,7 @@ via: https://opensource.com/article/18/7/7-python-libraries-more-maintainable-co 作者:[Jeff Triplett][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 3860a5edf93f351991780ce359f1a9f0f85e46c1 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 10:49:35 +0800 Subject: [PATCH 113/736] PUB:20180730 7 Python libraries for more maintainable code.md @HankChow https://linux.cn/article-10059-1.html --- .../20180730 7 Python libraries for more maintainable code.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180730 7 Python libraries for more maintainable code.md (100%) diff --git a/translated/tech/20180730 7 Python libraries for more maintainable code.md b/published/20180730 7 Python libraries for more maintainable code.md similarity index 100% rename from translated/tech/20180730 7 Python libraries for more maintainable code.md rename to published/20180730 7 Python libraries for more maintainable code.md From 
0936fec6dcb82e8ce8f536258d9d5d80536e5615 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 11:28:25 +0800 Subject: [PATCH 114/736] PUB:20180201 Here are some amazing advantages of Go that you dont hear much about.md @imquanquan https://linux.cn/article-10057-1.html --- ...some amazing advantages of Go that you dont hear much about.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180201 Here are some amazing advantages of Go that you dont hear much about.md (100%) diff --git a/translated/tech/20180201 Here are some amazing advantages of Go that you dont hear much about.md b/published/20180201 Here are some amazing advantages of Go that you dont hear much about.md similarity index 100% rename from translated/tech/20180201 Here are some amazing advantages of Go that you dont hear much about.md rename to published/20180201 Here are some amazing advantages of Go that you dont hear much about.md From cb7b4653ccdf4a590f234893e8c41d699bb2adff Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sat, 29 Sep 2018 11:55:14 +0800 Subject: [PATCH 115/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Instal?= =?UTF-8?q?l=20Popcorn=20Time=20on=20Ubuntu=2018.04=20and=20Other=20Linux?= =?UTF-8?q?=20Distributions=20(#10425)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ntu 18.04 and Other Linux Distributions.md | 233 ++++++++++++++++++ 1 file changed, 233 insertions(+) create mode 100644 sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md diff --git a/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md b/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md new file mode 100644 index 0000000000..01fbef0292 --- /dev/null +++ b/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md @@ -0,0 +1,233 @@ 
+How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions +====== +**Brief: This tutorial shows you how to install Popcorn Time on Ubuntu and other Linux distributions. Some handy Popcorn Time tips have also been discussed.** + +[Popcorn Time][1] is an open source [Netflix][2] inspired [torrent][3] streaming application for Linux, Mac and Windows. + +With regular torrents, you have to wait for the download to finish before you can watch the videos. + +[Popcorn Time][4] is different. It uses torrents underneath but allows you to start watching the videos (almost) immediately. It’s like you are watching videos on streaming websites like YouTube or Netflix. You don’t have to wait for the download to finish here. + +![Popcorn Time in Ubuntu Linux][5] +Popcorn Time + +If you want to watch movies online without those creepy ads, Popcorn Time is a good alternative. Keep in mind that the streaming quality depends on the number of available seeds. + +Popcorn Time also provides a nice user interface where you can browse through available movies, TV series and other content. If you have ever used [Netflix on Linux][6], you will find it a somewhat similar experience. + +Using torrents to download movies is illegal in several countries that have strict laws against piracy. In countries like the USA, the UK and Western Europe you may even get legal notices. That said, it’s up to you to decide if you want to use it or not. You have been warned. +(If you still want to take the risk and use Popcorn Time, you should use a VPN service like [Ivacy][7] that has been specifically designed for using torrents and protecting your identity. Even then it’s not always easy to avoid the snooping authorities.)
+ +Some of the main features of Popcorn Time are: + + * Watch movies and TV series online using torrents + * A sleek user interface lets you browse the available movies and TV series + * Change streaming quality + * Bookmark content for watching later + * Download content for offline viewing + * Ability to enable subtitles by default, change the subtitles size etc + * Keyboard shortcuts to navigate through Popcorn Time + + + +### How to install Popcorn Time on Ubuntu and other Linux Distributions + +I am using Ubuntu 18.04 in this tutorial but you can use the same instructions for other Linux distributions such as Linux Mint, Debian, Manjaro, Deepin etc. + +Let’s see how to install Popcorn Time on Linux. It’s really easy actually. Simply follow the instructions and copy-paste the commands I have mentioned. + +#### Step 1: Download Popcorn Time + +You can download Popcorn Time from its official website. The download link is present on the homepage itself. + +[Get Popcorn Time](https://popcorntime.sh/) + +#### Step 2: Install Popcorn Time + +Once you have downloaded Popcorn Time, it’s time to use it. The downloaded file is a tar file that consists of an executable among other files. While you can extract this tar file anywhere, the [Linux convention is to install additional software in the /opt directory][8]. + +Create a new directory in /opt: + +``` +sudo mkdir /opt/popcorntime +``` + +Now go to the Downloads directory. + +``` +cd ~/Downloads +``` + +Extract the downloaded Popcorn Time files into the newly created /opt/popcorntime directory. + +``` +sudo tar Jxf Popcorn-Time-* -C /opt/popcorntime +``` + +#### Step 3: Make Popcorn Time accessible for everyone + +You would want every user on your system to be able to run Popcorn Time without sudo access, right? To do that, you need to create a [symbolic link][9] to the executable in the /usr/bin directory.
+
+```
+sudo ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time
+```
+
+#### Step 4: Create desktop launcher for Popcorn Time
+
+So far so good. But you would also like to see Popcorn Time in the application menu, add it to your favorite application list, etc.
+
+For that, you need to create a desktop entry.
+
+Open a terminal and create a new file named popcorntime.desktop in /usr/share/applications.
+
+You can use any [command line based text editor][10]. Ubuntu has [Nano][11] installed by default, so you can use that.
+
+```
+sudo nano /usr/share/applications/popcorntime.desktop
+```
+
+Insert the following lines here:
+
+```
+[Desktop Entry]
+Version = 1.0
+Type = Application
+Terminal = false
+Name = Popcorn Time
+Exec = /usr/bin/Popcorn-Time
+Icon = /opt/popcorntime/popcorn.png
+Categories = Application;
+```
+
+If you used the Nano editor, save the file with the Ctrl+X shortcut. When asked whether to save, enter Y and then press Enter again to save and exit.
+
+We are almost there. One last thing to do here is to get the correct icon for Popcorn Time. For that, you can download a Popcorn Time icon and save it as popcorn.png in the /opt/popcorntime directory.
+
+You can do that using the command below:
+
+```
+sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia/commons/d/df/Pctlogo.png
+```
+
+That’s it. Now you can search for Popcorn Time and click on it to launch it.
+
+![Popcorn Time installed on Ubuntu][12]
+Search for Popcorn Time in Menu
+
+On the first launch, you’ll have to accept the terms and conditions.
+
+![Popcorn Time in Ubuntu Linux][13]
+Accept the Terms of Service
+
+Once you do that, you can enjoy the movies and TV shows.
+
+![Watch movies on Popcorn Time][14]
+
+Well, that’s all you need to do to install Popcorn Time on Ubuntu or any other Linux distribution. You can start watching your favorite movies straightaway.
+
+However, if you are interested, I would suggest reading these Popcorn Time tips to get more out of it.
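As an aside, the install steps above lend themselves to a small shell function for repeat installs on other machines. This is only a sketch: the `install_popcorntime` name and the optional prefix argument (handy for trying it out without root) are my own additions, and it assumes you pass the path of the tarball downloaded in Step 1.

```shell
# Sketch: repeat Steps 2-4 (extract, symlink, desktop entry) in one go.
# Usage: install_popcorntime TARBALL [PREFIX]
# PREFIX is only for testing without root; leave it empty for a real install.
install_popcorntime() {
    tarball=$1
    prefix=$2

    app_dir="$prefix/opt/popcorntime"
    mkdir -p "$app_dir" "$prefix/usr/bin" "$prefix/usr/share/applications"

    # tar auto-detects the compression (xz for the official tarball)
    tar xf "$tarball" -C "$app_dir"
    ln -sf "$app_dir/Popcorn-Time" "$prefix/usr/bin/Popcorn-Time"

    # Same desktop entry as in Step 4
    cat > "$prefix/usr/share/applications/popcorntime.desktop" <<EOF
[Desktop Entry]
Version=1.0
Type=Application
Terminal=false
Name=Popcorn Time
Exec=/usr/bin/Popcorn-Time
Icon=/opt/popcorntime/popcorn.png
Categories=Application;
EOF
}
```

For a real system-wide install, call it from a root shell with no second argument, e.g. `install_popcorntime ~/Downloads/Popcorn-Time-*.tar.xz`.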
+
+[![][15]][16]
+![][17]
+
+### 7 Tips for using Popcorn Time effectively
+
+Now that you have installed Popcorn Time, I am going to tell you some nifty Popcorn Time tricks. I assure you that they will enhance your Popcorn Time experience considerably.
+
+#### 1\. Use advanced settings
+
+Always have the advanced settings enabled. They give you more options to tweak Popcorn Time. Go to the top right corner and click on the gear symbol, then check the advanced settings option on the next screen.
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tricks.jpeg)
+
+#### 2\. Watch the movies in VLC or other players
+
+Did you know that you can choose to watch a file in your preferred media player instead of the default Popcorn Time player? Of course, that media player must already be installed on the system.
+
+Now you may ask why one would want to use another player. My answer is that other players like VLC have hidden features which you might not find in the Popcorn Time player.
+
+For example, if a file has very low volume, you can use VLC to enhance the audio by 400 percent. You can also [synchronize out-of-sync subtitles with VLC][18]. You can switch between media players before you start to play a file:
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks_1.png)
+
+#### 3\. Bookmark movies and watch them later
+
+Just browsing through movies and TV series but don’t have the time or the mood to watch them? No issue. You can bookmark movies and access the bookmarked videos from the Favorites tab. This enables you to build a list of movies to watch later.
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks2.png)
+
+#### 4\. Check torrent health and seed information
+
+As I mentioned earlier, your viewing experience in Popcorn Time depends on torrent speed.
The good thing is that Popcorn Time shows the health of the torrent file so that you know what streaming speed to expect.
+
+You will see a green/yellow/red dot on the file. Green means there are plenty of seeds and the file will stream easily. Yellow means a medium number of seeds, so streaming should be okay. Red means there are very few seeds available and the streaming will be poor or won’t work at all.
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks3.jpg)
+
+#### 5\. Add custom subtitles
+
+If you need subtitles and they are not available in your preferred language, you can add custom subtitles downloaded from external websites. Get the .srt files and use them inside Popcorn Time:
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocporn_Time_Tricks5.png)
+
+This is where VLC comes in handy, as you can [download subtitles automatically with VLC][19].
+
+
+#### 6\. Save the files for offline viewing
+
+When Popcorn Time streams content, it downloads and stores it temporarily. When you close the app, the files are cleaned out. You can change this behavior so that the downloaded file remains there for your future use.
+
+In the advanced settings, scroll down a bit. Look for the cache directory. You can change this to some other directory, like Downloads. This way, even if you close Popcorn Time, the file will be available for viewing.
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tips.jpg)
+
+#### 7\. Drag and drop external torrent files to play immediately
+
+I bet you did not know about this one. If you don’t find a certain movie on Popcorn Time, download the torrent file from your favorite torrent website. Open Popcorn Time and just drag and drop the torrent file into it. It will start playing the file, depending upon the seeds. This way, you don’t need to download the entire file before watching it.
+
+When you drag and drop the torrent file into Popcorn Time, it will give you the option to choose which video file it should play. If there are subtitles in it, they will play automatically; otherwise, you can add external subtitles.
+
+![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks4.png)
+
+There are plenty of other features in Popcorn Time. But I’ll stop with my list here and let you explore Popcorn Time on Ubuntu Linux. I hope you find these Popcorn Time tips and tricks useful.
+
+I’ll repeat it again: using torrents is illegal in many countries. If you do that, take precautions and use a VPN service. If you are looking for my recommendation, you can go for [Swiss-based privacy company ProtonVPN][20] (of [ProtonMail][21] fame). Singapore-based [Ivacy][7] is another good option. If you think these are expensive, you can look for [cheap VPN deals on It’s FOSS Shop][22].
+
+Note: This article contains affiliate links. Please read our [affiliate policy][23].
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/popcorn-time-ubuntu-linux/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://popcorntime.sh/ +[2]: https://netflix.com/ +[3]: https://en.wikipedia.org/wiki/Torrent_file +[4]: https://en.wikipedia.org/wiki/Popcorn_Time +[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-linux.jpeg +[6]: https://itsfoss.com/netflix-firefox-linux/ +[7]: https://billing.ivacy.com/page/23628 +[8]: http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html +[9]: https://en.wikipedia.org/wiki/Symbolic_link +[10]: https://itsfoss.com/command-line-text-editors-linux/ +[11]: https://itsfoss.com/nano-3-release/ +[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-menu.jpg +[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-license.jpeg +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-watch-movies.jpeg +[15]: https://ivacy.postaffiliatepro.com/accounts/default1/vdegzkxbw/7f82d531.png +[16]: https://billing.ivacy.com/page/23628/7f82d531 +[17]: http://ivacy.postaffiliatepro.com/scripts/vdegzkxiw?aff=23628&a_bid=7f82d531 +[18]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/ +[19]: https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/ +[20]: https://protonvpn.net/?aid=chmod777 +[21]: https://itsfoss.com/protonmail/ +[22]: https://shop.itsfoss.com/search?utf8=%E2%9C%93&query=vpn +[23]: https://itsfoss.com/affiliate-policy/ From 741e70adb23a44ba4db98baa9feafd7a7e7e6a8e Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sat, 29 Sep 2018 
12:16:33 +0800 Subject: [PATCH 116/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20containers?= =?UTF-8?q?=20can=20teach=20us=20about=20DevOps=20(#10426)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...at containers can teach us about DevOps.md | 99 +++++++++++++++++++ 1 file changed, 99 insertions(+) create mode 100644 sources/tech/20180928 What containers can teach us about DevOps.md diff --git a/sources/tech/20180928 What containers can teach us about DevOps.md b/sources/tech/20180928 What containers can teach us about DevOps.md new file mode 100644 index 0000000000..610a68b2d1 --- /dev/null +++ b/sources/tech/20180928 What containers can teach us about DevOps.md @@ -0,0 +1,99 @@ +What containers can teach us about DevOps +====== + +The use of containers supports the three pillars of DevOps practices: flow, feedback, and continual experimentation and learning. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) + +One can argue that containers and DevOps were made for one another. Certainly, the container ecosystem benefits from the skyrocketing popularity of DevOps practices, both in design choices and in DevOps’ use by teams developing container technologies. Because of this parallel evolution, the use of containers in production can teach teams the fundamentals of DevOps and its three pillars: [The Three Ways][1]. + +### Principles of flow + +**Container flow** + +A container can be seen as a silo, and from inside, it is easy to forget the rest of the system: the host node, the cluster, the underlying infrastructure. Inside the container, it might appear that everything is functioning in an acceptable manner. 
From the outside perspective, though, the application inside the container is a part of a larger ecosystem of applications that make up a service: the web API, the web app user interface, the database, the workers, and caching services and garbage collectors. Teams put constraints on the container to limit performance impact on infrastructure, and much has been done to provide metrics for measuring container performance because overloaded or slow container workloads have downstream impact on other services or customers.
+
+**Real-world flow**
+
+This lesson can be applied to teams functioning in a silo as well. Every process (be it code release, infrastructure creation or even, say, manufacturing of [Spacely’s Sprockets][2]) follows a linear path from conception to realization. In technology, this progress flows from development to testing to operations and release. If a team working alone becomes a bottleneck or introduces a problem, the impact is felt all along the entire pipeline. A defect passed down the line destroys productivity downstream. While the broken process within the scope of the team itself may seem perfectly correct, it has a negative impact on the environment as a whole.
+
+**DevOps and flow**
+
+The First Way of DevOps, principles of flow, is about approaching the process as a whole, striving to comprehend how the system works together and understanding the impact of issues on the entire process. To increase the efficiency of the process, pain points and waste are identified and removed. This is an ongoing process; teams must continually strive to increase visibility into the process and find and fix trouble spots and waste.
+
+> “The outcomes of putting the First Way into practice include never passing a known defect to downstream work centers, never allowing local optimization to create global degradation, always seeking to increase flow, and always seeking to achieve a profound understanding of the system (as per Deming).”
+
+–Gene Kim, [The Three Ways: The Principles Underpinning DevOps][3], IT Revolution, 25 Apr. 2017
+
+### Principles of feedback
+
+**Container feedback**
+
+In addition to limiting containers to prevent impact elsewhere, many products have been created to monitor and trend container metrics in an effort to understand what they are doing and notify when they are misbehaving. [Prometheus][4], for example, is [all the rage][5] for collecting metrics from containers and clusters. Containers are excellent at separating applications and providing a way to ship an environment together with the code, sometimes at the cost of opacity, so much is done to try to provide rapid feedback so issues can be addressed promptly within the silo.
+
+**Real-world feedback**
+
+The same is necessary for the flow of the system. From inception to realization, an efficient process quickly provides relevant feedback to identify when there is an issue. The key words here are “quick” and “relevant.” Burying teams in thousands of irrelevant notifications makes it difficult or even impossible to notice important events that need immediate action, and receiving even relevant information too late may allow small, easily solved issues to move downstream and become bigger problems. Imagine [if Lucy and Ethel][6] had provided immediate feedback that the conveyor belt was too fast—there would have been no problem with the chocolate production (though that would not have been nearly as funny).
+
+**DevOps and feedback**
+
+The Second Way of DevOps, principles of feedback, is all about getting relevant information quickly.
With immediate, useful feedback, problems can be identified as they happen and addressed before impact is felt elsewhere in the development process. DevOps teams strive to “optimize for downstream” and immediately move to fix problems that might impact other teams that come after them. As with flow, feedback is a continual process to identify ways to quickly get important data and act on problems as they occur.
+
+> “Creating fast feedback is critical to achieving quality, reliability, and safety in the technology value stream.”
+
+–Gene Kim, et al., The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations, IT Revolution Press, 2016
+
+### Principles of continual experimentation and learning
+
+**Container continual experimentation and learning**
+
+It is a bit more challenging to apply operational learning to the Third Way of DevOps: continual experimentation and learning. Trying to salvage what we can grasp of the very edges of the metaphor, containers make development easy, allowing developers and operations teams to test new code or configurations locally and safely outside of production and incorporate discovered benefits into production in a way that was difficult in the past. Changes can be radical and still version-controlled, documented, and shared quickly and easily.
+
+**Real-world continual experimentation and learning**
+
+For example, consider this anecdote from my own experience: Years ago, as a young, inexperienced sysadmin (just three weeks into the job), I was asked to make changes to an Apache virtual host running the website of the central IT department for a university. Without an easy-to-use test environment, I made a configuration change to the production site that I thought would accomplish the task and pushed it out. Within a few minutes, I overheard coworkers in the next cube:
+
+“Wait, is the website down?”
+
+“Hrm, yeah, it looks like it.
What the heck?” + +There was much eye-rolling involved. + +Mortified (the shame is real, folks), I sunk down as far as I could into my seat and furiously tried to back out the changes I’d introduced. Later that same afternoon, the director of the department—the boss of my boss’s boss—appeared in my cube to talk about what had happened. “Don’t worry,” she told me. “We’re not mad at you. It was a mistake and now you have learned.” + +In the world of containers, this could have been easily changed and tested on my own laptop and the broken configuration identified by more skilled team members long before it ever made it into production. + +**DevOps continual experimentation and learning** + +A real culture of experimentation promotes the individual’s ability to find where a change in the process may be beneficial, and to test that assumption without the fear of retaliation if they fail. For DevOps teams, failure becomes an educational tool that adds to the knowledge of the individual and organization, rather than something to be feared or punished. Individuals in the DevOps team dedicate themselves to continuous learning, which in turn benefits the team and wider organization as that knowledge is shared. + +As the metaphor completely falls apart, focus needs to be given to a specific point: The other two principles may appear at first glance to focus entirely on process, but continual learning is a human task—important for the future of the project, the person, the team, and the organization. It has an impact on the process, but it also has an impact on the individual and other people. 
+ +> “Experimentation and risk-taking are what enable us to relentlessly improve our system of work, which often requires us to do things very differently than how we’ve done it for decades.” + +–Gene Kim, et al., [The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win][7], IT Revolution Press, 2013 + +### Containers can teach us DevOps + +Learning to work effectively with containers can help teach DevOps and the Three Ways: principles of flow, principles of feedback, and principles of continuous experimentation and learning. Looking holistically at the application and infrastructure rather than putting on blinders to everything outside the container teaches us to take all parts of the system and understand their upstream and downstream impacts, break out of silos, and work as a team to increase global performance and deep understanding of the entire system. Working to provide timely and accurate feedback teaches us to create effective feedback patterns within our organizations to identify problems before their impact grows. Finally, providing a safe environment to try new ideas and learn from them teaches us to create a culture where failure represents a positive addition to our knowledge and the ability to take big chances with educated guesses can result in new, elegant solutions to complex problems. 
+ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/containers-can-teach-us-devops + +作者:[Chris Hermansen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/ +[2]: https://en.wikipedia.org/wiki/The_Jetsons +[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops +[4]: https://prometheus.io/ +[5]: https://opensource.com/article/18/9/prometheus-operational-advantage +[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI +[7]: https://itrevolution.com/book/the-phoenix-project/ From 2e2cd3dc6c708e43563e113653072dbffcd44d3c Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sat, 29 Sep 2018 12:17:12 +0800 Subject: [PATCH 117/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=2010=20handy=20Bash?= =?UTF-8?q?=20aliases=20for=20Linux=20(#10427)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0180928 10 handy Bash aliases for Linux.md | 116 ++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 sources/tech/20180928 10 handy Bash aliases for Linux.md diff --git a/sources/tech/20180928 10 handy Bash aliases for Linux.md b/sources/tech/20180928 10 handy Bash aliases for Linux.md new file mode 100644 index 0000000000..b69b2f8aab --- /dev/null +++ b/sources/tech/20180928 10 handy Bash aliases for Linux.md @@ -0,0 +1,116 @@ +10 handy Bash aliases for Linux +====== +Get more efficient by using condensed versions of long Bash commands. 
+ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U) + +How many times have you repeatedly typed out a long command on the command line and wished there was a way to save it for later? This is where Bash aliases come in handy. They allow you to condense long, cryptic commands down to something easy to remember and use. Need some examples to get you started? No problem! + +To use a Bash alias you've created, you need to add it to your .bash_profile file, which is located in your home folder. Note that this file is hidden and accessible only from the command line. The easiest way to work with this file is to use something like Vi or Nano. + +### 10 handy Bash aliases + + 1. How many times have you needed to unpack a .tar file and couldn't remember the exact arguments needed? Aliases to the rescue! Just add the following to your .bash_profile file and then use **untar FileName** to unpack any .tar file. + + + +``` +alias untar='tar -zxvf ' + +``` + + 2. Want to download something but be able to resume if something goes wrong? + + + +``` +alias wget='wget -c ' + +``` + + 3. Need to generate a random, 20-character password for a new online account? No problem. + + + +``` +alias getpass="openssl rand -base64 20" + +``` + + 4. Downloaded a file and need to test the checksum? We've got that covered too. + + + +``` +alias sha='shasum -a 256 ' + +``` + + 5. A normal ping will go on forever. We don't want that. Instead, let's limit that to just five pings. + + + +``` +alias ping='ping -c 5' + +``` + + 6. Start a web server in any folder you'd like. + + + +``` +alias www='python -m SimpleHTTPServer 8000' + +``` + + 7. Want to know how fast your network is? Just download Speedtest-cli and use this alias. You can choose a server closer to your location by using the **speedtest-cli --list** command. + + + +``` +alias speed='speedtest-cli --server 2406 --simple' + +``` + + 8. 
How many times have you needed to know your external IP address and had no idea how to get that info? Yeah, me too. + + + +``` +alias ipe='curl ipinfo.io/ip' + +``` + + 9. Need to know your local IP address? + + + +``` +alias ipi='ipconfig getifaddr en0' + +``` + + 10. Finally, let's clear the screen. + + + +``` +alias c='clear' + +``` + +As you can see, Bash aliases are a super-easy way to simplify your life on the command line. Want more info? I recommend a quick Google search for "Bash aliases" or a trip to GitHub. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/handy-bash-aliases + +作者:[Patrick H.Mullins][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/pmullins From a4f6c95179923f327cc56e1e9c6fbd3be5ccdec0 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 12:18:08 +0800 Subject: [PATCH 118/736] PRF&PUB:20180917 Linux tricks that can save you time and trouble (#10428) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * PRF:20180917 Linux tricks that can save you time and trouble.md @HankChow 翻译的不错 * PUB:20180917 Linux tricks that can save you time and trouble.md @HankChow https://linux.cn/article-10060-1.html --- ...tricks that can save you time and trouble.md | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) rename {translated/tech => published}/20180917 Linux tricks that can save you time and trouble.md (87%) diff --git a/translated/tech/20180917 Linux tricks that can save you time and trouble.md b/published/20180917 Linux tricks that can save you time and trouble.md similarity index 87% rename from translated/tech/20180917 Linux tricks that can save you time and trouble.md rename to 
published/20180917 Linux tricks that can save you time and trouble.md index 1dbc81bfbd..6c9f3d3247 100644 --- a/translated/tech/20180917 Linux tricks that can save you time and trouble.md +++ b/published/20180917 Linux tricks that can save you time and trouble.md @@ -1,14 +1,15 @@ 让你提高效率的 Linux 技巧 ====== -想要在 Linux 命令行工作中提高效率,你需要使用一些技巧。 + +> 想要在 Linux 命令行工作中提高效率,你需要使用一些技巧。 ![](https://images.idgesg.net/images/article/2018/09/boy-jumping-off-swing-100772498-large.jpg) -巧妙的 Linux 命令行技巧能让你节省时间、避免出错,还能让你记住和复用各种复杂的命令,专注在需要做的事情本身,而不是做事的方式。以下介绍一些好用的命令行技巧。 +巧妙的 Linux 命令行技巧能让你节省时间、避免出错,还能让你记住和复用各种复杂的命令,专注在需要做的事情本身,而不是你要怎么做。以下介绍一些好用的命令行技巧。 ### 命令编辑 -如果要对一个已输入的命令进行修改,可以使用 ^a(ctrl + a)或 ^e(ctrl + e)将光标快速移动到命令的开头或命令的末尾。 +如果要对一个已输入的命令进行修改,可以使用 `^a`(`ctrl + a`)或 `^e`(`ctrl + e`)将光标快速移动到命令的开头或命令的末尾。 还可以使用 `^` 字符实现对上一个命令的文本替换并重新执行命令,例如 `^before^after^` 相当于把上一个命令中的 `before` 替换为 `after` 然后重新执行一次。 @@ -59,11 +60,11 @@ alias show_dimensions='xdpyinfo | grep '\''dimensions:'\''' ### 冻结、解冻终端界面 -^s(ctrl + s)将通过执行流量控制命令 XOFF 来停止终端输出内容,这会对 PuTTY 会话和桌面终端窗口产生影响。如果误输入了这个命令,可以使用 ^q(ctrl + q)让终端重新响应。所以只需要记住^q 这个组合键就可以了,毕竟这种情况并不多见。 +`^s`(`ctrl + s`)将通过执行流量控制命令 XOFF 来停止终端输出内容,这会对 PuTTY 会话和桌面终端窗口产生影响。如果误输入了这个命令,可以使用 `^q`(`ctrl + q`)让终端重新响应。所以只需要记住 `^q` 这个组合键就可以了,毕竟这种情况并不多见。 ### 复用命令 -Linux 提供了很多让用户复用命令的方法,其核心是通过历史缓冲区收集执行过的命令。复用命令的最简单方法是输入 `!` 然后接最近使用过的命令的开头字母;当然也可以按键盘上的向上箭头,直到看到要复用的命令,然后按 Enter 键。还可以先使用 `history` 显示命令历史,然后输入 `!` 后面再接命令历史记录中需要复用的命令旁边的数字。 +Linux 提供了很多让用户复用命令的方法,其核心是通过历史缓冲区收集执行过的命令。复用命令的最简单方法是输入 `!` 然后接最近使用过的命令的开头字母;当然也可以按键盘上的向上箭头,直到看到要复用的命令,然后按回车键。还可以先使用 `history` 显示命令历史,然后输入 `!` 后面再接命令历史记录中需要复用的命令旁边的数字。 ``` !! 
<== 复用上一条命令 @@ -129,7 +130,7 @@ $ rm -i <== 请求确认 $ unalias rm ``` -如果已经将 `rm -i` 默认设置为 `rm` 的别名,但你希望在删除文件之前不必进行确认,则可以将 `unalias` 命令放在一个启动文件(例如 ~/.bashrc)中。 +如果已经将 `rm -i` 默认设置为 `rm` 的别名,但你希望在删除文件之前不必进行确认,则可以将 `unalias` 命令放在一个启动文件(例如 `~/.bashrc`)中。 ### 使用 sudo @@ -151,8 +152,6 @@ md () { mkdir -p "$@" && cd "$1"; } 使用 Linux 命令行是在 Linux 系统上工作最有效也最有趣的方法,但配合命令行技巧和巧妙的别名可以让你获得更好的体验。 -加入 [Facebook][1] 和 [LinkedIn][2] 上的 Network World 社区可以和我们一起讨论。 - -------------------------------------------------------------------------------- via: https://www.networkworld.com/article/3305811/linux/linux-tricks-that-even-you-can-love.html @@ -160,7 +159,7 @@ via: https://www.networkworld.com/article/3305811/linux/linux-tricks-that-even-y 作者:[Sandra Henry-Stocker][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9eb6ce1f9a6c368c067c8ad7be9dec2e81020d94 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sat, 29 Sep 2018 12:18:37 +0800 Subject: [PATCH 119/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Using=20Grails=20?= =?UTF-8?q?with=20jQuery=20and=20DataTables=20(#10429)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Using Grails with jQuery and DataTables.md | 544 ++++++++++++++++++ 1 file changed, 544 insertions(+) create mode 100644 sources/tech/20180928 Using Grails with jQuery and DataTables.md diff --git a/sources/tech/20180928 Using Grails with jQuery and DataTables.md b/sources/tech/20180928 Using Grails with jQuery and DataTables.md new file mode 100644 index 0000000000..9a9ad08fb0 --- /dev/null +++ b/sources/tech/20180928 Using Grails with jQuery and DataTables.md @@ -0,0 +1,544 @@ +Using Grails with jQuery and DataTables +====== + +Learn to build a Grails-based data browser that lets users visualize complex tabular data. 
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_container_block.png?itok=S8MbXEYw)
+
+I’m a huge fan of [Grails][1]. Granted, I’m mostly a data person who likes to explore and analyze data using command-line tools. But even data people sometimes need to _look at_ the data, and sometimes using data means having a great data browser. With Grails, [jQuery][2], and the [DataTables jQuery plugin][3], we can make really nice tabular data browsers.
+
+The [DataTables website][3] offers a lot of decent “recipe-style” documentation that shows how to put together some fine sample applications, and it includes the necessary JavaScript, HTML, and occasional [PHP][4] to accomplish some pretty spiffy stuff. But for those who would rather use Grails as their backend, a bit of interpretation is necessary. Also, the sample application data used is a single flat table of employees of a fictional company, so the complexity of dealing with table relations serves as an exercise for the reader.
+
+In this article, we’ll fill those two gaps by creating a Grails application with a slightly more complex data structure and a DataTables browser. In doing so, we’ll cover Grails criteria, which are [Groovy][5]-fied Java Hibernate criteria. I’ve put the code for the application on [GitHub][6], so this article is oriented toward explaining the nuances of the code.
+
+For prerequisites, you will need Java, Groovy, and Grails environments set up. With Grails, I tend to use a terminal window and [Vim][7], so that’s what’s used here. To get a modern Java, I suggest downloading and installing the [Open Java Development Kit][8] (OpenJDK) provided by your Linux distro (which should be Java 8, 9, 10 or 11; at the time of writing, I’m working with Java 8). From my point of view, the best way to get up-to-date Groovy and Grails is to use [SDKMAN!][9].
+
+Readers who have never tried Grails will probably need to do some background reading.
As a starting point, I recommend [Creating Your First Grails Application][10].
+
+### Getting the employee browser application
+
+As mentioned above, I’ve put the source code for this sample employee browser application on [GitHub][6]. For further explanation, the application **embrow** was built using the following commands in a Linux terminal window:
+
+```
+cd Projects
+grails create-app com.nuevaconsulting.embrow
+```
+
+The domain classes and unit tests are created as follows:
+
+```
+grails create-domain-class com.nuevaconsulting.embrow.Position
+grails create-domain-class com.nuevaconsulting.embrow.Office
+grails create-domain-class com.nuevaconsulting.embrow.Employee
+```
+
+The domain classes built this way have no attributes, so they must be edited as follows:
+
+The Position domain class:
+
+```
+package com.nuevaconsulting.embrow
+
+class Position {
+
+    String name
+    int starting
+
+    static constraints = {
+        name nullable: false, blank: false
+        starting nullable: false
+    }
+}
+```
+
+The Office domain class:
+
+```
+package com.nuevaconsulting.embrow
+
+class Office {
+
+    String name
+    String address
+    String city
+    String country
+
+    static constraints = {
+        name nullable: false, blank: false
+        address nullable: false, blank: false
+        city nullable: false, blank: false
+        country nullable: false, blank: false
+    }
+}
+```
+
+And the Employee domain class:
+
+```
+package com.nuevaconsulting.embrow
+
+class Employee {
+
+    String surname
+    String givenNames
+    Position position
+    Office office
+    int extension
+    Date hired
+    int salary
+    static constraints = {
+        surname nullable: false, blank: false
+        givenNames nullable: false, blank: false
+        position nullable: false
+        office nullable: false
extension nullable: false
+        hired nullable: false
+        salary nullable: false
+    }
+}
+```
+
+Note that whereas the Position and Office domain classes use predefined Groovy types String and int, the Employee domain class defines fields that are of type Position and Office (as well as the predefined Date). This causes the creation of the database table in which instances of Employee are stored to contain references, or foreign keys, to the tables in which instances of Position and Office are stored.
+
+Now you can generate the controllers, views, and various other test components:
+
+```
+grails generate-all com.nuevaconsulting.embrow.Position
+grails generate-all com.nuevaconsulting.embrow.Office
+grails generate-all com.nuevaconsulting.embrow.Employee
+```
+
+At this point, you have a basic create-read-update-delete (CRUD) application ready to go. I’ve included some base data in the **grails-app/init/com/nuevaconsulting/BootStrap.groovy** to populate the tables.
+
+If you run the application with the command:
+
+```
+grails run-app
+```
+
+you will see the following screen in the browser:
+
+![Embrow home screen][12]
+
+The Embrow application home screen
+
+Clicking on the link for the OfficeController gives you a screen that looks like this:
+
+![Office list][14]
+
+The office list
+
+Note that this list is generated by the **OfficeController index** method and displayed by the view `office/index.gsp`.
+
+Similarly, clicking on the **EmployeeController** gives a screen that looks like this:
+
+![Employee controller][16]
+
+The employee controller
+
+Ok, that’s pretty ugly—what’s with the Position and Office links?
+
+Well, the views generated by the `generate-all` commands above create an **index.gsp** file that uses a default Grails tag that shows the class name (**com.nuevaconsulting.embrow.Position**) and the persistent instance identifier (**30**).
This behavior can be customized to yield something better looking, and there is some pretty neat stuff with the autogenerated links, the autogenerated pagination, and the autogenerated sortable columns.
+
+But even when it's fully cleaned up, this employee browser offers limited functionality. For example, what if you want to find all employees whose position includes the text “dev”? What if you want to combine columns for sorting so that the primary sort key is a surname and the secondary sort key is an office name? Or what if you want to export a sorted subset to a spreadsheet or PDF to email to someone who doesn’t have access to the browser?
+
+The jQuery DataTables plugin provides this kind of extra functionality and allows you to create a full-fledged tabular data browser.
+
+### Creating the employee browser view and controller methods
+
+In order to create an employee browser based on jQuery DataTables, you must complete two tasks:
+
+  1. Create a Grails view that incorporates the HTML and JavaScript required to enable the DataTables
+
+  2. Add a method to the Grails controller to handle the new view
+
+#### The employee browser view
+
+In the directory **embrow/grails-app/views/employee**, start by making a copy of the **index.gsp** file, calling it **browser.gsp**:
+
+```
+cd Projects
+cd embrow/grails-app/views/employee
+cp index.gsp browser.gsp
+```
+
+At this point, you want to customize the new **browser.gsp** file to add the relevant jQuery DataTables code.
+
+As a rule, I like to grab my JavaScript and CSS from a content provider when feasible; to do so in this case, after the line:
+
+```
+<g:message code="default.list.label" args="[entityName]" />
+```
+
+insert the script and link tags that load jQuery, DataTables, and the DataTables Buttons and Scroller extensions from a CDN; see the **browser.gsp** file in the [GitHub][6] repository for the exact tags.
+
+Next, remove the code that provided the data pagination in **index.gsp**:
+
+```
+<div id="list-employee" class="content scaffold-list" role="main">
+    <h1><g:message code="default.list.label" args="[entityName]" /></h1>
+    <g:if test="${flash.message}">
+        <div class="message" role="status">${flash.message}</div>
+    </g:if>
+    <f:table collection="${employeeList}" />
+
+    <div class="pagination">
+        <g:paginate total="${employeeCount ?: 0}" />
+    </div>
+</div>
+```
+
+and insert the code that materializes the jQuery DataTables.
+
+The first part to insert is the HTML that creates the basic tabular structure of the browser. For the application where DataTables talks to a database backend, provide only the table headers and footers; the DataTables JavaScript takes care of the table contents.
+
+```
+<div id="employee-browser" class="content" role="main">
+    <h1>Employee Browser</h1>
+    <table id="employee_dt" class="display compact" style="width:99%;">
+        <thead>
+            <tr>
+                <th>Surname</th>
+                <th>Given name(s)</th>
+                <th>Position</th>
+                <th>Office</th>
+                <th>Extension</th>
+                <th>Hired</th>
+                <th>Salary</th>
+            </tr>
+        </thead>
+        <tfoot>
+            <tr>
+                <th>Surname</th>
+                <th>Given name(s)</th>
+                <th>Position</th>
+                <th>Office</th>
+                <th>Extension</th>
+                <th>Hired</th>
+                <th>Salary</th>
+            </tr>
+        </tfoot>
+    </table>
+</div>
+```
+
+Next, insert a JavaScript block, which serves three primary functions: It sets the size of the text boxes shown in the footer for column filtering, it establishes the DataTables table model, and it creates a handler to do the column filtering.
+
+```
+$('#employee_dt tfoot th').each( function() {
+```
+
+The code below handles sizing the filter boxes at the bottoms of the table columns:
+
+```
+var title = $(this).text();
+if (title == 'Extension' || title == 'Hired')
+    $(this).html('<input type="text" size="5" placeholder="' + title + '" />');
+else
+    $(this).html('<input type="text" size="15" placeholder="' + title + '" />');
+});
+```
+
+Next, define the table model. This is where all the table options are provided, including the scrolling, rather than paginated, nature of the interface, the cryptic decorations to be provided according to the dom string, the ability to export data to CSV and other formats, as well as where the Ajax connection to the server is established. Note that the URL is created with a Groovy GString call to the Grails **createLink()** method, referring to the **browserLister** action in the **EmployeeController**. Also of interest is the definition of the columns of the table. This information is sent across to the back end, which queries the database and returns the appropriate records.
+
+```
+var table = $('#employee_dt').DataTable( {
+    "scrollY": 500,
+    "deferRender": true,
+    "scroller": true,
+    "dom": "Brtip",
+    "buttons": [ 'copy', 'csv', 'excel', 'pdf', 'print' ],
+    "processing": true,
+    "serverSide": true,
+    "ajax": {
+        "url": "${createLink(controller: 'employee', action: 'browserLister')}",
+        "type": "POST"
+    },
+    "columns": [
+        { "data": "surname" },
+        { "data": "givenNames" },
+        { "data": "position" },
+        { "data": "office" },
+        { "data": "extension" },
+        { "data": "hired" },
+        { "data": "salary" }
+    ]
+});
+```
+
+Finally, monitor the filter columns for changes and use them to apply the filter(s).
+
+```
+table.columns().every(function() {
+    var that = this;
+    $('input', this.footer()).on('keyup change', function(e) {
+        if (that.search() != this.value && 8 < e.keyCode && e.keyCode < 32)
+            that.search(this.value).draw();
+    });
+});
+```
+
+And that’s it for the JavaScript. This completes the changes to the view code.
+
+Here’s a screenshot of the UI this view creates:
+
+![](https://opensource.com/sites/default/files/uploads/screen_4.png)
+
+Here’s another screenshot showing the filtering and multi-column sorting at work (looking for employees whose positions include the characters “dev”, ordering first by office, then by surname):
+
+![](https://opensource.com/sites/default/files/uploads/screen_5.png)
+
+Here’s another screenshot, showing what happens when you click on the CSV button:
+
+![](https://opensource.com/sites/default/files/uploads/screen6.png)
+
+And finally, here’s a screenshot showing the CSV data opened in LibreOffice:
+
+![](https://opensource.com/sites/default/files/uploads/screen7.png)
+
+Ok, so the view part looked pretty straightforward; therefore, the controller action must do all the heavy lifting, right? Let’s see…
+
+#### The employee controller browserLister action
+
+Recall that we saw this string
+
+```
+"${createLink(controller: 'employee', action: 'browserLister')}"
+```
+
+as the URL used for the Ajax calls from the DataTables table model. [createLink() is the method][17] behind a Grails tag that is used to dynamically generate a link as the HTML is preprocessed on the Grails server. This ends up generating a link to the **EmployeeController**, located in
+
+```
+embrow/grails-app/controllers/com/nuevaconsulting/embrow/EmployeeController.groovy
+```
+
+and specifically to the controller method **browserLister()**. I’ve left some print statements in the code so that the intermediate results can be seen in the terminal window where the application is running.
+ +``` +    def browserLister() { +        // Applies filters and sorting to return a list of desired employees +``` + +First, print out the parameters passed to **browserLister()**. I usually start building controller methods with this code so that I’m completely clear on what my controller is receiving. + +``` +      println "employee browserLister params $params" +        println() +``` + +Next, process those parameters to put them in a more usable shape. First, the jQuery DataTables parameters, a Groovy map called **jqdtParams** : + +``` +def jqdtParams = [:] +params.each { key, value -> + def keyFields = key.replace(']','').split(/\[/) + def table = jqdtParams + for (int f = 0; f < keyFields.size() - 1; f++) { + def keyField = keyFields[f] + if (!table.containsKey(keyField)) + table[keyField] = [:] + table = table[keyField] + } + table[keyFields[-1]] = value +} +println "employee dataTableParams $jqdtParams" +println() +``` + +Next, the column data, a Groovy map called **columnMap** : + +``` +def columnMap = jqdtParams.columns.collectEntries { k, v -> + def whereTerm = null + switch (v.data) { + case 'extension': + case 'hired': + case 'salary': + if (v.search.value ==~ /\d+(,\d+)*/) + whereTerm = v.search.value.split(',').collect { it as Integer } + break + default: + if (v.search.value ==~ /[A-Za-z0-9 ]+/) + whereTerm = "%${v.search.value}%" as String + break + } + [(v.data): [where: whereTerm]] +} +println "employee columnMap $columnMap" +println() +``` + +Next, a list of all column names, retrieved from **columnMap** , and a corresponding list of how those columns should be ordered in the view, Groovy lists called **allColumnList** and **orderList** , respectively: + +``` +def allColumnList = columnMap.keySet() as List +println "employee allColumnList $allColumnList" +def orderList = jqdtParams.order.collect { k, v -> [allColumnList[v.column as Integer], v.dir] } +println "employee orderList $orderList" +``` + +We’re going to use Grails’ implementation of 
Hibernate criteria to actually carry out the selection of elements to be displayed as well as their ordering and pagination. Criteria requires a filter closure; in most examples, this is given as part of the creation of the criteria instance itself, but here we define the filter closure beforehand. Note in this case the relatively complex interpretation of the “date hired” filter, which is treated as a year and applied to establish date ranges, and the use of **createAlias** to allow us to reach into related classes Position and Office: + +``` +def filterer = { + createAlias 'position', 'p' + createAlias 'office', 'o' + + if (columnMap.surname.where) ilike 'surname', columnMap.surname.where + if (columnMap.givenNames.where) ilike 'givenNames', columnMap.givenNames.where + if (columnMap.position.where) ilike 'p.name', columnMap.position.where + if (columnMap.office.where) ilike 'o.name', columnMap.office.where + if (columnMap.extension.where) inList 'extension', columnMap.extension.where + if (columnMap.salary.where) inList 'salary', columnMap.salary.where + if (columnMap.hired.where) { + if (columnMap.hired.where.size() > 1) { + or { + columnMap.hired.where.each { + between 'hired', Date.parse('yyyy/MM/dd',"${it}/01/01" as String), + Date.parse('yyyy/MM/dd',"${it}/12/31" as String) + } + } + } else { + between 'hired', Date.parse('yyyy/MM/dd',"${columnMap.hired.where[0]}/01/01" as String), + Date.parse('yyyy/MM/dd',"${columnMap.hired.where[0]}/12/31" as String) + } + } +} +``` + +At this point, it’s time to apply the foregoing. 
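
Before moving on, the year-to-range trick above can be seen in isolation. This is a sketch in plain JavaScript rather than Groovy, with an invented function name, purely for illustration:

```javascript
// The "hired" filter arrives as a year or a comma-separated list of years,
// e.g. "2012,2015". Each year becomes a [Jan 1, Dec 31] window; on the
// Grails side each window feeds a `between 'hired', from, to` clause, and
// multiple windows are OR-ed together.
function hiredRanges(searchValue) {
  return searchValue.split(',').map(function (year) {
    return { from: year + '/01/01', to: year + '/12/31' };
  });
}

console.log(hiredRanges('2012'));
// → [ { from: '2012/01/01', to: '2012/12/31' } ]
console.log(hiredRanges('2012,2015').length);
// → 2
```
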
The first step is to get a total count of all the Employee instances, required by the pagination code:
+
+```
+        def recordsTotal = Employee.count()
+        println "employee recordsTotal $recordsTotal"
+```
+
+Next, apply the filter to the Employee instances to get the count of filtered results, which will always be less than or equal to the total number (again, this is for the pagination code):
+
+```
+        def c = Employee.createCriteria()
+        def recordsFiltered = c.count {
+            filterer.delegate = delegate
+            filterer()
+        }
+        println "employee recordsFiltered $recordsFiltered"
+```
+
+Once you have those two counts, you can get the actual filtered instances using the pagination and ordering information as well.
+
+```
+        def orderer = Employee.withCriteria {
+            filterer.delegate = delegate
+            filterer()
+            orderList.each { oi ->
+                switch (oi[0]) {
+                    case 'surname':    order 'surname',    oi[1]; break
+                    case 'givenNames': order 'givenNames', oi[1]; break
+                    case 'position':   order 'p.name',     oi[1]; break
+                    case 'office':     order 'o.name',     oi[1]; break
+                    case 'extension':  order 'extension',  oi[1]; break
+                    case 'hired':      order 'hired',      oi[1]; break
+                    case 'salary':     order 'salary',     oi[1]; break
+                }
+            }
+            maxResults (jqdtParams.length as Integer)
+            firstResult (jqdtParams.start as Integer)
+        }
+```
+
+To be completely clear, the pagination code in DataTables manages three counts: the total number of records in the data set, the number resulting after the filters are applied, and the number to be displayed on the page (whether the display is scrolling or paginated). The ordering is applied to all the filtered records and the pagination is applied to chunks of those filtered records for display purposes.
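
Those three counts can be modeled in a few lines. The sketch below, in plain JavaScript with invented names and rows, mirrors what the criteria queries plus **maxResults** and **firstResult** do on the Grails side:

```javascript
// Toy model of the three counts: recordsTotal (whole table), recordsFiltered
// (after the column filters), and the page slice (firstResult/maxResults).
function browse(rows, filter, start, length) {
  const recordsTotal = rows.length;                    // count of all rows
  const filtered = rows.filter(filter);                // rows matching the filters
  const recordsFiltered = filtered.length;             // count after filtering
  const data = filtered.slice(start, start + length);  // one display chunk
  return { recordsTotal: recordsTotal, recordsFiltered: recordsFiltered, data: data };
}

const rows = [
  { surname: 'Smith' }, { surname: 'Jones' }, { surname: 'Adams' },
  { surname: 'Clark' }, { surname: 'Brown' }
];
const page = browse(rows, function (r) { return r.surname.indexOf('a') >= 0; }, 0, 2);
console.log(page.recordsTotal);    // → 5
console.log(page.recordsFiltered); // → 2 ('Adams' and 'Clark')
```
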
+
+Next, process the results returned by the orderer, creating links to the Employee, Position, and Office instance in each row so the user can click on these links to get all the detail on the relevant instance:
+
+```
+        def dollarFormatter = new DecimalFormat('$##,###.##')
+        def employees = orderer.collect { employee ->
+            ['surname': "<a href='${createLink(controller: 'employee', action: 'show', id: employee.id)}'>${employee.surname}</a>",
+                'givenNames': employee.givenNames,
+                'position': "<a href='${createLink(controller: 'position', action: 'show', id: employee.position?.id)}'>${employee.position?.name}</a>",
+                'office': "<a href='${createLink(controller: 'office', action: 'show', id: employee.office?.id)}'>${employee.office?.name}</a>",
+                'extension': employee.extension,
+                'hired': employee.hired.format('yyyy/MM/dd'),
+                'salary': dollarFormatter.format(employee.salary)]
+        }
+```
+
+And finally, create the result you want to return and give it back as JSON, which is what jQuery DataTables requires.
+
+```
+        def result = [draw: jqdtParams.draw, recordsTotal: recordsTotal, recordsFiltered: recordsFiltered, data: employees]
+        render(result as JSON)
+    }
+```
+
+That’s it.
+
+If you’re familiar with Grails, this probably seems like more work than you might have originally thought, but there’s no rocket science here, just a lot of moving parts. However, if you haven’t had much exposure to Grails (or to Groovy), there’s a lot of new stuff to understand—closures, delegates, and builders, among other things.
+
+In that case, where to start? The best place is to learn about Groovy itself, especially [Groovy closures][18] and [Groovy delegates and builders][19]. Then go back to the reading suggested above on Grails and Hibernate criteria queries.
+
+### Conclusions
+
+jQuery DataTables make awesome tabular data browsers for Grails. Coding the view isn’t too tricky, but the PHP examples provided in the DataTables documentation take you only so far.
In particular, they aren’t written with Grails programmers in mind, nor do they explore the finer details of using elements that are references to other classes (essentially lookup tables). + +I’ve used this approach to make a couple of data browsers that allow the user to select which columns to view and accumulate record counts, or just to browse the data. The performance is good even in million-row tables on a relatively modest VPS. + +One caveat: I have stumbled upon some problems with the various Hibernate criteria mechanisms exposed in Grails (see my other GitHub repositories), so care and experimentation is required. If all else fails, the alternative approach is to build SQL strings on the fly and execute them instead. As of this writing, I prefer to work with Grails criteria, unless I get into messy subqueries, but that may just reflect my relative lack of experience with subqueries in Hibernate. + +I hope you Grails programmers out there find this interesting. Please feel free to leave comments or suggestions below. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/using-grails-jquery-and-datatables + +作者:[Chris Hermansen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[1]: https://grails.org/ +[2]: https://jquery.com/ +[3]: https://datatables.net/ +[4]: http://php.net/ +[5]: http://groovy-lang.org/ +[6]: https://github.com/monetschemist/grails-datatables +[7]: https://www.vim.org/ +[8]: http://openjdk.java.net/ +[9]: http://sdkman.io/ +[10]: http://guides.grails.org/creating-your-first-grails-app/guide/index.html +[11]: https://opensource.com/file/410061 +[12]: https://opensource.com/sites/default/files/uploads/screen_1.png (Embrow home screen) +[13]: https://opensource.com/file/410066 +[14]: https://opensource.com/sites/default/files/uploads/screen_2.png (Office list screenshot) +[15]: https://opensource.com/file/410071 +[16]: https://opensource.com/sites/default/files/uploads/screen3.png (Employee controller screenshot) +[17]: https://gsp.grails.org/latest/ref/Tags/createLink.html +[18]: http://groovy-lang.org/closures.html +[19]: http://groovy-lang.org/dsls.html From fac2c6cf15f8350c918ae772ad652f023edfd0ee Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 12:48:08 +0800 Subject: [PATCH 120/736] PRF:20180828 How to Play Windows-only Games on Linux with Steam Play.md @geekpi --- ...ows-only Games on Linux with Steam Play.md | 37 +++++++++---------- 1 file changed, 17 insertions(+), 20 deletions(-) diff --git a/translated/tech/20180828 How to Play Windows-only Games on Linux with Steam Play.md b/translated/tech/20180828 How to Play Windows-only Games on Linux with Steam Play.md index 52a919ea57..3d5d3a20bc 100644 --- a/translated/tech/20180828 How to Play 
Windows-only Games on Linux with Steam Play.md +++ b/translated/tech/20180828 How to Play Windows-only Games on Linux with Steam Play.md @@ -1,8 +1,9 @@ 如何使用 Steam Play 在 Linux 上玩仅限 Windows 的游戏 ====== -Steam 的新实验功能允许你在 Linux 上玩仅限 Windows 的游戏。以下是如何在 Steam 中使用此功能。 -你已经听说过这个消息。游戏发行平台[ Steam 正在实现一个 WINE 分支来允许你玩仅在 Windows 上的游戏][1]。对于 Linux 用户来说,这绝对是一个好消息,因为我们抱怨 Linux 的游戏数量不足。 +> Steam 的新实验功能允许你在 Linux 上玩仅限 Windows 的游戏。以下是如何在 Steam 中使用此功能。 + +你已经听说过这个消息。游戏发行平台 [Steam 正在复刻一个 WINE 分支来允许你玩仅限于 Windows 上的游戏][1]。对于 Linux 用户来说,这绝对是一个好消息,因为我们总抱怨 Linux 的游戏数量不足。 这个新功能仍处于测试阶段,但你现在可以在 Linux 上试用它并在 Linux 上玩仅限 Windows 的游戏。让我们看看如何做到这一点。 @@ -14,20 +15,19 @@ Steam 的新实验功能允许你在 Linux 上玩仅限 Windows 的游戏。以 安装了 Steam 并且你已登录到 Steam 帐户,就可以了解如何在 Steam Linux 客户端中启用 Windows 游戏。 - #### 步骤 1:进入帐户设置 -运行 Steam 客户端。在左上角,单击 Steam,然后单击 Settings。 +运行 Steam 客户端。在左上角,单击 “Steam”,然后单击 “Settings”。 ![Enable steam play beta on Linux][4] #### 步骤 2:选择加入测试计划 -在“设置”中,从左侧窗口中选择“帐户”,然后单击 “Beta participation” 下的 “CHANGE” 按钮。 +在“Settings”中,从左侧窗口中选择“Account”,然后单击 “Beta participation” 下的 “CHANGE” 按钮。 ![Enable beta feature in Steam Linux][5] -你应该在此处选择 Steam Beta Update。 +你应该在此处选择 “Steam Beta Update”。 ![Enable beta feature in Steam Linux][6] @@ -37,32 +37,29 @@ Steam 的新实验功能允许你在 Linux 上玩仅限 Windows 的游戏。以 下载好 Steam 新的测试版更新后,它将重新启动。到这里就差不多了。 -再次进入“设置”。你现在可以在左侧窗口看到新的 Steam Play 选项。单击它并选中复选框: +再次进入“Settings”。你现在可以在左侧窗口看到新的 “Steam Play” 选项。单击它并选中复选框: * Enable Steam Play for supported titles (你可以玩列入白名单的 Windows 游戏) * Enable Steam Play for all titles (你可以尝试玩所有仅限 Windows 的游戏) - - ![Play Windows games on Linux using Steam Play][7] -我不记得 Steam 是否会再次重启,但我想这是微不足道的。你现在应该可以在 Linux 上看到安装仅限 Windows 的游戏的选项了。 +我不记得 Steam 是否会再次重启,但我想这无所谓。你现在应该可以在 Linux 上看到安装仅限 Windows 的游戏的选项了。 -比如,我的 Steam 库中有 Age of Empires,正常情况下这个在 Linux 中没有。但我在 Steam Play 测试版启用所有 Windows 游戏后,现在我可以选择在 Linux 上安装 Age of Empires 了。 +比如,我的 Steam 库中有《Age of Empires》,正常情况下这个在 Linux 中没有。但我在 Steam Play 测试版启用所有 Windows 游戏后,现在我可以选择在 Linux 上安装《Age of Empires》了。 ![Install Windows-only games 
on Linux using Steam][8] -现在可以在 Linux 上安装仅限 Windows 的游戏 + +*现在可以在 Linux 上安装仅限 Windows 的游戏* ### 有关 Steam Play 测试版功能的信息 在 Linux 上使用 Steam Play 测试版玩仅限 Windows 的游戏有一些事情你需要知道并且牢记。 - * If you have games downloaded on Windows via Steam, you can save some download data by [sharing Steam game files between Linux and Windows][12]. - * 目前,[只有 27 个 Steam Play 中的 Windows 游戏被列入白名单][9]。这些白名单游戏在 Linux 上无缝运行。 - * 你可以使用 Steam Play 测试版尝试任何 Windows 游戏,但它可能无法一直运行。有些游戏有时会崩溃,而某些游戏可能根本无法运行。 + * 目前,[只有 27 个 Steam Play 中的 Windows 游戏被列入白名单][9]。这些白名单游戏可以在 Linux 上无缝运行。 + * 你可以使用 Steam Play 测试版尝试任何 Windows 游戏,但它可能不是总能运行。有些游戏有时会崩溃,而某些游戏可能根本无法运行。 * 在测试版中,你无法 Steam 商店中看到适用于 Linux 的 Windows 限定游戏。你必须自己尝试游戏或参考[这个社区维护的列表][10]以查看该 Windows 游戏的兼容性状态。你也可以通过填写[此表][11]来为列表做出贡献。 - * 如果你通过 Steam 在 Windows 上下载游戏,那么可以通过[在 Linux 和 Windows 之间共享 Steam 游戏文件][12]来保存一些下载数据。 - + * 如果你在 Windows 中通过 Steam 下载了游戏,你可以[在 Linux 和 Windows 之间共享 Steam 游戏文件][12]来节省下载的数据。 我希望本教程能帮助你在 Linux 上运行仅限 Windows 的游戏。你期待在 Linux 上玩哪些游戏? @@ -73,12 +70,12 @@ via: https://itsfoss.com/steam-play/ 作者:[Abhishek Prakash][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://itsfoss.com/author/abhishek/ -[1]:https://itsfoss.com/steam-play-proton/ +[1]:https://linux.cn/article-10054-1.html [2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/play-windows-games-on-linux-featured.jpeg [3]:https://itsfoss.com/install-steam-ubuntu-linux/ [4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/enable-steam-play-beta.jpeg @@ -89,4 +86,4 @@ via: https://itsfoss.com/steam-play/ [9]:https://steamcommunity.com/games/221410 [10]:https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/htmlview?sle=true# 
[11]:https://docs.google.com/forms/d/e/1FAIpQLSeefaYQduMST_lg0IsYxZko8tHLKe2vtVZLFaPNycyhY4bidQ/viewform -[12]:https://itsfoss.com/share-steam-files-linux-windows/ +[12]:https://linux.cn/article-8027-1.html From 56688366cb9d05f1f0cb90c2c7fe5a2245342f4e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 12:48:29 +0800 Subject: [PATCH 121/736] PUB:20180828 How to Play Windows-only Games on Linux with Steam Play.md @geekpi https://linux.cn/article-10061-1.html --- ...828 How to Play Windows-only Games on Linux with Steam Play.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180828 How to Play Windows-only Games on Linux with Steam Play.md (100%) diff --git a/translated/tech/20180828 How to Play Windows-only Games on Linux with Steam Play.md b/published/20180828 How to Play Windows-only Games on Linux with Steam Play.md similarity index 100% rename from translated/tech/20180828 How to Play Windows-only Games on Linux with Steam Play.md rename to published/20180828 How to Play Windows-only Games on Linux with Steam Play.md From 4e8d4f92ee194f60378f6a184d57ae8b34ff9a27 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 13:24:25 +0800 Subject: [PATCH 122/736] PRF:20171124 How do groups work on Linux.md @DavidChenLiang --- .../20171124 How do groups work on Linux.md | 102 +++++++----------- 1 file changed, 41 insertions(+), 61 deletions(-) diff --git a/translated/tech/20171124 How do groups work on Linux.md b/translated/tech/20171124 How do groups work on Linux.md index ace322775a..f9134ca619 100644 --- a/translated/tech/20171124 How do groups work on Linux.md +++ b/translated/tech/20171124 How do groups work on Linux.md @@ -1,60 +1,49 @@ -"组"在 Linux 上到底是怎么工作的? -============================================================ +“用户组”在 Linux 上到底是怎么工作的? +======== +嗨!就在上周,我还自认为对 Linux 上的用户和组的工作机制了如指掌。我认为它们的关系是这样的: -你好!就在上周,我还自认为对 Linux 上的用户和组的工作机制了如指掌。我认为它们的关系是这样的: +1. 每个进程都属于一个用户(比如用户 `julia`) +2. 
当这个进程试图读取一个被某个组所拥有的文件时, Linux 会 + a. 先检查用户`julia` 是否有权限访问文件。(LCTT 译注:此处应该是指检查文件的所有者是否就是 `julia`) + b. 检查 `julia` 属于哪些组,并进一步检查在这些组里是否有某个组拥有这个文件或者有权限访问这个文件。 +3. 如果上述 a、b 任一为真(或者“其它”位设为有权限访问),那么这个进程就有权限访问这个文件。 -1. 每个进程都属于一个用户( 比如用户`julia`) - -2. 当这个进程试图读取一个被某个组所拥有的文件时, Linux 会 a)先检查用户`julia` 是否有权限访问文件。(LCTT译注:检查文件的所有者是否就是`julia`) b)检查`julia` 属于哪些组,并进一步检查在这些组里是否有某个组拥有这个文件或者有权限访问这个文件。 - -3. 如果上述a,b任一为真( 或者`其他`位设为有权限访问),那么这个进程就有权限访问这个文件。 - -比如说,如果一个进程被用户`julia`拥有并且`julia` 在`awesome`组,那么这个进程就能访问下面这个文件。 +比如说,如果一个进程被用户 `julia` 拥有并且 `julia` 在`awesome` 组,那么这个进程就能访问下面这个文件。 ``` r--r--r-- 1 root awesome 6872 Sep 24 11:09 file.txt - ``` -然而上述的机制我并没有考虑得非常清楚,如果你硬要我阐述清楚,我会说进程可能会在**运行时**去检查`/etc/group` 文件里是否有某些组拥有当前的用户。 +然而上述的机制我并没有考虑得非常清楚,如果你硬要我阐述清楚,我会说进程可能会在**运行时**去检查 `/etc/group` 文件里是否有某些组拥有当前的用户。 -### 然而这并不是Linux 里“组”的工作机制 +### 然而这并不是 Linux 里“组”的工作机制 -我在上个星期的工作中发现了一件有趣的事,事实证明我前面的理解错了,我对组的工作机制的描述并不准确。特别是Linux**并不会**在进程每次试图访问一个文件时就去检查这个进程的用户属于哪些组。 +我在上个星期的工作中发现了一件有趣的事,事实证明我前面的理解错了,我对组的工作机制的描述并不准确。特别是 Linux **并不会**在进程每次试图访问一个文件时就去检查这个进程的用户属于哪些组。 -我在读了[The Linux Programming -Interface][1]这本书的第九章后才恍然大悟(这本书真是太棒了。)这才是组真正的工作方式!我意识到之前我并没有真正理解用户和组是怎么工作的,我信心满满的尝试了下面的内容并且验证到底发生了什么,事实证明现在我的理解才是对的。 +我在读了《[Linux 编程接口][1]》这本书的第九章(“进程资格”)后才恍然大悟(这本书真是太棒了),这才是组真正的工作方式!我意识到之前我并没有真正理解用户和组是怎么工作的,我信心满满的尝试了下面的内容并且验证到底发生了什么,事实证明现在我的理解才是对的。 ### 用户和组权限检查是怎么完成的 -现在这些关键的知识在我看来非常简单! 这本书的第九章上来就告诉我如下事实:用户和组ID是**进程的属性**,它们是: +现在这些关键的知识在我看来非常简单! 
这本书的第九章上来就告诉我如下事实:用户和组 ID 是**进程的属性**,它们是: -* 真实用户ID和组ID; +* 真实用户 ID 和组 ID; +* 有效用户 ID 和组 ID; +* 保存的 set-user-ID 和保存的 set-group-ID; +* 文件系统用户 ID 和组 ID(特定于 Linux); +* 补充的组 ID; -* 有效用户ID和组ID; - -* 被保存的set-user-ID和被保存的set-group-ID; - -* 文件系统用户ID和组ID(特定于 Linux); - -* 增补的组ID; - -这说明Linux**实际上**检查一个进程能否访问一个文件所做的组检查是这样的: - -* 检查一个进程的组ID和补充组ID(这些ID就在进程的属性里,**并不是**实时在`/etc/group`里查找这些ID) +这说明 Linux **实际上**检查一个进程能否访问一个文件所做的组检查是这样的: +* 检查一个进程的组 ID 和补充组 ID(这些 ID 就在进程的属性里,**并不是**实时在 `/etc/group` 里查找这些 ID) * 检查要访问的文件的访问属性里的组设置 - - * 确定进程对文件是否有权限访问(LCTT 译注:即文件的组是否是以上的组之一) -通常当访问控制的时候使用的是**有效**用户/组ID,而不是**真实**用户/组ID。技术上来说当访问一个文件时使用的是**文件系统**ID,他们实际上和有效用户/组ID一样。(LCTT译注:这句话针对 Linux 而言。) +通常当访问控制的时候使用的是**有效**用户/组 ID,而不是**真实**用户/组 ID。技术上来说当访问一个文件时使用的是**文件系统**的 ID,它们通常和有效用户/组 ID 一样。(LCTT 译注:这句话针对 Linux 而言。) ### 将一个用户加入一个组并不会将一个已存在的进程(的用户)加入那个组 -下面是一个有趣的例子:如果我创建了一个新的组:`panda` 组并且将我自己(bork)加入到这个组,然后运行`groups` 来检查我是否在这个组里:结果是我(bork)竟然不在这个组?! - +下面是一个有趣的例子:如果我创建了一个新的组:`panda` 组并且将我自己(`bork`)加入到这个组,然后运行 `groups` 来检查我是否在这个组里:结果是我(`bork`)竟然不在这个组?! ``` bork@kiwi~> sudo addgroup panda @@ -69,8 +58,7 @@ bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd ``` -`panda`并不在上面的组里!为了再次确定我们的发现,让我们建一个文件,这个文件被`panda`组拥有,看看我能否访问它。 - +`panda` 并不在上面的组里!为了再次确定我们的发现,让我们建一个文件,这个文件被 `panda` 组拥有,看看我能否访问它。 ``` $ touch panda-file.txt @@ -78,73 +66,65 @@ $ sudo chown root:panda panda-file.txt $ sudo chmod 660 panda-file.txt $ cat panda-file.txt cat: panda-file.txt: Permission denied - ``` -好吧,确定了,我(bork)无法访问`panda-file.txt`。这一点都不让人吃惊,我的命令解释器并没有`panda` 组作为补充组ID,运行`adduser bork panda`并不会改变这一点。 - +好吧,确定了,我(`bork`)无法访问 `panda-file.txt`。这一点都不让人吃惊,我的命令解释器并没有将 `panda` 组作为补充组 ID,运行 `adduser bork panda` 并不会改变这一点。 ### 那进程一开始是怎么得到用户的组的呢? 
+这真是个非常令人困惑的问题,对吗?如果进程会将组的信息预置到进程的属性里面,进程在初始化的时候怎么取到组的呢?很明显你无法给你自己指定更多的组(否则就会和 Linux 访问控制的初衷相违背了……) -这真是个非常令人困惑的问题,对吗?如果进程会将组的信息预置到进程的属性里面,进程在初始化的时候怎么取到组的呢?很明显你无法给你自己指定更多的组(否则就会和Linux访问控制的初衷相违背了。。。) - -有一点还是很清楚的:一个新的进程是怎么从我的命令行解释器(/bash/fish)里被**执行**而得到它的组的。(新的)进程将拥有我的用户 ID(bork),并且进程属性里还有很多组ID。从我的命令解释器里执行的所有进程是从这个命令解释器里`复刻`而来的,所以这个新进程得到了和命令解释器同样的组。 - -因此一定存在一个“第一个”进程来把你的组设置到进程属性里,而所有由此进程而衍生的进程将都设置这些组。而那个“第一个”进程就是你的**登录命令**,在我的笔记本电脑上,它是由‘登录’程序(`/bin/login`)实例化而来。` 登录程序` 以root身份运行,然后调用了一个 C 的库函数-`initgroups`来设置你的进程的组(具体来说是通过读取`/etc/group` 文件),因为登录程序是以root运行的,所以它能设置你的进程的组。 +有一点还是很清楚的:一个新的进程是怎么从我的命令行解释器(`/bash/fish`)里被**执行**而得到它的组的。(新的)进程将拥有我的用户 ID(`bork`),并且进程属性里还有很多组 ID。从我的命令解释器里执行的所有进程是从这个命令解释器里 `fork()` 而来的,所以这个新进程得到了和命令解释器同样的组。 +因此一定存在一个“第一个”进程来把你的组设置到进程属性里,而所有由此进程而衍生的进程将都设置这些组。而那个“第一个”进程就是你的登录程序login shell,在我的笔记本电脑上,它是由 `login` 程序(`/bin/login`)实例化而来。登录程序以 root 身份运行,然后调用了一个 C 的库函数 —— `initgroups` 来设置你的进程的组(具体来说是通过读取 `/etc/group` 文件),因为登录程序是以 root 运行的,所以它能设置你的进程的组。 ### 让我们再登录一次 -好了!既然我们的`login shell`正在运行,而我又想刷新我的进程的组设置,从我们前面所学到的进程是怎么初始化组ID的,我应该可以通过再次运行`login` 程序来刷新我的进程组并启动一个新的`login shell`! +好了!假如说我们正处于一个登录程序中,而我又想刷新我的进程的组设置,从我们前面所学到的进程是怎么初始化组 ID 的,我应该可以通过再次运行登录程序来刷新我的进程组并启动一个新的登录命令! -让我们试试下边的方法: +让我们试试下边的方法: ``` $ sudo login bork $ groups bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd panda $ cat panda-file.txt # it works! I can access the file owned by `panda` now! 
- ``` -当然,成功了!现在由登录程序衍生的程序的用户是组`panda`的一部分了!太棒了!这并不会影响我其他的已经在运行的登录程序(及其子进程),如果我真的希望“所有的”进程都能对`panda` -组有访问权限。我必须完全的重启我的登陆会话,这意味着我必须退出我的窗口管理器然后再重新`login`。(LCTT译注:即更新进程树的树根进程,这里是窗口管理器进程。) +当然,成功了!现在由登录程序衍生的程序的用户是组 `panda` 的一部分了!太棒了!这并不会影响我其他的已经在运行的登录程序(及其子进程),如果我真的希望“所有的”进程都能对 `panda` 组有访问权限。我必须完全的重启我的登录会话,这意味着我必须退出我的窗口管理器然后再重新登录。(LCTT 译注:即更新进程树的树根进程,这里是窗口管理器进程。) -### newgrp命令 +### newgrp 命令 - -在 Twitter 上有人告诉我如果只是想启动一个刷新了组信息的命令解释器的话,你可以使用`newgrp`(LCTT译注:不启动新的命令解释器),如下: +在 Twitter 上有人告诉我如果只是想启动一个刷新了组信息的命令解释器的话,你可以使用 `newgrp`(LCTT 译注:不启动新的命令解释器),如下: ``` sudo addgroup panda sudo adduser bork panda newgrp panda # starts a new shell, and you don't have to be root to run it! - ``` - -你也可以用`sg panda bash` 来完成同样的效果,这个命令能启动一个`bash` 登录程序,而这个程序就有`panda` 组。 +你也可以用 `sg panda bash` 来完成同样的效果,这个命令能启动一个`bash` 登录程序,而这个程序就有 `panda` 组。 ### seduid 将设置有效用户 ID -其实我一直对一个进程如何以`setuid root`的权限来运行意味着什么有点似是而非。现在我知道了,事实上所发生的是:setuid 设置了`有效用户ID`! 如果我('julia')运行了一个`setuid root` 的进程( 比如`passwd`),那么进程的**真实**用户 ID 将为`julia`,而**有效**用户 ID 将被设置为`root`。 +其实我一直对一个进程如何以 `setuid root` 的权限来运行意味着什么有点似是而非。现在我知道了,事实上所发生的是:`setuid` 设置了 +“有效用户 ID”! 如果我(`julia`)运行了一个 `setuid root` 的进程( 比如 `passwd`),那么进程的**真实**用户 ID 将为 `julia`,而**有效**用户 ID 将被设置为 `root`。 -`passwd` 需要以root权限来运行,但是它能看到进程的真实用户ID是`julia` ,是`julia`启动了这个进程,`passwd`会阻止这个进程修改除了`julia`之外的用户密码。 +`passwd` 需要以 root 权限来运行,但是它能看到进程的真实用户 ID 是 `julia` ,是 `julia` 启动了这个进程,`passwd` 会阻止这个进程修改除了 `julia` 之外的用户密码。 ### 就是这些了! -在 Linux Programming Interface 这本书里有很多Linux上一些功能的罕见使用方法以及Linux上所有的事物到底是怎么运行的详细解释,这里我就不一一展开了。那本书棒极了,我上面所说的都在该书的第九章,这章在1300页的书里只占了17页。 +在《[Linux 编程接口][1]》这本书里有很多 Linux 上一些功能的罕见使用方法以及 Linux 上所有的事物到底是怎么运行的详细解释,这里我就不一一展开了。那本书棒极了,我上面所说的都在该书的第九章,这章在 1300 页的书里只占了 17 页。 -我最爱这本书的一点是我只用读17页关于用户和组是怎么工作的内容,而这区区17页就能做到内容完备,详实有用。我不用读完所有的1300页书就能得到有用的东西,太棒了! +我最爱这本书的一点是我只用读 17 页关于用户和组是怎么工作的内容,而这区区 17 页就能做到内容完备、详实有用。我不用读完所有的 1300 页书就能得到有用的东西,太棒了! 
-------------------------------------------------------------------------------- via: https://jvns.ca/blog/2017/11/20/groups/ -作者:[Julia Evans ][a] +作者:[Julia Evans][a] 译者:[DavidChen](https://github.com/DavidChenLiang) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 24d069dd4d4c1b2b57c759eef650ee4483b7d99d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 13:24:49 +0800 Subject: [PATCH 123/736] PUB: 20171124 How do groups work on Linux.md @DavidChenLiang https://linux.cn/article-10062-1.html --- .../tech => published}/20171124 How do groups work on Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171124 How do groups work on Linux.md (100%) diff --git a/translated/tech/20171124 How do groups work on Linux.md b/published/20171124 How do groups work on Linux.md similarity index 100% rename from translated/tech/20171124 How do groups work on Linux.md rename to published/20171124 How do groups work on Linux.md From bc3ac0c79971ef6042a23c3e966a5872b1bede93 Mon Sep 17 00:00:00 2001 From: z52527 Date: Sat, 29 Sep 2018 14:14:57 +0800 Subject: [PATCH 124/736] Delete 20180828 A Cat Clone With Syntax Highlighting And Git Integration.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 删除源文件 --- ...Syntax Highlighting And Git Integration.md | 167 ------------------ 1 file changed, 167 deletions(-) delete mode 100644 sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md diff --git a/sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md deleted file mode 100644 index bddc4cac5b..0000000000 --- a/sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md +++ /dev/null @@ -1,167 +0,0 @@ -A Cat 
Clone With Syntax Highlighting And Git Integration -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/Bat-command-720x340.png) - -In Unix-like systems, we use **‘cat’** command to print and concatenate files. Using cat command, we can print the contents of a file to the standard output, concatenate several files into the target file, and append several files into the target file. Today, I stumbled upon a similar utility named **“Bat”** , a clone to the cat command, with some additional cool features such as syntax highlighting, git integration and automatic paging etc. In this brief guide, we will how to install and use Bat command in Linux. - -### Installation - -Bat is available in the default repositories of Arch Linux. So, you can install it using pacman on any arch-based systems. -``` -$ sudo pacman -S bat - -``` - -On Debian, Ubuntu, Linux Mint systems, download the **.deb** file from the [**Releases page**][1] and install it as shown below. -``` -$ sudo apt install gdebi - -$ sudo gdebi bat_0.5.0_amd64.deb - -``` - -For other systems, you may need to compile and install from source. Make sure you have installed Rust 1.26 or higher. - - - -Then, run the following command to install Bat: -``` -$ cargo install bat - -``` - -Alternatively, you can install it using [**Linuxbrew**][2] package manager. -``` -$ brew install bat - -``` - -### Bat command Usage - -The Bat command’s usage is very similar to cat command. - -To create a new file using bat command, do: -``` -$ bat > file.txt - -``` - -To view the contents of a file using bat command, just do: -``` -$ bat file.txt - -``` - -You can also view multiple files at once: -``` -$ bat file1.txt file2.txt - -``` - -To append the contents of the multiple files in a single file: -``` -$ bat file1.txt file2.txt file3.txt > document.txt - -``` - -Like I already mentioned, apart from viewing and editing files, the Bat command has some additional cool features though. 
- -The bat command supports **syntax highlighting** for large number of programming and markup languages. For instance, look at the following example. I am going to display the contents of the **reverse.py** file using both cat and bat commands. - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-and-cat-command-output-comparison.png) - -Did you notice the difference? Cat command shows the contents of the file in plain text format, whereas bat command shows output with syntax highlighting, order number in a neat tabular column format. Much better, isn’t it? - -If you want to display only the line numbers (not the tabular column), use **-n** flag. -``` -$ bat -n reverse.py - -``` - -**Sample output:** -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-3.png) - -Another notable feature of Bat command is it supports **automatic paging**. That means if output of a file is too large for one screen, the bat command automatically pipes its own output to **less** command, so you can view the output page by page. - -Let me show you an example. When you view the contents of a file which spans multiple pages using cat command, the prompt quickly jumps to the last page of the file, and you do not see the content in the beginning or in the middle. - -Have a look at the following output: - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/cat-command-output.png) - -As you can see, the cat command displays last page of the file. - -So, you may need to pipe the output of the cat command to **less** command to view it’s contents page by page from the beginning. -``` -$ cat reverse.py | less - -``` - -Now, you can view output page by page by hitting the ENTER key. However, it is not necessary if you use bat command. The bat command will automatically pipe the output of a file which spans multiple pages. 
-``` -$ bat reverse.py - -``` - -**Sample output:** - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-1.png) - -Now hit the ENTER key to go to the next page. - -The bat command also supports **GIT integration** , so you can view/edit the files in your Git repository without much hassle. It communicates with git to show modifications with respect to the index (see left side bar). - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-2.png) - -**Customizing Bat** - -If you don’t like the default themes, you can change it too. Bat has option for that too. - -To list the available themes, just run: -``` -$ bat --list-themes -1337 -DarkNeon -Default -GitHub -Monokai Extended -Monokai Extended Bright -Monokai Extended Light -Monokai Extended Origin -TwoDark - -``` - -To use a different theme, for example TwoDark, run: -``` -$ bat --theme=TwoDark file.txt - -``` - -If you want to make the theme permanent, use `export BAT_THEME="TwoDark"` in your shells startup file. - -Bat also have the option to control the appearance of the output. To do so, use the `--style` option. To show only Git changes and line numbers but no grid and no file header, use `--style=numbers,changes`. - -For more details, refer the Bat project GitHub Repository (Link at the end). - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://github.com/sharkdp/bat/releases -[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/ From 4cba159b136f4c78320321e54fb46a3af432a8a2 Mon Sep 17 00:00:00 2001 From: z52527 Date: Sat, 29 Sep 2018 14:16:16 +0800 Subject: [PATCH 125/736] Create 20180828 A Cat Clone With Syntax Highlighting And Git Integration.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 --- ...Syntax Highlighting And Git Integration.md | 173 ++++++++++++++++++ 1 file changed, 173 insertions(+) create mode 100644 translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md diff --git a/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md new file mode 100644 index 0000000000..0b56659315 --- /dev/null +++ b/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md @@ -0,0 +1,173 @@ +一种具有语法高亮和 Git 集成的 Cat 克隆命令——Bat +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/Bat-command-720x340.png) + +在类UNIX系统中,我们使用 **‘cat’** 命令去打印和连接文件。使用cat命令, 我们能将文件目录打印到到标准输出,合成几个文件为一个目标文件,还有追加几个文件到目标文件中。今天,我偶然发现一个具有相似作用的命令叫做 **“Bat”** ,一个 cat 命令的克隆版,具有一些例如语法高亮、 git 集成和自动分页等非常酷的特性。在这个简略指南中,我们将讲述如何在 linux 中安装和使用 Bat 命令。 + +### 安装 + +Bat 可以在 Arch Linux 的默认软件源中获取。 所以你可以使用 pacman 命令在任何 arch-based 的系统上来安装它。 +``` +$ sudo pacman -S bat + +``` + +在 Debian,Ubuntu, Linux Mint 等系统中,从[**发布页面**][1] 下载 **.deb** 
文件,然后用下面的命令来安装。 +``` +$ sudo apt install gdebi + +$ sudo gdebi bat_0.5.0_amd64.deb + +``` + +对于其他系统,你也许需要从软件源编译并安装 确保你已经安装了 Rust 1.26 或者更高版本。 + + + +然后运行以下命令来安装 Bat +``` +$ cargo install bat + +``` + +或者,你可以从 [**Linuxbrew**][2] 软件包管理中来安装它。 +``` +$ brew install bat + +``` + +### Bat 命令的使用 + +Bat 命令的使用与 cat 命令的使用非常相似。 + +使用 Bat 命令创建一个新的文件: +``` +$ bat > file.txt + +``` + +使用 Bat 命令来查看文件内容,只需要: +``` +$ bat file.txt + +``` + +你能同时查看多个文件,通过: +``` +$ bat file1.txt file2.txt + +``` + +将多个文件的内容合并至一个单独文件中: +``` +$ bat file1.txt file2.txt file3.txt > document.txt + +``` + +就像我之前提到的那样,除了浏览和编辑文件以外, Bat 命令有一些非常酷的特性。 + +Bat 命令支持大多数编程和标记语言的语法高亮syntax highlighting。比如,下面这个例子。我将使用 cat 和 bat 命令来展示 **reverse.py** 的内容。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-and-cat-command-output-comparison.png) + +你注意到区别了吗? cat 命令以纯文本格式显示文件的内容,而 bat 命令显示了语法高亮和整齐的文本对齐格式。更好了不是吗? + +如果你只想显示行号(而不是文本对齐)使用 +**-n** 标记。 +``` +$ bat -n reverse.py + +``` + +**Sample output:** +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-3.png) + +另一个 Bat 命令中值得注意的特性是它支持自动分页automatic paging。 它的意思是当文件的输出对于屏幕来说太大的时候,bat 命令自动将自己的输出内容传输到 **less** 命令中,所以你可以一页一页的查看输出内容。 + +让我给你看一个例子,使用cat命令查看跨多个页面的文件的内容时,提示快速跳至文件的最后一页,你看不到内容的开头和中间部分。 + +看一下下面的输出: + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/cat-command-output.png) + +正如你所看到的,cat 命令显示了文章的最后一页。 + +所以你也许需要去将使用 cat 命令的输出传输到 **less** 命令中去从开头一页一页的查看内容。 +``` +$ cat reverse.py | less + +``` + +现在你可以使用 ENTER 键去一页一页的查看输出。然而当你使用 bat 命令时这些都是不必要的。bat命令将自动传输跨越多个页面的文件的输出。 + +``` +$ bat reverse.py + +``` + +**Sample output:** + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-1.png) + +现在按下 ENTER 键去往下一页。 + +bat 命令也支持 Git 集成**GIT integration**, +这样您就可以轻松查看/编辑Git存储库中的文件。 它与 Git 连接可以显示关于索引的修改。(看左栏) + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-2.png) + +**定制 Bat** + +如果你不喜欢默认主题,你也可以修改它。Bat 同样有修改它的选项。 + +若要显示可用主题,只需运行: +``` +$ bat --list-themes +1337 +DarkNeon +Default 
+GitHub +Monokai Extended +Monokai Extended Bright +Monokai Extended Light +Monokai Extended Origin +TwoDark + +``` + + +要使用其他主题,例如 TwoDark,请运行: +``` +$ bat --theme=TwoDark file.txt + +``` + +如果你想永久改变主题,在你的 shells startup 文件中加入 `export BAT_THEME="TwoDark"`。 + + +Bat还可以选择修改输出的外观。使用 `--style` 选项来修改输出外观。仅显示 Git 的更改和行号但不显示网格和文件头,请使用 `--style=numbers,changes`. + + +更多详细信息,请参阅 Bat 项目的 GitHub 库(链接在文末) + +最好,这就是目前的全部内容了。希望这篇文章会帮到你。更多精彩文章即将到来,敬请关注! + +干杯! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[z52527](https://github.com/z52527) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://github.com/sharkdp/bat/releases +[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/ From fefd3e591bf42f36166b91d88e1d11a15174f1b4 Mon Sep 17 00:00:00 2001 From: z52527 Date: Sat, 29 Sep 2018 14:28:06 +0800 Subject: [PATCH 126/736] Create 20180828 A Cat Clone With Syntax Highlighting And Git Integration.md --- ...Syntax Highlighting And Git Integration.md | 168 ++++++++++++++++++ 1 file changed, 168 insertions(+) create mode 100644 sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md diff --git a/sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md new file mode 100644 index 0000000000..7d30b522a0 --- /dev/null +++ b/sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md @@ -0,0 +1,168 @@ +A Cat Clone With Syntax Highlighting And Git Integration +====== +20180828 A Cat Clone With Syntax Highlighting And Git Integration.md + 
+![](https://www.ostechnix.com/wp-content/uploads/2018/08/Bat-command-720x340.png) + +In Unix-like systems, we use the **‘cat’** command to print and concatenate files. Using the cat command, we can print the contents of a file to the standard output, concatenate several files into a target file, and append several files to a target file. Today, I stumbled upon a similar utility named **“Bat”**, a clone of the cat command with some additional cool features such as syntax highlighting, git integration and automatic paging. In this brief guide, we will see how to install and use the Bat command in Linux. + +### Installation + +Bat is available in the default repositories of Arch Linux. So, you can install it using pacman on any Arch-based system. +``` +$ sudo pacman -S bat + +``` + +On Debian, Ubuntu and Linux Mint systems, download the **.deb** file from the [**Releases page**][1] and install it as shown below. +``` +$ sudo apt install gdebi + +$ sudo gdebi bat_0.5.0_amd64.deb + +``` + +For other systems, you may need to compile and install from source. Make sure you have installed Rust 1.26 or higher. + + + +Then, run the following command to install Bat: +``` +$ cargo install bat + +``` + +Alternatively, you can install it using the [**Linuxbrew**][2] package manager. +``` +$ brew install bat + +``` + +### Bat command Usage + +The Bat command’s usage is very similar to that of the cat command. + +To create a new file using the bat command, do: +``` +$ bat > file.txt + +``` + +To view the contents of a file using the bat command, just do: +``` +$ bat file.txt + +``` + +You can also view multiple files at once: +``` +$ bat file1.txt file2.txt + +``` + +To append the contents of multiple files into a single file: +``` +$ bat file1.txt file2.txt file3.txt > document.txt + +``` + +Like I already mentioned, apart from viewing and editing files, the Bat command has some additional cool features.
+ +The bat command supports **syntax highlighting** for a large number of programming and markup languages. For instance, look at the following example. I am going to display the contents of the **reverse.py** file using both the cat and bat commands. + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-and-cat-command-output-comparison.png) + +Did you notice the difference? The cat command shows the contents of the file in plain text format, whereas the bat command shows the output with syntax highlighting and line numbers in a neat tabular format. Much better, isn’t it? + +If you want to display only the line numbers (not the tabular layout), use the **-n** flag. +``` +$ bat -n reverse.py + +``` + +**Sample output:** +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-3.png) + +Another notable feature of the bat command is that it supports **automatic paging**. That means if the output of a file is too large for one screen, the bat command automatically pipes its own output to the **less** command, so you can view the output page by page. + +Let me show you an example. When you view the contents of a file which spans multiple pages using the cat command, the prompt quickly jumps to the last page of the file, and you do not see the content in the beginning or in the middle. + +Have a look at the following output: + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/cat-command-output.png) + +As you can see, the cat command displays the last page of the file. + +So, you may need to pipe the output of the cat command to the **less** command to view its contents page by page from the beginning. +``` +$ cat reverse.py | less + +``` + +Now, you can view the output page by page by hitting the ENTER key. However, this is not necessary if you use the bat command. The bat command will automatically page the output of a file which spans multiple pages.
+``` +$ bat reverse.py + +``` + +**Sample output:** + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-1.png) + +Now hit the ENTER key to go to the next page. + +The bat command also supports **Git integration**, so you can view/edit the files in your Git repository without much hassle. It communicates with git to show modifications with respect to the index (see the left side bar). + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-2.png) + +**Customizing Bat** + +If you don’t like the default theme, you can change it. Bat has an option for that, too. + +To list the available themes, just run: +``` +$ bat --list-themes +1337 +DarkNeon +Default +GitHub +Monokai Extended +Monokai Extended Bright +Monokai Extended Light +Monokai Extended Origin +TwoDark + +``` + +To use a different theme, for example TwoDark, run: +``` +$ bat --theme=TwoDark file.txt + +``` + +If you want to make the theme permanent, put `export BAT_THEME="TwoDark"` in your shell’s startup file. + +Bat also has an option to control the appearance of the output. To do so, use the `--style` option. To show only Git changes and line numbers but no grid and no file header, use `--style=numbers,changes`. + +For more details, refer to the Bat project’s GitHub repository (link at the end). + +And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned! + +Cheers!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://github.com/sharkdp/bat/releases +[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/ From f47b6997cbca37926dedabc9709752a17262a795 Mon Sep 17 00:00:00 2001 From: z52527 Date: Sat, 29 Sep 2018 14:28:37 +0800 Subject: [PATCH 127/736] Delete 20180828 A Cat Clone With Syntax Highlighting And Git Integration.md --- ...Syntax Highlighting And Git Integration.md | 173 ------------------ 1 file changed, 173 deletions(-) delete mode 100644 translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md diff --git a/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md deleted file mode 100644 index 0b56659315..0000000000 --- a/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md +++ /dev/null @@ -1,173 +0,0 @@ -一种具有语法高亮和 Git 集成的 Cat 克隆命令——Bat -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/Bat-command-720x340.png) - -在类UNIX系统中,我们使用 **‘cat’** 命令去打印和连接文件。使用cat命令, 我们能将文件目录打印到到标准输出,合成几个文件为一个目标文件,还有追加几个文件到目标文件中。今天,我偶然发现一个具有相似作用的命令叫做 **“Bat”** ,一个 cat 命令的克隆版,具有一些例如语法高亮、 git 集成和自动分页等非常酷的特性。在这个简略指南中,我们将讲述如何在 linux 中安装和使用 Bat 命令。 - -### 安装 - -Bat 可以在 Arch Linux 的默认软件源中获取。 所以你可以使用 pacman 命令在任何 arch-based 的系统上来安装它。 -``` -$ sudo pacman -S bat - -``` - -在 Debian,Ubuntu, Linux Mint 等系统中,从[**发布页面**][1] 下载 **.deb** 文件,然后用下面的命令来安装。 -``` -$ sudo apt install gdebi - -$ sudo gdebi bat_0.5.0_amd64.deb - -``` - 
-对于其他系统,你也许需要从软件源编译并安装 确保你已经安装了 Rust 1.26 或者更高版本。 - - - -然后运行以下命令来安装 Bat -``` -$ cargo install bat - -``` - -或者,你可以从 [**Linuxbrew**][2] 软件包管理中来安装它。 -``` -$ brew install bat - -``` - -### Bat 命令的使用 - -Bat 命令的使用与 cat 命令的使用非常相似。 - -使用 Bat 命令创建一个新的文件: -``` -$ bat > file.txt - -``` - -使用 Bat 命令来查看文件内容,只需要: -``` -$ bat file.txt - -``` - -你能同时查看多个文件,通过: -``` -$ bat file1.txt file2.txt - -``` - -将多个文件的内容合并至一个单独文件中: -``` -$ bat file1.txt file2.txt file3.txt > document.txt - -``` - -就像我之前提到的那样,除了浏览和编辑文件以外, Bat 命令有一些非常酷的特性。 - -Bat 命令支持大多数编程和标记语言的语法高亮syntax highlighting。比如,下面这个例子。我将使用 cat 和 bat 命令来展示 **reverse.py** 的内容。 - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-and-cat-command-output-comparison.png) - -你注意到区别了吗? cat 命令以纯文本格式显示文件的内容,而 bat 命令显示了语法高亮和整齐的文本对齐格式。更好了不是吗? - -如果你只想显示行号(而不是文本对齐)使用 -**-n** 标记。 -``` -$ bat -n reverse.py - -``` - -**Sample output:** -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-3.png) - -另一个 Bat 命令中值得注意的特性是它支持自动分页automatic paging。 它的意思是当文件的输出对于屏幕来说太大的时候,bat 命令自动将自己的输出内容传输到 **less** 命令中,所以你可以一页一页的查看输出内容。 - -让我给你看一个例子,使用cat命令查看跨多个页面的文件的内容时,提示快速跳至文件的最后一页,你看不到内容的开头和中间部分。 - -看一下下面的输出: - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/cat-command-output.png) - -正如你所看到的,cat 命令显示了文章的最后一页。 - -所以你也许需要去将使用 cat 命令的输出传输到 **less** 命令中去从开头一页一页的查看内容。 -``` -$ cat reverse.py | less - -``` - -现在你可以使用 ENTER 键去一页一页的查看输出。然而当你使用 bat 命令时这些都是不必要的。bat命令将自动传输跨越多个页面的文件的输出。 - -``` -$ bat reverse.py - -``` - -**Sample output:** - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-1.png) - -现在按下 ENTER 键去往下一页。 - -bat 命令也支持 Git 集成**GIT integration**, -这样您就可以轻松查看/编辑Git存储库中的文件。 它与 Git 连接可以显示关于索引的修改。(看左栏) - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-2.png) - -**定制 Bat** - -如果你不喜欢默认主题,你也可以修改它。Bat 同样有修改它的选项。 - -若要显示可用主题,只需运行: -``` -$ bat --list-themes -1337 -DarkNeon -Default -GitHub -Monokai Extended -Monokai Extended Bright -Monokai Extended Light -Monokai Extended 
Origin -TwoDark - -``` - - -要使用其他主题,例如 TwoDark,请运行: -``` -$ bat --theme=TwoDark file.txt - -``` - -如果你想永久改变主题,在你的 shells startup 文件中加入 `export BAT_THEME="TwoDark"`。 - - -Bat还可以选择修改输出的外观。使用 `--style` 选项来修改输出外观。仅显示 Git 的更改和行号但不显示网格和文件头,请使用 `--style=numbers,changes`. - - -更多详细信息,请参阅 Bat 项目的 GitHub 库(链接在文末) - -最好,这就是目前的全部内容了。希望这篇文章会帮到你。更多精彩文章即将到来,敬请关注! - -干杯! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[z52527](https://github.com/z52527) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://github.com/sharkdp/bat/releases -[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/ From cb8679eed5c58b6c0e14b328afd4ea3d2e13d2e3 Mon Sep 17 00:00:00 2001 From: z52527 Date: Sat, 29 Sep 2018 14:30:53 +0800 Subject: [PATCH 128/736] Delete 20180828 A Cat Clone With Syntax Highlighting And Git Integration.md --- ...Syntax Highlighting And Git Integration.md | 171 ------------------ 1 file changed, 171 deletions(-) delete mode 100644 sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md diff --git a/sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md deleted file mode 100644 index 34eba64c15..0000000000 --- a/sources/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md +++ /dev/null @@ -1,171 +0,0 @@ -Translating by z52527 - - -A Cat Clone With Syntax Highlighting And Git Integration -====== -20180828 A Cat Clone With Syntax Highlighting And Git Integration.md - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/Bat-command-720x340.png) - -In Unix-like 
systems, we use **‘cat’** command to print and concatenate files. Using cat command, we can print the contents of a file to the standard output, concatenate several files into the target file, and append several files into the target file. Today, I stumbled upon a similar utility named **“Bat”** , a clone to the cat command, with some additional cool features such as syntax highlighting, git integration and automatic paging etc. In this brief guide, we will how to install and use Bat command in Linux. - -### Installation - -Bat is available in the default repositories of Arch Linux. So, you can install it using pacman on any arch-based systems. -``` -$ sudo pacman -S bat - -``` - -On Debian, Ubuntu, Linux Mint systems, download the **.deb** file from the [**Releases page**][1] and install it as shown below. -``` -$ sudo apt install gdebi - -$ sudo gdebi bat_0.5.0_amd64.deb - -``` - -For other systems, you may need to compile and install from source. Make sure you have installed Rust 1.26 or higher. - - - -Then, run the following command to install Bat: -``` -$ cargo install bat - -``` - -Alternatively, you can install it using [**Linuxbrew**][2] package manager. -``` -$ brew install bat - -``` - -### Bat command Usage - -The Bat command’s usage is very similar to cat command. - -To create a new file using bat command, do: -``` -$ bat > file.txt - -``` - -To view the contents of a file using bat command, just do: -``` -$ bat file.txt - -``` - -You can also view multiple files at once: -``` -$ bat file1.txt file2.txt - -``` - -To append the contents of the multiple files in a single file: -``` -$ bat file1.txt file2.txt file3.txt > document.txt - -``` - -Like I already mentioned, apart from viewing and editing files, the Bat command has some additional cool features though. - -The bat command supports **syntax highlighting** for large number of programming and markup languages. For instance, look at the following example. 
I am going to display the contents of the **reverse.py** file using both cat and bat commands. - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-and-cat-command-output-comparison.png) - -Did you notice the difference? Cat command shows the contents of the file in plain text format, whereas bat command shows output with syntax highlighting, order number in a neat tabular column format. Much better, isn’t it? - -If you want to display only the line numbers (not the tabular column), use **-n** flag. -``` -$ bat -n reverse.py - -``` - -**Sample output:** -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-3.png) - -Another notable feature of Bat command is it supports **automatic paging**. That means if output of a file is too large for one screen, the bat command automatically pipes its own output to **less** command, so you can view the output page by page. - -Let me show you an example. When you view the contents of a file which spans multiple pages using cat command, the prompt quickly jumps to the last page of the file, and you do not see the content in the beginning or in the middle. - -Have a look at the following output: - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/cat-command-output.png) - -As you can see, the cat command displays last page of the file. - -So, you may need to pipe the output of the cat command to **less** command to view it’s contents page by page from the beginning. -``` -$ cat reverse.py | less - -``` - -Now, you can view output page by page by hitting the ENTER key. However, it is not necessary if you use bat command. The bat command will automatically pipe the output of a file which spans multiple pages. -``` -$ bat reverse.py - -``` - -**Sample output:** - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-1.png) - -Now hit the ENTER key to go to the next page. 
- -The bat command also supports **GIT integration** , so you can view/edit the files in your Git repository without much hassle. It communicates with git to show modifications with respect to the index (see left side bar). - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-2.png) - -**Customizing Bat** - -If you don’t like the default themes, you can change it too. Bat has option for that too. - -To list the available themes, just run: -``` -$ bat --list-themes -1337 -DarkNeon -Default -GitHub -Monokai Extended -Monokai Extended Bright -Monokai Extended Light -Monokai Extended Origin -TwoDark - -``` - -To use a different theme, for example TwoDark, run: -``` -$ bat --theme=TwoDark file.txt - -``` - -If you want to make the theme permanent, use `export BAT_THEME="TwoDark"` in your shells startup file. - -Bat also have the option to control the appearance of the output. To do so, use the `--style` option. To show only Git changes and line numbers but no grid and no file header, use `--style=numbers,changes`. - -For more details, refer the Bat project GitHub Repository (Link at the end). - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://github.com/sharkdp/bat/releases -[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/ From 94e161bb1531277255e412682788f8f4a347faf3 Mon Sep 17 00:00:00 2001 From: z52527 Date: Sat, 29 Sep 2018 14:31:49 +0800 Subject: [PATCH 129/736] Create 20180828 A Cat Clone With Syntax Highlighting And Git Integration.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 --- ...Syntax Highlighting And Git Integration.md | 173 ++++++++++++++++++ 1 file changed, 173 insertions(+) create mode 100644 translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md diff --git a/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md new file mode 100644 index 0000000000..29b81d5efe --- /dev/null +++ b/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md @@ -0,0 +1,173 @@ +一种具有语法高亮和 Git 集成的 Cat 克隆命令——Bat +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/Bat-command-720x340.png) + +在类UNIX系统中,我们使用 **‘cat’** 命令去打印和连接文件。使用cat命令, 我们能将文件目录打印到到标准输出,合成几个文件为一个目标文件,还有追加几个文件到目标文件中。今天,我偶然发现一个具有相似作用的命令叫做 **“Bat”** ,一个 cat 命令的克隆版,具有一些例如语法高亮、 git 集成和自动分页等非常酷的特性。在这个简略指南中,我们将讲述如何在 linux 中安装和使用 Bat 命令。 + +### 安装 + +Bat 可以在 Arch Linux 的默认软件源中获取。 所以你可以使用 pacman 命令在任何 arch-based 的系统上来安装它。 +``` +$ sudo pacman -S bat + +``` + +在 Debian,Ubuntu, Linux Mint 等系统中,从[**发布页面**][1] 下载 **.deb** 
文件,然后用下面的命令来安装。 +``` +$ sudo apt install gdebi + +$ sudo gdebi bat_0.5.0_amd64.deb + +``` + +对于其他系统,你也许需要从软件源编译并安装 确保你已经安装了 Rust 1.26 或者更高版本。 + + + +然后运行以下命令来安装 Bat +``` +$ cargo install bat + +``` + +或者,你可以从 [**Linuxbrew**][2] 软件包管理中来安装它。 +``` +$ brew install bat + +``` + +### Bat 命令的使用 + +Bat 命令的使用与 cat 命令的使用非常相似。 + +使用 Bat 命令创建一个新的文件: +``` +$ bat > file.txt + +``` + +使用 Bat 命令来查看文件内容,只需要: +``` +$ bat file.txt + +``` + +你能同时查看多个文件,通过: +``` +$ bat file1.txt file2.txt + +``` + +将多个文件的内容合并至一个单独文件中: +``` +$ bat file1.txt file2.txt file3.txt > document.txt + +``` + +就像我之前提到的那样,除了浏览和编辑文件以外, Bat 命令有一些非常酷的特性。 + +Bat 命令支持大多数编程和标记语言的语法高亮syntax highlighting。比如,下面这个例子。我将使用 cat 和 bat 命令来展示 **reverse.py** 的内容。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-and-cat-command-output-comparison.png) + +你注意到区别了吗? cat 命令以纯文本格式显示文件的内容,而 bat 命令显示了语法高亮和整齐的文本对齐格式。更好了不是吗? + +如果你只想显示行号(而不是文本对齐)使用 +**-n** 标记。 +``` +$ bat -n reverse.py + +``` + +**Sample output:** +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-3.png) + +另一个 Bat 命令中值得注意的特性是它支持自动分页automatic paging。 它的意思是当文件的输出对于屏幕来说太大的时候,bat 命令自动将自己的输出内容传输到 **less** 命令中,所以你可以一页一页的查看输出内容。 + +让我给你看一个例子,使用cat命令查看跨多个页面的文件的内容时,提示快速跳至文件的最后一页,你看不到内容的开头和中间部分。 + +看一下下面的输出: + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/cat-command-output.png) + +正如你所看到的,cat 命令显示了文章的最后一页。 + +所以你也许需要去将使用 cat 命令的输出传输到 **less** 命令中去从开头一页一页的查看内容。 +``` +$ cat reverse.py | less + +``` + +现在你可以使用 ENTER 键去一页一页的查看输出。然而当你使用 bat 命令时这些都是不必要的。bat命令将自动传输跨越多个页面的文件的输出。 + +``` +$ bat reverse.py + +``` + +**Sample output:** + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-1.png) + +现在按下 ENTER 键去往下一页。 + +bat 命令也支持 Git 集成**GIT integration**, +这样您就可以轻松查看/编辑Git存储库中的文件。 它与 Git 连接可以显示关于索引的修改。(看左栏) + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-2.png) + +**定制 Bat** + +如果你不喜欢默认主题,你也可以修改它。Bat 同样有修改它的选项。 + +若要显示可用主题,只需运行: +``` +$ bat --list-themes +1337 +DarkNeon +Default 
+GitHub +Monokai Extended +Monokai Extended Bright +Monokai Extended Light +Monokai Extended Origin +TwoDark + +``` + + +要使用其他主题,例如 TwoDark,请运行: +``` +$ bat --theme=TwoDark file.txt + +``` + +如果你想永久改变主题,在你的 shells startup 文件中加入 `export BAT_THEME="TwoDark"`。 + + +Bat还可以选择修改输出的外观。使用 `--style` 选项来修改输出外观。仅显示 Git 的更改和行号但不显示网格和文件头,请使用 `--style=numbers,changes`. + + +更多详细信息,请参阅 Bat 项目的 GitHub 库(链接在文末) + +最后,这就是目前的全部内容了。希望这篇文章会帮到你。更多精彩文章即将到来,敬请关注! + +干杯! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[z52527](https://github.com/z52527) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://github.com/sharkdp/bat/releases +[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/ From b8f40849e1b2276ba32fbb8241edd0ffd4d87a5a Mon Sep 17 00:00:00 2001 From: z52527 Date: Sat, 29 Sep 2018 14:39:40 +0800 Subject: [PATCH 130/736] Update 20180828 A Cat Clone With Syntax Highlighting And Git Integration.md --- ...th Syntax Highlighting And Git Integration.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md index 29b81d5efe..4fd29cf70c 100644 --- a/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md +++ b/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md @@ -13,7 +13,7 @@ $ sudo pacman -S bat ``` -在 Debian,Ubuntu, Linux Mint 等系统中,从[**发布页面**][1] 下载 **.deb** 文件,然后用下面的命令来安装。 +在 Debian,Ubuntu, Linux Mint 等系统中,从[**发布页面**][1] 下载 **.deb** 文件,然后用下面的命令来安装。 ``` $ sudo apt
install gdebi @@ -31,7 +31,7 @@ $ cargo install bat ``` -或者,你可以从 [**Linuxbrew**][2] 软件包管理中来安装它。 +或者,你可以从 [**Linuxbrew**][2] 软件包管理中来安装它。 ``` $ brew install bat @@ -83,7 +83,7 @@ $ bat -n reverse.py **Sample output:** ![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-3.png) -另一个 Bat 命令中值得注意的特性是它支持自动分页automatic paging。 它的意思是当文件的输出对于屏幕来说太大的时候,bat 命令自动将自己的输出内容传输到 **less** 命令中,所以你可以一页一页的查看输出内容。 +另一个 Bat 命令中值得注意的特性是它支持自动分页automatic paging。 它的意思是当文件的输出对于屏幕来说太大的时候,bat 命令自动将自己的输出内容传输到 **less** 命令中,所以你可以一页一页的查看输出内容。 让我给你看一个例子,使用cat命令查看跨多个页面的文件的内容时,提示快速跳至文件的最后一页,你看不到内容的开头和中间部分。 @@ -113,7 +113,7 @@ $ bat reverse.py 现在按下 ENTER 键去往下一页。 bat 命令也支持 Git 集成**GIT integration**, -这样您就可以轻松查看/编辑Git存储库中的文件。 它与 Git 连接可以显示关于索引的修改。(看左栏) +这样您就可以轻松查看/编辑Git存储库中的文件。 它与 Git 连接可以显示关于索引的修改。(看左栏) ![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-2.png) @@ -121,7 +121,7 @@ bat 命令也支持 Git 集成**GIT integration**, 如果你不喜欢默认主题,你也可以修改它。Bat 同样有修改它的选项。 -若要显示可用主题,只需运行: +若要显示可用主题,只需运行: ``` $ bat --list-themes 1337 @@ -137,16 +137,16 @@ TwoDark ``` -要使用其他主题,例如 TwoDark,请运行: +要使用其他主题,例如 TwoDark,请运行: ``` $ bat --theme=TwoDark file.txt ``` -如果你想永久改变主题,在你的 shells startup 文件中加入 `export BAT_THEME="TwoDark"`。 +如果你想永久改变主题,在你的 shells startup 文件中加入 `export BAT_THEME="TwoDark"`。 -Bat还可以选择修改输出的外观。使用 `--style` 选项来修改输出外观。仅显示 Git 的更改和行号但不显示网格和文件头,请使用 `--style=numbers,changes`. +Bat还可以选择修改输出的外观。使用 `--style` 选项来修改输出外观。仅显示 Git 的更改和行号但不显示网格和文件头,请使用 `--style=numbers,changes`. 
更多详细信息,请参阅 Bat 项目的 GitHub 库(链接在文末) From a355a1ce8986b886ac59ce2c7eb8d5b2ae9cd02c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 19:02:02 +0800 Subject: [PATCH 131/736] PRF:20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md @geekpi --- ...turned an error code 1- Error in Ubuntu.md | 34 ++++++++++--------- 1 file changed, 18 insertions(+), 16 deletions(-) diff --git a/translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md b/translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md index 96eecf8936..df829aa425 100644 --- a/translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md +++ b/translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md @@ -1,10 +1,12 @@ -[已解决] Ubuntu 中的 “sub process usr bin dpkg returned an error code 1” 错误 +怎样解决 Ubuntu 中的 “sub process usr bin dpkg returned an error code 1” 错误 ====== -如果你在 Ubuntu Linux 上安装软件时遇到 “sub process usr bin dpkg returned an error code 1”,请按照以下步骤进行修复。 -Ubuntu 和其他基于 Debian 的发行版中的一个常见问题是已经损坏的包。你尝试更新系统或安装新软件包时遇到类似 “Sub-process /usr/bin/dpkg returned an error code” 的错误。 +> 如果你在 Ubuntu Linux 上安装软件时遇到 “sub process usr bin dpkg returned an error code 1”,请按照以下步骤进行修复。 + +Ubuntu 和其他基于 Debian 的发行版中的一个常见问题是已经损坏的包。你尝试更新系统或安装新软件包时会遇到类似 “Sub-process /usr/bin/dpkg returned an error code” 的错误。 这就是前几天发生在我身上的事。我试图在 Ubuntu 中安装一个电台程序时,它给我了这个错误: + ``` Unpacking python-gst-1.0 (1.6.2-1build1) ... Selecting previously unselected package radiotray. 
@@ -30,11 +32,11 @@ E: Sub-process /usr/bin/dpkg returned an error code (1) ``` 这里最后三行非常重要。 + ``` Errors were encountered while processing: polar-bookshelf E: Sub-process /usr/bin/dpkg returned an error code (1) - ``` 它告诉我 polar-bookshelf 包引发了问题。这可能对你如何修复这个错误至关重要。 @@ -45,59 +47,59 @@ E: Sub-process /usr/bin/dpkg returned an error code (1) 让我们尝试修复这个损坏的错误包。我将展示几种你可以逐一尝试的方法。最初的那些易于使用,几乎不用动脑子。 -你应该尝试运行 sudo apt update,接着尝试安装新的包或尝试升级这里讨论的每个包。 +在试了这里讨论的每一种方法之后,你应该尝试运行 `sudo apt update`,接着尝试安装新的包或升级。 #### 方法 1:重新配包数据库 你可以尝试的第一种方法是重新配置包数据库。数据库可能在安装包时损坏了。重新配置通常可以解决问题。 + ``` sudo dpkg --configure -a - ``` #### 方法 2:强制安装 -如果是之前中断安装的包,你可以尝试强制安装。 +如果是之前包安装过程被中断,你可以尝试强制安装。 + ``` sudo apt-get install -f - ``` #### 方法3:尝试删除有问题的包 -如果这不是你的问题,你可以尝试手动删除包。请不要在 Linux Kernels(以 linux- 开头的软件包)中执行此操作。 +如果这不是你的问题,你可以尝试手动删除包。但不要对 Linux 内核包(以 linux- 开头)执行此操作。 + ``` sudo apt remove - ``` #### 方法 4:删除有问题的包中的信息文件 -这应该是你最后的选择。你可以尝试从 /var/lib/dpkg/info 中删除与相关软件包关联的文件。 +这应该是你最后的选择。你可以尝试从 `/var/lib/dpkg/info` 中删除与相关软件包关联的文件。 **你需要了解一些基本的 Linux 命令来了解发生了什么以及如何对应你的问题** 就我而言,我在 polar-bookshelf 中遇到问题。所以我查找了与之关联的文件: + ``` ls -l /var/lib/dpkg/info | grep -i polar-bookshelf -rw-r--r-- 1 root root 2324811 Aug 14 19:29 polar-bookshelf.list -rw-r--r-- 1 root root 2822824 Aug 10 04:28 polar-bookshelf.md5sums -rwxr-xr-x 1 root root 113 Aug 10 04:28 polar-bookshelf.postinst -rwxr-xr-x 1 root root 84 Aug 10 04:28 polar-bookshelf.postrm - ``` 现在我需要做的就是删除这些文件: + ``` sudo mv /var/lib/dpkg/info/polar-bookshelf.* /tmp - ``` -使用 sudo apt update,接着你应该就能像往常一样安装软件了。 +使用 `sudo apt update`,接着你应该就能像往常一样安装软件了。 #### 哪种方法适合你(如果有效)? 
-我希望这篇快速文章可以帮助你修复 “E: Sub-process /usr/bin/dpkg returned an error code (1)” 的错误 +我希望这篇快速文章可以帮助你修复 “E: Sub-process /usr/bin/dpkg returned an error code (1)” 的错误。 如果它对你有用,是那种方法?你是否设法使用其他方法修复此错误?如果是,请分享一下以帮助其他人解决此问题。 @@ -108,7 +110,7 @@ via: https://itsfoss.com/dpkg-returned-an-error-code-1/ 作者:[Abhishek Prakash][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c8ef9b1b3462677da5f341bd68070ac29355d2b6 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 19:02:38 +0800 Subject: [PATCH 132/736] PUB:20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md @geekpi https://linux.cn/article-10063-1.html --- ...cess usr bin dpkg returned an error code 1- Error in Ubuntu.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md (100%) diff --git a/translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md b/published/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md similarity index 100% rename from translated/tech/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md rename to published/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md From 1f021de7a8fb4ac87c1fee0be63dbf4382a14bcd Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Sat, 29 Sep 2018 21:35:58 +0800 Subject: [PATCH 133/736] dianbanjiu translating --- ...opcorn Time on Ubuntu 18.04 and Other Linux Distributions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md 
b/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md index 01fbef0292..578624aba4 100644 --- a/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md +++ b/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md @@ -1,4 +1,4 @@ -How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions +Translating by dianbanjiu How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions ====== **Brief: This tutorial shows you how to install Popcorn Time on Ubuntu and other Linux distributions. Some handy Popcorn Time tips have also been discussed.** From 86d122fe654c689636e884531e3909d6f92e3d62 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 23:05:51 +0800 Subject: [PATCH 134/736] PRF:20180828 A Cat Clone With Syntax Highlighting And Git Integration.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @z52527 恭喜你,完成了第一篇翻译贡献! 
--- ...Syntax Highlighting And Git Integration.md | 90 ++++++++----------- 1 file changed, 39 insertions(+), 51 deletions(-) diff --git a/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md index 4fd29cf70c..6d4f18f51c 100644 --- a/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md +++ b/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md @@ -1,127 +1,119 @@ -一种具有语法高亮和 Git 集成的 Cat 克隆命令——Bat +Bat:一种具有语法高亮和 Git 集成的 Cat 类命令 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/08/Bat-command-720x340.png) -在类UNIX系统中,我们使用 **‘cat’** 命令去打印和连接文件。使用cat命令, 我们能将文件目录打印到到标准输出,合成几个文件为一个目标文件,还有追加几个文件到目标文件中。今天,我偶然发现一个具有相似作用的命令叫做 **“Bat”** ,一个 cat 命令的克隆版,具有一些例如语法高亮、 git 集成和自动分页等非常酷的特性。在这个简略指南中,我们将讲述如何在 linux 中安装和使用 Bat 命令。 +在类 UNIX 系统中,我们使用 `cat` 命令去打印和连接文件。使用 `cat` 命令,我们能将文件目录打印到到标准输出,合成几个文件为一个目标文件,还有追加几个文件到目标文件中。今天,我偶然发现一个具有相似作用的命令叫做 “Bat” ,它是 `cat` 命令的一个克隆版,具有一些例如语法高亮、 Git 集成和自动分页等非常酷的特性。在这个简略指南中,我们将讲述如何在 Linux 中安装和使用 `bat` 命令。 ### 安装 -Bat 可以在 Arch Linux 的默认软件源中获取。 所以你可以使用 pacman 命令在任何 arch-based 的系统上来安装它。 +Bat 可以在 Arch Linux 的默认软件源中获取。 所以你可以使用 `pacman` 命令在任何基于 arch 的系统上来安装它。 + ``` $ sudo pacman -S bat - ``` -在 Debian,Ubuntu, Linux Mint 等系统中,从[**发布页面**][1] 下载 **.deb** 文件,然后用下面的命令来安装。 +在 Debian、Ubuntu、Linux Mint 等系统中,从其[发布页面][1]下载 **.deb** 文件,然后用下面的命令来安装。 + ``` $ sudo apt install gdebi - $ sudo gdebi bat_0.5.0_amd64.deb - ``` -对于其他系统,你也许需要从软件源编译并安装 确保你已经安装了 Rust 1.26 或者更高版本。 +对于其他系统,你也许需要从软件源编译并安装。确保你已经安装了 Rust 1.26 或者更高版本。 +然后运行以下命令来安装 Bat: - -然后运行以下命令来安装 Bat ``` $ cargo install bat - ``` -或者,你可以从 [**Linuxbrew**][2] 软件包管理中来安装它。 +或者,你可以从 [Linuxbrew][2] 软件包管理中来安装它。 + ``` $ brew install bat - ``` -### Bat 命令的使用 +### bat 命令的使用 -Bat 命令的使用与 cat 命令的使用非常相似。 +`bat` 命令的使用与 `cat` 命令的使用非常相似。 + +使用 `bat` 命令创建一个新的文件: -使用 Bat 命令创建一个新的文件: ``` $ bat > file.txt - ``` -使用 Bat 命令来查看文件内容,只需要: +使用 `bat` 
命令来查看文件内容,只需要: + ``` $ bat file.txt - ``` -你能同时查看多个文件,通过: +你能同时查看多个文件: + ``` $ bat file1.txt file2.txt - ``` 将多个文件的内容合并至一个单独文件中: + ``` $ bat file1.txt file2.txt file3.txt > document.txt - ``` -就像我之前提到的那样,除了浏览和编辑文件以外, Bat 命令有一些非常酷的特性。 +就像我之前提到的那样,除了浏览和编辑文件以外,`bat` 命令有一些非常酷的特性。 -Bat 命令支持大多数编程和标记语言的语法高亮syntax highlighting。比如,下面这个例子。我将使用 cat 和 bat 命令来展示 **reverse.py** 的内容。 +`bat` 命令支持大多数编程和标记语言的语法高亮syntax highlighting。比如,下面这个例子。我将使用 `cat` 和 `bat` 命令来展示 `reverse.py` 的内容。 ![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-and-cat-command-output-comparison.png) -你注意到区别了吗? cat 命令以纯文本格式显示文件的内容,而 bat 命令显示了语法高亮和整齐的文本对齐格式。更好了不是吗? +你注意到区别了吗? `cat` 命令以纯文本格式显示文件的内容,而 `bat` 命令显示了语法高亮和整齐的文本对齐格式。更好了不是吗? + +如果你只想显示行号(而没有表格)使用 `-n` 标记。 -如果你只想显示行号(而不是文本对齐)使用 -**-n** 标记。 ``` $ bat -n reverse.py - ``` -**Sample output:** ![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-3.png) -另一个 Bat 命令中值得注意的特性是它支持自动分页automatic paging。 它的意思是当文件的输出对于屏幕来说太大的时候,bat 命令自动将自己的输出内容传输到 **less** 命令中,所以你可以一页一页的查看输出内容。 +另一个 `bat` 命令中值得注意的特性是它支持自动分页automatic paging。 它的意思是当文件的输出对于屏幕来说太大的时候,`bat` 命令自动将自己的输出内容传输到 `less` 命令中,所以你可以一页一页的查看输出内容。 -让我给你看一个例子,使用cat命令查看跨多个页面的文件的内容时,提示快速跳至文件的最后一页,你看不到内容的开头和中间部分。 +让我给你看一个例子,使用 `cat` 命令查看跨多个页面的文件的内容时,提示符会快速跳至文件的最后一页,你看不到内容的开头和中间部分。 看一下下面的输出: ![](https://www.ostechnix.com/wp-content/uploads/2018/08/cat-command-output.png) -正如你所看到的,cat 命令显示了文章的最后一页。 +正如你所看到的,`cat` 命令显示了文章的最后一页。 + +所以你也许需要去将使用 `cat` 命令的输出传输到 `less` 命令中去从开头一页一页的查看内容。 -所以你也许需要去将使用 cat 命令的输出传输到 **less** 命令中去从开头一页一页的查看内容。 ``` $ cat reverse.py | less - ``` -现在你可以使用 ENTER 键去一页一页的查看输出。然而当你使用 bat 命令时这些都是不必要的。bat命令将自动传输跨越多个页面的文件的输出。 +现在你可以使用回车键去一页一页的查看输出。然而当你使用 `bat` 命令时这些都是不必要的。`bat` 命令将自动传输跨越多个页面的文件的输出。 ``` $ bat reverse.py - ``` -**Sample output:** - ![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-1.png) -现在按下 ENTER 键去往下一页。 +现在按下回车键去往下一页。 -bat 命令也支持 Git 集成**GIT integration**, -这样您就可以轻松查看/编辑Git存储库中的文件。 它与 Git 连接可以显示关于索引的修改。(看左栏) +`bat` 命令也支持 Git 集成**GIT 
integration**,这样您就可以轻松查看/编辑 Git 存储库中的文件。 它与 Git 连接可以显示关于索引的修改。(看左栏) ![](https://www.ostechnix.com/wp-content/uploads/2018/08/bat-command-output-2.png) -**定制 Bat** +### 定制 Bat 如果你不喜欢默认主题,你也可以修改它。Bat 同样有修改它的选项。 若要显示可用主题,只需运行: + ``` $ bat --list-themes 1337 @@ -133,30 +125,26 @@ Monokai Extended Bright Monokai Extended Light Monokai Extended Origin TwoDark - ``` 要使用其他主题,例如 TwoDark,请运行: + ``` $ bat --theme=TwoDark file.txt - ``` -如果你想永久改变主题,在你的 shells startup 文件中加入 `export BAT_THEME="TwoDark"`。 +如果你想永久改变主题,在你的 shells 启动文件中加入 `export BAT_THEME="TwoDark"`。 +`bat` 还可以选择修改输出的外观。使用 `--style` 选项来修改输出外观。仅显示 Git 的更改和行号但不显示网格和文件头,请使用 `--style=numbers,changes`。 -Bat还可以选择修改输出的外观。使用 `--style` 选项来修改输出外观。仅显示 Git 的更改和行号但不显示网格和文件头,请使用 `--style=numbers,changes`. - - -更多详细信息,请参阅 Bat 项目的 GitHub 库(链接在文末) +更多详细信息,请参阅 Bat 项目的 GitHub 库(链接在文末)。 最好,这就是目前的全部内容了。希望这篇文章会帮到你。更多精彩文章即将到来,敬请关注! 干杯! - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/ @@ -164,7 +152,7 @@ via: https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git- 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[z52527](https://github.com/z52527) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 16307895199e3572b2f18cb805db41fa4161982c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 23:07:10 +0800 Subject: [PATCH 135/736] PUB:20180828 A Cat Clone With Syntax Highlighting And Git Integration.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @z52527 本文首发地址: https://linux.cn/article-10064-1.html 你的 LCTT 专页地址: https://linux.cn/lctt/z52527 请到 LCTT 平台注册领取 LCCN https://lctt.linux.cn/ --- ...28 A Cat Clone With Syntax Highlighting And Git Integration.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename 
{translated/tech => published}/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md (100%) diff --git a/translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/published/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md similarity index 100% rename from translated/tech/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md rename to published/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md From 6554270832034c78167ca6b03c6c276a25068f23 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 23:20:13 +0800 Subject: [PATCH 136/736] PRF:20140805 How to Install Cinnamon Desktop on Ubuntu.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @dianbanjiu 恭喜您,完成了第一篇翻译贡献! --- ...w to Install Cinnamon Desktop on Ubuntu.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md b/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md index a9f0690ff7..cf4201ba77 100644 --- a/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md +++ b/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md @@ -1,29 +1,31 @@ -# 如何在 Ubuntu 上安装 Cinnamon 桌面环境 +如何在 Ubuntu 上安装 Cinnamon 桌面环境 +====== -**这篇教程将会为你展示如何在 Ubuntu 上安装 Cinnamon 桌面环境** +> 这篇教程将会为你展示如何在 Ubuntu 上安装 Cinnamon 桌面环境。 -[Cinnamon][1]是 [Linux Mint][2] 的默认桌面环境。不同于 Ubuntu 的 Unity 桌面环境,Cinnamon 通过底部面板和应用菜单等查看桌面信息的方式更加传统和优雅。由于 Cinnamon 桌面以及它类 Windows 的用户界面,许多桌面用户[相较于 Ubuntu 更喜欢 Linux Mint][3]。 +[Cinnamon][1] 是 [Linux Mint][2] 的默认桌面环境。不同于 Ubuntu 的 Unity 桌面环境,Cinnamon 是一个更加传统而优雅的桌面环境,其带有底部面板和应用菜单。由于 Cinnamon 桌面以及它类 Windows 的用户界面,许多桌面用户[相较于 Ubuntu 更喜欢 Linux Mint][3]。 -现在你无需[安装 Linux Mint][4] 就能够体验到 Cinnamon了。在这篇教程,我将会展示给你 **如何在 Ubuntu 18.04,16.04 和 14.04 上安装 Cinnamon**。 +现在你无需[安装 Linux Mint][4] 就能够体验到 Cinnamon了。在这篇教程,我将会展示给你如何在 Ubuntu 18.04,16.04 和 
14.04 上安装 Cinnamon。 -在 Ubuntu 上安装 Cinnamon 之前,有一些事情需要你注意。有时候,安装的额外桌面环境可能会与你当前的桌面环境有冲突。可能导致会话,应用程序或功能等的崩溃。这就是为什么你需要在做这个决定时谨慎一点的原因。 +在 Ubuntu 上安装 Cinnamon 之前,有一些事情需要你注意。有时候,安装的额外桌面环境可能会与你当前的桌面环境有冲突。可能导致会话、应用程序或功能等的崩溃。这就是为什么你需要在做这个决定时谨慎一点的原因。 + +### 如何在 Ubuntu 上安装 Cinnamon 桌面环境 ![如何在 Ubuntu 上安装 Cinnamon 桌面环境][5] -过去有一系列 Cinnamon team 为 Ubuntu 提供的官方 PPA,但现在都已经失效了。不过不用担心,还有一个非官方的 PPA,而且它运行的很完美。这个 PPA 里包含了最新的 Cinnamon 版本。 +过去有 Cinnamon 团队为 Ubuntu 提供的一系列的官方 PPA,但现在都已经失效了。不过不用担心,还有一个非官方的 PPA,而且它运行的很完美。这个 PPA 里包含了最新的 Cinnamon 版本。 ``` sudo add-apt-repository ppa:embrosyn/cinnamon sudo apt update && sudo apt install cinnamon - ``` -下载的大小大概是 150 MB(如果我没记错的话)。这其中提供的 Nemo(Cinnamon 的文件管理器,基于Nautilus)和 Cinnamon 控制中心。这些东西提供了一个更加接近于 Linux Mint 的感觉。 +下载的大小大概是 150 MB(如果我没记错的话)。这其中提供的 Nemo(Cinnamon 的文件管理器,基于 Nautilus)和 Cinnamon 控制中心。这些东西提供了一个更加接近于 Linux Mint 的感觉。 ### 在 Ubuntu 上使用 Cinnamon 桌面环境 -Cinnamon安装完成后,退出当前会话,在登陆界面,点击用户名旁边的 Ubuntu 符号: +Cinnamon 安装完成后,退出当前会话,在登录界面,点击用户名旁边的 Ubuntu 符号: ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Change_Desktop_Environment_Ubuntu.jpeg) @@ -31,11 +33,11 @@ Cinnamon安装完成后,退出当前会话,在登陆界面,点击用户名 ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Install_Cinnamon_Ubuntu.jpeg) -现在你应该已经登陆到有着 Cinnamon 桌面环境的 Ubuntu 中了。你还可以通过同样的方式再回到 Unity 桌面。这里有一张以 Cinnamon 做为桌面环境的 Ubuntu 桌面截图。 +现在你应该已经登录到有着 Cinnamon 桌面环境的 Ubuntu 中了。你还可以通过同样的方式再回到 Unity 桌面。这里有一张以 Cinnamon 做为桌面环境的 Ubuntu 桌面截图。 ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2014/08/Cinnamon_Ubuntu_1404.jpeg) -看起来是不是像极了 Linux Mint。此外,我并没有发现任何有关 Cinnamon 和 Unity 的兼容性问题。在 Unity 和 Cinnamon 来回切换,他们也依旧工作的很完美。 +看起来是不是像极了 Linux Mint。此外,我并没有发现任何有关 Cinnamon 和 Unity 的兼容性问题。在 Unity 和 Cinnamon 来回切换,它们也依旧工作的很完美。 #### 从 Ubuntu 卸载 Cinnamon @@ -43,14 +45,12 @@ Cinnamon安装完成后,退出当前会话,在登陆界面,点击用户名 ``` sudo apt-get install ppa-purge - ``` -安装完成之后,使用下面的命令去移除 PPA: +安装完成之后,使用下面的命令去移除该 PPA: ``` sudo ppa-purge ppa:embrosyn/cinnamon - ``` 更多的信息,我建议你去阅读 [如何从 Linux 移除 
PPA][6] 这篇文章。 @@ -64,7 +64,7 @@ via: https://itsfoss.com/install-cinnamon-on-ubuntu/ 作者:[Abhishek Prakash][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e8f79a20ecd857f617d940895c82b1be30174f05 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 23:21:09 +0800 Subject: [PATCH 137/736] PUB:20140805 How to Install Cinnamon Desktop on Ubuntu.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @dianbanjiu 本文首发地址: https://linux.cn/article-10065-1.html 您的 LCTT 专页: https://linux.cn/lctt/dianbanjiu 请到 LCTT 平台注册领取 LCCN https://lctt.linux.cn/ --- .../20140805 How to Install Cinnamon Desktop on Ubuntu.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20140805 How to Install Cinnamon Desktop on Ubuntu.md (100%) diff --git a/translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md b/published/20140805 How to Install Cinnamon Desktop on Ubuntu.md similarity index 100% rename from translated/tech/20140805 How to Install Cinnamon Desktop on Ubuntu.md rename to published/20140805 How to Install Cinnamon Desktop on Ubuntu.md From 63799cd4bd6bf7e5d60cc93c7acf7aa75656d90e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 23:53:47 +0800 Subject: [PATCH 138/736] PRF:20180516 Manipulating Directories in Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @way-ww 恭喜你完成了第一篇翻译贡献! 
--- ...80516 Manipulating Directories in Linux.md | 133 ++++++++++-------- 1 file changed, 74 insertions(+), 59 deletions(-) diff --git a/translated/tech/20180516 Manipulating Directories in Linux.md b/translated/tech/20180516 Manipulating Directories in Linux.md index 9e62973064..2a71d30db2 100644 --- a/translated/tech/20180516 Manipulating Directories in Linux.md +++ b/translated/tech/20180516 Manipulating Directories in Linux.md @@ -1,152 +1,167 @@ -在Linux上操作目录 +在 Linux 上操作目录 ====== ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/branches-238379_1920_0.jpg?itok=2PlNpsVu) -如果你不熟悉本系列(以及Linux),[请查看我们的第一部分][1]。在那篇文章中,我们通过Linux文件系统的树状结构,或者更确切地说以文件层次结构标准工作。我建议你仔细阅读,确保你理解自己能安全的做哪些操作。因为这一次,我将向你展示目录操作的魅力。 +> 让我们继续学习一下 Linux 文件系统的树形结构,并展示一下如何在其中创建你的目录。 + +如果你不熟悉本系列(以及 Linux),[请查看我们的第一部分][1]。在那篇文章中,我们贯穿了 Linux 文件系统的树状结构(或者更确切地说是文件层次结构标准File Hierarchy Standard,FHS)。我建议你仔细阅读,确保你理解自己能安全的做哪些操作。因为这一次,我将向你展示目录操作的魅力。 ### 新建目录 -在操作变得具有破坏性之前,让我们发挥创意创造。首先,打开一个终端窗口并使用命令mkdir创建一个新目录,如下所示: +在破坏之前,先让我们来创建。首先,打开一个终端窗口并使用命令 `mkdir` 创建一个新目录,如下所示: + ``` mkdir +``` +如果你只输入了目录名称,该目录将显示在您当前所在目录中。如果你刚刚打开一个终端,你当前位置为你的家目录。在这个例子中,我们展示了将要创建的目录与你当前所处位置的关系: ``` -如果你只输入了目录名称,该目录将显示在您当前所在目录中。如果你刚刚打开一个终端,你当前位置为你的家目录。下面这个例子,我们展示了将要创建的目录与你当前所处位置的关系: -``` -$ pwd #This tells you where you are now -- see our first tutorial +$ pwd # 告知你当前所在位置(参见第一部分) /home/ -$ mkdir newdirectory #Creates /home//newdirectory - +$ mkdir newdirectory # 创建 /home//newdirectory ``` -(注 你不用输入#后面的文本。#后面的文本为注释内容,用于解释发生了什么。它会被shell忽略,不会被执行). 
+ +(注:你不用输入 `#` 后面的文本。`#` 后面的文本为注释内容,用于解释发生了什么。它会被 shell 忽略,不会被执行)。 你可以在当前位置中已经存在的某个目录下创建新的目录,方法是在命令行中指定它: + ``` mkdir Documents/Letters +``` + +这将在 `Documents` 目录中创建 `Letters` 目录。 + +你还可以在路径中使用 `..` 在当前目录的上一级目录中创建目录。假设你进入刚刚创建的 `Documents/Letters/` 目录,并且想要创建`Documents/Memos/` 目录。你可以这样做: ``` -这将在Documents目录中创建Letters目录。 - -你还可以在路径中使用..在当前目录的上一级目录中创建目录。假设你进入刚刚创建的Documents/Letters/目录,并且想要创建Documents/Memos/目录。你可以这样做: -``` -cd Documents/Letters # Move into your recently created Letters/ directory +cd Documents/Letters # 进入到你刚刚创建的 Letters/ 目录 mkdir ../Memos - ``` + 同样,以上所有内容都是相对于你当前的位置做的。这就是使用了相对路径。 -你还可以使用目录的绝对路径:这意味着告诉mkdir命令将目录放在和根目录(/)有关的位置: + +你还可以使用目录的绝对路径:这意味着告诉 `mkdir` 命令将目录放在和根目录(`/`)有关的位置: + ``` mkdir /home//Documents/Letters - ``` -在上面的命令中将更改为你的用户名,这相当于从你的主目录执行mkdir Documents / Letters,通过使用绝对路径你可以在目录树中的任何位置完成这项工作。 -无论你使用相对路径还是绝对路径,只要命令成功执行,mkdir将静默的创建新目录,而没有任何明显的反馈。只有当遇到某种问题时,mkdir才会在你敲下[Enter]后打印一些反馈。 +在上面的命令中将 `` 更改为你的用户名,这相当于从你的主目录执行 `mkdir Documents/Letters`,通过使用绝对路径你可以在目录树中的任何位置完成这项工作。 + +无论你使用相对路径还是绝对路径,只要命令成功执行,`mkdir` 将静默的创建新目录,而没有任何明显的反馈。只有当遇到某种问题时,`mkdir`才会在你敲下回车键后打印一些反馈。 + +与大多数其他命令行工具一样,`mkdir` 提供了几个有趣的选项。 `-p` 选项特别有用,因为它允许你嵌套创建目录,即使目录不存在也可以。例如,要在 `Documents/` 中创建一个目录存放写给妈妈的信,你可以这样做: -与大多数其他命令行工具一样,mkdir提供了几个有趣的选项。 -p选项特别有用,因为它允许你创建嵌套目录,即使目录不存在也可以。例如,要在Documents /中创建一个目录存放写给妈妈的信,你可以这样做: ``` mkdir -p Documents/Letters/Family/Mom - ``` -And `mkdir` will create the whole branch of directories above _Mom/_ and also the directory _Mom/_ for you, regardless of whether any of the parent directories existed before you issued the command. + +`mkdir` 会创建 `Mom/` 之上的整个目录分支,并且也会创建 `Mom/` 目录,无论其上的目录在你敲入该命令时是否已经存在。 你也可以用空格来分隔目录名,来同时创建几个目录: -``` -mkdir Letters Memos Reports ``` -这将在当前目录下创建目录Letters,Memos和Reports。 +mkdir Letters Memos Reports +``` + +这将在当前目录下创建目录 `Letters`、`Memos` 和 `Reports`。 ### 目录名中可怕的空格 -... 
这带来了目录名称中关于空格的棘手问题。你能在目录名中使用空格吗?是的你可以。那么建议你使用空格吗?不,绝对不是。空格使一切变得更加复杂,并且可能是危险的操作。 +……这带来了目录名称中关于空格的棘手问题。你能在目录名中使用空格吗?是的你可以。那么建议你使用空格吗?不,绝对不建议。空格使一切变得更加复杂,并且可能是危险的操作。 + +假设您要创建一个名为 `letters mom/` 的目录。如果你不知道如何更好处理,你可能会输入: -假设您要创建一个名为letters mom的目录。如果你不知道如何更好处理,你可能会输入: ``` mkdir letters mom - ``` -但这是错误的!错误的!错误的!正如我们在上面介绍的,这将创建两个目录letters和mom,而不是一个目录letters mom。 + +但这是错误的!错误的!错误的!正如我们在上面介绍的,这将创建两个目录 `letters/` 和 `mom/`,而不是一个目录 `letters mom/`。 得承认这是一个小麻烦:你所要做的就是删除这两个目录并重新开始,这没什么大不了。 -可是等等!删除目录可是个危险的操作。想象一下,你确实使用图形工具[Dolphin][2]或[Nautilus][3]创建了目录letters mom。如果你突然决定从终端删除目录letters mom,并且您在同一目录下有另一个名为letters的目录,并且该目录中包含重要的文档,结果你为了删除错误的目录尝试了以下操作: +可是等等!删除目录可是个危险的操作。想象一下,你使用图形工具[Dolphin][2] 或 [Nautilus][3] 创建了目录 `letters mom/`。如果你突然决定从终端删除目录 `letters mom`,并且您在同一目录下有另一个名为 `letters` 的目录,并且该目录中包含重要的文档,结果你为了删除错误的目录尝试了以下操作: + ``` rmdir letters mom - ``` -你将会有风险删除目录letters。这里说“风险”,是因为幸运的是rmdir这条用于删除目录的指令,有一个内置的安全措施,如果你试图删除一个非空目录时,它会发出警告。 + +你将会有删除目录 letters 的风险。这里说“风险”,是因为幸运的是`rmdir` 这条用于删除目录的指令,有一个内置的安全措施,如果你试图删除一个非空目录时,它会发出警告。 但是,下面这个: + ``` rm -Rf letters mom - ``` -(注 这是删除目录及其内容的一种非常标准的方式)将完全删除letters目录,甚至永远不会告诉你刚刚发生了什么。 -rm命令用于删除文件和目录。当你将它与选项-R(递归删除)和-f(强制删除)一起使用时,它会深入到目录及其子目录中,删除它们包含的所有文件,然后删除子目录本身,然后它将删除所有顶层目录中的文件,再然后是删除目录本身。 +(注:这是删除目录及其内容的一种非常标准的方式)将完全删除 `letters/` 目录,甚至永远不会告诉你刚刚发生了什么。) + +`rm` 命令用于删除文件和目录。当你将它与选项 `-R`(递归删除)和 `-f`(强制删除)一起使用时,它会深入到目录及其子目录中,删除它们包含的所有文件,然后删除子目录本身,然后它将删除所有顶层目录中的文件,再然后是删除目录本身。 `rm -Rf` 是你必须非常小心处理的命令。 我的建议是,你可以使用下划线来代替空格,但如果你仍然坚持使用空格,有两种方法可以使它们起作用。您可以使用单引号或双引号,如下所示: + ``` mkdir 'letters mom' mkdir "letters dad" - ``` -或者,你可以转义空格。有些字符对shell有特殊意义。正如你所见,空格用于在命令行上分隔选项和参数。 “分离选项和参数”属于“特殊含义”范畴。当你想让shell忽略一个字符的特殊含义时,你需要转义,你可以在它前面放一个反斜杠(\)如: + +或者,你可以转义空格。有些字符对 shell 有特殊意义。正如你所见,空格用于在命令行上分隔选项和参数。 “分离选项和参数”属于“特殊含义”范畴。当你想让 shell 忽略一个字符的特殊含义时,你需要转义,你可以在它前面放一个反斜杠(`\`)如: + ``` mkdir letters\ mom mkdir letter\ dad - ``` -还有其他特殊字符需要转义,如撇号或单引号('),双引号(“)和&符号(&): + +还有其他特殊字符需要转义,如撇号或单引号(`'`),双引号(`“`)和&符号(`&`): + ``` mkdir mom\ \&\ dad\'s\ letters - ``` 
-我知道你在想什么:如果反斜杠有一个特殊的含义(即告诉shell它必须转义下一个字符),这也使它成为一个特殊的字符。然后,你将如何转义转义字符(\)? + +我知道你在想什么:如果反斜杠有一个特殊的含义(即告诉 shell 它必须转义下一个字符),这也使它成为一个特殊的字符。然后,你将如何转义转义字符(`\`)? 事实证明,你转义任何其他特殊字符都是同样的方式: -``` -mkdir special\\characters ``` -这将生成一个名为special\characters的目录。 +mkdir special\\characters +``` + +这将生成一个名为 `special\characters/` 的目录。 感觉困惑?当然。这就是为什么你应该避免在目录名中使用特殊字符,包括空格。 -以防误操作你可以参考下面这个记录特殊字符的列表。 +以防误操作你可以参考下面这个记录特殊字符的列表。(LCTT 译注:此处原文链接丢失。) ### 总结 * 使用 `mkdir ` 创建新目录。 * 使用 `rmdir ` 删除目录(仅在目录为空时才有效)。 - * 使用 `rm -Rf ` 来完全删除目录及其内容 - 请务必谨慎使用。 - * 使用相对路径创建相对于当前目录的目录: `mkdir newdir.`. - * 使用绝对路径创建相对于根目录(`/`)的目录: `mkdir /home//newdir` - * 使用 `..` 在当前目录的上级目录中创建目录: `mkdir ../newdir` - * 你可以通过在命令行上使用空格分隔目录名来创建多个目录: `mkdir onedir twodir threedir` - * 同时创建多个目录时,你可以混合使用相对路径和绝对路径: `mkdir onedir twodir /home//threedir` + * 使用 `rm -Rf ` 来完全删除目录及其内容 —— 请务必谨慎使用。 + * 使用相对路径创建相对于当前目录的目录: `mkdir newdir`。 + * 使用绝对路径创建相对于根目录(`/`)的目录: `mkdir /home//newdir`。 + * 使用 `..` 在当前目录的上级目录中创建目录: `mkdir ../newdir`。 + * 你可以通过在命令行上使用空格分隔目录名来创建多个目录: `mkdir onedir twodir threedir`。 + * 同时创建多个目录时,你可以混合使用相对路径和绝对路径: `mkdir onedir twodir /home//threedir`。 * 在目录名称中使用空格和特殊字符真的会让你很头疼,你最好不要那样做。 +有关更多信息,您可以查看 `mkdir`、`rmdir` 和 `rm` 的手册: - -有关更多信息,您可以查看`mkdir`, `rmdir` 和 `rm`的手册: ``` man mkdir man rmdir man rm - ``` -要退出手册页,请按键盘[q]键。 + +要退出手册页,请按键盘 `q` 键。 ### 下次预告 -在下一部分中,你将学习如何创建,修改和删除文件,以及你需要了解的有关权限和特权的所有信息! +在下一部分中,你将学习如何创建、修改和删除文件,以及你需要了解的有关权限和特权的所有信息! 
-通过Linux Foundation和edX免费提供的["Introduction to Linux" ][4]课程了解有关Linux的更多信息。 +通过 Linux 基金会和 edX 免费提供的[“Introduction to Linux”][4]课程了解有关Linux的更多信息。 -------------------------------------------------------------------------------- @@ -155,12 +170,12 @@ via: https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux 作者:[Paul Brown][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[way-ww](https://github.com/way-ww) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.linux.com/users/bro66 -[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/4/linux-filesystem-explained +[1]:https://linux.cn/article-9798-1.html [2]:https://userbase.kde.org/Dolphin [3]:https://projects-old.gnome.org/nautilus/screenshots.html [4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From 73a6ebc827ea884abcc03d1914cae1211ff75ec9 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 29 Sep 2018 23:54:58 +0800 Subject: [PATCH 139/736] PUB:20180516 Manipulating Directories in Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @way-ww 本文首发地址: https://linux.cn/article-10066-1.html 您的 LCTT 专页:https://linux.cn/lctt/way-ww 请到 LCTT 平台注册领取 LCCN :https://lctt.linux.cn/ --- .../20180516 Manipulating Directories in Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180516 Manipulating Directories in Linux.md (100%) diff --git a/translated/tech/20180516 Manipulating Directories in Linux.md b/published/20180516 Manipulating Directories in Linux.md similarity index 100% rename from translated/tech/20180516 Manipulating Directories in Linux.md rename to published/20180516 Manipulating Directories in Linux.md From 172dce727e6d3cac7155c447d1567f214049585a Mon Sep 17 00:00:00 2001 From: way-ww 
<40491614+way-ww@users.noreply.github.com> Date: Sat, 29 Sep 2018 23:57:26 +0800 Subject: [PATCH 140/736] Delete 20180917 4 scanning tools for the Linux desktop.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 删除源文件 --- ... 4 scanning tools for the Linux desktop.md | 74 ------------------- 1 file changed, 74 deletions(-) delete mode 100644 sources/tech/20180917 4 scanning tools for the Linux desktop.md diff --git a/sources/tech/20180917 4 scanning tools for the Linux desktop.md b/sources/tech/20180917 4 scanning tools for the Linux desktop.md deleted file mode 100644 index 7da24d3a90..0000000000 --- a/sources/tech/20180917 4 scanning tools for the Linux desktop.md +++ /dev/null @@ -1,74 +0,0 @@ -Translating by way-ww - -4 scanning tools for the Linux desktop -====== -Go paperless by driving your scanner with one of these open source applications. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-blue.png?itok=AsIMZ9ga) - -While the paperless world isn't here quite yet, more and more people are getting rid of paper by scanning documents and photos. Having a scanner isn't enough to do the deed, though. You need software to drive that scanner. - -But the catch is many scanner makers don't have Linux versions of the software they bundle with their devices. For the most part, that doesn't matter. Why? Because there are good scanning applications available for the Linux desktop. They work with a variety of scanners and do a good job. - -Let's take a look at four simple but flexible open source Linux scanning tools. I've used each of these tools (and even wrote about three of them [back in 2014][1]) and found them very useful. You might, too. - -### Simple Scan - -One of my longtime favorites, [Simple Scan][2] is small, quick, efficient, and easy to use. 
If you've seen it before, that's because Simple Scan is the default scanner application on the GNOME desktop, as well as for a number of Linux distributions. - -Scanning a document or photo takes one click. After scanning something, you can rotate or crop it and save it as an image (JPEG or PNG only) or as a PDF. That said, Simple Scan can be slow, even if you scan documents at lower resolutions. On top of that, Simple Scan uses a set of global defaults for scanning, like 150dpi for text and 300dpi for photos. You need to go into Simple Scan's preferences to change those settings. - -If you've scanned something with more than a couple of pages, you can reorder the pages before you save. And if necessary—say you're submitting a signed form—you can email from within Simple Scan. - -### Skanlite - -In many ways, [Skanlite][3] is Simple Scan's cousin in the KDE world. Skanlite has few features, but it gets the job done nicely. - -The software has options that you can configure, including automatically saving scanned files, setting the quality of the scan, and identifying where to save your scans. Skanlite can save to these image formats: JPEG, PNG, BMP, PPM, XBM, and XPM. - -One nifty feature is the software's ability to save portions of what you've scanned to separate files. That comes in handy when, say, you want to excise someone or something from a photo. - -### Gscan2pdf - -Another old favorite, [gscan2pdf][4] might be showing its age, but it still packs a few more features than some of the other applications mentioned here. Even so, gscan2pdf is still comparatively light. - -In addition to saving scans in various image formats (JPEG, PNG, and TIFF), gscan2pdf also saves them as PDF or [DjVu][5] files. You can set the scan's resolution, whether it's black and white or color, and paper size before you click the Scan button. That beats going into gscan2pdf's preferences every time you want to change any of those settings. You can also rotate, crop, and delete pages. 
- -While none of those features are truly killer, they give you a bit more flexibility. - -### GIMP - -You probably know [GIMP][6] as an image-editing tool. But did you know you can use it to drive your scanner? - -You'll need to install the [XSane][7] scanner software and the GIMP XSane plugin. Both of those should be available from your Linux distro's package manager. From there, select File > Create > Scanner/Camera. From there, click on your scanner and then the Scan button. - -If that's not your cup of tea, or if it doesn't work, you can combine GIMP with a plugin called [QuiteInsane][8]. With either plugin, GIMP becomes a powerful scanning application that lets you set a number of options like whether to scan in color or black and white, the resolution of the scan, and whether or not to compress results. You can also use GIMP's tools to touch up or apply effects to your scans. This makes it great for scanning photos and art. - -### Do they really just work? - -All of this software works well for the most part and with a variety of hardware. I've used them with several multifunction printers that I've owned over the years—whether connecting using a USB cable or over wireless. - -You might have noticed that I wrote "works well for the most part" in the previous paragraph. I did run into one exception: an inexpensive Canon multifunction printer. None of the software I used could detect it. I had to download and install Canon's Linux scanner software, which did work. - -What's your favorite open source scanning tool for Linux? Share your pick by leaving a comment. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/linux-scanner-tools - -作者:[Scott Nesbitt][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/scottnesbitt -[1]: https://opensource.com/life/14/8/3-tools-scanners-linux-desktop -[2]: https://gitlab.gnome.org/GNOME/simple-scan -[3]: https://www.kde.org/applications/graphics/skanlite/ -[4]: http://gscan2pdf.sourceforge.net/ -[5]: http://en.wikipedia.org/wiki/DjVu -[6]: http://www.gimp.org/ -[7]: https://en.wikipedia.org/wiki/Scanner_Access_Now_Easy#XSane -[8]: http://sourceforge.net/projects/quiteinsane/ From 4180fbf9c38145e187aa5effbf6a1cf2b6b33329 Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Sun, 30 Sep 2018 00:00:06 +0800 Subject: [PATCH 141/736] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90201?= =?UTF-8?q?80917=204=20scanning=20tools=20for=20the=20Linux=20desktop.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 --- ... 
4 scanning tools for the Linux desktop.md | 72 +++++++++++++++++++ 1 file changed, 72 insertions(+) create mode 100644 translated/tech/20180917 4 scanning tools for the Linux desktop.md diff --git a/translated/tech/20180917 4 scanning tools for the Linux desktop.md b/translated/tech/20180917 4 scanning tools for the Linux desktop.md new file mode 100644 index 0000000000..89aaad3a89 --- /dev/null +++ b/translated/tech/20180917 4 scanning tools for the Linux desktop.md @@ -0,0 +1,72 @@ +用于Linux桌面的4个扫描工具 +====== +使用其中一个开源软件驱动扫描仪来实现无纸化办公。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-blue.png?itok=AsIMZ9ga) + +尽管无纸化世界还没有到来,但越来越多的人通过扫描文件和照片来摆脱纸张的束缚。不过,仅仅拥有一台扫描仪还不足够。你还需要软件来驱动扫描仪。 + +然而问题是许多扫描仪制造商没有将Linux版本的软件与他们的设备适配在一起。不过在大多数情况下,即使没有也没多大关系。因为在linux桌面上已经有很好的扫描软件了。它们能够与许多扫描仪配合很好的完成工作。 + +现在就让我们看看四个简单又灵活的开源Linux扫描工具。我已经使用过了下面这些工具(甚至[早在2014年][1]写过关于其中三个工具的文章)并且觉得它们非常有用。希望你也会这样认为。 + +### Simple Scan + +这是我最喜欢的一个软件之一,[Simple Scan][2]小巧,迅速,高效,且易于使用。如果你以前见过它,那是因为Simple Scan是GNOME桌面上的默认扫描程序应用程序,也是许多Linux发行版的默认扫描程序。 + +你只需单击一下就能扫描文档或照片。扫描过某些内容后,你可以旋转或裁剪它并将其另存为图像(仅限JPEG或PNG格式)或PDF格式。也就是说Simple Scan可能会很慢,即使你用较低分辨率来扫描文档。最重要的是,Simple Scan在扫描时会使用一组全局的默认值,例如150dpi用于文本,300dpi用于照片。你需要进入Simple Scan的首选项才能更改这些设置。 + +如果你扫描的内容超过了几页,还可以在保存之前重新排序页面。如果有必要的话 - 假如你正在提交已签名的表格 - 你可以使用Simple Scan来发送电子邮件。 + +### Skanlite + +从很多方面来看,[Skanlite][3]是Simple Scan在KDE世界中的表兄弟。虽然Skanlite功能很少,但它可以出色的完成工作。 + +你可以自己配置这个软件的选项,包括自动保存扫描文件,设置扫描质量以及确定扫描保存位置。 Skanlite可以保存为以下图像格式:JPEG,PNG,BMP,PPM,XBM和XPM。 + +其中一个很棒的功能是Skanlite能够将你扫描的部分内容保存到单独的文件中。当你想要从照片中删除某人或某物时,这就派上用场了。 + +### Gscan2pdf + +这是我另一个最爱的老软件,[gscan2pdf][4]可能会显得很老旧了,但它仍然包含一些比这里提到的其他软件更好的功能。即便如此,gscan2pdf仍然显得很轻便。 + +除了以各种图像格式(JPEG,PNG和TIFF)保存扫描外,gscan2pdf还将它们保存为PDF或[DjVu][5]文件。你可以在单击“扫描”按钮之前设置扫描的分辨率,无论是黑白,彩色还是纸张大小,每当你想要更改任何这些设置时,这都会进入gscan2pdf的首选项。你还可以旋转,裁剪和删除页面。 + +虽然这些都不是真正的杀手级功能,但它们会给你带来更多的灵活性。 + +### GIMP + +你大概会知道[GIMP][6]是一个图像编辑工具。但是你恐怕不知道可以用它来驱动你的扫描仪吧。 + +你需要安装[XSane][7]扫描软件和GIMP 
XSane插件。这两个应该都可以从你的Linux发行版的包管理器中获得。在软件里,选择文件>创建>扫描仪/相机。单击扫描仪,然后单击扫描按钮即可进行扫描。 + +如果这不是你想要的,或者它不起作用,你可以将GIMP和一个叫作[QuiteInsane][8]的插件结合起来。使用任一插件,都能使GIMP成为一个功能强大的扫描软件,它可以让你设置许多选项,如是否扫描彩色或黑白,扫描的分辨率,以及是否压缩结果等。你还可以使用GIMP的工具来修改或应用扫描后的效果。这使得它非常适合扫描照片和艺术品。 + +### 它们真的能够工作吗? + +所有的这些软件在大多数时候都能够在各种硬件上运行良好。我将它们与我过去几年来拥有的多台多功能打印机一起使用 - 无论是使用USB线连接还是通过无线连接。 + +你可能已经注意到我在前一段中写过“大多数时候运行良好”。这是因为我确实遇到过一个例外:一个便宜的canon多功能打印机。我使用的软件都没有检测到它。最后我不得不下载并安装canon的Linux扫描仪软件才使它工作。 + +你最喜欢的Linux开源扫描工具是什么?发表评论,分享你的选择。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/linux-scanner-tools + +作者:[Scott Nesbitt][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[way-ww](https://github.com/way-ww) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[1]: https://opensource.com/life/14/8/3-tools-scanners-linux-desktop +[2]: https://gitlab.gnome.org/GNOME/simple-scan +[3]: https://www.kde.org/applications/graphics/skanlite/ +[4]: http://gscan2pdf.sourceforge.net/ +[5]: http://en.wikipedia.org/wiki/DjVu +[6]: http://www.gimp.org/ +[7]: https://en.wikipedia.org/wiki/Scanner_Access_Now_Easy#XSane +[8]: http://sourceforge.net/projects/quiteinsane/ From 6025587ce384fa17242441ee5d8b01f55f7bb2bf Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 30 Sep 2018 09:01:23 +0800 Subject: [PATCH 142/736] translated --- ... With browser-mpris2 (Chrome Extension).md | 76 ------------------- ... 
With browser-mpris2 (Chrome Extension).md | 76 +++++++++++++++++++ 2 files changed, 76 insertions(+), 76 deletions(-) delete mode 100644 sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md create mode 100644 translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md diff --git a/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md deleted file mode 100644 index acc8f56e0c..0000000000 --- a/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md +++ /dev/null @@ -1,76 +0,0 @@ -translating---geekpi - -Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension) -====== -A Unity feature that I miss (it only actually worked for a short while though) is automatically getting player controls in the Ubuntu Sound Indicator when visiting a website like YouTube in a web browser, so you could pause or stop the video directly from the top bar, as well as see the video / song information and a preview. - -This Unity feature is long dead, but I was searching for something similar for Gnome Shell and I came across **[browser-mpris2][1], an extension that implements a MPRIS v2 interface for Google Chrome / Chromium, which currently only supports YouTube** , and I thought there might be some Linux Uprising readers who'll like this. - -**The extension also works with Chromium-based web browsers like Opera and Vivaldi.** -** -** **browser-mpris2 also supports Firefox but since loading extensions via about:debugging is temporary, and this is needed for browser-mpris2, this article doesn't include Firefox instructions. 
The developer[intends][2] to submit the extension to the Firefox addons website in the future.** - -**Using this Chrome extension you get YouTube media player controls (play, pause, stop and seeking) in MPRIS2-capable applets**. For example, if you use Gnome Shell, you get YouTube media player controls as a permanent notification or, you can use an extension like Media Player Indicator for this. In Cinnamon / Linux Mint with Cinnamon, it shows up in the Sound Applet. - -**It didn't work for me on Unity** , I'm not sure why. I didn't try this extension with other MPRIS2-capable applets available in various desktop environments (KDE, Xfce, MATE, etc.). If you give it a try, let us know if it works with your desktop environment / MPRIS2 enabled applet. - -Here is a screenshot with [Media Player Indicator][3] displaying information about the currently playing YouTube video, along with its controls (play/pause, stop and seeking), on Ubuntu 18.04 with Gnome Shell and Chromium browser: - -![](https://1.bp.blogspot.com/-rsc4FpYBSrI/W3VtPphfdOI/AAAAAAAABXY/YfKV6pBncs0LAwTwYSS0tKRJADDfZDBfwCLcBGAs/s640/browser-mpris2-gnome-shell-sound-indicator.png) - -And in Linux Mint 19 Cinnamon with its default sound applet and Chromium browser: - -![](https://2.bp.blogspot.com/-I2DuYetv7eQ/W3VtUUcg26I/AAAAAAAABXc/Tv-RemkyO60k6CC_mYUxewG-KfVgpFefACLcBGAs/s1600/browser-mpris2-cinnamon-linux-mint.png) - -### How to install browser-mpris2 for Google Chrome / Chromium - -**1\. Install Git if you haven't already.** - -In Debian / Ubuntu / Linux Mint, use this command to install git: -``` -sudo apt install git - -``` - -**2\. Download and install the[browser-mpris2][1] required files.** - -The commands below clone the browser-mpris2 Git repository and install the chrome-mpris2 file to `/usr/local/bin/` (run the "git clone..." 
command in a folder where you can continue to keep the browser-mpris2 folder because you can't remove it, as it will be used by Chrome / Chromium): -``` -git clone https://github.com/otommod/browser-mpris2 -sudo install browser-mpris2/native/chrome-mpris2 /usr/local/bin/ - -``` - -**3\. Load the extension in Chrome / Chromium-based web browsers.** - -![](https://3.bp.blogspot.com/-yEoNFj2wAXM/W3Vvewa979I/AAAAAAAABXo/dmltlNZk3J4sVa5jQenFFrT28ecklY92QCLcBGAs/s640/browser-mpris2-chrome-developer-load-unpacked.png) - -Open Google Chrome, Chromium, Opera or Vivaldi web browsers, go to the Extensions page (enter `chrome://extensions` in the URL bar), enable `Developer mode` using the toggle available in the top right-hand side of the screen, then select `Load Unpacked` and select the chrome-mpris2 directory (make sure to not select a subfolder). - -Copy the extension ID and save it because you'll need it later (it's something like: `emngjajgcmeiligomkgpngljimglhhii` but it's different for you so make sure to use the ID from your computer!) . - -**4\. Run** `install-chrome.py` (from the `browser-mpris2/native` folder), specifying the extension id and chrome-mpris2 path. - -Use this command in a terminal (replace `REPLACE-THIS-WITH-EXTENSION-ID` with the browser-mpris2 extension ID displayed under `chrome://extensions` from the previous step) to install this extension: -``` -browser-mpris2/native/install-chrome.py REPLACE-THIS-WITH-EXTENSION-ID /usr/local/bin/chrome-mpris2 - -``` - -You only need to run this command once, there's no need to add it to startup or anything like that. Any YouTube video you play in Google Chrome or Chromium browsers should show up in whatever MPRISv2 applet you're using. There's no need to restart the web browser. 
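An aside on what that last step actually does: `install-chrome.py` registers a *native messaging host* manifest, which is how Chrome / Chromium learns that the extension with your ID is allowed to spawn `/usr/local/bin/chrome-mpris2` and exchange JSON messages with it over stdio. Below is a minimal Python sketch of that registration step; the manifest `name` and `description` values are illustrative assumptions, not necessarily what browser-mpris2's own script writes.

```python
# Sketch of the idea behind install-chrome.py: register a "native
# messaging host" manifest so the extension with the given ID may spawn
# chrome-mpris2 and talk to it over stdin/stdout.
# The "name" and "description" values below are illustrative only.
import json
import os

def write_manifest(extension_id, host_path, host_dir):
    """Write a Chrome native-messaging-host manifest and return its path."""
    manifest = {
        "name": "org.mpris.browser_mpris2",        # illustrative host name
        "description": "MPRIS2 bridge (sketch)",   # illustrative
        "path": host_path,                         # the chrome-mpris2 binary
        "type": "stdio",                           # JSON over stdin/stdout
        "allowed_origins": ["chrome-extension://%s/" % extension_id],
    }
    os.makedirs(host_dir, exist_ok=True)
    out = os.path.join(host_dir, manifest["name"] + ".json")
    with open(out, "w") as f:
        json.dump(manifest, f, indent=2)
    return out

# On Linux, Chrome looks for user-level manifests in
# ~/.config/google-chrome/NativeMessagingHosts, and Chromium in
# ~/.config/chromium/NativeMessagingHosts.
```

This also explains the symptom if things break later: reinstalling the extension gives it a new ID, the `allowed_origins` entry no longer matches, and the player controls silently stop appearing until the registration is redone with the new ID.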
- --------------------------------------------------------------------------------- - -via: https://www.linuxuprising.com/2018/08/add-youtube-player-controls-to-your.html - -作者:[Logix][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/118280394805678839070 -[1]:https://github.com/otommod/browser-mpris2 -[2]:https://github.com/otommod/browser-mpris2/issues/11 -[3]:https://extensions.gnome.org/extension/55/media-player-indicator/ diff --git a/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md new file mode 100644 index 0000000000..a72b4cdd8d --- /dev/null +++ b/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md @@ -0,0 +1,76 @@ +使用 browser-mpris2(Chrome 扩展)将 YouTube 播放器控件添加到 Linux 桌面 +====== +一个我怀念的 Unity 功能(虽然只使用了一小段时间)是在 Web 浏览器中访问 YouTube 等网站时自动获取 Ubuntu 声音指示器中的播放器控件,因此你可以直接从顶部栏暂停或停止视频,以及浏览视频/歌曲信息和预览。 + +这个 Unity 功能已经消失很久了,但我正在为 Gnome Shell 寻找类似的东西,然后我遇到了 **[browser-mpris2][1],这是一个为 Google Chrome/Chromium 实现 MPRIS v2 接口的扩展,目前只支持 YouTube**,我想可能会有一些 Linux Uprising 的读者会喜欢这个。 + +**该扩展还适用于 Opera 和 Vivaldi 等基于 Chromium 的 Web 浏览器。** +** +** **browser-mpris2 也支持 Firefox,但因为通过 about:debugging 加载扩展是临时的,而这是 browser-mpris2 所需要的,因此本文不包括 Firefox 的指导。开发人员[打算][2]将来将扩展提交到 Firefox 插件网站上。** + +**使用此 Chrome 扩展,你可以在支持 MPRIS2 的 applets 中获得 YouTube 媒体播放器控件(播放、暂停、停止和查找 +)**。例如,如果你使用 Gnome Shell,你可将 YouTube 媒体播放器控件作为永久通知,或者你可以使用 Media Player Indicator 之类的扩展来实现此目的。在 Cinnamon /Linux Mint with Cinnamon 中,它出现在声音 Applet 中。 + +**我无法在 Unity 上用它**,我不知道为什么。我没有在不同桌面环境(KDE、Xfce、MATE 等)中使用其他支持 MPRIS2 的 applet 尝试此扩展。如果你尝试过,请告诉我们它是否适用于你的桌面环境/支持 MPRIS2 的 applet。 
+ +以下是在使用 Gnome Shell 的 Ubuntu 18.04 并装有 Chromium 浏览器的[媒体播放器指示器][3]的截图,其中显示了有关当前正在播放的 YouTube 视频的信息及其控件(播放/暂停,停止和查找): + +![](https://1.bp.blogspot.com/-rsc4FpYBSrI/W3VtPphfdOI/AAAAAAAABXY/YfKV6pBncs0LAwTwYSS0tKRJADDfZDBfwCLcBGAs/s640/browser-mpris2-gnome-shell-sound-indicator.png) + +在 Linux Mint 19 Cinnamon 中使用其默认声音 applet 和 Chromium 浏览器的截图: + + +![](https://2.bp.blogspot.com/-I2DuYetv7eQ/W3VtUUcg26I/AAAAAAAABXc/Tv-RemkyO60k6CC_mYUxewG-KfVgpFefACLcBGAs/s1600/browser-mpris2-cinnamon-linux-mint.png) + +### 如何为 Google Chrom/Chromium安装 browser-mpris2 + +**1\. 如果你还没有安装 Git 就安装它** + +在 Debian/Ubuntu/Linux Mint 中,使用此命令安装 git: +``` +sudo apt install git + +``` + +**2\. 下载并安装 [browser-mpris2][1] 所需文件。** + +下面的命令克隆了 browser-mpris2 的 Git 仓库并将 chrome-mpris2 安装到 `/usr/local/bin/`(在一个你可以保存 browser-mpris2 文件夹的地方运行 “git clone ...” 命令,由于它会被 Chrome/Chromium 使用,你不能删除它): +``` +git clone https://github.com/otommod/browser-mpris2 +sudo install browser-mpris2/native/chrome-mpris2 /usr/local/bin/ + +``` + +**3\. 在基于 Chrome/Chromium 的 Web 浏览器中加载此扩展。** + +![](https://3.bp.blogspot.com/-yEoNFj2wAXM/W3Vvewa979I/AAAAAAAABXo/dmltlNZk3J4sVa5jQenFFrT28ecklY92QCLcBGAs/s640/browser-mpris2-chrome-developer-load-unpacked.png) + +打开 Goog​​le Chrome、Chromium、Opera 或 Vivaldi 浏览器,进入 Extensions 页面(在 URL 栏中输入 `chrome://extensions`),在屏幕右上角切换到`开发者模式`。然后选择 `Load Unpacked` 并选择 chrome-mpris2 目录(确保没有选择子文件夹)。 + +复制扩展 ID 并保存它,因为你以后需要它(它类似于这样:`emngjajgcmeiligomkgpngljimglhhii`,但它会与你的不一样,因此确保使用你计算机中的 ID!)。 + +**4\. 
运行 **`install-chrome.py`**(在 `browser-mpris2/native` 文件夹中),指定扩展 id 和 chrome-mpris2 路径。 + +在终端中使用此命令(将 `REPLACE-THIS-WITH-EXTENSION-ID` 替换为上一步中 `chrome://extensions` 下显示的 browser-mpris2 扩展 ID)安装此扩展: +``` +browser-mpris2/native/install-chrome.py REPLACE-THIS-WITH-EXTENSION-ID /usr/local/bin/chrome-mpris2 + +``` + +你只需要运行此命令一次,无需将其添加到启动或其他类似的地方。你在 Google Chrome 或 Chromium 浏览器中播放的任何 YouTube 视频都应显示在你正在使用的任何 MPRISv2 applet 中。你无需重启 Web 浏览器。 + +-------------------------------------------------------------------------------- + +via: https://www.linuxuprising.com/2018/08/add-youtube-player-controls-to-your.html + +作者:[Logix][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/118280394805678839070 +[1]:https://github.com/otommod/browser-mpris2 +[2]:https://github.com/otommod/browser-mpris2/issues/11 +[3]:https://extensions.gnome.org/extension/55/media-player-indicator/ From f1095a93c8b69b889443c1211be4277ced04d6b9 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 30 Sep 2018 09:08:31 +0800 Subject: [PATCH 143/736] translating --- sources/tech/20180928 10 handy Bash aliases for Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180928 10 handy Bash aliases for Linux.md b/sources/tech/20180928 10 handy Bash aliases for Linux.md index b69b2f8aab..7ae1070997 100644 --- a/sources/tech/20180928 10 handy Bash aliases for Linux.md +++ b/sources/tech/20180928 10 handy Bash aliases for Linux.md @@ -1,3 +1,5 @@ +translating---geekpi + 10 handy Bash aliases for Linux ====== Get more efficient by using condensed versions of long Bash commands. From 37a2496ab044261ba281a46df30f9e5b39232437 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 30 Sep 2018 09:18:10 +0800 Subject: [PATCH 144/736] translated (#10442) --- ... 
With browser-mpris2 (Chrome Extension).md | 76 ------------------- ... With browser-mpris2 (Chrome Extension).md | 76 +++++++++++++++++++ 2 files changed, 76 insertions(+), 76 deletions(-) delete mode 100644 sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md create mode 100644 translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md diff --git a/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md deleted file mode 100644 index acc8f56e0c..0000000000 --- a/sources/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md +++ /dev/null @@ -1,76 +0,0 @@ -translating---geekpi - -Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension) -====== -A Unity feature that I miss (it only actually worked for a short while though) is automatically getting player controls in the Ubuntu Sound Indicator when visiting a website like YouTube in a web browser, so you could pause or stop the video directly from the top bar, as well as see the video / song information and a preview. - -This Unity feature is long dead, but I was searching for something similar for Gnome Shell and I came across **[browser-mpris2][1], an extension that implements a MPRIS v2 interface for Google Chrome / Chromium, which currently only supports YouTube** , and I thought there might be some Linux Uprising readers who'll like this. - -**The extension also works with Chromium-based web browsers like Opera and Vivaldi.** -** -** **browser-mpris2 also supports Firefox but since loading extensions via about:debugging is temporary, and this is needed for browser-mpris2, this article doesn't include Firefox instructions. 
The developer[intends][2] to submit the extension to the Firefox addons website in the future.** - -**Using this Chrome extension you get YouTube media player controls (play, pause, stop and seeking) in MPRIS2-capable applets**. For example, if you use Gnome Shell, you get YouTube media player controls as a permanent notification or, you can use an extension like Media Player Indicator for this. In Cinnamon / Linux Mint with Cinnamon, it shows up in the Sound Applet. - -**It didn't work for me on Unity** , I'm not sure why. I didn't try this extension with other MPRIS2-capable applets available in various desktop environments (KDE, Xfce, MATE, etc.). If you give it a try, let us know if it works with your desktop environment / MPRIS2 enabled applet. - -Here is a screenshot with [Media Player Indicator][3] displaying information about the currently playing YouTube video, along with its controls (play/pause, stop and seeking), on Ubuntu 18.04 with Gnome Shell and Chromium browser: - -![](https://1.bp.blogspot.com/-rsc4FpYBSrI/W3VtPphfdOI/AAAAAAAABXY/YfKV6pBncs0LAwTwYSS0tKRJADDfZDBfwCLcBGAs/s640/browser-mpris2-gnome-shell-sound-indicator.png) - -And in Linux Mint 19 Cinnamon with its default sound applet and Chromium browser: - -![](https://2.bp.blogspot.com/-I2DuYetv7eQ/W3VtUUcg26I/AAAAAAAABXc/Tv-RemkyO60k6CC_mYUxewG-KfVgpFefACLcBGAs/s1600/browser-mpris2-cinnamon-linux-mint.png) - -### How to install browser-mpris2 for Google Chrome / Chromium - -**1\. Install Git if you haven't already.** - -In Debian / Ubuntu / Linux Mint, use this command to install git: -``` -sudo apt install git - -``` - -**2\. Download and install the[browser-mpris2][1] required files.** - -The commands below clone the browser-mpris2 Git repository and install the chrome-mpris2 file to `/usr/local/bin/` (run the "git clone..." 
command in a folder where you can continue to keep the browser-mpris2 folder because you can't remove it, as it will be used by Chrome / Chromium): -``` -git clone https://github.com/otommod/browser-mpris2 -sudo install browser-mpris2/native/chrome-mpris2 /usr/local/bin/ - -``` - -**3\. Load the extension in Chrome / Chromium-based web browsers.** - -![](https://3.bp.blogspot.com/-yEoNFj2wAXM/W3Vvewa979I/AAAAAAAABXo/dmltlNZk3J4sVa5jQenFFrT28ecklY92QCLcBGAs/s640/browser-mpris2-chrome-developer-load-unpacked.png) - -Open Google Chrome, Chromium, Opera or Vivaldi web browsers, go to the Extensions page (enter `chrome://extensions` in the URL bar), enable `Developer mode` using the toggle available in the top right-hand side of the screen, then select `Load Unpacked` and select the chrome-mpris2 directory (make sure to not select a subfolder). - -Copy the extension ID and save it because you'll need it later (it's something like: `emngjajgcmeiligomkgpngljimglhhii` but it's different for you so make sure to use the ID from your computer!) . - -**4\. Run** `install-chrome.py` (from the `browser-mpris2/native` folder), specifying the extension id and chrome-mpris2 path. - -Use this command in a terminal (replace `REPLACE-THIS-WITH-EXTENSION-ID` with the browser-mpris2 extension ID displayed under `chrome://extensions` from the previous step) to install this extension: -``` -browser-mpris2/native/install-chrome.py REPLACE-THIS-WITH-EXTENSION-ID /usr/local/bin/chrome-mpris2 - -``` - -You only need to run this command once, there's no need to add it to startup or anything like that. Any YouTube video you play in Google Chrome or Chromium browsers should show up in whatever MPRISv2 applet you're using. There's no need to restart the web browser. 
- --------------------------------------------------------------------------------- - -via: https://www.linuxuprising.com/2018/08/add-youtube-player-controls-to-your.html - -作者:[Logix][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/118280394805678839070 -[1]:https://github.com/otommod/browser-mpris2 -[2]:https://github.com/otommod/browser-mpris2/issues/11 -[3]:https://extensions.gnome.org/extension/55/media-player-indicator/ diff --git a/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md new file mode 100644 index 0000000000..a72b4cdd8d --- /dev/null +++ b/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md @@ -0,0 +1,76 @@ +使用 browser-mpris2(Chrome 扩展)将 YouTube 播放器控件添加到 Linux 桌面 +====== +一个我怀念的 Unity 功能(虽然只使用了一小段时间)是在 Web 浏览器中访问 YouTube 等网站时自动获取 Ubuntu 声音指示器中的播放器控件,因此你可以直接从顶部栏暂停或停止视频,以及浏览视频/歌曲信息和预览。 + +这个 Unity 功能已经消失很久了,但我正在为 Gnome Shell 寻找类似的东西,然后我遇到了 **[browser-mpris2][1],这是一个为 Google Chrome/Chromium 实现 MPRIS v2 接口的扩展,目前只支持 YouTube**,我想可能会有一些 Linux Uprising 的读者会喜欢这个。 + +**该扩展还适用于 Opera 和 Vivaldi 等基于 Chromium 的 Web 浏览器。** +** +** **browser-mpris2 也支持 Firefox,但因为通过 about:debugging 加载扩展是临时的,而这是 browser-mpris2 所需要的,因此本文不包括 Firefox 的指导。开发人员[打算][2]将来将扩展提交到 Firefox 插件网站上。** + +**使用此 Chrome 扩展,你可以在支持 MPRIS2 的 applets 中获得 YouTube 媒体播放器控件(播放、暂停、停止和查找 +)**。例如,如果你使用 Gnome Shell,你可将 YouTube 媒体播放器控件作为永久通知,或者你可以使用 Media Player Indicator 之类的扩展来实现此目的。在 Cinnamon /Linux Mint with Cinnamon 中,它出现在声音 Applet 中。 + +**我无法在 Unity 上用它**,我不知道为什么。我没有在不同桌面环境(KDE、Xfce、MATE 等)中使用其他支持 MPRIS2 的 applet 尝试此扩展。如果你尝试过,请告诉我们它是否适用于你的桌面环境/支持 MPRIS2 的 applet。 
+ +以下是在使用 Gnome Shell 的 Ubuntu 18.04 并装有 Chromium 浏览器的[媒体播放器指示器][3]的截图,其中显示了有关当前正在播放的 YouTube 视频的信息及其控件(播放/暂停,停止和查找): + +![](https://1.bp.blogspot.com/-rsc4FpYBSrI/W3VtPphfdOI/AAAAAAAABXY/YfKV6pBncs0LAwTwYSS0tKRJADDfZDBfwCLcBGAs/s640/browser-mpris2-gnome-shell-sound-indicator.png) + +在 Linux Mint 19 Cinnamon 中使用其默认声音 applet 和 Chromium 浏览器的截图: + + +![](https://2.bp.blogspot.com/-I2DuYetv7eQ/W3VtUUcg26I/AAAAAAAABXc/Tv-RemkyO60k6CC_mYUxewG-KfVgpFefACLcBGAs/s1600/browser-mpris2-cinnamon-linux-mint.png) + +### 如何为 Google Chrom/Chromium安装 browser-mpris2 + +**1\. 如果你还没有安装 Git 就安装它** + +在 Debian/Ubuntu/Linux Mint 中,使用此命令安装 git: +``` +sudo apt install git + +``` + +**2\. 下载并安装 [browser-mpris2][1] 所需文件。** + +下面的命令克隆了 browser-mpris2 的 Git 仓库并将 chrome-mpris2 安装到 `/usr/local/bin/`(在一个你可以保存 browser-mpris2 文件夹的地方运行 “git clone ...” 命令,由于它会被 Chrome/Chromium 使用,你不能删除它): +``` +git clone https://github.com/otommod/browser-mpris2 +sudo install browser-mpris2/native/chrome-mpris2 /usr/local/bin/ + +``` + +**3\. 在基于 Chrome/Chromium 的 Web 浏览器中加载此扩展。** + +![](https://3.bp.blogspot.com/-yEoNFj2wAXM/W3Vvewa979I/AAAAAAAABXo/dmltlNZk3J4sVa5jQenFFrT28ecklY92QCLcBGAs/s640/browser-mpris2-chrome-developer-load-unpacked.png) + +打开 Goog​​le Chrome、Chromium、Opera 或 Vivaldi 浏览器,进入 Extensions 页面(在 URL 栏中输入 `chrome://extensions`),在屏幕右上角切换到`开发者模式`。然后选择 `Load Unpacked` 并选择 chrome-mpris2 目录(确保没有选择子文件夹)。 + +复制扩展 ID 并保存它,因为你以后需要它(它类似于这样:`emngjajgcmeiligomkgpngljimglhhii`,但它会与你的不一样,因此确保使用你计算机中的 ID!)。 + +**4\. 
运行 **`install-chrome.py`**(在 `browser-mpris2/native` 文件夹中),指定扩展 id 和 chrome-mpris2 路径。 + +在终端中使用此命令(将 `REPLACE-THIS-WITH-EXTENSION-ID` 替换为上一步中 `chrome://extensions` 下显示的 browser-mpris2 扩展 ID)安装此扩展: +``` +browser-mpris2/native/install-chrome.py REPLACE-THIS-WITH-EXTENSION-ID /usr/local/bin/chrome-mpris2 + +``` + +你只需要运行此命令一次,无需将其添加到启动或其他类似的地方。你在 Google Chrome 或 Chromium 浏览器中播放的任何 YouTube 视频都应显示在你正在使用的任何 MPRISv2 applet 中。你无需重启 Web 浏览器。 + +-------------------------------------------------------------------------------- + +via: https://www.linuxuprising.com/2018/08/add-youtube-player-controls-to-your.html + +作者:[Logix][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/118280394805678839070 +[1]:https://github.com/otommod/browser-mpris2 +[2]:https://github.com/otommod/browser-mpris2/issues/11 +[3]:https://extensions.gnome.org/extension/55/media-player-indicator/ From 709f1cf488fe0f0dc3d874bf186e56b4f9baa8c4 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 30 Sep 2018 09:19:47 +0800 Subject: [PATCH 145/736] translating (#10443) --- sources/tech/20180928 10 handy Bash aliases for Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180928 10 handy Bash aliases for Linux.md b/sources/tech/20180928 10 handy Bash aliases for Linux.md index b69b2f8aab..7ae1070997 100644 --- a/sources/tech/20180928 10 handy Bash aliases for Linux.md +++ b/sources/tech/20180928 10 handy Bash aliases for Linux.md @@ -1,3 +1,5 @@ +translating---geekpi + 10 handy Bash aliases for Linux ====== Get more efficient by using condensed versions of long Bash commands. 
From 1f620f8ed3d7b82d1a88f6fd6107bd2338ff2932 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 30 Sep 2018 09:34:51 +0800 Subject: [PATCH 146/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Quiet=20log=20noi?= =?UTF-8?q?se=20with=20Python=20and=20machine=20learning?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... noise with Python and machine learning.md | 110 ++++++++++++++++++ 1 file changed, 110 insertions(+) create mode 100644 sources/tech/20180928 Quiet log noise with Python and machine learning.md diff --git a/sources/tech/20180928 Quiet log noise with Python and machine learning.md b/sources/tech/20180928 Quiet log noise with Python and machine learning.md new file mode 100644 index 0000000000..f1fe2f1b7f --- /dev/null +++ b/sources/tech/20180928 Quiet log noise with Python and machine learning.md @@ -0,0 +1,110 @@ +Quiet log noise with Python and machine learning +====== + +Logreduce saves debugging time by picking out anomalies from mountains of log data. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/sound-radio-noise-communication.png?itok=KMNn9QrZ) + +Continuous integration (CI) jobs can generate massive volumes of data. When a job fails, figuring out what went wrong can be a tedious process that involves investigating logs to discover the root cause—which is often found in a fraction of the total job output. To make it easier to separate the most relevant data from the rest, the [Logreduce][1] machine learning model is trained using previous successful job runs to extract anomalies from failed runs' logs. + +This principle can also be applied to other use cases, for example, extracting anomalies from [Journald][2] or other systemwide regular log files. + +### Using machine learning to reduce noise + +A typical log file contains many nominal events ("baselines") along with a few exceptions that are relevant to the developer. 
Baselines may contain random elements such as timestamps or unique identifiers that are difficult to detect and remove. To remove the baseline events, we can use a [k-nearest neighbors pattern recognition algorithm][3] (k-NN). + +![](https://opensource.com/sites/default/files/uploads/ml-generic-workflow.png) + +Log events must be converted to numeric values for k-NN regression. Using the generic feature extraction tool [HashingVectorizer][4] enables the process to be applied to any type of log. It hashes each word and encodes each event in a sparse matrix. To further reduce the search space, tokenization removes known random words, such as dates or IP addresses. + +![](https://opensource.com/sites/default/files/uploads/hashing-vectorizer.png) + +Once the model is trained, the k-NN search tells us the distance of each new event from the baseline. + +![](https://opensource.com/sites/default/files/uploads/kneighbors.png) + +This [Jupyter notebook][5] demonstrates the process and graphs the sparse matrix vectors. + +![](https://opensource.com/sites/default/files/uploads/anomaly-detection-with-scikit-learn.png) + +### Introducing Logreduce + +The Logreduce Python software transparently implements this process. Logreduce's initial goal was to assist with [Zuul CI][6] job failure analyses using the build database, and it is now integrated into the [Software Factory][7] development forge's job logs process. + +At its simplest, Logreduce compares files or directories and removes lines that are similar. Logreduce builds a model for each source file and outputs any of the target's lines whose distances are above a defined threshold by using the following syntax: **distance | filename:line-number: line-content**. 
+ +``` +$ logreduce diff /var/log/audit/audit.log.1 /var/log/audit/audit.log +INFO  logreduce.Classifier - Training took 21.982s at 0.364MB/s (1.314kl/s) (8.000 MB - 28.884 kilo-lines) +0.244 | audit.log:19963:        type=USER_AUTH acct="root" exe="/usr/bin/su" hostname=managesf.sftests.com +INFO  logreduce.Classifier - Testing took 18.297s at 0.306MB/s (1.094kl/s) (5.607 MB - 20.015 kilo-lines) +99.99% reduction (from 20015 lines to 1 + +``` + +A more advanced Logreduce use can train a model offline to be reused. Many variants of the baselines can be used to fit the k-NN search tree. + +``` +$ logreduce dir-train audit.clf /var/log/audit/audit.log.* +INFO  logreduce.Classifier - Training took 80.883s at 0.396MB/s (1.397kl/s) (32.001 MB - 112.977 kilo-lines) +DEBUG logreduce.Classifier - audit.clf: written +$ logreduce dir-run audit.clf /var/log/audit/audit.log +``` + +Logreduce also implements interfaces to discover baselines for Journald time ranges (days/weeks/months) and Zuul CI job build histories. It can also generate HTML reports that group anomalies found in multiple files in a simple interface. + +![](https://opensource.com/sites/default/files/uploads/html-report.png) + +### Managing baselines + +The key to using k-NN regression for anomaly detection is to have a database of known good baselines, which the model uses to detect lines that deviate too far. This method relies on the baselines containing all nominal events, as anything that isn't found in the baseline will be reported as anomalous. + +CI jobs are great targets for k-NN regression because the job outputs are often deterministic and previous runs can be automatically used as baselines. Logreduce features Zuul job roles that can be used as part of a failed job post task in order to issue a concise report (instead of the full job's logs). This principle can be applied to other cases, as long as baselines can be constructed in advance. 
For example, a nominal system's [SoS report][8] can be used to find issues in a defective deployment. + +![](https://opensource.com/sites/default/files/uploads/baselines.png) + +### Anomaly classification service + +The next version of Logreduce introduces a server mode to offload log processing to an external service where reports can be further analyzed. It also supports importing existing reports and requests to analyze a Zuul build. The services run analyses asynchronously and feature a web interface to adjust scores and remove false positives. + +![](https://opensource.com/sites/default/files/uploads/classification-interface.png) + +Reviewed reports can be archived as a standalone dataset with the target log files and the scores for anomalous lines recorded in a flat JSON file. + +### Project roadmap + +Logreduce is already being used effectively, but there are many opportunities for improving the tool. Plans for the future include: + + * Curating many annotated anomalies found in log files and producing a public domain dataset to enable further research. Anomaly detection in log files is a challenging topic, and having a common dataset to test new models would help identify new solutions. + * Reusing the annotated anomalies with the model to refine the distances reported. For example, when users mark lines as false positives by setting their distance to zero, the model could reduce the score of those lines in future reports. + * Fingerprinting archived anomalies to detect when a new report contains an already known anomaly. Thus, instead of reporting the anomaly's content, the service could notify the user that the job hit a known issue. When the issue is fixed, the service could automatically restart the job. + * Supporting more baseline discovery interfaces for targets such as SOS reports, Jenkins builds, Travis CI, and more. + + + +If you are interested in getting involved in this project, please contact us on the **#log-classify** Freenode IRC channel. 
Feedback is always appreciated! + +Tristan Cacqueray will present [Reduce your log noise using machine learning][9] at the [OpenStack Summit][10], November 13-15 in Berlin. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/quiet-log-noise-python-and-machine-learning + +作者:[Tristan de Cacqueray][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/tristanc +[1]: https://pypi.org/project/logreduce/ +[2]: http://man7.org/linux/man-pages/man8/systemd-journald.service.8.html +[3]: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm +[4]: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html +[5]: https://github.com/TristanCacqueray/anomaly-detection-workshop-opendev/blob/master/datasets/notebook/anomaly-detection-with-scikit-learn.ipynb +[6]: https://zuul-ci.org +[7]: https://www.softwarefactory-project.io +[8]: https://sos.readthedocs.io/en/latest/ +[9]: https://www.openstack.org/summit/berlin-2018/summit-schedule/speakers/4307 +[10]: https://www.openstack.org/summit/berlin-2018/ From 36da26f8765ede333fb7833a517c0a5a24f40394 Mon Sep 17 00:00:00 2001 From: sd886393 Date: Sun, 30 Sep 2018 09:39:26 +0800 Subject: [PATCH 147/736] =?UTF-8?q?=E3=80=90=E5=AE=8C=E6=88=90=E7=BF=BB?= =?UTF-8?q?=E8=AF=91=E3=80=9120180531=20How=20to=20create=20shortcuts=20in?= =?UTF-8?q?=20vi.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180531 How to create shortcuts in vi.md | 131 ----------------- .../20180531 How to create shortcuts in vi.md | 134 ++++++++++++++++++ 2 files changed, 134 insertions(+), 131 deletions(-) delete mode 100644 sources/tech/20180531 How to create shortcuts in vi.md create mode 
100644 translated/tech/20180531 How to create shortcuts in vi.md diff --git a/sources/tech/20180531 How to create shortcuts in vi.md b/sources/tech/20180531 How to create shortcuts in vi.md deleted file mode 100644 index ba856e745a..0000000000 --- a/sources/tech/20180531 How to create shortcuts in vi.md +++ /dev/null @@ -1,131 +0,0 @@ -【sd886393认领翻译中】How to create shortcuts in vi -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documentation-type-keys-yearbook.png?itok=Q-ELM2rn) - -Learning the [vi text editor][1] takes some effort, but experienced vi users know that after a while, using basic commands becomes second nature. It's a form of what is known as muscle memory, which in this case might well be called finger memory. - -After you get a grasp of the main approach and basic commands, you can make editing with vi even more powerful and streamlined by using its customization options to create shortcuts. I hope that the techniques described below will facilitate your writing, programming, and data manipulation. - -Before proceeding, I'd like to thank Chris Hermansen (who recruited me to write this article) for checking my draft with [Vim][2], as I use another version of vi. I'm also grateful for Chris's helpful suggestions, which I incorporated here. - -First, let's review some conventions. I'll use to designate pressing the RETURN or ENTER key, and for the space bar. CTRL-x indicates simultaneously pressing the Control key and the x key (whatever x happens to be). - -Set up your own command abbreviations with the `map` command. My first example involves the `write` command, used to save the current state of the file you're working on: -``` -:w - -``` - -This is only three keystrokes, but since I do it so frequently, I'd rather use only one. The key I've chosen for this purpose is the comma, which is not part of the standard vi command set. 
The command to set this up is: -``` -:map , :wCTRL-v - -``` - -The CTRL-v is essential since without it the would signal the end of the map, and we want to include the as part of the mapped comma. In general, CTRL-v is used to enter the keystroke (or control character) that follows rather than being interpreted literally. - -In the above map, the part on the right will display on the screen as `:w^M`. The caret (`^`) indicates a control character, in this case CTRL-m, which is the system's form of . - -So far so good—sort of. If I write my current file about a dozen times while creating and/or editing it, this map could result in a savings of 2 x 12 keystrokes. But that doesn't account for the keystrokes needed to set up the map, which in the above example is 11 (counting CTRL-v and the shifted character `:` as one stroke each). Even with a net savings, it would be a bother to set up the map each time you start a vi session. - -Fortunately, there's a way to put maps and other abbreviations in a startup file that vi reads each time it is invoked: the `.exrc` file, or in Vim, the `.vimrc` file. Simply create this file in your home directory with a list of maps, one per line—without the colon—and the abbreviation is defined for all subsequent vi sessions until you delete or change it. - -Before going on to a variation of the `map` command and another type of abbreviation method, here are a few more examples of maps that I've found useful for streamlining my text editing: -``` -                                        Displays as - - - -:map X :xCTRL-v                    :x^M - - - -or - - - -:map X ,:qCTRL-v                   ,:q^M - -``` - -The above equivalent maps write and quit (exit) the file. The `:x` is the standard vi command for this, and the second version illustrates that a previously defined map may be used in a subsequent map. 
-``` -:map v :e                   :e - -``` - -The above starts the command to move to another file while remaining within vi; when using this, just follow the "v" with a filename, followed by . -``` -:map CTRL-vCTRL-e :e#CTRL-v    :e #^M - -``` - -The `#` here is the standard vi symbol for "the alternate file," which means the filename last used, so this shortcut is handy for switching back and forth between two files. Here's an example of how I use this: -``` -map CTRL-vCTRL-r :!spell %>err &CTRL-v     :!spell %>err&^M - -``` - -(Note: The first CTRL-v in both examples above is not needed in some versions of vi.) The `:!` is a way to run an external (non-vi) command. In this case (`spell`), `%` is the vi symbol denoting the current file, the `>` redirects the output of the spell-check to a file called `err`, and the `&` says to run this in the background so I can continue editing while `spell` completes its task. I can then type `verr` (using my previous shortcut, `v`, followed by `err`) to go the file of potential errors flagged by the `spell` command, then back to the file I'm working on with CTRL-e. After running the spell-check the first time, I can use CTRL-r repeatedly and return to the `err` file with just CTRL-e. - -A variation of the `map` command may be used to abbreviate text strings while inputting. For example, -``` -:map! CTRL-o \fI - -:map! CTRL-k \fP - -``` - -This will allow you to use CTRL-o as a shortcut for entering the `groff` command to italicize the word that follows, and this will allow you to use CTRL-k for the `groff` command reverts to the previous font. - -Here are two other examples of this technique: -``` -:map! rh rhinoceros - -:map! hi hippopotamus - -``` - -The above may instead be accomplished using the `ab` command, as follows (if you're trying these out in order, first use `unmap! rh` and `umap! 
hi`): -``` -:ab rh rhinoceros - -:ab hi hippopotamus - -``` - -In the `map!` method above, the abbreviation immediately expands to the defined word when typed (in Vim), whereas with the `ab` method, the expansion occurs when the abbreviation is followed by a space or punctuation mark (in both Vim and my version of vi, where the expansion also works like this for the `map!` method). - -To reverse any `map`, `map!`, or `ab` within a vi session, use `:unmap`, `:unmap!`, or `:unab`. - -In my version of vi, undefined letters that are good candidates for mapping include g, K, q, v, V, and Z; undefined control characters are CTRL-a, CTRL-c, CTRL-k, CTRL-n, CTRL-o, CTRL-p, and CTRL-x; some other undefined characters are `#` and `*`. You can also redefine characters that have meaning in vi but that you consider obscure and of little use; for example, the X that I chose for two examples in this article is a built-in vi command to delete the character to the immediate left of the current character (easily accomplished by the two-key command `hx`). - -Finally, the commands -``` -:map - -:map! - -:ab - -``` - -will show all the currently defined mappings and abbreviations. - -I hope that all of these tips will help you customize vi and make it easier and more efficient to use. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/shortcuts-vi-text-editor - -作者:[Dan Sonnenschein][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/dannyman -[1]:http://ex-vi.sourceforge.net/ -[2]:https://www.vim.org/ diff --git a/translated/tech/20180531 How to create shortcuts in vi.md b/translated/tech/20180531 How to create shortcuts in vi.md new file mode 100644 index 0000000000..8616013e96 --- /dev/null +++ b/translated/tech/20180531 How to create shortcuts in vi.md @@ -0,0 +1,134 @@ +如何在 vi 中创建快捷键 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documentation-type-keys-yearbook.png?itok=Q-ELM2rn) + +学习使用 [vi 文本编辑器][1] 确实得花点功夫,不过 vi 的老手们都知道,经过一小会的锻炼,就可以将基本的 vi 操作融汇贯通。我们都知道“肌肉记忆”,那么学习 vi 的过程可以称之为“手指记忆”。 + +当你抓住了基础的操作窍门之后,你就可以定制化地配置 vi 的快捷键,从而让其处理的功能更为强大、流畅。 + +在开始之前,我想先感谢下 Chris Hermansen(他雇佣我写了这篇文章)仔细地检查了我的另一篇关于使用 vi 增强版本[Vim][2]的文章。当然还有他那些我未采纳的建议。 + +首先,我们来说明下面几个惯例设定。我会使用符号来代表按下 RETURN 或者 ENTER 键, 代表按下空格键,CTRL-x 表示一起按下 Control 键和 x 键 + +使用 `map` 命令来进行按键的映射。第一个例子是 `write` 命令,通常你之前保存使用这样的命令: + +``` +:w + +``` + +虽然这里只有三个键,不过考虑到我用这个命令实在是太频繁了,我更想“一键”搞定它。在这里我选择逗号键,比如这样: +``` +:map , :wCTRL-v + +``` + +这里的 CTRL-v 事实上是对 做了转义的操作,如果不加这个的话,默认 会作为这条映射指令的结束信号,而非映射中的一个操作。 CTRL-v 后面所跟的操作会翻译为用户的实际操作,而非该按键平常的操作。 + +在上面的映射中,右边的部分会在屏幕中显示为 `:w^M`,其中 `^` 字符就是指代 `control`,完整的意思就是 CTRL-m,表示就是系统中一行的结尾 + + +目前来说,就很不错了。如果我编辑、创建了十二次文件,这个键位映射就可以省掉了 2*12 次按键。不过这里没有计算你建立这个键位映射所花费的 11次按键(计算CTRL-v 和 冒号均为一次按键)。虽然这样已经省了很多次,但是每次打开 vi 都要重新建立这个映射也会觉得非常麻烦。 + +幸运的是,这里可以将这些键位映射放到 vi 的启动配置文件中,让其在每次启动的时候自动读取:文件为 `.exrc`,对于 vim 是 `.vimrc`。只需要将这些文件放在你的用户根目录中即可,并在文件中每行写入一个键位映射,之后就会在每次启动 vi 生效直到你删除对应的配置。 + +在继续说明 `map` 其他用法以及其他的缩写机制之前,这里在列举几个我常用提高文本处理效率的 map 设置: +``` +      
                                  Displays as + + + +:map X :xCTRL-v                    :x^M + + + +or + + + +:map X ,:qCTRL-v                   ,:q^M + +``` + +上面的 map 指令的意思是写入并关闭当前的编辑文件。其中 `:x` 是 vi 原本的命令,而下面的版本说明之前的 map 配置可以继续用作第二个 map 键位映射。 +``` +:map v :e                   :e + +``` + +上面的指令意思是在 vi 编辑器内部 切换文件,使用这个时候,只需要按 `v` 并跟着输入文件名,之后按 `` 键。 +``` +:map CTRL-vCTRL-e :e#CTRL-v    :e #^M + +``` + +`#` 在这里是 vi 中标准的符号,意思是最后使用的文件名。所以切换当前与上一个文件的方法就使用上面的映射。 +``` +map CTRL-vCTRL-r :!spell %>err &CTRL-v     :!spell %>err&^M + +``` + +(注意:在两个例子中出现的第一个 CRTL-v 在某些 vi 的版本中是不需要的)其中,`:!` 用来运行一个外部的(非 vi 内部的)命令。在这个拼写检查的例子中,`%` 是 vi 中的符号用来只带目前的文件, `>` 用来重定向拼写检查中的输出到 `err` 文件中,之后跟上 `&` 说明该命令是一个后台运行的任务,这样可以保证在拼写检查的同时还可以进行编辑文件的工作。这里我可以键入 `verr`(使用我之前定义的快捷键 `v` 跟上 `err`),进入 `spell` 输出结果的文件,之后再输入 `CTRL-e` 来回到刚才编辑的文件中。这样我就可以在拼写检查之后,使用 CTRL-r 来查看检查的错误,再通过 CTRL-e 返回刚才编辑的文件。 + +还用很多字符串输入的缩写,也使用了各种 map 命令,比如: +``` +:map! CTRL-o \fI + +:map! CTRL-k \fP + +``` + +这个映射允许你使用 CTRL-o 作为 `groff` 命令的缩写,从而让让接下来书写的单词有斜体的效果,并使用 CTRL-k 进行恢复 + +还有两个类似的映射: +``` +:map! rh rhinoceros + +:map! hi hippopotamus + +``` + +上面的也可以使用 `ab` 命令来替换,就像下面这样(如果想这么用的话,需要首先按顺序运行 1. `unmap! rh` 2. `umap! hi`): +``` +:ab rh rhinoceros + +:ab hi hippopotamus + +``` + +在上面 `map!` 的命令中,缩写会马上的展开成原有的单词,而在 `ab` 命令中,单词展开的操作会在输入了空格和标点之后才展开(不过在Vim 和 本机使用的 vi中,展开的形式与 `map!` 类似) + +想要取消刚才设定的按键映射,可以对应的输入 `:unmap`, `unmap!`, `:unab` + +在我使用的 vi 版本中,比较好用的候选映射按键包括 `g, K, q, v, V, Z`,控制字符包括:`CTRL-a, CTRL-c, CTRL-k, CTRL-n, CTRL-p, CTRL-x`;还有一些其他的字符如`#, *`,当然你也可以使用那些已经在 vi 中有过定义但不经常使用的字符,比如本文选择`X`和`I`,其中`X`表示删除左边的字符,并立刻左移当前字符。 + +最后,下面的命令 +``` +:map + +:map! + +:ab + +``` + +将会显示,目前所有的缩写和键位映射。 +will show all the currently defined mappings and abbreviations. 
+
+希望上面的技巧能够更好、更高效地帮助你使用 vi。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/5/shortcuts-vi-text-editor
+
+作者:[Dan Sonnenschein][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/sd886393)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/dannyman
+[1]:http://ex-vi.sourceforge.net/
+[2]:https://www.vim.org/
From c15c7589ea070392cc72c0cc1e83c58df59a418a Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 30 Sep 2018 09:49:33 +0800
Subject: [PATCH 148/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Use=20Cozy=20to?=
 =?UTF-8?q?=20Play=20Audiobooks=20in=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...29 Use Cozy to Play Audiobooks in Linux.md | 138 ++++++++++++++++++
 1 file changed, 138 insertions(+)
 create mode 100644 sources/tech/20180929 Use Cozy to Play Audiobooks in Linux.md

diff --git a/sources/tech/20180929 Use Cozy to Play Audiobooks in Linux.md b/sources/tech/20180929 Use Cozy to Play Audiobooks in Linux.md
new file mode 100644
index 0000000000..8e6583f046
--- /dev/null
+++ b/sources/tech/20180929 Use Cozy to Play Audiobooks in Linux.md
@@ -0,0 +1,138 @@
+Use Cozy to Play Audiobooks in Linux
+======
+**We review Cozy, an audiobook player for Linux. Read on to find out whether it’s worth installing Cozy on your Linux system or not.**
+
+![Audiobook player for Linux][1]
+
+Audiobooks are a great way to consume literature. Many people who don’t have time to read choose to listen. Most people, myself included, just use a regular media player like VLC or [MPV][2] for listening to audiobooks on Linux.
+
+Today, we will look at a Linux application built solely for listening to audiobooks.
+
+![][3]Cozy Audiobook Player
+
+### Cozy Audiobook Player for Linux
+
+The [Cozy Audiobook Player][4] is created by [Julian Geywitz][5] from Germany. It is built using both Python and GTK+ 3. According to the site, Julian wrote Cozy on Fedora and optimized it for [elementary OS][6].
+
+The player borrows its layout from iTunes. The player controls are placed along the top of the application. The library takes up the rest. You can sort all of your audiobooks based on the title, author and reader, and search very quickly.
+
+![][7]Initial setup
+
+When you first launch [Cozy][8], you are given the option to choose where you will store your audiobook files. Cozy will keep an eye on that folder and update your library as you add new audiobooks. You can also set it up to use an external or network drive.
+
+#### Features of Cozy
+
+Here is a full list of the features that [Cozy][9] has to offer.
+
+ * Import all your audiobooks into Cozy to browse them comfortably
+ * Sort your audiobooks by author, reader & title
+ * Remembers your playback position
+ * Sleep timer
+ * Playback speed control
+ * Search your audiobook library
+ * Add multiple storage locations
+ * Drag & Drop to import new audiobooks
+ * Support for DRM free mp3, m4a (aac, ALAC, …), flac, ogg, wav files
+ * Mpris integration (Media keys & playback info for the desktop environment)
+ * Developed on Fedora and tested under elementaryOS
+
+
+
+#### Experiencing Cozy
+
+![][10]Audiobook library
+
+At first, I was excited to try out Cozy because I like to listen to audiobooks. However, I ran into a couple of issues. There is no way to edit the information of an audiobook. For example, I downloaded a few audiobooks from [LibriVox][11] to test it. All three audiobooks were listed under “Unknown” for the reader. There was nothing to edit or change the audiobook info. I guess you could edit all of the files, but that would take quite a bit of time.
When I listen to an audiobook, I like to know what track is currently playing. Cozy only has a single progress bar for the whole audiobook. I know that Cozy is designed to remember where you left off in an audiobook, but if I were going to continue listening to the audiobook on my phone, I would like to know what track I am on.
+
+![][12]Settings
+
+There was also an option in the settings menu to turn on a dark theme. As you can see in the screenshots, the application has a black theme to begin with. I turned the option on, but nothing happened. There isn’t even an option to add a theme or change any of the colors. Overall, the application feels unfinished.
+
+#### Installing Cozy on Linux
+
+If you would like to install Cozy, you have several options for different distros.
+
+##### Ubuntu, Debian, openSUSE, Fedora
+
+Julian used the [openSUSE Build Service][13] to create custom repos for Ubuntu, Debian, openSUSE and Fedora. Each one only takes a couple of terminal commands to install.
+
+##### Install Cozy using Flatpak on any Linux distribution (including Ubuntu)
+
+If your [distro supports Flatpak][14], you can install Cozy using the following commands:
+
+```
+flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+flatpak install --user flathub com.github.geigi.cozy
+```
+
+##### Install Cozy on elementary OS
+
+If you have elementary OS installed, you can install Cozy from the [built-in App Store][15].
+
+##### Install Cozy on Arch Linux
+
+Cozy is available in the [Arch User Repository][16]. All you have to do is search for `cozy-audiobooks`.
+
+### Where to find free Audiobooks?
+
+In order to try out this application, you will need to find some audiobooks to listen to. My favorite site for audiobooks is [LibriVox][11]. Since [LibriVox][17] depends on volunteers to record audiobooks, the quality can vary. However, there are a number of very talented readers.
Here is a list of free audiobook sources:
+
++ [Open Culture][20]
++ [Project Gutenberg][21]
++ [Digitalbook.io][22]
++ [FreeClassicAudioBooks.com][23]
++ [MindWebs][24]
++ [Scribl][25]
+
+
+### Final Thoughts on Cozy
+
+For now, I think I’ll stick with my preferred audiobook software (VLC). Cozy just doesn’t add anything. I won’t call it a [must-have application for Linux][18] just yet. There is no compelling reason for me to switch. Maybe I’ll revisit it in the future, perhaps when it hits 1.0.
+
+Take Cozy for a spin. You might come to a different conclusion.
+
+Have you ever used Cozy? If not, what is your favorite audiobook player? What is your favorite source for free audiobooks? Let us know in the comments below.
+
+If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][19].
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/cozy-audiobook-player/
+
+作者:[John Paul][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/audiobook-player-linux.png
+[2]: https://itsfoss.com/mpv-video-player/
+[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/cozy3.jpg
+[4]: https://cozy.geigi.de/
+[5]: https://github.com/geigi
+[6]: https://elementary.io/
+[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/cozy1.jpg
+[8]: https://github.com/geigi/cozy
+[9]: https://www.patreon.com/geigi
+[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/cozy2.jpg
+[11]: https://librivox.org/
+[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/cozy4.jpg
+[13]:
https://software.opensuse.org//download.html?project=home%3Ageigi&package=com.github.geigi.cozy +[14]: https://itsfoss.com/flatpak-guide/ +[15]: https://elementary.io/store/ +[16]: https://aur.archlinux.org/ +[17]: https://archive.org/details/librivoxaudio +[18]: https://itsfoss.com/essential-linux-applications/ +[19]: http://reddit.com/r/linuxusersgroup +[20]: http://www.openculture.com/freeaudiobooks +[21]: http://www.gutenberg.org/browse/categories/1 +[22]: https://www.digitalbook.io/ +[23]: http://freeclassicaudiobooks.com/ +[24]: https://archive.org/details/MindWebs_201410 +[25]: https://scribl.com/ From 8c14a12d4eb0196066cd94a38052fbe1f7a178f2 Mon Sep 17 00:00:00 2001 From: sd886393 Date: Sun, 30 Sep 2018 10:17:12 +0800 Subject: [PATCH 149/736] =?UTF-8?q?=E8=AE=A4=E9=A2=86=20by=20sd886393?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20180928 What containers can teach us about DevOps.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180928 What containers can teach us about DevOps.md b/sources/tech/20180928 What containers can teach us about DevOps.md index 610a68b2d1..33f83fb0f7 100644 --- a/sources/tech/20180928 What containers can teach us about DevOps.md +++ b/sources/tech/20180928 What containers can teach us about DevOps.md @@ -1,3 +1,4 @@ +认领:by sd886393 What containers can teach us about DevOps ====== From 7bb6f3525e1d7a28d5676b2f41d242b1e27f66b6 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Sun, 30 Sep 2018 10:26:04 +0800 Subject: [PATCH 150/736] Delete 20180816 An introduction to the Django Python web app framework.md --- ... 
to the Django Python web app framework.md | 1250 ----------------- 1 file changed, 1250 deletions(-) delete mode 100644 sources/tech/20180816 An introduction to the Django Python web app framework.md diff --git a/sources/tech/20180816 An introduction to the Django Python web app framework.md b/sources/tech/20180816 An introduction to the Django Python web app framework.md deleted file mode 100644 index ab7dba9526..0000000000 --- a/sources/tech/20180816 An introduction to the Django Python web app framework.md +++ /dev/null @@ -1,1250 +0,0 @@ -Translating by MjSeven - - -An introduction to the Django Python web app framework -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-spider-frame-framework.png?itok=Rl2AG2Dc) - -In the first three articles of this four-part series comparing different Python web frameworks, we covered the [Pyramid][1], [Flask][2], and [Tornado][3] web frameworks. We've built the same app three times and have finally made our way to [Django][4]. Django is, by and large, the major web framework for Python developers these days and it's not too hard to see why. It excels in hiding a lot of the configuration logic and letting you focus on being able to build big, quickly. - -That said, when it comes to small projects, like our To-Do List app, Django can be a bit like bringing a firehose to a water gun fight. Let's see how it all comes together. - -### About Django - -Django styles itself as "a high-level Python web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of web development, so you can focus on writing your app without needing to reinvent the wheel." And they really mean it! This massive web framework comes with so many batteries included that oftentimes during development it can be a mystery as to how everything manages to work together. 
- -In addition to the framework itself being large, the Django community is absolutely massive. In fact, it's so big and active that there's [a whole website][5] devoted to the third-party packages people have designed to plug into Django to do a whole host of things. This includes everything from authentication and authorization, to full-on Django-powered content management systems, to e-commerce add-ons, to integrations with Stripe. Talk about not re-inventing the wheel; chances are if you want something done with Django, someone has already done it and you can just pull it into your project. - -For this purpose, we want to build a REST API with Django, so we'll leverage the always popular [Django REST framework][6]. Its job is to turn the Django framework, which was made to serve fully rendered HTML pages built with Django's own templating engine, into a system specifically geared toward effectively handling REST interactions. Let's get going with that. - -### Django startup and configuration -``` -$ mkdir django_todo - -$ cd django_todo - -$ pipenv install --python 3.6 - -$ pipenv shell - -(django-someHash) $ pipenv install django djangorestframework - -``` - -For reference, we're working with `django-2.0.7` and `djangorestframework-3.8.2`. - -Unlike Flask, Tornado, and Pyramid, we don't need to write our own `setup.py` file. We're not making an installable Python distribution. As with many things, Django takes care of that for us in its own Django way. We'll still need a `requirements.txt` file to track all our necessary installs for deployment elsewhere. However, as far as targeting modules within our Django project goes, Django will let us list the subdirectories we want access to, then allow us to import from those directories as if they're installed packages. - -First, we have to create a Django project. - -When we installed Django, we also installed the command-line script `django-admin`. 
Its job is to manage all the various Django-related commands that help put our project together and maintain it as we continue to develop. Instead of having us build up the entire Django ecosystem from scratch, the `django-admin` will allow us to get started with all the absolutely necessary files (and more) we need for a standard Django project.
-
-The syntax for invoking `django-admin`'s start-project command is `django-admin startproject <project name>`. We want the files to exist in our current working directory, so:
-```
-(django-someHash) $ django-admin startproject django_todo .
-
-```
-
-Typing `ls` will show one new file and one new directory.
-```
-(django-someHash) $ ls
-
-manage.py   django_todo
-
-```
-
-`manage.py` is a command-line-executable Python file that ends up just being a wrapper around `django-admin`. As such, its job is the same: to help us manage our project. Hence the name `manage.py`.
-
-The directory it created, the `django_todo` inside of `django_todo`, represents the configuration root for our project. Let's dig into that now.
-
-### Configuring Django
-
-By calling the `django_todo` directory the "configuration root," we mean this directory holds the files necessary for generally configuring our Django project. Pretty much everything outside this directory will be focused solely on the "business logic" associated with the project's models, views, routes, etc. All points that connect the project together will lead here.
-
-Calling `ls` within `django_todo` reveals four files:
-```
-(django-someHash) $ cd django_todo
-
-(django-someHash) $ ls
-
-__init__.py settings.py urls.py     wsgi.py
-
-```
-
- * `__init__.py` is empty, solely existing to turn this directory into an importable Python package.
- * `settings.py` is where most configuration items will be set, like whether the project's in DEBUG mode, what databases are in use, where Django should look for files, etc.
It is the "main configuration" part of the configuration root, and we'll dig into that momentarily. - * `urls.py` is, as the name implies, where the URLs are set. While we don't have to explicitly write every URL for the project in this file, we **do** need to make this file aware of any other places where URLs have been declared. If this file doesn't point to other URLs, those URLs don't exist. **Period.** - * `wsgi.py` is for serving the application in production. Just like how Pyramid, Tornado, and Flask exposed some "app" object that was the configured application to be served, Django must also expose one. That's done here. It can then be served with something like [Gunicorn][7], [Waitress][8], or [uWSGI][9]. - - - -#### Setting the settings - -Taking a look inside `settings.py` will reveal its considerable size—and these are just the defaults! This doesn't even include hooks for the database, static files, media files, any cloud integration, or any of the other dozens of ways that a Django project can be configured. Let's see, top to bottom, what we've been given: - - * `BASE_DIR` sets the absolute path to the base directory, or the directory where `manage.py` is located. This is useful for locating files. - * `SECRET_KEY` is a key used for cryptographic signing within the Django project. In practice, it's used for things like sessions, cookies, CSRF protection, and auth tokens. As soon as possible, preferably before the first commit, the value for `SECRET_KEY` should be changed and moved into an environment variable. - * `DEBUG` tells Django whether to run the project in development mode or production mode. This is an extremely critical distinction. - * In development mode, when an error pops up, Django will show the full stack trace that led to the error, as well as all the settings and configurations involved in running the project. This can be a massive security issue if `DEBUG` was set to `True` in a production environment. 
- * In production, Django shows a plain error page when things go wrong. No information is given beyond an error code. - * A simple way to safeguard our project is to set `DEBUG` to an environment variable, like `bool(os.environ.get('DEBUG', ''))`. - * `ALLOWED_HOSTS` is the literal list of hostnames from which the application is being served. In development this can be empty, but in production our Django project will not run if the host that serves the project is not among the list of ALLOWED_HOSTS. Another thing for the box of environment variables. - * `INSTALLED_APPS` is the list of Django "apps" (think of them as subdirectories; more on this later) that our Django project has access to. We're given a few by default to provide… - * The built-in Django administrative website - * Django's built-in authentication system - * Django's one-size-fits-all manager for data models - * Session management - * Cookie and session-based messaging - * Usage of static files inherent to the site, like `css` files, `js` files, any images that are a part of our site's design, etc. - * `MIDDLEWARE` is as it sounds: the middleware that helps our Django project run. Much of it is for handling various types of security, although we can add others as we need them. - * `ROOT_URLCONF` sets the import path of our base-level URL configuration file. That `urls.py` that we saw before? By default, Django points to that file to gather all our URLs. If we want Django to look elsewhere, we'll set the import path to that location here. - * `TEMPLATES` is the list of template engines that Django would use for our site's frontend if we were relying on Django to build our HTML. Since we're not, it's irrelevant. - * `WSGI_APPLICATION` sets the import path of our WSGI application—the thing that gets served when in production. By default, it points to an `application` object in `wsgi.py`. This rarely, if ever, needs to be modified. - * `DATABASES` sets which databases our Django project will access. 
The `default` database must be set. We can set others by name, as long as we provide the `HOST`, `USER`, `PASSWORD`, `PORT`, database `NAME`, and appropriate `ENGINE`. As one might imagine, these are all sensitive pieces of information, so it's best to hide them away in environment variables. [Check the Django docs][10] for more details. - * Note: If instead of providing individual pieces of a database's location, you'd rather provide the full database URL, check out [dj_database_url][11]. - * `AUTH_PASSWORD_VALIDATORS` is effectively a list of functions that run to check input passwords. We get a few by default, but if we had other, more complex validation needs—more than merely checking if the password matches a user's attribute, if it exceeds the minimum length, if it's one of the 1,000 most common passwords, or if the password is entirely numeric—we could list them here. - * `LANGUAGE_CODE` will set the language for the site. By default it's US English, but we could switch it up to be other languages. - * `TIME_ZONE` is the time zone for any autogenerated timestamps in our Django project. I cannot stress enough how important it is that we stick to UTC and perform any time zone-specific processing elsewhere instead of trying to reconfigure this setting. As [this article][12] states, UTC is the common denominator among all time zones because there are no offsets to worry about. If offsets are that important, we could calculate them as needed with an appropriate offset from UTC. - * `USE_I18N` will let Django use its own translation services to translate strings for the front end. I18N = internationalization (18 characters between "i" and "n") - * `USE_L10N` (L10N = localization [10 characters between "l" and "n"]) will use the common local formatting of data if set to `True`. A great example is dates: in the US it's MM-DD-YYYY. In Europe, dates tend to be written DD-MM-YYYY - * `STATIC_URL` is part of a larger body of settings for serving static files. 
We'll be building a REST API, so we won't need to worry about static files. In general, this sets the root path after the domain name for every static file. So, if we had a logo image to serve, it'd be `http://<domainname>/<STATIC_URL>/logo.gif`.

These settings are pretty much ready to go by default. One thing we'll have to change is the `DATABASES` setting. First, we create the database that we'll be using with:

```
(django-someHash) $ createdb django_todo
```

We want to use a PostgreSQL database like we did with Flask, Pyramid, and Tornado. That means we'll have to change the `DATABASES` setting to allow our server to access a PostgreSQL database. First: the engine. By default, the database engine is `django.db.backends.sqlite3`. We'll be changing that to `django.db.backends.postgresql`.

For more information about Django's available engines, [check the docs][13]. Note that while it is technically possible to incorporate a NoSQL solution into a Django project, out of the box, Django is strongly biased toward SQL solutions.

Next, we have to specify the key-value pairs for the different parts of the connection parameters.

 * `NAME` is the name of the database we just created.
 * `USER` is an individual's Postgres database username.
 * `PASSWORD` is the password needed to access the database.
 * `HOST` is the host for the database. `localhost` or `127.0.0.1` will work, as we're developing locally.
 * `PORT` is whatever port we have open for Postgres; it's typically `5432`.

`settings.py` expects us to provide string values for each of these keys. However, this is highly sensitive information. That's not going to work for any responsible developer. There are several ways to address this problem, but we'll just set up environment variables.
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', ''),
        'USER': os.environ.get('DB_USER', ''),
        'PASSWORD': os.environ.get('DB_PASS', ''),
        'HOST': os.environ.get('DB_HOST', ''),
        'PORT': os.environ.get('DB_PORT', ''),
    }
}
```

Before going forward, make sure to set the environment variables or Django will not work. Also, we need to install `psycopg2` into this environment so we can talk to our database.

### Django routes and views

Let's make something function inside this project. We'll be using Django REST Framework to construct our REST API, so we have to make sure we can use it by adding `rest_framework` to the end of `INSTALLED_APPS` in `settings.py`.

```
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework'
]
```

While Django REST Framework doesn't strictly require class-based views (the style Tornado uses) to handle incoming requests, class-based views are the preferred way to write views. Let's define one.

Let's create a file called `views.py` in `django_todo`. Within `views.py`, we'll create our "Hello, world!" view.

```
# in django_todo/views.py
from rest_framework.response import Response
from rest_framework.views import APIView


class HelloWorld(APIView):
    def get(self, request, format=None):
        """Print 'Hello, world!' as the response body."""
        return Response("Hello, world!")
```

Every Django REST Framework class-based view inherits either directly or indirectly from `APIView`. `APIView` handles a ton of stuff, but for our purposes it does these specific things:

 * Sets up the methods needed to direct traffic based on the HTTP method (e.g. GET, POST, PUT, DELETE)
 * Populates the `request` object with all the data and attributes we'll need for parsing and processing any incoming request
 * Takes the `Response` that every dispatch method (i.e., methods named `get`, `post`, `put`, `delete`) returns and constructs a properly formatted HTTP response

Yay, we have a view! On its own it does nothing. We need to connect it to a route.

If we hop into `django_todo/urls.py`, we reach our default URL configuration file. As mentioned earlier: if a route in our Django project is not included here, it doesn't exist.

We add desired URLs by adding them to the given `urlpatterns` list. By default, we get a whole set of URLs for Django's built-in site administration backend. We'll delete that completely.

We also get some very helpful doc strings that tell us exactly how to add routes to our Django project. We'll need to provide a call to `path()` with three parameters:

 * The desired route, as a string (without the leading slash)
 * The view function (only ever a function!) that will handle that route
 * The name of the route in our Django project

Let's import our `HelloWorld` view and attach it to the home route `"/"`. We can also remove the path to the `admin` from `urlpatterns`, as we won't be using it.

```
# django_todo/urls.py, after the big doc string
from django.urls import path
from django_todo.views import HelloWorld

urlpatterns = [
    path('', HelloWorld.as_view(), name="hello"),
]
```

Well, this is different. The route we specified is just a blank string. Why does that work? Django assumes that every path we declare begins with a leading slash. We're just specifying routes to resources after the initial domain name. If a route isn't going to a specific resource and is instead just the home page, the route is just `""`, or effectively "no resource."

The `HelloWorld` view is imported from that `views.py` file we just created.
In order to do this import, we need to update `settings.py` to include `django_todo` in the list of `INSTALLED_APPS`. Yeah, it's a bit weird. Here's one way to think about it. - -`INSTALLED_APPS` refers to the list of directories or packages that Django sees as importable. It's Django's way of treating individual components of a project like installed packages without going through a `setup.py`. We want the `django_todo` directory to be treated like an importable package, so we include that directory in `INSTALLED_APPS`. Now, any module within that directory is also importable. So we get our view. - -The `path` function will ONLY take a view function as that second argument, not just a class-based view on its own. Luckily, all valid Django class-based views include this `.as_view()` method. Its job is to roll up all the goodness of the class-based view into a view function and return that view function. So, we never have to worry about making that translation. Instead, we only have to think about the business logic, letting Django and Django REST Framework handle the rest. - -Let's crack this open in the browser! - -Django comes packaged with its own local development server, accessible through `manage.py`. Let's navigate to the directory containing `manage.py` and type: -``` -(django-someHash) $ ./manage.py runserver - -Performing system checks... - - - -System check identified no issues (0 silenced). - -August 01, 2018 - 16:47:24 - -Django version 2.0.7, using settings 'django_todo.settings' - -Starting development server at http://127.0.0.1:8000/ - -Quit the server with CONTROL-C. - -``` - -When `runserver` is executed, Django does a check to make sure the project is (more or less) wired together correctly. It's not fool-proof, but it does catch some glaring issues. It also notifies us if our database is out of sync with our code. Undoubtedly ours is because we haven't committed any of our application's stuff to our database, but that's fine for now. 
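Before we look at the result, it's worth sketching what `as_view()` did for us back in `urls.py`. Here is a toy illustration of the general idea only; this is NOT Django's actual implementation, and `ToyView` and `FakeRequest` are invented names for demonstration:

```python
# Toy sketch of how a class-based view can be rolled up into a plain view
# function. NOT Django's real implementation; names here are illustrative only.

class ToyView:
    """A minimal class-based view with one handler per HTTP method."""

    def get(self, request):
        return "GET handled"

    def post(self, request):
        return "POST handled"

    @classmethod
    def as_view(cls):
        """Return a single view function, which is what path() expects."""
        def view(request):
            instance = cls()  # fresh instance per request
            handler = getattr(instance, request.method.lower(), None)
            if handler is None:
                return "405 Method Not Allowed"
            return handler(request)
        return view


class FakeRequest:
    """Stand-in for a real request object; only carries the HTTP method."""
    def __init__(self, method):
        self.method = method


view_func = ToyView.as_view()            # a plain function
print(view_func(FakeRequest("GET")))     # -> GET handled
print(view_func(FakeRequest("DELETE")))  # -> 405 Method Not Allowed
```

The real `as_view()` does much more (decorating, `self.request` setup, content negotiation), but the shape is the same: a class goes in, a dispatching function comes out.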
Let's visit `http://127.0.0.1:8000` to see the output of the `HelloWorld` view.

Huh. That's not the plaintext data we saw in Pyramid, Flask, and Tornado. When Django REST Framework is used, the HTTP response (when viewed in the browser) is this sort of rendered HTML, showing our actual JSON response in red.

But don't fret! If we do a quick `curl` looking at `http://127.0.0.1:8000` in the command line, we don't get any of that fancy HTML. Just the content.

```
# Note: try this in a different terminal window, outside of the virtual environment above
$ curl http://127.0.0.1:8000
"Hello, world!"
```

Bueno!

Django REST Framework wants us to have a human-friendly interface when using the browser. This makes sense; if JSON is viewed in the browser, it's typically because a human wants to check that it looks right or get a sense of what the JSON response will look like as they design some consumer of an API. It's a lot like what you'd get from a service like [Postman][14].

Either way, we know our view is working! Woo! Let's recap what we've done:

 1. Started the project with `django-admin startproject django_todo`
 2. Updated `django_todo/settings.py` to use environment variables for `DEBUG`, `SECRET_KEY`, and values in the `DATABASES` dict
 3. Installed Django REST Framework and added it to the list of `INSTALLED_APPS`
 4. Created `django_todo/views.py` to include our first view class to say Hello to the World
 5. Updated `django_todo/urls.py` with a path to our new home route
 6. Updated `INSTALLED_APPS` in `django_todo/settings.py` to include the `django_todo` package

### Creating models

Let's create our data models now.

A Django project's entire infrastructure is built around data models. It's written so each data model can have its own little universe with its own views, its own set of URLs that concern its resources, and even its own tests (if we are so inclined).
- -If we wanted to build a simple Django project, we could circumvent this by just writing our own `models.py` file in the `django_todo` directory and importing it into our views. However, we're trying to write a Django project the "right" way, so we should divide up our models as best we can into their own little packages The Django Way™. - -The Django Way involves creating what are called Django "apps." Django "apps" aren't separate applications per se; they don't have their own settings and whatnot (although they can). They can, however, have just about everything else one might think of being in a standalone application: - - * Set of self-contained URLs - * Set of self-contained HTML templates (if we want to serve HTML) - * One or more data models - * Set of self-contained views - * Set of self-contained tests - - - -They are made to be independent so they can be easily shared like standalone applications. In fact, Django REST Framework is an example of a Django app. It comes packaged with its own views and HTML templates for serving up our JSON. We just leverage that Django app to turn our project into a full-on RESTful API with less hassle. - -To create the Django app for our To-Do List items, we'll want to use the `startapp` command with `manage.py`. -``` -(django-someHash) $ ./manage.py startapp todo - -``` - -The `startapp` command will succeed silently. We can check that it did what it should've done by using `ls`. -``` -(django-someHash) $ ls - -Pipfile      Pipfile.lock django_todo  manage.py    todo - -``` - -Look at that: We've got a brand new `todo` directory. Let's look inside! -``` -(django-someHash) $ ls todo - -__init__.py admin.py    apps.py     migrations  models.py   tests.py    views.py - -``` - -Here are the files that `manage.py startapp` created: - - * `__init__.py` is empty; it exists so this directory can be seen as a valid import path for models, views, etc. 
- * `admin.py` is not quite empty; it's used for formatting this app's models in the Django admin, which we're not getting into in this article. - * `apps.py` … not much work to do here either; it helps with formatting models for the Django admin. - * `migrations` is a directory that'll contain snapshots of our data models; it's used for updating our database. This is one of the few frameworks that comes with database management built-in, and part of that is allowing us to update our database instead of having to tear it down and rebuild it to change the schema. - * `models.py` is where the data models live. - * `tests.py` is where tests would go—if we wrote any. - * `views.py` is for the views we write that pertain to the models in this app. They don't have to be written here. We could, for example, write all our views in `django_todo/views.py`. It's here, however, so it's easier to separate our concerns. This becomes far more relevant with sprawling applications that cover many conceptual spaces. - - - -What hasn't been created for us is a `urls.py` file for this app. We can make that ourselves. -``` -(django-someHash) $ touch todo/urls.py - -``` - -Before moving forward we should do ourselves a favor and add this new Django app to our list of `INSTALLED_APPS` in `django_todo/settings.py`. -``` -# in settings.py - -INSTALLED_APPS = [ - -    'django.contrib.admin', - -    'django.contrib.auth', - -    'django.contrib.contenttypes', - -    'django.contrib.sessions', - -    'django.contrib.messages', - -    'django.contrib.staticfiles', - -    'rest_framework', - -    'django_todo', - -    'todo' # <--- the line was added - -] - -``` - -Inspecting `todo/models.py` shows that `manage.py` already wrote a bit of code for us to get started. Diverging from how models were created in the Flask, Tornado, and Pyramid implementations, Django doesn't leverage a third party to manage database sessions or the construction of its object instances. 
It's all rolled into Django's `django.db.models` submodule. - -The way a model is built, however, is more or less the same. To create a model in Django, we'll need to build a `class` that inherits from `models.Model`. All the fields that will apply to instances of that model should appear as class attributes. Instead of importing columns and field types from SQLAlchemy like we have in the past, all of our fields will come directly from `django.db.models`. -``` -# todo/models.py - -from django.db import models - - - -class Task(models.Model): - -    """Tasks for the To Do list.""" - -    name = models.CharField(max_length=256) - -    note = models.TextField(blank=True, null=True) - -    creation_date = models.DateTimeField(auto_now_add=True) - -    due_date = models.DateTimeField(blank=True, null=True) - -    completed = models.BooleanField(default=False) - -``` - -While there are some definite differences between what Django needs and what SQLAlchemy-based systems need, the overall contents and structure are more or less the same. Let's point out the differences. - -We no longer need to declare a separate field for an auto-incremented ID number for our object instances. Django builds one for us unless we specify a different field as the primary key. - -Instead of instantiating `Column` objects that are passed datatype objects, we just directly reference the datatypes as the columns themselves. - -The `Unicode` field became either `models.CharField` or `models.TextField`. `CharField` is for small text fields of a specific maximum length, whereas `TextField` is for any amount of text. - -The `TextField` should be able to be blank, and we specify this in TWO ways. `blank=True` says that when an instance of this model is constructed, and the data attached to this field is being validated, it's OK for that data to be empty. 
This is different from `null=True`, which says when the table for this model class is constructed, the column corresponding to `note` will allow for blank or `NULL` entries. So, to sum that all up, `blank=True` controls how data gets added to model instances while `null=True` controls how the database table holding that data is constructed in the first place. - -The `DateTime` field grew some muscle and became able to do some work for us instead of us having to modify the `__init__` method for the class. For the `creation_date` field, we specify `auto_now_add=True`. What this means in a practical sense is that when a new model instance is created Django will automatically record the date and time of now as that field's value. That's handy! - -When neither `auto_now_add` nor its close cousin `auto_now` are set to `True`, `DateTimeField` will expect data like any other field. It'll need to be fed with a proper `datetime` object to be valid. The `due_date` column has `blank` and `null` both set to `True` so that an item on the To-Do List can just be an item to be done at some point in the future, with no defined date or time. - -`BooleanField` just ends up being a field that can take one of two values: `True` or `False`. Here, the default value is set to be `False`. - -#### Managing the database - -As mentioned earlier, Django has its own way of doing database management. Instead of having to write… really any code at all regarding our database, we leverage the `manage.py` script that Django provided on construction. It'll manage not just the construction of the tables for our database, but also any updates we wish to make to those tables without necessarily having to blow the whole thing away! - -Because we've constructed a new model, we need to make our database aware of it. First, we need to put into code the schema that corresponds to this model. The `makemigrations` command of `manage.py` will take a snapshot of the model class we built and all its fields. 
It'll take that information and package it into a Python script that'll live in this particular Django app's `migrations` directory. There will never be a reason to run this migration script directly. It'll exist solely so that Django can use it as a basis to update our database table or to inherit information when we update our model class. -``` -(django-someHash) $ ./manage.py makemigrations - -Migrations for 'todo': - -  todo/migrations/0001_initial.py - -    - Create model Task - -``` - -This will look at every app listed in `INSTALLED_APPS` and check for models that exist in those apps. It'll then check the corresponding `migrations` directory for migration files and compare them to the models in each of those `INSTALLED_APPS` apps. If a model has been upgraded beyond what the latest migration says should exist, a new migration file will be created that inherits from the most recent one. It'll be automatically named and also be given a message that says what changed since the last migration. - -If it's been a while since you last worked on your Django project and can't remember if your models were in sync with your migrations, you have no need to fear. `makemigrations` is an idempotent operation; your `migrations` directory will have only one copy of the current model configuration whether you run `makemigrations` once or 20 times. Even better than that, when we run `./manage.py runserver`, Django will detect that our models are out of sync with our migrations, and it'll just flat out tell us in colored text so we can make the appropriate choice. - -This next point is something that trips everybody up at least once: Creating a migration file does not immediately affect our database. When we ran `makemigrations`, we prepared our Django project to define how a given table should be created and end up looking. It's still on us to apply those changes to our database. That's what the `migrate` command is for. 
-``` -(django-someHash) $ ./manage.py migrate - -Operations to perform: - -  Apply all migrations: admin, auth, contenttypes, sessions, todo - -Running migrations: - -  Applying contenttypes.0001_initial... OK - -  Applying auth.0001_initial... OK - -  Applying admin.0001_initial... OK - -  Applying admin.0002_logentry_remove_auto_add... OK - -  Applying contenttypes.0002_remove_content_type_name... OK - -  Applying auth.0002_alter_permission_name_max_length... OK - -  Applying auth.0003_alter_user_email_max_length... OK - -  Applying auth.0004_alter_user_username_opts... OK - -  Applying auth.0005_alter_user_last_login_null... OK - -  Applying auth.0006_require_contenttypes_0002... OK - -  Applying auth.0007_alter_validators_add_error_messages... OK - -  Applying auth.0008_alter_user_username_max_length... OK - -  Applying auth.0009_alter_user_last_name_max_length... OK - -  Applying sessions.0001_initial... OK - -  Applying todo.0001_initial... OK - -``` - -When we apply our migrations, Django first checks to see if the other `INSTALLED_APPS` have migrations to be applied. It checks them in roughly the order they're listed. We want our app to be listed last, because we want to make sure that, in case our model depends on any of Django's built-in models, the database updates we make don't suffer from dependency problems. - -We have another model to build: the User model. However, the game has changed a bit since we're using Django. So many applications require some sort of User model that Django's `django.contrib.auth` package built its own for us to use. If it weren't for the authentication token we require for our users, we could just move on and use it instead of reinventing the wheel. - -However, we need that token. There are a couple of ways we can handle this. 
- - * Inherit from Django's `User` object, making our own object that extends it by adding a `token` field - * Create a new object that exists in a one-to-one relationship with Django's `User` object, whose only purpose is to hold a token - - - -I'm in the habit of building object relationships, so let's go with the second option. Let's call it an `Owner` as it basically has a similar connotation as a `User`, which is what we want. - -Out of sheer laziness, we could just include this new `Owner` object in `todo/models.py`, but let's refrain from that. `Owner` doesn't explicitly have to do with the creation or maintenance of items on the task list. Conceptually, the `Owner` is simply the owner of the task. There may even come a time where we want to expand this `Owner` to include other data that has absolutely nothing to do with tasks. - -Just to be safe, let's make an `owner` app whose job is to house and handle this `Owner` object. -``` -(django-someHash) $ ./manage.py startapp owner - -``` - -Don't forget to add it to the list of `INSTALLED_APPS` in `settings.py`. -``` -INSTALLED_APPS = [ - -    'django.contrib.admin', - -    'django.contrib.auth', - -    'django.contrib.contenttypes', - -    'django.contrib.sessions', - -    'django.contrib.messages', - -    'django.contrib.staticfiles', - -    'rest_framework', - -    'django_todo', - -    'todo', - -    'owner' - -] - -``` - -If we look at the root of our Django project, we now have two Django apps: -``` -(django-someHash) $ ls - -Pipfile      Pipfile.lock django_todo  manage.py    owner        todo - -``` - -In `owner/models.py`, let's build this `Owner` model. As mentioned earlier, it'll have a one-to-one relationship with Django's built-in `User` object. 
We can enforce this relationship with Django's `models.OneToOneField`.

```
# owner/models.py
from django.db import models
from django.contrib.auth.models import User
import secrets


class Owner(models.Model):
    """The object that owns tasks."""
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    token = models.CharField(max_length=256)

    def __init__(self, *args, **kwargs):
        """On construction, set a token if one isn't already set."""
        super().__init__(*args, **kwargs)
        if not self.token:
            self.token = secrets.token_urlsafe(64)
```

Note that we call `super().__init__()` first and only then fill in a `token`: `Model.__init__` sets every declared field to the value it was given (or the field's default), so a token assigned before the `super()` call would be overwritten. The `if not self.token` guard also keeps us from clobbering tokens on instances loaded from the database.

This says the `Owner` object is linked to the `User` object, with one `owner` instance per `user` instance. `on_delete=models.CASCADE` dictates that if the corresponding `User` gets deleted, the `Owner` instance it's linked to will also get deleted. Let's run `makemigrations` and `migrate` to bake this new model into our database.

```
(django-someHash) $ ./manage.py makemigrations
Migrations for 'owner':
  owner/migrations/0001_initial.py
    - Create model Owner
(django-someHash) $ ./manage.py migrate
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, owner, sessions, todo
Running migrations:
  Applying owner.0001_initial... OK
```

Now our `Owner` needs to own some `Task` objects. It'll be very similar to the `OneToOneField` seen above, except that we'll stick a `ForeignKey` field on the `Task` object pointing to an `Owner`.
```
# todo/models.py
from django.db import models
from owner.models import Owner


class Task(models.Model):
    """Tasks for the To Do list."""
    name = models.CharField(max_length=256)
    note = models.TextField(blank=True, null=True)
    creation_date = models.DateTimeField(auto_now_add=True)
    due_date = models.DateTimeField(blank=True, null=True)
    completed = models.BooleanField(default=False)
    owner = models.ForeignKey(Owner, on_delete=models.CASCADE)
```

Every To-Do List task has exactly one owner who can own multiple tasks. When that owner is deleted, any task they own goes with them.

Let's now run `makemigrations` to take a new snapshot of our data model setup, then `migrate` to apply those changes to our database.

```
(django-someHash) django $ ./manage.py makemigrations
You are trying to add a non-nullable field 'owner' to task without a default; we can't do that (the database needs something to populate existing rows).
Please select a fix:
 1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
 2) Quit, and let me add a default in models.py
```

Oh no! We have a problem! What happened? Well, when we created the `Owner` object and added it as a `ForeignKey` to `Task`, we required that every `Task` have an `Owner`. However, the first migration we made for the `Task` object didn't include that requirement. So, even though there's no data in our database's table, Django is doing a pre-check on our migrations to make sure they're compatible, and this new migration we're proposing is not.

There are a few ways to deal with this sort of problem:

 1. Blow away the current migration and build a new one that includes the current model configuration
 2. Add a default value to the `owner` field on the `Task` object
 3. Allow tasks to have `NULL` values for the `owner` field
- - - -Option 2 wouldn't make much sense here; we'd be proposing that any `Task` that was created would, by default, be linked to some default owner despite none necessarily existing. - -Option 1 would require us to destroy and rebuild our migrations. We should leave those alone. - -Let's go with option 3. In this circumstance, it won't be the end of the world if we allow the `Task` table to have null values for the owners; any tasks created from this point forward will necessarily have an owner. If you're in a situation where that isn't an acceptable schema for your database table, blow away your migrations, drop the table, and rebuild the migrations. -``` -# todo/models.py - -from django.db import models - -from owner.models import Owner - - - -class Task(models.Model): - -    """Tasks for the To Do list.""" - -    name = models.CharField(max_length=256) - -    note = models.TextField(blank=True, null=True) - -    creation_date = models.DateTimeField(auto_now_add=True) - -    due_date = models.DateTimeField(blank=True, null=True) - -    completed = models.BooleanField(default=False) - -    owner = models.ForeignKey(Owner, on_delete=models.CASCADE, null=True) - -(django-someHash) $ ./manage.py makemigrations - -Migrations for 'todo': - -  todo/migrations/0002_task_owner.py - -    - Add field owner to task - -(django-someHash) $ ./manage.py migrate - -Operations to perform: - -  Apply all migrations: admin, auth, contenttypes, owner, sessions, todo - -Running migrations: - -  Applying todo.0002_task_owner... OK - -``` - -Woo! We have our models! Welcome to the Django way of declaring objects. - -For good measure, let's ensure that whenever a `User` is made, it's automatically linked with a new `Owner` object. We can do this using Django's `signals` system. Basically, we say exactly what we intend: "When we get the signal that a new `User` has been constructed, construct a new `Owner` and set that new `User` as that `Owner`'s `user` field." 
In practice that looks like:

```
# owner/models.py
from django.contrib.auth.models import User
from django.db import models
from django.db.models.signals import post_save
from django.dispatch import receiver

import secrets


class Owner(models.Model):
    """The object that owns tasks."""
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    token = models.CharField(max_length=256)

    def __init__(self, *args, **kwargs):
        """On construction, set a token if one isn't already set."""
        super().__init__(*args, **kwargs)
        if not self.token:
            self.token = secrets.token_urlsafe(64)


@receiver(post_save, sender=User)
def link_user_to_owner(sender, **kwargs):
    """If a new User is saved, create a corresponding Owner."""
    if kwargs['created']:
        owner = Owner(user=kwargs['instance'])
        owner.save()
```

We set up a function that listens for signals to be sent from the `User` object built into Django. It's waiting for just after a `User` object has been saved. This can come from either a new `User` or an update to an existing `User`; we discern between the two scenarios within the listening function.

If the thing sending the signal was a newly created instance, `kwargs['created']` will have the value of `True`. We only want to do something if this is `True`. If it's a new instance, we create a new `Owner`, setting its `user` field to be the new `User` instance that was created. After that, we `save()` the new `Owner`. This will commit our change to the database if all is well. It'll fail if the data doesn't validate against the fields we declared.

Now let's talk about how we're going to access the data.

### Accessing model data

In the Flask, Pyramid, and Tornado frameworks, we accessed model data by running queries against some database session. Maybe it was attached to a `request` object, maybe it was a standalone `session` object.
Regardless, we had to establish a live connection to the database and query on that connection.
-
-This isn't the way Django works. Django, by default, doesn't leverage any third-party object-relational mapping (ORM) to converse with the database. Instead, Django allows the model classes to maintain their own conversations with the database.
-
-Every model class that inherits from `django.db.models.Model` will have attached to it an `objects` object. This will take the place of the `session` or `dbsession` we've become so familiar with. Let's open the special shell that Django gives us and investigate how this `objects` object works.
-```
-(django-someHash) $ ./manage.py shell
-Python 3.7.0 (default, Jun 29 2018, 20:13:13)
-[Clang 9.1.0 (clang-902.0.39.2)] on darwin
-Type "help", "copyright", "credits" or "license" for more information.
-(InteractiveConsole)
->>>
-```
-
-The Django shell is different from a normal Python shell in that it's aware of the Django project we've been building and can do easy imports of our models, views, settings, etc. without having to worry about installing a package. We can access our models with a simple `import`.
-```
->>> from owner.models import Owner
->>> Owner
-<class 'owner.models.Owner'>
-```
-
-Currently, we have no `Owner` instances. We can tell by querying for them with `Owner.objects.all()`.
-```
->>> Owner.objects.all()
-<QuerySet []>
-```
-
-Anytime we run a query method on the `.objects` object, we'll get a `QuerySet` back. For our purposes, it's effectively a `list`, and this `list` is showing us that it's empty. Let's make an `Owner` by making a `User`.
-```
->>> from django.contrib.auth.models import User
->>> new_user = User(username='kenyattamurphy', email='kenyatta.murphy@gmail.com')
->>> new_user.set_password('wakandaforever')
->>> new_user.save()
-```
-
-If we query for all of our `Owner`s now, we should find Kenyatta.
-```
->>> Owner.objects.all()
-<QuerySet [<Owner: Owner object (1)>]>
-```
-
-Yay! We've got data!
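To make the `objects` object feel less magical, here's a toy, plain-Python sketch of the manager/`QuerySet` pattern. It is purely an illustration — Django's real manager lazily builds SQL — and every name in it is invented for the sketch:

```python
class QuerySet(list):
    """A list-like collection of results, standing in for Django's QuerySet."""

class Manager:
    """A toy stand-in for the `objects` manager attached to a model class."""
    def __init__(self):
        self._rows = []

    def create(self, **fields):
        # Pretend to INSERT a row, then hand it back.
        self._rows.append(fields)
        return fields

    def all(self):
        return QuerySet(self._rows)

    def filter(self, **criteria):
        # Keep only the rows whose fields match every criterion.
        return QuerySet(
            row for row in self._rows
            if all(row.get(key) == value for key, value in criteria.items())
        )

class Owner:
    objects = Manager()

Owner.objects.create(username='kenyattamurphy')
```

The point of the pattern is that the model class itself carries its query entry point, so there's no separate session object to thread through the code.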
- -### Serializing models - -We'll be passing data back and forth beyond just "Hello World." As such, we'll want to see some sort of JSON-ified output that represents that data well. Taking that object's data and transforming it into a JSON object for submission across HTTP is a version of data serialization. In serializing data, we're taking the data we currently have and reformatting it to fit some standard, more-easily-digestible form. - -If I were doing this with Flask, Pyramid, and Tornado, I'd create a new method on each model to give the user direct access to call `to_json()`. The only job of `to_json()` would be to return a JSON-serializable (i.e. numbers, strings, lists, dicts) dictionary with whatever fields I want to be displayed for the object in question. - -It'd probably look something like this for the `Task` object: -``` -class Task(Base): - -    ...all the fields... - - - -    def to_json(self): - -        """Convert task attributes to a JSON-serializable dict.""" - -        return { - -            'id': self.id, - -            'name': self.name, - -            'note': self.note, - -            'creation_date': self.creation_date.strftime('%m/%d/%Y %H:%M:%S'), - -            'due_date': self.due_date.strftime('%m/%d/%Y %H:%M:%S'), - -            'completed': self.completed, - -            'user': self.user_id - -        } - -``` - -It's not fancy, but it does the job. - -Django REST Framework, however, provides us with an object that'll not only do that for us but also validate inputs when we want to create new object instances or update existing ones. It's called the [ModelSerializer][15]. - -Django REST Framework's `ModelSerializer` is effectively documentation for our models. They don't have lives of their own if there are no models attached (for that there's the [Serializer][16] class). 
Their main job is to accurately represent our model and make the conversion to JSON thoughtless when our model's data needs to be serialized and sent over a wire. - -Django REST Framework's `ModelSerializer` works best for simple objects. As an example, imagine that we didn't have that `ForeignKey` on the `Task` object. We could create a serializer for our `Task` that would convert its field values to JSON as necessary with the following declaration: -``` -# todo/serializers.py - -from rest_framework import serializers - -from todo.models import Task - - - -class TaskSerializer(serializers.ModelSerializer): - -    """Serializer for the Task model.""" - - - -    class Meta: - -        model = Task - -        fields = ('id', 'name', 'note', 'creation_date', 'due_date', 'completed') - -``` - -Inside our new `TaskSerializer`, we create a `Meta` class. `Meta`'s job here is just to hold information (or metadata) about the thing we're attempting to serialize. Then, we note the specific fields that we want to show. If we wanted to show all the fields, we could just shortcut the process and use `'__all__'`. We could, alternatively, use the `exclude` keyword instead of `fields` to tell Django REST Framework that we want every field except for a select few. We can have as many serializers as we like, so maybe we want one for a small subset of fields and one for all the fields? Go wild here. - -In our case, there is a relation between each `Task` and its owner `Owner` that must be reflected here. As such, we need to borrow the `serializers.PrimaryKeyRelatedField` object to specify that each `Task` will have an `Owner` and that relationship is one-to-one. Its owner will be found from the set of all owners that exists. We get that set by doing a query for those owners and returning the results we want to be associated with this serializer: `Owner.objects.all()`. 
We also need to include `owner` in the list of fields, as we always need an `Owner` associated with a `Task` -``` -# todo/serializers.py - -from rest_framework import serializers - -from todo.models import Task - -from owner.models import Owner - - - -class TaskSerializer(serializers.ModelSerializer): - -    """Serializer for the Task model.""" - -    owner = serializers.PrimaryKeyRelatedField(queryset=Owner.objects.all()) - - - -    class Meta: - -        model = Task - -        fields = ('id', 'name', 'note', 'creation_date', 'due_date', 'completed', 'owner') - -``` - -Now that this serializer is built, we can use it for all the CRUD operations we'd like to do for our objects: - - * If we want to `GET` a JSONified version of a specific `Task`, we can do `TaskSerializer(some_task).data` - * If we want to accept a `POST` with the appropriate data to create a new `Task`, we can use `TaskSerializer(data=new_data).save()` - * If we want to update some existing data with a `PUT`, we can say `TaskSerializer(existing_task, data=data).save()` - - - -We're not including `delete` because we don't really need to do anything with information for a `delete` operation. If you have access to an object you want to delete, just say `object_instance.delete()`. 
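The `is_valid()`/`errors`/`save()` workflow can be sketched in miniature with plain Python. This toy `TinyTaskSerializer` only illustrates the flow — DRF's real serializers also introspect model fields, coerce types, and handle relations — and all of its names are made up for the sketch:

```python
class TinyTaskSerializer:
    """Validate incoming data, collect errors, and 'save' only on success."""
    required_fields = ('name', 'owner')

    def __init__(self, data):
        self.data = data
        self.errors = {}

    def is_valid(self):
        # Record one error message per missing or empty required field.
        for field in self.required_fields:
            if not self.data.get(field):
                self.errors[field] = ['This field is required.']
        return not self.errors

    def save(self, storage):
        # Stand-in for committing a validated record to the database.
        storage.append(dict(self.data))
        return self.data

fake_db = []
good = TinyTaskSerializer({'name': 'Buy roast beef', 'owner': 1})
bad = TinyTaskSerializer({'note': 'no name or owner here'})
```

The usage mirrors the bullets above: call `is_valid()` first, `save()` on success, and hand `errors` back to the client on failure.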
-
-Here is an example of what some serialized data might look like:
-```
->>> from todo.models import Task
->>> from todo.serializers import TaskSerializer
->>> from owner.models import Owner
->>> from django.contrib.auth.models import User
->>> new_user = User(username='kenyatta', email='kenyatta@gmail.com')
->>> new_user.set_password('wakandaforever')
->>> new_user.save() # creating the User that builds the Owner
->>> kenyatta = Owner.objects.first() # grabbing the Owner that is kenyatta
->>> new_task = Task(name="Buy roast beef for the Sunday potluck", owner=kenyatta)
->>> new_task.save()
->>> TaskSerializer(new_task).data
-{'id': 1, 'name': 'Buy roast beef for the Sunday potluck', 'note': None, 'creation_date': '2018-07-31T06:00:25.165013Z', 'due_date': None, 'completed': False, 'owner': 1}
-```
-
-There's a lot more you can do with the `ModelSerializer` objects, and I suggest checking [the docs][17] for those greater capabilities. Otherwise, this is as much as we need. It's time to dig into some views.
-
-### Views for reals
-
-We've built the models and the serializers, and now we need to set up the views and URLs for our application. After all, we can't do anything with an application that has no views. We've already seen an example with the `HelloWorld` view above. However, that was always a contrived, proof-of-concept example that doesn't really show what can be done with Django REST Framework's views. Let's clear out the `HelloWorld` view and URL so we can start fresh with our views.
-
-The first view we'll build is the `InfoView`. As in the previous frameworks, we just want to package and send out a dictionary of our proposed routes. The view itself can live in `django_todo.views` since it doesn't pertain to a specific model (and thus doesn't conceptually belong in a specific app).
-```
-# django_todo/views.py
-from django.http import JsonResponse
-from rest_framework.views import APIView
-
-
-class InfoView(APIView):
-    """List of routes for this API."""
-    def get(self, request):
-        output = {
-            'info': 'GET /api/v1',
-            'register': 'POST /api/v1/accounts',
-            'single profile detail': 'GET /api/v1/accounts/<username>',
-            'edit profile': 'PUT /api/v1/accounts/<username>',
-            'delete profile': 'DELETE /api/v1/accounts/<username>',
-            'login': 'POST /api/v1/accounts/login',
-            'logout': 'GET /api/v1/accounts/logout',
-            "user's tasks": 'GET /api/v1/accounts/<username>/tasks',
-            "create task": 'POST /api/v1/accounts/<username>/tasks',
-            "task detail": 'GET /api/v1/accounts/<username>/tasks/<id>',
-            "task update": 'PUT /api/v1/accounts/<username>/tasks/<id>',
-            "delete task": 'DELETE /api/v1/accounts/<username>/tasks/<id>'
-        }
-        return JsonResponse(output)
-```
-
-This is pretty much identical to what we had in Tornado. Let's hook it up to an appropriate route and be on our way. For good measure, we'll also remove the `admin/` route, as we won't be using the Django administrative backend here.
-```
-# in django_todo/urls.py
-from django_todo.views import InfoView
-from django.urls import path
-
-urlpatterns = [
-    path('api/v1', InfoView.as_view(), name="info"),
-]
-```
-
-#### Connecting models to views
-
-Let's figure out the next URL, which will be the endpoint for either creating a new `Task` or listing a user's existing tasks. This should exist in a `urls.py` in the `todo` app since this has to deal specifically with `Task` objects instead of being a part of the whole project.
-```
-# in todo/urls.py
-from django.urls import path
-from todo.views import TaskListView
-
-urlpatterns = [
-    path('', TaskListView.as_view(), name="list_tasks")
-]
-```
-
-What's the deal with this route?
We didn't specify a particular user or much of a path at all. Since there would be a couple of routes requiring the base path `/api/v1/accounts/<username>/tasks`, why write it again and again when we can just write it once?
-
-Django allows us to take a whole suite of URLs and import them into the base `django_todo/urls.py` file. We can then give every one of those imported URLs the same base path, only worrying about the variable parts when, you know, they vary.
-```
-# in django_todo/urls.py
-from django.urls import include, path
-from django_todo.views import InfoView
-
-urlpatterns = [
-    path('api/v1', InfoView.as_view(), name="info"),
-    path('api/v1/accounts/<str:username>/tasks', include('todo.urls'))
-]
-```
-
-And now every URL coming from `todo/urls.py` will be prefixed with the path `api/v1/accounts/<str:username>/tasks`.
-
-Let's build out the view in `todo/views.py`:
-```
-# todo/views.py
-from django.http import JsonResponse
-from django.shortcuts import get_object_or_404
-from rest_framework.views import APIView
-
-from owner.models import Owner
-from todo.models import Task
-from todo.serializers import TaskSerializer
-
-
-class TaskListView(APIView):
-    def get(self, request, username, format=None):
-        """Get all of the tasks for a given user."""
-        owner = get_object_or_404(Owner, user__username=username)
-        tasks = Task.objects.filter(owner=owner).all()
-        serialized = TaskSerializer(tasks, many=True)
-        return JsonResponse({
-            'username': username,
-            'tasks': serialized.data
-        })
-```
-
-There's a lot going on here in a little bit of code, so let's walk through it.
-
-We start out with the same inheritance of the `APIView` that we've been using, laying the groundwork for what will be our view. We override the same `get` method we've overridden before, adding a parameter that allows our view to receive the `username` from the incoming request.
- -Our `get` method will then use that `username` to grab the `Owner` associated with that user. This `get_object_or_404` function allows us to do just that, with a little something special added for ease of use. - -It would make sense that there's no point in looking for tasks if the specified user can't be found. In fact, we'd want to return a 404 error. `get_object_or_404` gets a single object based on whatever criteria we pass in and either returns that object or raises an [Http404 exception][18]. We can set that criteria based on attributes of the object. The `Owner` objects are all attached to a `User` through their `user` attribute. We don't have a `User` object to search with, though. We only have a `username`. So, we say to `get_object_or_404` "when you look for an `Owner`, check to see that the `User` attached to it has the `username` that I want" by specifying `user__username`. That's TWO underscores. When filtering through a QuerySet, the two underscores mean "attribute of this nested object." Those attributes can be as deeply nested as needed. - -We now have the `Owner` corresponding to the given username. We use that `Owner` to filter through all the tasks, only retrieving the ones it owns with `Task.objects.filter`. We could've used the same nested-attribute pattern that we did with `get_object_or_404` to drill into the `User` connected to the `Owner` connected to the `Tasks` (`tasks = Task.objects.filter(owner__user__username=username).all()`) but there's no need to get that wild with it. - -`Task.objects.filter(owner=owner).all()` will provide us with a `QuerySet` of all the `Task` objects that match our query. Great. The `TaskSerializer` will then take that `QuerySet` and all its data, along with the flag of `many=True` to notify it as being a collection of items instead of just one item, and return a serialized set of results. Effectively a list of dictionaries. 
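The double-underscore traversal can be pictured as a plain-Python attribute walk. This sketch deliberately ignores what the real ORM does (it compiles lookups into SQL joins and also supports operators like `__exact` or `__gte`); the names below are invented for the illustration:

```python
def resolve_lookup(obj, lookup):
    """Walk '__'-separated attribute names down into nested objects."""
    for attr in lookup.split('__'):
        obj = getattr(obj, attr)
    return obj

class User:
    def __init__(self, username):
        self.username = username

class Owner:
    def __init__(self, user):
        self.user = user

# An Owner attached to a User, mirroring the models above.
owner = Owner(user=User(username='kenyattamurphy'))
```

With these toy objects, `resolve_lookup(owner, 'user__username')` walks `owner.user.username`, which is the same shape of traversal that `user__username=username` asks the ORM to perform.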
Finally, we provide the outgoing response with the JSON-serialized data and the username used for the query. - -#### Handling the POST request - -The `post` method will look somewhat different from what we've seen before. -``` -# still in todo/views.py - -# ...other imports... - -from rest_framework.parsers import JSONParser - -from datetime import datetime - - - -class TaskListView(APIView): - -    def get(self, request, username, format=None): - -        ... - - - -    def post(self, request, username, format=None): - -        """Create a new Task.""" - -        owner = get_object_or_404(Owner, user__username=username) - -        data = JSONParser().parse(request) - -        data['owner'] = owner.id - -        if data['due_date']: - -            data['due_date'] = datetime.strptime(data['due_date'], '%d/%m/%Y %H:%M:%S') - - - -        new_task = TaskSerializer(data=data) - -        if new_task.is_valid(): - -            new_task.save() - -            return JsonResponse({'msg': 'posted'}, status=201) - - - -        return JsonResponse(new_task.errors, status=400) - -``` - -When we receive data from the client, we parse it into a dictionary using `JSONParser().parse(request)`. We add the owner to the data and format the `due_date` for the task if one exists. - -Our `TaskSerializer` does the heavy lifting. It first takes in the incoming data and translates it into the fields we specified on the model. It then validates that data to make sure it fits the specified fields. If the data being attached to the new `Task` is valid, it constructs a new `Task` object with that data and commits it to the database. We then send back an appropriate "Yay! We made a new thing!" response. If not, we collect the errors that `TaskSerializer` generated and send those back to the client with a `400 Bad Request` status code. - -If we were to build out the `put` view for updating a `Task`, it would look very similar to this. 
The main difference would be that when we instantiate the `TaskSerializer`, instead of just passing in the new data, we'd pass in the old object and the new data for that object like `TaskSerializer(existing_task, data=data)`. We'd still do the validity check and send back the responses we want to send back. - -### Wrapping up - -Django as a framework is highly customizable, and everyone has their own way of stitching together a Django project. The way I've written it out here isn't necessarily the exact way that a Django project needs to be set up; it's just a) what I'm familiar with, and b) what leverages Django's management system. Django projects grow in complexity as you separate concepts into their own little silos. You do that so it's easier for multiple people to contribute to the overall project without stepping on each other's toes. - -The vast map of files that is a Django project, however, doesn't make it more performant or naturally predisposed to a microservice architecture. On the contrary, it can very easily become a confusing monolith. That may still be useful for your project. It may also make it harder for your project to be manageable, especially as it grows. - -Consider your options carefully and use the right tool for the right job. For a simple project like this, Django likely isn't the right tool. - -Django is meant to handle multiple sets of models that cover a variety of different project areas that may share some common ground. This project is a small, two-model project with a handful of routes. If we were to build this out more, we'd only have seven routes and still the same two models. It's hardly enough to justify a full Django project. - -It would be a great option if we expected this project to expand. This is not one of those projects. This is choosing a flamethrower to light a candle. It's absolute overkill. - -Still, a web framework is a web framework, regardless of which one you use for your project. 
It can take in requests and respond as well as any other, so you do as you wish. Just be aware of what overhead comes with your choice of framework.
-
-That's it! We've reached the end of this series! I hope it has been an enlightening adventure and will help you make more than just the most-familiar choice when you're thinking about how to build out your next project. Make sure to read the documentation for each framework to expand on anything covered in this series (as it's not even the least bit comprehensive). There's a wide world of stuff to get into for each. Happy coding!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/8/django-framework
-
-author: [Nicholas Hunt-Walker][a]
-selected by: [lujun9972](https://github.com/lujun9972)
-translated by: [译者ID](https://github.com/译者ID)
-proofread by: [校对者ID](https://github.com/校对者ID)
-
-This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
-
-[a]:https://opensource.com/users/nhuntwalker
-[1]:https://opensource.com/article/18/5/pyramid-framework
-[2]:https://opensource.com/article/18/4/flask
-[3]:https://opensource.com/article/18/6/tornado-framework
-[4]:https://www.djangoproject.com
-[5]:https://djangopackages.org/
-[6]:http://www.django-rest-framework.org/
-[7]:http://gunicorn.org/
-[8]:https://docs.pylonsproject.org/projects/waitress/en/latest/
-[9]:https://uwsgi-docs.readthedocs.io/en/latest/
-[10]:https://docs.djangoproject.com/en/2.0/ref/settings/#databases
-[11]:https://pypi.org/project/dj-database-url/
-[12]:http://yellerapp.com/posts/2015-01-12-the-worst-server-setup-you-can-make.html
-[13]:https://docs.djangoproject.com/en/2.0/ref/settings/#std:setting-DATABASE-ENGINE
-[14]:https://www.getpostman.com/
-[15]:http://www.django-rest-framework.org/api-guide/serializers/#modelserializer
-[16]:http://www.django-rest-framework.org/api-guide/serializers/
-[17]:http://www.django-rest-framework.org/api-guide/serializers/#serializers
-[18]:https://docs.djangoproject.com/en/2.0/topics/http/views/#the-http404-exception

From 2e72e1d1d171f3dfbe0661aba0cbf0620e9c7b20 Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sun, 30 Sep 2018 10:26:45 +0800
Subject: [PATCH 151/736] Create 20180816 An introduction to the Django Python
 web app framework.md

---
 ... to the Django Python web app framework.md | 1219 +++++++++++++++++
 1 file changed, 1219 insertions(+)
 create mode 100644 translated/tech/20180816 An introduction to the Django Python web app framework.md

diff --git a/translated/tech/20180816 An introduction to the Django Python web app framework.md b/translated/tech/20180816 An introduction to the Django Python web app framework.md
new file mode 100644
index 0000000000..dc9fd20449
--- /dev/null
+++ b/translated/tech/20180816 An introduction to the Django Python web app framework.md
@@ -0,0 +1,1219 @@
+An introduction to the Django Python web app framework
+=====
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-spider-frame-framework.png?itok=Rl2AG2Dc)
+
+In the first three articles of this four-part series, we covered [Pyramid][1], [Flask][2], and [Tornado][3]. We've built the same app three times and have finally made our way to [Django][4]. Django is, by and large, the major web framework for Python developers these days, and it's not too hard to see why. It excels at hiding vast amounts of configuration logic and letting you focus on being able to build big, quickly.
+
+That said, when it comes to small projects, like our To-Do list app, Django can be a bit like bringing a firehose to a water gun fight. Let's see how it all comes together.
+
+### About Django
+
+Django styles itself as "a high-level Python web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of web development, so you can focus on writing your app without needing to reinvent the wheel." And it really does! This massive web framework comes with so many batteries included that oftentimes during development it can be a mystery as to how everything manages to work together.
+
+In addition to the framework itself being large, the Django community is absolutely massive. In fact, it's so big and active that there's [a whole website][5] devoted to the third-party packages people have designed to plug into Django to do a whole host of things — everything from authentication and authorization, to full-on Django-powered content management systems, to e-commerce add-ons, to integrations with Stripe. Talk about not reinventing the wheel: chances are, if you want something done with Django, someone has already done it and you can just pull it into your project.
+
+For our purposes, we're going to want to build a REST API with Django, so we'll leverage the popular [Django REST framework][6]. Its job is to turn the Django framework, which was made to serve fully rendered HTML pages built with Django's own templating engine, into a system specifically geared toward effectively handling REST interactions. Let's get going with that.
+
+### Django startup and configuration
+
+```
+$ mkdir django_todo
+$ cd django_todo
+$
pipenv install --python 3.6
+$ pipenv shell
+(django-someHash) $ pipenv install django djangorestframework
+```
+
+For reference, we're working with `django-2.0.7` and `djangorestframework-3.8.2`.
+
+Unlike Flask, Tornado, and Pyramid, we don't need to write our own `setup.py` file — we're not making an installable Python distribution. As with many things, Django handles that in its own way. We'll still need a `requirements.txt` file to keep track of everything necessary to install for deployment elsewhere. However, as far as targeting modules within our Django project goes, Django will let us list the subdirectories we want access to, then allow us to import from those directories as if they were installed packages.
+
+First, we have to create a Django project.
+
+When we installed Django, we also installed the command-line script `django-admin`. Its job is to manage all the Django-related commands that help put our project together and maintain it as we continue to develop. Instead of having us build up the entire Django ecosystem from scratch, `django-admin` will start us off with all the absolutely necessary files (and more) we need for a standard Django project.
+
+The syntax for invoking `django-admin`'s `startproject` command is `django-admin startproject <project name> <directory where we want the files>`. We want the files to exist in our current working directory, so:
+```
+(django-someHash) $ django-admin startproject django_todo .
+```
+
+Typing `ls` shows one new file and one new directory:
+```
+(django-someHash) $ ls
+manage.py   django_todo
+```
+
+`manage.py` is a command-line-executable Python file that ends up just being a wrapper around `django-admin`. As such, its job is the same: to help us manage our project. Hence the name `manage.py`.
+
+Within the outer `django_todo` directory, it created a new `django_todo` directory representing the configuration root for our project. Let's dig into that now.
+
+### Configuring Django
+
+By calling the `django_todo` directory the "configuration root," we mean this directory holds the files necessary for generally configuring our Django project. Pretty much everything outside this directory will be focused solely on the "business logic" associated with the project's models, views, routes, etc. All the points that connect the project together will lead here.
+
+Calling `ls` within `django_todo` reveals four files:
+```
+(django-someHash) $ cd django_todo
+(django-someHash) $ ls
+__init__.py settings.py urls.py     wsgi.py
+```
+
+  * `__init__.py` is empty, existing solely to turn this directory into an importable Python package.
+  * `settings.py` is where most configuration items will be set, like whether the project is in DEBUG mode, what databases are in use, where Django should look for files, etc. It's the "main configuration" part of the configuration root, and we'll dig into it in a moment.
+  * `urls.py` is, as the name implies, where the URLs are set. While we don't have to explicitly write every URL for the project in this file, we do have to make this file aware of anywhere else URLs have been declared. If this file doesn't point to other URLs, those URLs don't exist.
+  * `wsgi.py` is for serving the application in production. Just like how Pyramid, Tornado, and Flask exposed some "app" object that was the configured application to be served, Django must also expose one. That's done here. It can then be served with something like [Gunicorn][7], [Waitress][8], or [uWSGI][9].
+
+#### Setting up the settings
+
+Taking a look inside `settings.py` will reveal a sizable number of configuration options — and those are just the defaults! This doesn't even include the database, static files, media files, hooks for any integrations, or any of the several other ways a Django project can be configured.
Let's see what we've got, going from top to bottom:
+
+  * `BASE_DIR` sets the absolute path to the base directory, i.e., the directory where `manage.py` is located. This is useful for locating files.
+  * `SECRET_KEY` is a key used for cryptographic signing within the Django project. In practice, it's used for things like sessions, cookies, CSRF protection, and auth tokens. As soon as possible — ideally before the first commit — the value of `SECRET_KEY` should be changed and moved into an environment variable.
+  * `DEBUG` tells Django whether to run the project in development mode or production mode. This is an extremely critical distinction.
+    * In development mode, when an error pops up, Django will show the full stack trace that led to the error, as well as all the settings and configuration involved in running the project. This can be a massive security issue if `DEBUG` were set to `True` in a production environment.
+    * In production mode, when things go wrong, Django shows a plain error page — nothing beyond an error code.
+    * A simple way to safeguard our project is to set `DEBUG` from an environment variable, like `bool(os.environ.get('DEBUG', ''))`.
+  * `ALLOWED_HOSTS` is the list of hostnames the application will be served from. In development this can be empty, but in production, the Django project will not run if the host serving it isn't in the list of `ALLOWED_HOSTS`. Another candidate for an environment variable.
+  * `INSTALLED_APPS` is the list of Django "apps" (think of them as subdirectories; more on this later) that our Django project has access to. The ones given by default provide:
+    * the built-in Django admin site
+    * Django's built-in authentication system
+    * Django's one-size-fits-all manager for data models
+    * session management
+    * cookie- and session-based messaging
+    * usage of static files inherent to the site, like `css` files, `js` files, any images that are part of our site's design, etc.
+  * `MIDDLEWARE` is as it sounds: the middleware that helps the Django project run. Much of it is for handling various types of security, although we can add other middleware as we need it.
+  * `ROOT_URLCONF` sets the import path of our base-level URL configuration file. Remember that `urls.py` we saw earlier? By default, Django points to that file to gather all our URLs. If we want Django to look elsewhere, we'd set the import path to that location here.
+  * `TEMPLATES` is the list of template engines Django would use for our site's front end, if we were relying on Django to build our HTML. Since we're not, it's irrelevant here.
+  * `WSGI_APPLICATION` sets the import path of our WSGI application — the thing that gets served in production. By default, it points to the `application` object in `wsgi.py`. This rarely, if ever, needs to be modified.
+  * `DATABASES` sets which databases our Django project will access. The `default` database must be set. We can set others by name, as long as we provide the `HOST`, `USER`, `PASSWORD`, `PORT`, database `NAME`, and appropriate `ENGINE`. As one might imagine, these are all sensitive pieces of information, so it's best to hide them in environment variables. [Check the Django docs][10] for more details.
+    * Note: If you'd rather provide a full database URL instead of the individual pieces of a database's location, check out [dj_database_url][11].
+  * `AUTH_PASSWORD_VALIDATORS` is effectively a list of functions that run to check input passwords. We get a few by default, but if we had other, more complex validation needs — beyond checking whether the password matches a user's attributes, whether it exceeds a minimum length, whether it's one of the 1,000 most common passwords, or whether the password is entirely numeric — we could list them here.
+  * `LANGUAGE_CODE` sets the language for the site. By default it's US English, but we could switch it to other languages.
+  * `TIME_ZONE` is the time zone for any autogenerated timestamps in our Django project. I cannot stress enough how important it is to stick to UTC and perform any time-zone-specific processing elsewhere instead of trying to reconfigure this setting. As [this article][12] states, UTC is the common denominator among all time zones because there are no offsets to worry about. If offsets are that important, we can calculate them as needed with an appropriate offset from UTC.
+  * `USE_I18N` will let Django use its own translation services to translate strings for the front end. I18N = internationalization (18 characters between the "i" and the "n").
+  * `USE_L10N` (L10N = localization; 10 characters between the "l" and the "n") will, if set to `True`, use the common local formatting of data. A great example is dates: in the US it's MM-DD-YYYY, while in Europe dates tend to be written DD-MM-YYYY.
+  * `STATIC_URL` is part of a larger body of settings for serving static files. We'll be building a REST API, so we won't need to worry about static files. In general, this sets the root path after the domain name for every static file. So, if we had a logo image to serve, it'd be `http://<domainname>/<STATIC_URL>/logo.gif`.
+
+These settings are ready to go by default. One thing we'll have to change is the `DATABASES` setting. First, we create the database we'll be using:
+```
+(django-someHash) $ createdb django_todo
+```
+
+We want to use a PostgreSQL database like we did with Flask, Pyramid, and Tornado, which means we'll have to change the `DATABASES` setting to allow our server to access a PostgreSQL database. First: the engine. By default, the database engine is `django.db.backends.sqlite3`. We'll change that to `django.db.backends.postgresql`.
+
+For more information about Django's available engines, [check the docs][13]. Note that while it is technically possible to incorporate a NoSQL solution into a Django project, out of the box, Django is strongly biased toward SQL solutions.
+
+Next, we have to specify the key-value pairs for the different parts of the connection parameters:
+
+  * `NAME` is the name of the database we just created.
+  * `USER` is the Postgres database username.
+  * `PASSWORD` is the password needed to access the database.
+  * `HOST` is the host for the database. `localhost` or `127.0.0.1` will work while we're developing locally.
+  * `PORT` is whatever port we have open for Postgres; it's typically `5432`.
+
+`settings.py` expects us to provide string values for each of these keys. However, this is highly sensitive information. That's not going to work for any responsible developer. There are several ways to address this problem; one is to read from environment variables:
+```
+DATABASES = {
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql',
+        'NAME': os.environ.get('DB_NAME', ''),
+        'USER': os.environ.get('DB_USER', ''),
+        'PASSWORD': os.environ.get('DB_PASS', ''),
+        'HOST': os.environ.get('DB_HOST', ''),
+        'PORT': os.environ.get('DB_PORT', ''),
+    }
+}
+```
+
+Before moving forward, make sure to set those environment variables, or Django will not work. Also, we need to install `psycopg2` into this environment so we can talk to our database.
+
+### Django routes and views
+
+Let's make something function inside this project. We'll be using Django REST Framework to construct our REST API, so we have to make sure we can use it by adding `rest_framework` to the end of `INSTALLED_APPS` in `settings.py`:
+```
+INSTALLED_APPS = [
+    'django.contrib.admin',
+    'django.contrib.auth',
+    'django.contrib.contenttypes',
+    'django.contrib.sessions',
+    'django.contrib.messages',
+    'django.contrib.staticfiles',
+    'rest_framework'
+]
+```
+
+While Django REST Framework doesn't exclusively require class-based views (like Tornado) to handle incoming requests, it is the preferred method for writing views. Let's define one.
In `django_todo`, let's create a file called `views.py`. Within `views.py`, we'll create our "Hello, world!" view:
+```
+# django_todo/views.py
+from django.http import JsonResponse
+from rest_framework.views import APIView
+
+
+class HelloWorld(APIView):
+    def get(self, request, format=None):
+        """Print 'Hello, world!' as the response body."""
+        # safe=False lets JsonResponse serialize a bare string, not just a dict
+        return JsonResponse("Hello, world!", safe=False)
+```
+
+Every Django REST Framework class-based view inherits, directly or indirectly, from `APIView`. `APIView` handles a ton of stuff, but for our purposes it does these specific things:
+
+  * Sets up the methods needed to direct traffic based on the HTTP method of the request (e.g., GET, POST, PUT, DELETE)
+  * Populates the `request` object with all the data and attributes we'll need for parsing and processing any incoming request
+  * Takes the `Response` or `JsonResponse` that every dispatch method (i.e., methods named `get`, `post`, `put`, `delete`) returns and constructs a properly formatted HTTP response
+
+Yay, we have a view! On its own it does nothing. We need to connect it to a route.
+
+If we hop into `django_todo/urls.py`, we reach our default URL configuration file. As mentioned earlier: if a route in our Django project isn't included here, it doesn't exist.
+
+We add the URLs we want by adding them to the given `urlpatterns` list. By default, we get a whole set of URLs for Django's built-in site administration backend, but we'll delete that.
+
+We also get some very helpful doc strings that tell us exactly how to add routes to our Django project. We'll need to provide a call to `path()` with three parameters:
+
+  * The desired route, as a string (no leading slashes)
+  * The view function (just one!) that will handle that route
+  * The name for the route within the Django project
+
+Let's import our `HelloWorld` view and attach it to the home route `"/"`. We can also remove the path to the `admin` from `urlpatterns`, since we won't be using it:
+```
+# django_todo/urls.py, after the big doc string
+from django.urls import path
+from django_todo.views import HelloWorld
+
+urlpatterns = [
+    path('', HelloWorld.as_view(), name="hello"),
+]
+```
+
+Well, this is different. The route we specified is just a blank string. Why does that work? Django assumes that every path we declare begins with a leading slash; we're just specifying routes to resources after the initial domain name. If a route isn't going to a specific resource and is instead just the home page, the route is just `""`, or effectively "no resource."
+
+The `HelloWorld` view is imported from the `views.py` file we just created. In order to do this import, we need to update the `INSTALLED_APPS` list in `settings.py` to include `django_todo`. Yeah, it's a bit weird. Here's one way to think about it.
+
+`INSTALLED_APPS` refers to the list of directories or packages that Django sees as importable. It's Django's way of treating individual components of a project like installed packages, without going through a `setup.py`. We want the `django_todo` directory to be treated like an importable package, so we include that directory in `INSTALLED_APPS`. Now, any module within that directory is also importable. So we get our view.
+
+The `path` function will only take a view function as that second argument, not just a class-based view on its own. Luckily, all valid Django class-based views include an `.as_view()` method. Its job is to roll up all the goodness of the class-based view into a single view function and return that view function. So, we never have to worry about making that translation. Instead, we only have to think about the business logic, letting Django and Django REST Framework handle the rest.
+
+Let's crack this open in the browser!
+
+Django comes with its own local development server, accessible through `manage.py`. Let's navigate to the directory containing `manage.py` and type:
+```
+(django-someHash) $ ./manage.py runserver
+Performing system checks...
+
+System check identified no issues (0 silenced).
+
+August 01, 2018 - 16:47:24
+Django version 2.0.7, using settings 'django_todo.settings'
+Starting development server at http://127.0.0.1:8000/
+Quit the server with CONTROL-C.
+```
+
+When `runserver` is executed, Django does a check to make sure the project is (more or less) wired together correctly. It's not foolproof, but it does catch some glaring issues. It also notifies us if our database is out of sync with our code. Undoubtedly ours is, since we haven't committed any of our application's stuff to the database, but that's fine for now. Let's visit `http://127.0.0.1:8000` to see the output of the `HelloWorld` view.
+
+Huh. That's not the plaintext data we saw in Pyramid, Flask, and Tornado. When Django REST Framework is used, the HTTP response (when viewed in the browser) is this sort of rendered HTML, showing our actual JSON response in red.
+
+But don't fret! If we do a quick `curl` against `http://127.0.0.1:8000` on the command line, we don't get any of that fancy HTML — just the content:
+```
+# Note: run this in a different terminal window, outside of the virtual environment
+$ curl http://127.0.0.1:8000
+"Hello, world!"
+```
+
+Bueno!
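What `.as_view()` accomplishes can be pictured with a small plain-Python stand-in: a classmethod that wraps a class full of per-HTTP-method handlers into one dispatching view function. This is only a sketch of the idea, not Django REST Framework's implementation — requests are faked here as plain dicts, and all names are invented for the illustration:

```python
class SketchAPIView:
    """Bare-bones stand-in for APIView, showing what as_view() returns."""
    @classmethod
    def as_view(cls):
        def view(request):
            instance = cls()
            # Dispatch to the handler named after the HTTP method, if any.
            handler = getattr(instance, request['method'].lower(), None)
            if handler is None:
                return {'status': 405, 'body': 'Method Not Allowed'}
            return handler(request)
        return view

class HelloWorld(SketchAPIView):
    def get(self, request):
        return {'status': 200, 'body': 'Hello, world!'}

# as_view() hands back a plain function, which is what a router wants.
hello_view = HelloWorld.as_view()
```

This is why `path('', HelloWorld.as_view(), ...)` works: the router only ever sees a function, and the class machinery stays hidden behind it.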
Django REST Framework wants us to have a human-friendly interface when browsing with a browser. This makes sense: if JSON is viewed in the browser, it's typically because a human wants to check that it looks right, or to get an idea of what the JSON response will look like while designing some consumer of the API. It's a lot like what you'd get from [Postman][14].
+
+Either way, we know our view is working! Woo! Let's recap what we've done:
+
+  1. Started the project with `django-admin startproject <project name>`
+  2. Updated `django_todo/settings.py` to use environment variables for `DEBUG`, `SECRET_KEY`, and the values in the `DATABASES` dict
+  3. Installed Django REST Framework and added it to the list of `INSTALLED_APPS`
+  4. Created `django_todo/views.py` to hold our first view class, which responds with "Hello, world!"
+  5. Updated `django_todo/urls.py` with a path to our root route
+  6. Updated `INSTALLED_APPS` in `django_todo/settings.py` to include the `django_todo` package
+
+### Creating models
+
+Now let's create our data models.
+
+A Django project's entire infrastructure is built around data models. It's written so that each data model can have its own little universe, with its own views, its own set of URLs that concern its resources, and even its own tests (if we are so inclined).
+
+If we wanted to build a simple Django project, we could circumvent this by just writing our own `models.py` file in the `django_todo` directory and importing it into our views. However, we're trying to write a Django project the "right" way, so we should divide up our models as best we can into their own little packages, the Django way.
+
+The Django way involves creating what are called Django "apps." Django "apps" aren't separate applications per se; they don't have their own settings and whatnot (although they can). They can, however, have just about everything else one might think of as belonging to a standalone application:
+
+  * A self-contained set of URLs
+  * A self-contained set of HTML templates (if we want to serve HTML)
+  * One or more data models
+  * A self-contained set of views
+  * A self-contained set of tests
+
+They are made to be independent so they can be easily shared like standalone applications. In fact, Django REST Framework is an example of a Django app. It comes packaged with its own views and HTML templates for serving up our JSON. We just leverage that Django app to turn our project into a full-on RESTful API with less hassle.
+
+To create the Django app for our To-Do list items, we'll want to use the `startapp` command with `manage.py`:
+```
+(django-someHash) $ ./manage.py startapp todo
+```
+
+The `startapp` command succeeds silently. We can check that it did what it should've done by using `ls`:
+```
+(django-someHash) $ ls
+Pipfile      Pipfile.lock django_todo  manage.py    todo
+```
+
+Look at that: we've got a brand new `todo` directory. Let's look inside!
+```
+(django-someHash) $ ls todo
+
+__init__.py admin.py    apps.py     migrations  models.py   tests.py    views.py
+
+```
+
+以下是 `manage.py startapp` 创建的文件:
+
+ * `__init__.py` 是空文件。它之所以存在是因为此目录可看作是模型、视图等的有效导入路径。
+
+ * `admin.py` 不是空文件。它用于定制这个应用程序的模型在 Django admin 后台中的展示方式,本文不会涉及它。
+
+ * `apps.py` 这里基本不需要我们做什么。它同样是为 Django admin 展示模型服务的。
+
+ * `migrations` 是一个包含我们数据模型快照的目录。它用于更新数据库。这是少数几个内置了数据库管理的框架之一,其中一部分功能允许我们更新数据库,而不必为了更改模式将它拆除重建。
+
+ * `models.py` 是数据模型所在。
+
+ * `tests.py` 是测试所在的地方,如果我们需要写测试。
+
+ * `views.py` 用于我们编写的与此 app 中的模型相关的视图。它们不是一定得写在这里。例如,我们可以在 `django_todo/views.py` 中写下我们所有的视图。但是,放在这个 app 里更容易把各块关注点分开。对于覆盖许多概念领域的大型应用程序来说,这一点尤为重要。
+
+它并没有为这个 app 创建 `urls.py` 文件,但我们可以自己创建。
+```
+(django-someHash) $ touch todo/urls.py
+
+```
+
+在继续之前,我们应该帮自己一个忙,将这个新 Django 应用程序添加到 `django_todo/settings.py` 中的 `INSTALLED_APPS` 列表中。
+```
+# settings.py
+
+INSTALLED_APPS = [
+
+    'django.contrib.admin',
+
+    'django.contrib.auth',
+
+    'django.contrib.contenttypes',
+
+    'django.contrib.sessions',
+
+    'django.contrib.messages',
+
+    'django.contrib.staticfiles',
+
+    'rest_framework',
+
+    'django_todo',
+
+    'todo' # <--- 添加了这行
+
+]
+
+```
+
+检查 `todo/models.py` 发现 `manage.py` 已经为我们编写了一些代码。不同于在 Flask, Tornado 和 Pyramid 实现中创建模型的方式,Django 不利用第三方来管理数据库会话或构建其对象实例。它全部归入 Django 的 `django.db.models` 子模块。
+
+然而,建立模型的方式或多或少是相同的。要在 Django 中创建模型,我们需要构建一个继承自 `models.Model` 的 `class`,并将该模型实例应有的所有字段声明为类属性。我们不像过去那样从 SQLAlchemy 导入列和字段类型,而是直接从 `django.db.models` 导入。
+```
+# todo/models.py
+
+from django.db import models
+
+
+class Task(models.Model):
+
+    """Tasks for the To Do list."""
+
+    name = models.CharField(max_length=256)
+
+    note = models.TextField(blank=True, null=True)
+
+    creation_date = models.DateTimeField(auto_now_add=True)
+
+    due_date = models.DateTimeField(blank=True, null=True)
+
+    completed = models.BooleanField(default=False)
+
+```
+
+虽然 Django 的需求和基于 SQLAlchemy 的系统之间存在一些明显的差异,但总体内容和结构或多或少相同。让我们来指出这些差异。
+
+我们不再需要为对象实例声明自动递增 ID 的单独字段。除非我们指定一个不同的字段作为主键,否则 Django 会为我们构建一个。
+
+我们只是直接引用数据类型作为列本身,而不是实例化传递数据类型对象的 `Column` 对象。
+
+`Unicode` 字段变为 `models.CharField` 或 `models.TextField`。`CharField` 用于特定最大长度的小文本字段,而 `TextField` 用于任何数量的文本。
+
+`TextField` 允许为空,我们用两种方式指定了这一点。`blank = True` 表示当构建此模型的实例,并且正在验证附加到该字段的数据时,该数据是可以为空的。这与 `null = True` 不同,后者表示当构造此模型类的表时,对应于 `note` 的列将允许空白或为 `NULL`。因此,总而言之,`blank = True` 控制如何将数据添加到模型实例,而 `null = True` 控制如何构建保存该数据的数据库表。
+
+`DateTime` 字段增加了一些属性,并且能够为我们做一些工作,使得我们不必修改类的 `__init__` 方法。对于 `creation_date` 字段,我们指定 `auto_now_add = True`。实际的意义是,当创建一个新模型实例时,Django 将自动记录现在的日期和时间作为该字段的值。这非常方便!
+
+当 `auto_now_add` 及其类似属性 `auto_now` 都没被设置为 `True` 时,`DateTimeField` 会像其它字段一样期待数据。它需要提供一个适当的 `datetime` 对象才能生效。`due_date` 列的 `blank` 和 `null` 属性都设置为 `True`,这样待办事项列表中的项目就可以只是将来某个时候要完成的事情,而不必带有确定的日期或时间。
+
+`BooleanField` 最终可以取两个值:`True` 或 `False`。这里,默认值设置为 `False`。
+
+#### 管理数据库
+
+如前所述,Django 有自己的数据库管理方式。我们可以利用 Django 提供的 `manage.py` 脚本,而不必编写任何关于数据库的代码。它不仅可以管理我们数据库的表格构建,还可以管理我们希望对这些表格进行的任何更新,而不必将整个事情搞砸!
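+
+在动手迁移之前,可以用一个不依赖 Django 的极简示意来巩固上面 `blank` 与 `null` 的区别(两个函数均为演示而假设,并非 Django 的真实代码):
+
+```
+# 极简示意:模拟“验证层(blank)”与“数据库层(null)”两道关卡。
+def validate_instance(note, blank=True):
+    """模拟模型实例验证:blank=True 时允许空字符串。"""
+    if note == "" and not blank:
+        raise ValueError("This field cannot be blank.")
+    return note
+
+def save_to_db(note, null=True):
+    """模拟数据库列约束:null=True 时允许存入 NULL(None)。"""
+    if note is None and not null:
+        raise ValueError("Column 'note' cannot be NULL.")
+    return {"note": note}
+
+print(save_to_db(validate_instance("")))  # 验证层放行空字符串
+print(save_to_db(None))                   # 数据库层放行 NULL
+```
+
+也就是说,`blank=True` 管的是第一道关卡(表单和实例验证),`null=True` 管的是第二道关卡(数据库表结构),两者相互独立。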
+
+因为我们构建了一个新模型,所以我们需要让数据库知道它。首先,我们需要将与此模型对应的模式放入代码中。`manage.py` 的 `makemigrations` 命令对我们构建的模型类及其所有字段进行快照。它将获取该信息并将其打包成一个 Python 脚本,该脚本将存在于特定 Django app 的 `migrations` 目录中。我们永远不需要直接运行这个迁移脚本。它的存在只是为了让 Django 可以使用它作为更新数据库表的基础,或者在我们更新模型类时作为继承信息的来源。
+```
+(django-someHash) $ ./manage.py makemigrations
+
+Migrations for 'todo':
+
+  todo/migrations/0001_initial.py
+
+    - Create model Task
+
+```
+
+这将查看 `INSTALLED_APPS` 中列出的每个应用程序,并检查这些应用程序中存在的模型。然后,它将检查相应的 `migrations` 目录中的迁移文件,并将它们与每个 `INSTALLED_APPS` 中的模型进行比较。如果模型的更新超出了最新迁移文件所记录的状态,则将创建一个继承自最新迁移文件的新迁移文件,它将自动命名,并且还会显示一条消息,说明自上次迁移以来发生了哪些更改。
+
+如果你上次处理 Django 项目已经有一段时间了,并且不记得模型是否与迁移同步,那么你无需担心。`makemigrations` 是一个幂等操作。无论你运行 `makemigrations` 一次还是 20 次,`migrations` 目录中都只会有一份与当前模型配置对应的迁移。还有更棒的,当我们运行 `./manage.py runserver` 时,如果 Django 检测到我们的模型与迁移不同步,它会用彩色文本告诉我们,以便我们做出适当的选择。
+
+下一个要点是每个人至少都会被绊到一次的地方:创建一个迁移文件并不会立即影响我们的数据库。当我们运行 `makemigrations` 时,我们只是让 Django 项目准备好了给定的表应该如何创建、最终是什么样子的定义。我们还没有把这些更改应用到数据库。这就是 `migrate` 命令的用途。
+
+```
+(django-someHash) $ ./manage.py migrate
+
+Operations to perform:
+
+  Apply all migrations: admin, auth, contenttypes, sessions, todo
+
+Running migrations:
+
+  Applying contenttypes.0001_initial... OK
+
+  Applying auth.0001_initial... OK
+
+  Applying admin.0001_initial... OK
+
+  Applying admin.0002_logentry_remove_auto_add... OK
+
+  Applying contenttypes.0002_remove_content_type_name... OK
+
+  Applying auth.0002_alter_permission_name_max_length... OK
+
+  Applying auth.0003_alter_user_email_max_length... OK
+
+  Applying auth.0004_alter_user_username_opts... OK
+
+  Applying auth.0005_alter_user_last_login_null... OK
+
+  Applying auth.0006_require_contenttypes_0002... OK
+
+  Applying auth.0007_alter_validators_add_error_messages... OK
+
+  Applying auth.0008_alter_user_username_max_length... OK
+
+  Applying auth.0009_alter_user_last_name_max_length... OK
+
+  Applying sessions.0001_initial... OK
+
+  Applying todo.0001_initial... 
OK + +``` + +当我们应用迁移时,Django 首先检查其他 `INSTALLED_APPS` 是否有要应用的迁移,它大致按照列出的顺序检查它们。我们希望我们的应用程序最后列出,因为我们希望确保,如果我们的模型依赖于任何 Django 的内置模型,我们所做的数据库更新不会受到依赖性问题的影响。 + +我们还有另一个要构建的模型:User 模型。但是,自从我们使用 Django 以来,游戏发生了一些变化。许多应用程序需要某种类型的用户模型,Django 的 `django.contrib.auth` 包构建了自己的用户模型供我们使用。如果它不是我们用户需要的身份验证令牌,我们可以继续使用它而不是重新发明轮子。 + +但是,我们需要那个令牌。我们可以通过两种方式来处理这个问题。 + + * 继承自 Django 的 `User` 对象,我们自己的对象通过添加 `token` 字段来扩展它 + * 创建一个与 Django 的 `User` 对象一对一关系的新对象,其唯一目的是持有一个令牌 + +我习惯于建立对象关系,所以让我们选择第二种选择。我们称之为 `Owner`,因为它基本上具有与 `User` 类似的内涵,这就是我们想要的。 + +出于纯粹的懒惰,我们可以在 `todo/models.py` 中包含这个新的 `Owner` 对象,但是不要这样做。`Owner` 没有明确地与任务列表上的项目的创建或维护有关。从概念上讲,`Owner` 只是任务的所有者。甚至有时候我们想要扩展这个 `Owner` 以包含与任务完全无关的其他数据。 + +为了安全起见,让我们创建一个 `owner` 应用程序,其工作是容纳和处理这个 `Owner` 对象。 +``` +(django-someHash) $ ./manage.py startapp owner + +``` + +不要忘记在 `settings.py` 文件中的 `INSTALLED_APPS` 中添加它。 +``` +INSTALLED_APPS = [ +    'django.contrib.admin', + +    'django.contrib.auth', + +    'django.contrib.contenttypes', + +    'django.contrib.sessions', + +    'django.contrib.messages', + +    'django.contrib.staticfiles', + +    'rest_framework', + +    'django_todo', + +    'todo', + +    'owner' +] + +``` + +如果我们查看 Django 项目的根目录,我们现在有两个 Django 应用程序: +``` +(django-someHash) $ ls + +Pipfile      Pipfile.lock django_todo  manage.py    owner        todo + +``` + +在 `owner/models.py` 中,让我们构建这个 `Owner` 模型。如前所述,它与 Django 的内置 `User` 对象有一对一的关系。我们可以用 Django 的 `models.OneToOneField` 强制实现这种关系。 +``` +# owner/models.py + +from django.db import models + +from django.contrib.auth.models import User + +import secrets + + +class Owner(models.Model): + +    """The object that owns tasks.""" + +    user = models.OneToOneField(User, on_delete=models.CASCADE) + +    token = models.CharField(max_length=256) + + +    def __init__(self, *args, **kwargs): + +        """On construction, set token.""" + +        self.token = secrets.token_urlsafe(64) + +        super().__init__(*args, **kwargs) + +``` + +这表示 `Owner` 对象对应到 `User` 对象,每个 `user` 实例有一个 
`owner` 实例。`on_delete = models.CASCADE` 表示如果相应的 `User` 被删除,它所对应的 `Owner` 实例也将被删除。让我们运行 `makemigrations` 和 `migrate` 来将这个新模型放入到我们的数据库中。 +``` +(django-someHash) $ ./manage.py makemigrations + +Migrations for 'owner': + +  owner/migrations/0001_initial.py + +    - Create model Owner + +(django-someHash) $ ./manage.py migrate + +Operations to perform: + +  Apply all migrations: admin, auth, contenttypes, owner, sessions, todo + +Running migrations: + +  Applying owner.0001_initial... OK + +``` + +现在我们的 `Owner` 需要拥有一些 `Task` 对象。它与上面看到的 `OneToOneField` 非常相似,只不过我们会在 `Task` 对象上贴一个 `ForeignKey` 字段指向 `Owner`。 + +``` +# todo/models.py + +from django.db import models + +from owner.models import Owner + + +class Task(models.Model): + +    """Tasks for the To Do list.""" + +    name = models.CharField(max_length=256) + +    note = models.TextField(blank=True, null=True) + +    creation_date = models.DateTimeField(auto_now_add=True) + +    due_date = models.DateTimeField(blank=True, null=True) + +    completed = models.BooleanField(default=False) + +    owner = models.ForeignKey(Owner, on_delete=models.CASCADE) + +``` + +每个待办事项列表任务只有一个可以拥有多个任务的所有者。删除该所有者后,他们拥有的任务都会随之删除。 + +现在让我们运行 `makemigrations` 来获取我们的数据模型设置的新快照,然后运行 `migrate` 将这些更改应用到我们的数据库。 + +``` +(django-someHash) django $ ./manage.py makemigrations + +You are trying to add a non-nullable field 'owner' to task without a default; we can't do that (the database needs something to populate existing rows). + +Please select a fix: + + 1) Provide a one-off default now (will be set on all existing rows with a null value for this column) + + 2) Quit, and let me add a default in models.py + +``` + +不好了!出现了问题!发生了什么?其实,当我们创建 `Owner` 对象并将其作为 `ForeignKey` 添加到 `Task` 时,要求每个 `Task` 都需要一个 `Owner`。但是,我们为 `Task` 对象进行的第一次迁移不包括该要求。因此,即使我们的数据库表中没有数据,Django 也会对我们的迁移进行预先检查,以确保它们兼容,而我们提议的这种新迁移不是。 + +有几种方法可以解决这类问题: + + 1. 退出当前迁移并构建一个包含当前模型配置的新迁移 +  2. 将一个默认值添加到 `Task` 对象的 `owner` 字段 +  3. 
允许任务为 `owner` 字段设置 `NULL` 值
+
+方案 2 在这里没有多大意义:它意味着任何新创建的 `Task` 默认情况下都会对应到某个默认所有者,尽管这个所有者未必存在。
+
+方案 1 要求我们销毁和重建我们的迁移,而我们应该把它们留下。
+
+让我们考虑方案 3。在这种情况下,允许 `Task` 表的所有者为空并不会很糟糕,因为从这一点开始创建的任何任务都必然拥有一个所有者。如果你的数据库表处于无法这样重新设计的境地,那就删除迁移、删除表,然后重建迁移。
+```
+# todo/models.py
+
+from django.db import models
+
+from owner.models import Owner
+
+
+class Task(models.Model):
+
+    """Tasks for the To Do list."""
+
+    name = models.CharField(max_length=256)
+
+    note = models.TextField(blank=True, null=True)
+
+    creation_date = models.DateTimeField(auto_now_add=True)
+
+    due_date = models.DateTimeField(blank=True, null=True)
+
+    completed = models.BooleanField(default=False)
+
+    owner = models.ForeignKey(Owner, on_delete=models.CASCADE, null=True)
+
+(django-someHash) $ ./manage.py makemigrations
+
+Migrations for 'todo':
+
+  todo/migrations/0002_task_owner.py
+
+    - Add field owner to task
+
+(django-someHash) $ ./manage.py migrate
+
+Operations to perform:
+
+  Apply all migrations: admin, auth, contenttypes, owner, sessions, todo
+
+Running migrations:
+
+  Applying todo.0002_task_owner... 
OK + +``` + +酷!我们有模型了!欢迎使用 Django 声明对象的方式。 + +为了更好地衡量,让我们确保无论何时制作 `User`,它都会自动与新的 `Owner` 对象对应。我们可以使用 Django 的 `signals` 系统来做到这一点。基本上,我们确切地表达了意图:“当我们得到一个新的 `User` 被构造的信号时,构造一个新的 `Owner` 并将新的 `User` 设置为 `Owner` 的 `user` 字段。”在实践中看起来像这样: +``` +# owner/models.py + +from django.contrib.auth.models import User + +from django.db import models + +from django.db.models.signals import post_save + +from django.dispatch import receiver + +import secrets + + +class Owner(models.Model): + +    """The object that owns tasks.""" + +    user = models.OneToOneField(User, on_delete=models.CASCADE) + +    token = models.CharField(max_length=256) + + +    def __init__(self, *args, **kwargs): + +        """On construction, set token.""" + +        self.token = secrets.token_urlsafe(64) + +        super().__init__(*args, **kwargs) + + +@receiver(post_save, sender=User) +def link_user_to_owner(sender, **kwargs): + +    """If a new User is saved, create a corresponding Owner.""" + +    if kwargs['created']: + +        owner = Owner(user=kwargs['instance']) + +        owner.save() + +``` + +我们设置了一个函数,用于监听从 Django 中内置的 `User` 对象发送的信号。它正在等待 `User` 对象被保存之后的情况。这可以来自新的 `User` 或对现有 `User` 的更新。我们在监听功能中辨别出两种情况。 + +如果发送信号的东西是新创建的实例,`kwargs ['created']` 将具有值 `True`。如果是 `True` 的话,我们想做点事情。如果它是一个新实例,我们创建一个新的 `Owner`,将其 `user` 字段设置为创建的新 `User` 实例。之后,我们 `save()` 新的 `Owner`。如果一切正常,这将提交更改到数据库。如果数据没通过我们声明的字段的验证,它将失败。 + +现在让我们谈谈我们将如何访问数据。 + + +### 访问模型数据 + +在 Flask, Pyramid 和 Tornado 框架中,我们通过对某些数据库会话运行查询来访问模型数据。也许它被附加到 `request` 对象,也许它是一个独立的 `session` 对象。无论如何,我们必须建立与数据库的实时连接并在该连接上进行查询。 + +这不是 Django 的工作方式。默认情况下,Django 不利用任何第三方对象关系映射(ORM)与数据库进行通信。相反,Django 允许模型类维护自己与数据库的对话。 + +从 `django.db.models.Model` 继承的每个模型类都会附加一个 `objects` 对象。这将取代我们熟悉的 `session` 或 `dbsession`。让我们打开 Django 给我们的特殊 shell,并研究这个 `objects` 对象是如何工作的。 +``` +(django-someHash) $ ./manage.py shell + +Python 3.7.0 (default, Jun 29 2018, 20:13:13) +[Clang 9.1.0 (clang-902.0.39.2)] on darwin +Type "help", "copyright", "credits" or "license" for more 
information.
+(InteractiveConsole)
+
+>>>
+
+```
+
+Django shell 与普通的 Python shell 不同,因为它知道我们正在构建的 Django 项目,可以轻松导入我们的模型、视图、设置等,而不必担心安装包。我们可以通过简单的 `import` 访问我们的模型。
+```
+>>> from owner.models import Owner
+
+>>> Owner
+
+<class 'owner.models.Owner'>
+
+```
+
+目前,我们没有 `Owner` 实例。我们可以通过 `Owner.objects.all()` 查询它们。
+```
+>>> Owner.objects.all()
+
+<QuerySet []>
+
+```
+
+无论何时我们在 `.objects` 对象上运行查询方法,我们都会得到 `QuerySet`。为了我们的目的,它实际上是一个 `list`,这个 `list` 向我们显示它是空的。让我们通过创建一个 `User` 来创建一个 `Owner`。
+```
+>>> from django.contrib.auth.models import User
+
+>>> new_user = User(username='kenyattamurphy', email='kenyatta.murphy@gmail.com')
+
+>>> new_user.set_password('wakandaforever')
+
+>>> new_user.save()
+
+```
+
+如果我们现在查询所有的 `Owner`,我们应该会找到 Kenyatta。
+```
+>>> Owner.objects.all()
+
+<QuerySet [<Owner: Owner object (1)>]>
+
+```
+
+棒极了!我们得到了数据!
+
+### 序列化模型
+
+我们将在 “Hello World” 之外来回传递数据。因此,我们希望看到某种类似于 JSON 类型的输出,它可以很好地表示数据。获取该对象的数据并将其转换为 JSON 对象以通过 HTTP 提交,就是数据序列化。在序列化数据时,我们把目前拥有的数据重新格式化为一种标准的、更易于理解的形式。
+
+如果我用 Flask, Pyramid 和 Tornado 这样做,我会在每个模型上创建一个新方法,让用户可以直接调用 `to_json()`。`to_json()` 的唯一工作是返回一个 JSON 可序列化的(即只包含数字、字符串、列表、字典)字典,其中包含我想要为所讨论的对象显示的任何字段。
+
+对于 `Task` 对象,它可能看起来像这样:
+```
+class Task(Base):
+
+    ...all the fields...
+
+    def to_json(self):
+
+        """Convert task attributes to a JSON-serializable dict."""
+
+        return {
+
+            'id': self.id,
+
+            'name': self.name,
+
+            'note': self.note,
+
+            'creation_date': self.creation_date.strftime('%m/%d/%Y %H:%M:%S'),
+
+            'due_date': self.due_date.strftime('%m/%d/%Y %H:%M:%S'),
+
+            'completed': self.completed,
+
+            'user': self.user_id
+
+        }
+
+```
+
+这不花哨,但它确实起到了作用。
+
+然而,Django REST Framework 为我们提供了一个对象,它不仅可以为我们这样做,还可以在我们想要创建新对象实例或更新现有实例时验证输入,它被称为 [ModelSerializer][15]。
+
+Django REST Framework 的 `ModelSerializer` 是我们模型的有效文档。它们离不开所依附的模型(如果需要不依附模型的序列化器,可以使用 [Serializer][16] 类)。它们的主要工作是准确地表示我们的模型,并在我们的模型数据需要序列化并通过线路发送时,将其转换为 JSON。
+
+Django REST Framework 的 `ModelSerializer` 最适合简单对象。举个例子,假设我们在 `Task` 对象上没有 `ForeignKey`。我们可以为 `Task` 创建一个序列化器,它将根据需要将其字段值转换为 JSON,声明如下:
+```
+# todo/serializers.py
+
+from rest_framework import serializers
+
+from todo.models import Task
+
+
+class TaskSerializer(serializers.ModelSerializer):
+
+    """Serializer for the Task model."""
+
+    class Meta:
+
+        model = Task
+
+        fields = ('id', 'name', 'note', 'creation_date', 'due_date', 'completed')
+
+```
+
+在我们新的 `TaskSerializer` 中,我们创建了一个 `Meta` 类。`Meta` 的工作就是保存关于我们试图序列化的东西的信息(或元数据)。然后,我们在其中指明要显示的特定字段。如果我们想要显示所有字段,我们可以简化过程并使用 `'__all__'`。或者,我们可以使用 `exclude` 关键字而不是 `fields`,来告诉 Django REST Framework 我们想要除了少数几个字段以外的每个字段。我们想要多少个序列化器都可以,所以也许我们想要一个用于一小部分字段而另一个用于所有字段?在这里都可以。
+
+在我们的例子中,每个 `Task` 和它的所有者 `Owner` 之间都有一个关系,必须在这里反映出来。因此,我们需要借用 `serializers.PrimaryKeyRelatedField` 对象来指定每个 `Task` 都有一个 `Owner`,并且该关系是一对一的。它的 owner 将从存在的所有 owners 的集合中找到。我们通过对这些 owners 进行查询并返回我们想要与此序列化器关联的结果来获得该集合:`Owner.objects.all()`。我们还需要在字段列表中包含 `owner`,因为我们总是需要一个与 `Task` 相关联的 `Owner`。
+```
+# todo/serializers.py
+
+from rest_framework import serializers
+
+from todo.models import Task
+
+from owner.models import Owner
+
+
+class TaskSerializer(serializers.ModelSerializer):
+
+    """Serializer for the Task model."""
+
+    owner = serializers.PrimaryKeyRelatedField(queryset=Owner.objects.all())
+
+
+    class Meta:
+
+        model = Task
+
+        fields = ('id', 'name', 'note', 'creation_date', 'due_date', 'completed', 'owner')
+
+```
+
+现在构建了这个序列化器,我们可以将它用于我们想要为我们的对象做的所有 CRUD 操作:
+ * 如果我们想要 `GET` 一个特定的 `Task` 的 JSON 类型版本,我们可以做 `TaskSerializer(some_task).data`
+
+ * 如果我们想接受带有适当数据的 `POST` 来创建一个新的 `Task`,我们可以使用 `TaskSerializer(data=new_data).save()`
+
+ * 如果我们想用 `PUT` 更新一些现有数据,我们可以用 `TaskSerializer(existing_task, data=data).save()`
+
+我们没有包括 `delete`,因为我们不需要对 `delete` 操作做任何事情。如果你可以删除一个对象,只需使用 `object_instance.delete()`。
+
+以下是一些序列化数据的示例:
+```
+>>> from todo.models import Task
+
+>>> from todo.serializers import TaskSerializer
+
+>>> from owner.models import Owner
+
+>>> from django.contrib.auth.models import User
+
+>>> new_user = User(username='kenyatta', email='kenyatta@gmail.com')
+
+>>> new_user.set_password('wakandaforever')
+
+>>> new_user.save() # creating the User that builds the Owner
+
+>>> kenyatta = Owner.objects.first() # 找到 kenyatta 的所有者
+
+>>> new_task = Task(name="Buy roast beef for the Sunday potluck", owner=kenyatta)
+
+>>> new_task.save()
+
+>>> TaskSerializer(new_task).data
+
+{'id': 1, 'name': 'Buy roast beef for the Sunday potluck', 'note': None, 'creation_date': '2018-07-31T06:00:25.165013Z', 'due_date': None, 'completed': False, 'owner': 1}
+
+```
+
+使用 `ModelSerializer` 对象可以做更多的事情,我建议查看[文档][17]以获得更强大的功能。否则,这就是我们所需要的。现在是时候深入视图了。
+
+### 查看视图
+
+我们已经构建了模型和序列化器,现在我们需要为我们的应用程序设置视图和 URL。毕竟,对于没有视图的应用程序,我们无法做任何事情。我们已经看到了上面的 `HelloWorld` 视图的示例。然而,那只是一个刻意而为的概念验证例子,并没有真正展示 Django REST Framework 的视图可以做些什么。让我们清除 `HelloWorld` 视图和 URL,这样我们就可以从我们的视图重新开始。
+
+我们要构建的第一个视图是 `InfoView`。与之前的框架一样,我们只想打包并发送一个字典到正确的路由。视图本身可以存在于 `django_todo.views` 中,因为它与特定模型无关(因此在概念上不属于特定应用程序)。
+```
+# django_todo/views.py
+
+from django.http import JsonResponse
+
+from rest_framework.views import APIView
+
+
+class InfoView(APIView):
+
+    """List of routes for this API."""
+
+    def get(self, request):
+
+        output = {
+
+            'info': 'GET /api/v1',
+
+            'register': 'POST /api/v1/accounts',
+
+            'single profile detail': 'GET /api/v1/accounts/<username>',
+
+            'edit profile': 'PUT /api/v1/accounts/<username>',
+
+            'delete profile': 'DELETE /api/v1/accounts/<username>',
+
+            'login': 'POST /api/v1/accounts/login',
+
+            'logout': 'GET /api/v1/accounts/logout',
+
+            "user's tasks": 'GET /api/v1/accounts/<username>/tasks',
+
+            "create task": 'POST /api/v1/accounts/<username>/tasks',
+
+            "task detail": 'GET /api/v1/accounts/<username>/tasks/<id>',
+
+            "task update": 'PUT /api/v1/accounts/<username>/tasks/<id>',
+
+            "delete task": 'DELETE /api/v1/accounts/<username>/tasks/<id>'
+
+        }
+
+        return JsonResponse(output)
+
+```
+
+这与我们在 Tornado 中所拥有的完全相同。让我们将它放置到合适的路由并继续前行。为了周全起见,我们还将删除 `admin/` 路由,因为我们不会在这里使用 Django 管理后端。
+```
+# in django_todo/urls.py
+
+from django_todo.views import InfoView
+
+from django.urls import path
+
+
+urlpatterns = [
+
+    path('api/v1', InfoView.as_view(), name="info"),
+
+]
+
+```
+
+#### 连接模型与视图
+
+让我们弄清楚下一个 URL,它将是创建新的 `Task` 或列出用户现有任务的入口。这应该存在于 `todo` 应用程序的 `urls.py` 中,因为它专门处理 `Task` 对象,而不属于整个项目。
+```
+# in todo/urls.py
+
+from django.urls import path
+
+from todo.views import TaskListView
+
+
+urlpatterns = [
+
+    path('', TaskListView.as_view(), name="list_tasks")
+
+]
+
+```
+
+这个路由处理的是什么?我们根本没有指定特定用户或路径。由于会有一些路由都需要以 `/api/v1/accounts/<username>/tasks` 作为基本路径,既然可以只写一次,为什么要一遍又一遍地重复写呢?
+
+Django 允许我们获取一整套 URL 并将它们导入 `django_todo/urls.py` 文件。然后,我们可以为这些导入的 URL 中的每一个都加上相同的基本路径,只在真正需要变化的地方关心可变的部分。
+```
+# in django_todo/urls.py
+
+from django.urls import include, path
+
+from django_todo.views import InfoView
+
+
+urlpatterns = [
+
+    path('api/v1', InfoView.as_view(), name="info"),
+
+    path('api/v1/accounts/<str:username>/tasks', include('todo.urls'))
+
+]
+
+```
+
+现在,来自 `todo/urls.py` 的每个 URL 都将以路径 `api/v1/accounts/<str:username>/tasks` 为前缀。
+
+让我们在 `todo/views.py` 中构建视图。
+```
+# todo/views.py
+
+from django.shortcuts import get_object_or_404
+
+from django.http import JsonResponse
+
+from rest_framework.views import APIView
+
+
+from owner.models import Owner
+
+from todo.models import Task
+
+from todo.serializers import TaskSerializer
+
+
+class TaskListView(APIView):
+
+    def get(self, request, username, format=None):
+
+        """Get all of the tasks for a given user."""
+
+        owner = get_object_or_404(Owner, user__username=username)
+
+        tasks = Task.objects.filter(owner=owner).all()
+
+        serialized = TaskSerializer(tasks, many=True)
+
+        return JsonResponse({
+
+            'username': username,
+
+            'tasks': serialized.data
+
+        })
+
+```
+
+这里有很多代码,让我们来看看吧。
+
+我们从与我们一直使用的 `APIView` 的继承开始,为我们的视图奠定基础。我们覆盖了之前覆盖的相同 `get` 方法,添加了一个参数,允许我们的视图从传入的请求中接收 `username`。
+
+然后我们的 `get` 方法将使用 `username` 来获取与该用户关联的 `Owner`。这个 `get_object_or_404` 函数允许我们这样做,还顺带提供了一点便利。
+
+如果无法找到指定的用户,那么查找任务是没有意义的。实际上,我们想要返回 404 错误。`get_object_or_404` 根据我们传入的任何条件获取单个对象,并返回该对象或引发 [Http404 异常][18]。我们可以根据对象的属性设置该条件。`Owner` 对象都通过 `user` 属性附加到 `User`。但是,我们没有要搜索的 `User` 对象,我们只有一个 `username`。所以,在寻找 `Owner` 时,我们对 `get_object_or_404` 说:通过指定 `user__username` 来检查附加到它的 `User` 是否具有我想要的 `username`。这是两个下划线。通过 QuerySet 过滤时,这两个下划线表示“此嵌套对象的属性”。这些属性可以根据需要进行深度嵌套。
+
+我们现在拥有与给定用户名相对应的 `Owner`。我们使用 `Owner` 来过滤所有任务,只用 `Task.objects.filter` 检索它拥有的任务。我们本可以使用与 `get_object_or_404` 相同的嵌套属性模式来钻入连接到 `Task` 的 `Owner` 的 `User`(`tasks = Task.objects.filter(owner__user__username=username).all()`),但是没有必要写得那么啰嗦。
+
+`Task.objects.filter(owner=owner).all()` 将为我们提供与我们的查询匹配的所有 `Task` 对象的 `QuerySet`。很好。然后,`TaskSerializer` 将获取 `QuerySet` 及其所有数据,以及 `many=True` 标志,告知它这是一个项目集合而不是仅仅一个项目,并返回一系列序列化的结果,实际上就是一个字典列表。最后,我们使用 JSON 序列化数据和用于查询的用户名提供传出响应。
+
+#### 处理 POST 请求
+
+`post` 方法看起来与我们之前看到的有些不同。
+```
+# still in todo/views.py
+
+# ...other imports...
+
+from rest_framework.parsers import JSONParser
+
+from datetime import datetime
+
+
+class TaskListView(APIView):
+
+    def get(self, request, username, format=None):
+
+        ...
+
+
+    def post(self, request, username, format=None):
+
+        """Create a new Task."""
+
+        owner = get_object_or_404(Owner, user__username=username)
+
+        data = JSONParser().parse(request)
+
+        data['owner'] = owner.id
+
+        if data['due_date']:
+
+            data['due_date'] = datetime.strptime(data['due_date'], '%d/%m/%Y %H:%M:%S')
+
+
+        new_task = TaskSerializer(data=data)
+
+        if new_task.is_valid():
+
+            new_task.save()
+
+            return JsonResponse({'msg': 'posted'}, status=201)
+
+
+        return JsonResponse(new_task.errors, status=400)
+
+```
+
+当我们从客户端接收数据时,我们使用 `JSONParser().parse(request)` 将其解析为字典。我们将所有者添加到数据中并格式化任务的 `due_date`(如果存在)。
+
+我们的 `TaskSerializer` 完成了繁重的任务。它首先接收传入的数据并将其转换为我们在模型上指定的字段。然后验证该数据以确保它适合指定的字段。如果附加到新 `Task` 的数据有效,它将使用该数据构造一个新的 `Task` 对象并将其提交给数据库。然后我们发回适当的“耶!我们做了一件新事!”响应。如果没有,我们收集 `TaskSerializer` 生成的错误,并将这些错误发送回客户端,并返回 `400 Bad Request` 状态代码。
+
+如果我们要构建 `put` 视图来更新 `Task`,它看起来会非常相似。主要区别在于,当我们实例化 `TaskSerializer` 时,我们将传递旧对象和该对象的新数据,如 `TaskSerializer(existing_task, data=data)`。我们仍然会进行有效性检查并发回我们想要发回的回复。
+
+### 总结
+
+Django 作为一个框架是高度可定制的,每个人都有自己的方式拼接 Django 项目。我在这里写出来的方式不一定是建立 Django 项目的唯一正确方式。它只是 a) 我熟悉的方式,以及 b) 利用 Django 的管理系统的方式。当你将概念分成各自的小筒仓时,Django 项目的复杂性会增加。这样做是为了让多个人更容易为整个项目做出贡献,而不会妨碍彼此。
+
+然而,一个 Django 项目庞大的文件结构并不会使它更高效,也不会使它天然地倾向于微服务架构。相反,它很容易变成一个令人困惑的巨石应用。这可能对你的项目仍然有用,也可能使你的项目难以管理,尤其是随着项目的增长。
+
+仔细考虑你的需求并使用合适的工具来完成合适的工作。对于像这样的简单项目,Django 可能不是合适的工具。
+
+Django 旨在处理多种模型,这些模型涵盖了不同的项目领域,但它们可能有一些共同点。这个项目是一个小型的双模型项目,只有几条路由。即使我们构建得再多一些,也只有七条路由,而且仍然是同样的两个模型。这还不足以证明需要一个完整的 Django 项目。
+
+如果我们期望这个项目能够扩展,那 Django 将是一个很好的选择,但这个项目并不是那样。用 Django 来做这件事,就像是用火焰喷射器来点蜡烛,绝对是矫枉过正。
+
+尽管如此,Web 框架仍然是一个 Web 框架,无论你使用哪个框架。它都可以接收请求并做出任何响应,因此你可以按照自己的意愿进行操作。只需要注意你选择的框架所带来的开销。
+
+就是这样!我们已经到了这个系列的最后!我希望这是一次启发性的冒险。当你在考虑如何构建你的下一个项目时,它将帮助你做出的不仅仅是最熟悉的选择。请务必阅读每个框架的文档,以扩展本系列中涉及的任何内容(因为它没有那么全面)。每个框架都有一个广阔的世界。愉快地写代码吧!
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/8/django-framework
+
+作者:[Nicholas Hunt-Walker][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/nhuntwalker
+[1]:https://opensource.com/article/18/5/pyramid-framework
+[2]:https://opensource.com/article/18/4/flask
+[3]:https://opensource.com/article/18/6/tornado-framework
+[4]:https://www.djangoproject.com
+[5]:https://djangopackages.org/
+[6]:http://www.django-rest-framework.org/
+[7]:http://gunicorn.org/
+[8]:https://docs.pylonsproject.org/projects/waitress/en/latest/
+[9]:https://uwsgi-docs.readthedocs.io/en/latest/
+[10]:https://docs.djangoproject.com/en/2.0/ref/settings/#databases
+[11]:https://pypi.org/project/dj-database-url/
+[12]:http://yellerapp.com/posts/2015-01-12-the-worst-server-setup-you-can-make.html
+[13]:https://docs.djangoproject.com/en/2.0/ref/settings/#std:setting-DATABASE-ENGINE
+[14]:https://www.getpostman.com/
+[15]:http://www.django-rest-framework.org/api-guide/serializers/#modelserializer
+[16]:http://www.django-rest-framework.org/api-guide/serializers/
+[17]:http://www.django-rest-framework.org/api-guide/serializers/#serializers
+[18]:https://docs.djangoproject.com/en/2.0/topics/http/views/#the-http404-exception

From 
a40e37aea9dabf1b33c6a9a899df49869976bc51 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 30 Sep 2018 13:21:06 +0800 Subject: [PATCH 152/736] PRF:20180924 Make The Output Of Ping Command Prettier And Easier To Read.md @HankChow --- ...ing Command Prettier And Easier To Read.md | 35 ++++++++----------- 1 file changed, 14 insertions(+), 21 deletions(-) diff --git a/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md b/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md index efca96da23..6267fad2e8 100644 --- a/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md +++ b/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md @@ -3,21 +3,19 @@ ![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-720x340.png) -众所周知,`ping` 命令可以用来检查目标主机是否可达。使用 `ping` 命令的时候,会发送一个 ICMP Echo 请求,通过目标主机的响应与否来确定目标主机的状态。如果你经常使用 `ping` 命令,你可以尝试一下 `prettyping`。Prettyping 只是将一个标准的 ping 工具增加了一层封装,在运行标准 ping 命令的同时添加了颜色和 unicode 字符解析输出,所以它的输出更漂亮紧凑、清晰易读。它是用 `bash` 和 `awk` 编写的免费开源工具,支持大部分类 Unix 操作系统,包括 GNU/Linux、FreeBSD 和 Mac OS X。Prettyping 除了美化 ping 命令的输出,还有很多值得注意的功能。 +众所周知,`ping` 命令可以用来检查目标主机是否可达。使用 `ping` 命令的时候,会发送一个 ICMP Echo 请求,通过目标主机的响应与否来确定目标主机的状态。如果你经常使用 `ping` 命令,你可以尝试一下 `prettyping`。Prettyping 只是将一个标准的 ping 工具增加了一层封装,在运行标准 `ping` 命令的同时添加了颜色和 unicode 字符解析输出,所以它的输出更漂亮紧凑、清晰易读。它是用 `bash` 和 `awk` 编写的自由开源工具,支持大部分类 Unix 操作系统,包括 GNU/Linux、FreeBSD 和 Mac OS X。Prettyping 除了美化 `ping` 命令的输出,还有很多值得注意的功能。 * 检测丢失的数据包并在输出中标记出来。 - * 显示实时数据。每次收到响应后,都会更新统计数据,而对于普通 ping 命令,只会在执行结束后统计。 - * 能够在输出结果不混乱的前提下灵活处理“未知信息”(例如错误信息)。 + * 显示实时数据。每次收到响应后,都会更新统计数据,而对于普通 `ping` 命令,只会在执行结束后统计。 + * 可以灵活处理“未知信息”(例如错误信息),而不搞乱输出结果。 * 能够避免输出重复的信息。 - * 兼容常用的 ping 工具命令参数。 + * 兼容常用的 `ping` 工具命令参数。 * 能够由普通用户执行。 * 可以将输出重定向到文件中。 * 不需要安装,只需要下载二进制文件,赋予可执行权限即可执行。 * 快速且轻巧。 * 输出结果清晰直观。 - - ### 安装 Prettyping 如上所述,Prettyping 是一个绿色软件,不需要任何安装,只要使用以下命令下载 Prettyping 二进制文件: @@ -52,9 +50,9 @@ $ 
prettyping ostechnix.com ![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-in-action.gif) -如果你不带任何参数执行 `prettyping`,它就会一直运行直到被 ctrl + c 中断。 +如果你不带任何参数执行 `prettyping`,它就会一直运行直到被 `ctrl + c` 中断。 -由于 Prettyping 只是一个对普通 ping 命令的封装,所以常用的 ping 参数也是有效的。例如使用 `-c 5` 来指定 ping 一台主机的 5 次: +由于 Prettyping 只是一个对普通 `ping` 命令的封装,所以常用的 ping 参数也是有效的。例如使用 `-c 5` 来指定 ping 一台主机的 5 次: ``` $ prettyping -c 5 ostechnix.com @@ -76,7 +74,7 @@ $ prettyping --nomulticolor ostechnix.com ![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-without-unicode-support.png) -如果你的终端不支持 **UTF-8**,或者无法修复系统中的 unicode 字体,只需要加上 `--nounicode` 参数就能轻松解决。 +如果你的终端不支持 UTF-8,或者无法修复系统中的 unicode 字体,只需要加上 `--nounicode` 参数就能轻松解决。 Prettyping 支持将输出的内容重定向到文件中,例如执行以下这个命令会将 `prettyping ostechnix.com` 的输出重定向到 `ostechnix.txt` 中: @@ -89,10 +87,9 @@ Prettyping 还有很多选项帮助你完成各种任务,例如: * 启用/禁用延时图例(默认启用) * 强制按照终端的格式输出(默认自动) * 在统计数据中统计最后的 n 次 ping(默认 60 次) - * 覆盖对终端尺寸的检测 - * 覆盖 awk 解释器(默认不覆盖) - * 覆盖 ping 工具(默认不覆盖) - + * 覆盖对终端尺寸的自动检测 + * 指定 awk 解释器路径(默认:`awk`) + * 指定 ping 工具路径(默认:`ping`) 查看帮助文档可以了解更多: @@ -101,18 +98,14 @@ Prettyping 还有很多选项帮助你完成各种任务,例如: $ prettyping --help ``` -尽管 prettyping 没有添加任何额外功能,但我个人喜欢它的这些优点: +尽管 Prettyping 没有添加任何额外功能,但我个人喜欢它的这些优点: - * 实时统计 - 可以随时查看所有实时统计信息,标准 `ping` 命令只会在命令执行结束后才显示统计信息。 - * 紧凑的显示 - 可以在终端看到更长的时间跨度。 + * 实时统计 —— 可以随时查看所有实时统计信息,标准 `ping` 命令只会在命令执行结束后才显示统计信息。 + * 紧凑的显示 —— 可以在终端看到更长的时间跨度。 * 检测丢失的数据包并显示出来。 - - 如果你一直在寻找可视化显示 `ping` 命令输出的工具,那么 Prettyping 肯定会有所帮助。尝试一下,你不会失望的。 - - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-prettier-and-easier-to-read/ @@ -120,7 +113,7 @@ via: https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-pretti 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1fc8a3a458828e0d27a786e964cd0091c5ef18e1 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 30 Sep 2018 13:21:29 +0800 Subject: [PATCH 153/736] PUB:20180924 Make The Output Of Ping Command Prettier And Easier To Read.md @HankChow https://linux.cn/article-10067-1.html --- ...Make The Output Of Ping Command Prettier And Easier To Read.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md (100%) diff --git a/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md b/published/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md similarity index 100% rename from translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md rename to published/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md From 36557209ddc43a054ca30191ccdb67e4b16fe33f Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Sun, 30 Sep 2018 18:05:53 +0800 Subject: [PATCH 154/736] translated --- ...st An Available Package Groups In Linux.md | 112 ++++++++---------- 1 file changed, 51 insertions(+), 61 deletions(-) rename {sources => translated}/tech/20180910 How To List An Available Package Groups In Linux.md (69%) diff --git a/sources/tech/20180910 How To List An Available Package Groups In Linux.md b/translated/tech/20180910 How To List An Available Package Groups In Linux.md similarity index 69% rename from sources/tech/20180910 How To List An Available Package Groups In Linux.md rename to translated/tech/20180910 How To List An Available Package Groups In Linux.md index 754c2d0c3a..b192e6c5f0 100644 --- a/sources/tech/20180910 How To List An Available Package Groups In Linux.md +++ b/translated/tech/20180910 How To List An Available Package Groups In Linux.md @@ -1,43 +1,33 @@ -How To 
List An Available Package Groups In Linux +如何在 Linux 中列出可用的软件包组 ====== -As we know, if we want to install any packages in Linux we need to use the distribution package manager to get it done. +我们知道,如果想要在 Linux 中安装软件包,可以使用软件包管理器来进行安装。由于系统管理员需要频繁用到软件包管理器,所以它是 Linux 当中的一个重要工具。 -Package manager is playing major role in Linux as this used most of the time by admin. +但是如果想一次性安装一个软件包组,在 Linux 中有可能吗?又如何通过命令去实现呢? -If you would like to install group of package in one shot what would be the possible option. +在 Linux 中确实可以用软件包管理器来达到这样的目的。很多软件包管理器都有这样的选项来实现这个功能,但就我所知,`apt` 或 `apt-get` 软件包管理器却并没有这个选项。因此对基于 Debian 的系统,需要使用的命令是 `tasksel`,而不是 `apt`或 `apt-get` 这样的官方软件包管理器。 -Is it possible in Linux? if so, what would be the command for it. +在 Linux 中安装软件包组有很多好处。对于 LAMP 来说,安装过程会包含多个软件包,但如果安装软件包组命令来安装,只安装一个包就可以了。 -Yes, this can be done in Linux by using the package manager. Each package manager has their own option to perform this task, as i know apt or apt-get package manager doesn’t has this option. +当你的团队需要安装 LAMP,但不知道其中具体包含哪些软件包,这个时候软件包组就派上用场了。软件包组是 Linux 系统上一个很方便的工具,它能让你轻松地完成一组软件包的安装。 -For Debian based system we need to use tasksel command instead of official package managers called apt or apt-get. +软件包组是一组用于公共功能的软件包,包括系统工具、声音和视频。 安装软件包组的过程中,会获取到一系列的依赖包,从而大大节省了时间。 -What is the benefit if we install group of package in Linux? Yes, there is lot of benefit is available in Linux when we install group of package because if you want to install LAMP separately we need to include so many packages but that can be done using single package when we use group of package command. 
+**推荐阅读:** +**(#)** [如何在 Linux 上按照大小列出已安装的软件包][1] +**(#)** [如何在 Linux 上查看/列出可用的软件包更新][2] +**(#)** [如何在 Linux 上查看软件包的安装/更新/升级/移除/卸载时间][3] +**(#)** [如何在 Linux 上查看一个软件包的详细信息][4] +**(#)** [如何查看一个软件包是否在你的 Linux 发行版上可用][5] +**(#)** [萌新指导:一个可视化的 Linux 包管理工具][6] +**(#)** [老手必会:命令行软件包管理器的用法][7] -Say for example, as you get a request from Application team to install LAMP but you don’t know what are the packages needs to be installed, this is where group of package comes into picture. +### 如何在 CentOS/RHEL 系统上列出可用的软件包组 -Group option is a handy tool for Linux systems which will install Group of Software in a single click on your system without headache. +RHEL 和 CentOS 系统使用的是 RPM 软件包,因此可以使用 `yum` 软件包管理器来获取相关的软件包信息。 -A package group is a collection of packages that serve a common purpose, for instance System Tools or Sound and Video. Installing a package group pulls a set of dependent packages, saving time considerably. +`yum` 是 Yellowdog Updater, Modified 的缩写,它是一个用于基于 RPM 系统(例如 RHEL 和 CentOS)的,开源的命令行软件包管理工具。它是从分发库或其它第三方库中获取、安装、删除、查询和管理 RPM 包的主要工具。 -**Suggested Read :** -**(#)** [How To List Installed Packages By Size (Largest) On Linux][1] -**(#)** [How To View/List The Available Packages Updates In Linux][2] -**(#)** [How To View A Particular Package Installed/Updated/Upgraded/Removed/Erased Date On Linux][3] -**(#)** [How To View Detailed Information About A Package In Linux][4] -**(#)** [How To Search If A Package Is Available On Your Linux Distribution Or Not][5] -**(#)** [Newbies corner – A Graphical frontend tool for Linux Package Manager][6] -**(#)** [Linux Expert should knows, list of Command line Package Manager & Usage][7] - -### How To List An Available Package Groups In CentOS/RHEL Systems - -RHEL & CentOS systems are using RPM packages hence we can use the `Yum Package Manager` to get this information. 
-
-YUM stands for Yellowdog Updater, Modified is an open-source command-line front-end package-management utility for RPM based systems such as Red Hat Enterprise Linux (RHEL) and CentOS.
+`yum` 是 Yellowdog Updater, Modified 的缩写,它是一个用于基于 RPM 的系统(例如 RHEL 和 CentOS)的开源命令行软件包管理工具。它是从发行版软件仓库或其它第三方仓库中获取、安装、删除、查询和管理 RPM 包的主要工具。
-Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories.
-**Suggested Read :** [YUM Command To Manage Packages on RHEL/CentOS Systems][8]
+**推荐阅读:** [使用 yum 命令在 RHEL/CentOS 系统上管理软件包][8]
```
# yum grouplist
@@ -82,7 +72,7 @@ Done
```
-If you would like to list what are the packages is associated on it, run the below command. In this example we are going to list what are the packages is associated with “Performance Tools” group.
+如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 Performance Tools 组相关联的软件包。
```
# yum groupinfo "Performance Tools"
@@ -116,17 +106,17 @@ Group: Performance Tools
```
-### How To List An Available Package Groups In Fedora
+### 如何在 Fedora 系统上列出可用的软件包组
-Fedora system uses DNF package manager hence we can use the Dnf Package Manager to get this information.
+Fedora 系统使用的是 DNF 软件包管理器,因此可以通过 DNF 软件包管理器来获取相关的信息。
-DNF stands for Dandified yum. We can tell DNF, the next generation of yum package manager (Fork of Yum) using hawkey/libsolv library for backend. Aleš Kozumplík started working on DNF since Fedora 18 and its implemented/launched in Fedora 22 finally.
+DNF 的含义是 Dandified yum。DNF 软件包管理器是 YUM 软件包管理器的一个分支,它使用 hawkey/libsolv 库作为后端。从 Fedora 18 开始,Aleš Kozumplík 开始着手 DNF 的开发,直到 Fedora 22 才正式加入到系统中。
-Dnf command is used to install, update, search & remove packages on Fedora 22 and later system. It automatically resolve dependencies and make it smooth package installation without any trouble.
+`dnf` 命令可以在 Fedora 22 及更高版本上安装、更新、搜索和删除软件包,它可以自动解决软件包的依赖关系,顺利完成安装,不会产生问题。
-Yum replaced by DNF due to several long-term problems in Yum which was not solved. Asked why ? he did not patches the Yum issues.
Aleš Kozumplík explains that patching was technically hard and YUM team wont accept the changes immediately and other major critical, YUM is 56K lines but DNF is 29K lies. So, there is no option for further development, except to fork.
+由于一些长期未被解决的问题,YUM 逐渐被 DNF 取代。至于为什么不直接修补 YUM 的这些问题,Aleš Kozumplík 解释说,修补在技术上很困难,而且 YUM 团队也不会立即接受这些更改;另一个重要原因是,YUM 的代码量有 5.6 万行,而 DNF 只有 2.9 万行。因此,除了另开一个分支,已经没有继续开发的选择了。
-**Suggested Read :** [DNF (Fork of YUM) Command To Manage Packages on Fedora System][9]
+**推荐阅读:** [在 Fedora 系统上使用 DNF 命令管理软件包][9]
```
# dnf grouplist
@@ -180,7 +170,7 @@ Available Groups:
```
-If you would like to list what are the packages is associated on it, run the below command. In this example we are going to list what are the packages is associated with “Editor” group.
+如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 Editor 组相关联的软件包。
```
@@ -215,13 +205,13 @@ Group: Editors
zile
```
-### How To List An Available Package Groups In openSUSE System
+### 如何在 openSUSE 系统上列出可用的软件包组
-openSUSE system uses zypper package manager hence we can use the zypper Package Manager to get this information.
+openSUSE 系统使用的是 zypper 软件包管理器,因此可以通过 zypper 软件包管理器来获取相关的信息。
-Zypper is a command line package manager for suse & openSUSE distributions. It’s used to install, update, search & remove packages & manage repositories, perform various queries, and more. Zypper command-line interface to ZYpp system management library (libzypp).
+Zypper 是 suse 和 openSUSE 发行版的命令行包管理器。它可以用于安装、更新、搜索和删除软件包,以及管理软件仓库、执行各种查询等。Zypper 是 ZYpp 系统管理库(libzypp)的命令行接口。
-**Suggested Read :** [Zypper Command To Manage Packages On openSUSE & suse Systems][10]
```
# zypper patterns
@@ -277,8 +267,7 @@ i | yast2_basis | 20150918-25.1 | @System |
| yast2_install_wf | 20150918-25.1 | Main Repository (OSS) |
```
-If you would like to list what are the packages is associated on it, run the below command.
In this example we are going to list what are the packages is associated with “file_server” group. -Additionally zypper command allows a user to perform the same action with different options. +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 file_server 组相关联的软件包。另外 `zypper` 还允许用户使用不同的选项执行相同的操作。 ``` # zypper info file_server @@ -317,7 +306,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -If you would like to list what are the packages is associated on it, run the below command. +如果需要列出相关联的软件包,可以执行以下这个命令。 ``` # zypper pattern-info file_server @@ -357,7 +346,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -If you would like to list what are the packages is associated on it, run the below command. +如果需要列出相关联的软件包,可以执行以下这个命令。 ``` # zypper info pattern file_server @@ -396,7 +385,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -If you would like to list what are the packages is associated on it, run the below command. +如果需要列出相关联的软件包,可以执行以下这个命令。 ``` # zypper info -t pattern file_server @@ -436,17 +425,17 @@ Contents : | yast2-tftp-server | package | Recommended ``` -### How To List An Available Package Groups In Debian/Ubuntu Systems +### 如何在 Debian/Ubuntu 系统上列出可用的软件包组 -Since APT or APT-GET package manager doesn’t offer this option for Debian/Ubuntu based systems hence, we are using tasksel command to get this information. +由于 APT 或 APT-GET 软件包管理器没有为基于 Debian/Ubuntu 的系统提供这样的选项,因此需要使用 `tasksel` 命令来获取相关信息。 -[Tasksel][11] is a handy tool for Debian/Ubuntu systems which will install Group of Software in a single click on your system. Tasks are defined in `.desc` files and located at `/usr/share/tasksel`. +[tasksel][11] 是 Debian/Ubuntu 系统上一个很方便的工具,只需要很少的操作就可以用它来安装好一组软件包。可以在 `/usr/share/tasksel` 目录下的 `.desc` 文件中安排软件包的安装任务。 -By default, tasksel tool installed on Debian system as part of Debian installer but it’s not installed on Ubuntu desktop editions. This functionality is similar to that of meta-packages, like how package managers have. 
+默认情况下,`tasksel` 工具是作为 Debian 系统的一部分安装的,但桌面版 Ubuntu 则没有自带 `tasksel`。它的功能类似于软件包管理器中的元包(meta-packages)。
-Tasksel tool offer a simple user interface based on zenity (popup Graphical dialog box in command line).
+`tasksel` 工具带有一个基于 zenity 的简单用户界面,即在命令行中弹出的图形对话框。
-**Suggested Read :** [Tasksel – Install Group of Software in A Single Click on Debian/Ubuntu][12]
+**推荐阅读:** [使用 tasksel 在 Debian/Ubuntu 系统上快速安装软件包组][12]
```
# tasksel --list-task
@@ -494,20 +483,20 @@ u openssh-server OpenSSH server
u server Basic Ubuntu server
```
-If you would like to list what are the packages is associated on it, run the below command. In this example we are going to list what are the packages is associated with “file_server” group.
+如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 lamp-server 组相关联的软件包。
```
# tasksel --task-desc "lamp-server"
Selects a ready-made Linux/Apache/MySQL/PHP server.
```
-### How To List An Available Package Groups In Arch Linux based Systems
+### 如何在基于 Arch Linux 的系统上列出可用的软件包组
-Arch Linux based systems are using pacman package manager hence we can use the pacman Package Manager to get this information.
+基于 Arch Linux 的系统使用的是 pacman 软件包管理器,因此可以通过 pacman 软件包管理器来获取相关的信息。
-pacman stands for package manager utility (pacman). pacman is a command-line utility to install, build, remove and manage Arch Linux packages. pacman uses libalpm (Arch Linux Package Management (ALPM) library) as a back-end to perform all the actions.
+pacman 是 package manager 的缩写。`pacman` 可以用于安装、构建、删除和管理 Arch Linux 软件包。`pacman` 使用 libalpm(Arch Linux Package Management 库,ALPM)作为后端来执行所有操作。
-**Suggested Read :** [Pacman Command To Manage Packages On Arch Linux Based Systems][13]
+**推荐阅读:** [使用 pacman 在基于 Arch Linux 的系统上管理软件包][13]
```
# pacman -Sg
@@ -550,7 +539,7 @@ vim-plugins
```
-If you would like to list what are the packages is associated on it, run the below command. In this example we are going to list what are the packages is associated with “gnome” group.
+如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 gnome 组相关联的软件包。 ``` # pacman -Sg gnome @@ -589,7 +578,7 @@ gnome simple-scan ``` -Alternatively we can check the same by running following command. +也可以执行以下这个命令实现同样的效果。 ``` # pacman -S gnome @@ -609,7 +598,7 @@ Interrupt signal received ``` -To know exactly how many packages is associated on it, run the following command. +可以执行以下命令检查相关软件包的数量。 ``` # pacman -Sg gnome | wc -l @@ -623,7 +612,7 @@ via: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ 作者:[Prakash Subramanian][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) +译者:[HankChow](https://github.com/HankChow) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -642,3 +631,4 @@ via: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ [11]: https://wiki.debian.org/tasksel [12]: https://www.2daygeek.com/tasksel-install-group-of-software-in-a-single-click-or-single-command-on-debian-ubuntu/ [13]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ + From 5196c9a365dadcdd4b58705f17e3b58db1b64fae Mon Sep 17 00:00:00 2001 From: zhousiyu325 Date: Sun, 30 Sep 2018 19:28:06 +0800 Subject: [PATCH 155/736] translating 20180928 A Free And Secure Online PDF Conversion Suite.md --- .../20180928 A Free And Secure Online PDF Conversion Suite.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md b/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md index afb66e43ee..10b220590f 100644 --- a/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md +++ b/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md @@ -1,3 +1,4 @@ +translated by zhousiyu325 A Free And Secure Online PDF Conversion Suite ====== From 13ad860f3d5afe4f6bf064c86d1cf10543df7c02 Mon Sep 17 00:00:00 2001 From: zhousiyu325 
Date: Sun, 30 Sep 2018 22:36:12 +0800 Subject: [PATCH 156/736] finish translating 20180928 A Free And Secure Online PDF Conversion Suite.md --- ... And Secure Online PDF Conversion Suite.md | 112 ------------------ ... And Secure Online PDF Conversion Suite.md | 104 ++++++++++++++++ 2 files changed, 104 insertions(+), 112 deletions(-) delete mode 100644 sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md create mode 100644 translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md diff --git a/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md b/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md deleted file mode 100644 index 10b220590f..0000000000 --- a/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md +++ /dev/null @@ -1,112 +0,0 @@ -translated by zhousiyu325 -A Free And Secure Online PDF Conversion Suite -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-720x340.jpg) - -We are always in search for a better and more efficient solution that can make our lives more convenient. That is why when you are working with PDF documents you need a fast and reliable tool that you can use in every situation. Therefore, we wanted to introduce you to **EasyPDF** Online PDF Suite for every occasion. The promise behind this tool is that it can make your PDF management easier and we tested it to check that claim. - -But first, here are the most important things you need to know about EasyPDF: - - * EasyPDF is free and anonymous online PDF Conversion Suite. - * Convert PDF to Word, Excel, PowerPoint, AutoCAD, JPG, GIF and Text. - * Create PDF from Word, PowerPoint, JPG, Excel files and many other formats. - * Manipulate PDFs with PDF Merge, Split and Compress. - * OCR conversion of scanned PDFs and images. - * Upload files from your device or the Cloud (Google Drive and DropBox). - * Available on Windows, Linux, Mac, and smartphones via any browser. 
- * Multiple languages supported. - - - -### EasyPDF User Interface - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-interface.png) - -One of the first things that catches your eye is the sleek user interface which gives the tool clean and functional environment in where you can work comfortably. The whole experience is even better because there are no ads on a website at all. - -All different types of conversions have their dedicated menu with a simple box to add files, so you don’t have to wonder about what you need to do. - -Most websites aren’t optimized to work well and run smoothly on mobile phones, but EasyPDF is an exception from that rule. It opens almost instantly on smartphone and is easy to navigate. You can also add it as the shortcut on your home screen from the **three dots menu** on the Chrome app. - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-fs8.png) - -### Functionality - -Apart from looking nice, EasyPDF is pretty straightforward to use. You **don’t need to register** or leave an **email** to use the tool. It is completely anonymous. Additionally, it doesn’t put any limitations to the number or size of files for conversion. No installation required either! Cool, yeah? - -You choose a desired conversion format, for example, PDF to Word. Select the PDF file you want to convert. You can upload a file from the device by either drag & drop or selecting the file from the folder. There is also an option to upload a document from [**Google Drive**][1] or [**Dropbox**][2]. - -After you choose the file, press the Convert button to start the conversion process. You won’t wait for a long time to get your file because conversion will finish in a minute. If you have some more files to convert, remember to download the file before you proceed further. If you don’t download the document first, you will lose it. 
- -![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF1.png) - -For a different type of conversion, return to the homepage. - -The currently available types of conversions are: - - * **PDF to Word** – Convert PDF documents to Word documents - - * **PDF to PowerPoint** – Convert PDF documents to PowerPoint Presentations - - * **PDF to Excel** – Convert PDF documents to Excel documents - - * **PDF Creation** – Create PDF documents from any type of file (E.g text, doc, odt) - - * **Word to PDF** – Convert Word documents to PDF documents - - * **JPG to PDF** – Convert JPG images to PDF documents - - * **PDF to AutoCAD** – Convert PDF documents to .dwg format (DWG is native format for CAD packages) - - * **PDF to Text** – Convert PDF documents to Text documents - - * **PDF Split** – Split PDF files into multiple parts - - * **PDF Merge** – Merge multiple PDF files into one - - * **PDF Compress** – Compress PDF documents - - * **PDF to JPG** – Convert PDF documents to JPG images - - * **PDF to PNG** – Convert PDF documents to PNG images - - * **PDF to GIF** – Convert PDF documents to GIF files - - * **OCR Online** – - -Convert scanned paper documents - -to editable files (E.g Word, Excel, Text) - - - - -Want to give it a try? Great! Click the following link and start converting! - -[![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-online-pdf.png)][https://easypdf.com/] - -### Conclusion - -EasyPDF lives up to its name and enables easier PDF management. As far as I tested EasyPDF service, It offers out of the box conversion feature completely **FREE!** It is fast, secure and reliable. You will find the quality of services most satisfying without having to pay anything or leaving your personal data like email address. Give it a try and who knows maybe you will find your new favorite PDF tool. - -And, that’s all for now. More good stuffs to come. Stay tuned! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ -[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ diff --git a/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md b/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md new file mode 100644 index 0000000000..46cc5067f2 --- /dev/null +++ b/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md @@ -0,0 +1,104 @@ +一款免费且安全的在线PDF转换软件 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-720x340.jpg) + +我们总在寻找一个更好用且更高效的解决方案,来我们的生活理加方便。 比方说,在处理PDF文档时,你会迫切地想拥有一款工具,它能够在任何情形下都显得快速可靠。在这,我们想向你推荐**EasyPDF**——一款可以胜任所有场合的在线PDF软件。通过大量的测试,我们可以保证:这款工具能够让你的PDF文档管理更加容易。 + +不过,关于EasyPDF有一些十分重要的事情,你必须知道。 + +* EasyPDF是免费的、匿名的在线PDF转换软件。 +* 能够将PDF文档转换成Word、Excel、PowerPoint、AutoCAD、JPG, GIF和Text等格式格式的文档。 +* 能够从ord、Excel、PowerPoint等其他格式的文件创建PDF文件。 +* 能够进行PDF文档的合并、分割和压缩。 +* 能够识别扫描的PDF和图片中的内容。 +* 可以从你的设备或者云存储(Google Drive 和 DropBox)中上传文档。 +* 可以在Windows、Linux、Mac和智能手机上通过浏览器来操作。 +* 支持多种语言。 + +### EasyPDF的用户界面 + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-interface.png) + +EasyPDF最吸引你眼球的就是平滑的用户界面,营造一种整洁的环境,这会让使用者感觉更加舒服。由于网站完全没有一点广告,EasyPDF的整体使用体验相比以前会好很多。 + +每种不同类型的转换都有它们专门的菜单,只需要简单地向其中添加文件,你并不需要知道太多知识来进行操作。 + +许多类似网站没有做好相关的优化,使得在手机上的使用体验并不太友好。然而,EasyPDF突破了这一个瓶颈。在智能手机上,EasyPDF几乎可以秒开,并且可以顺畅的操作。你也通过Chrome app的**three dots menu**把EasyPDF添加到手机的主屏幕上。 + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-fs8.png) + +### 
特性
+
+除了好看的界面,EasyPDF 还非常易于使用。为了使用它,你**不需要注册一个账号**或者**留下一个邮箱**,它是完全匿名的。另外,EasyPDF 也不会对要转换的文件进行数量或者大小的限制,完全不需要安装!酷极了,不是吗?
+
+首先,你需要选择一种想要进行的格式转换,比如,将 PDF 转换成 Word。然后,选择你想要转换的 PDF 文件。你可以通过两种方式来上传文件:直接拖放或者从设备上的文件夹进行选择。还可以选择从 [**Google Drive**][1] 或 [**Dropbox**][2] 来上传文件。
+
+选择要进行格式转换的文件后,点击 Convert 按钮开始转换过程。转换过程会在一分钟内完成,你并不需要等待太长时间。如果你还要对其他文件进行格式转换,在接着转换前,不要忘了将前面已经转换完成的文件下载保存。不然的话,你将会丢失前面的文件。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF1.png)
+
+要进行其他类型的格式转换,直接返回到主页。
+
+目前支持的几种格式转换类型如下:
+
+* **PDF 转换成 Word** – 将 PDF 文档转换成 Word 文档
+
+* **PDF 转换成 PowerPoint** – 将 PDF 文档转换成 PowerPoint 演示讲稿
+
+* **PDF 转换成 Excel** – 将 PDF 文档转换成 Excel 文档
+
+* **PDF 创建** – 从一些其他类型的文件(如 text、doc、odt)来创建 PDF 文档
+
+* **Word 转换成 PDF** – 将 Word 文档转换成 PDF 文档
+
+* **JPG 转换成 PDF** – 将 JPG 图片转换成 PDF 文档
+
+* **PDF 转换成 AutoCAD** – 将 PDF 文档转换成 .dwg 格式(DWG 是 CAD 软件的原生格式)
+
+* **PDF 转换成 Text** – 将 PDF 文档转换成 Text 文档
+
+* **PDF 分割** – 把 PDF 文件分割成多个部分
+
+* **PDF 合并** – 把多个 PDF 文件合并成一个文件
+
+* **PDF 压缩** – 将 PDF 文档进行压缩
+
+* **PDF 转换成 JPG** – 将 PDF 文档转换成 JPG 图片
+
+* **PDF 转换成 PNG** – 将 PDF 文档转换成 PNG 图片
+
+* **PDF 转换成 GIF** – 将 PDF 文档转换成 GIF 文件
+
+* **在线文字内容识别** – 将扫描的纸质文档转换成能够进行编辑的文件(如 Word、Excel、Text)
+
+想试一试吗?好极了!点击下面的链接,然后开始格式转换吧!
+
+[![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-online-pdf.png)][https://easypdf.com/]
+
+### 总结
+
+EasyPDF 名副其实,能够让 PDF 管理更加容易。就我测试过的 EasyPDF 服务而言,它提供了**完全免费**的开箱即用的转换功能。它十分快速、安全和可靠。你会对它的服务质量感到非常满意,因为它不用支付任何费用,也不用留下像邮箱这样的个人信息。值得一试,也许你会找到你自己更喜欢的 PDF 工具。
+
+好吧,我就说这些。更多的好东西还在后面,请继续关注!
+
+加油!
+ + + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/zhousiyu325) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ +[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ From 38728367da46495541b79f3f0a4487838e3549c2 Mon Sep 17 00:00:00 2001 From: Andy Luo Date: Sun, 30 Sep 2018 22:42:18 +0800 Subject: [PATCH 157/736] Update How To Find And Delete Duplicate Files In Linux.md Translating by pygmalion666 --- .../20180927 How To Find And Delete Duplicate Files In Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md b/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md index e3a0a9d561..2b9c610f1d 100644 --- a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md +++ b/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md @@ -1,3 +1,4 @@ +Translating by pygmalion666 How To Find And Delete Duplicate Files In Linux ====== From e998a735755889b900e2d601168082c6c6ac8d48 Mon Sep 17 00:00:00 2001 From: LuMing <784315443@qq.com> Date: Mon, 1 Oct 2018 00:54:55 +0800 Subject: [PATCH 158/736] translated --- ...lan Network Configuration Tool on Linux.md | 230 ------------------ ...lan Network Configuration Tool on Linux.md | 215 ++++++++++++++++ 2 files changed, 215 insertions(+), 230 deletions(-) delete mode 100644 sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md create mode 100644 translated/tech/20180907 How to Use the Netplan Network Configuration 
Tool on Linux.md diff --git a/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md deleted file mode 100644 index a9d3eb0895..0000000000 --- a/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md +++ /dev/null @@ -1,230 +0,0 @@ -LuuMing translating -How to Use the Netplan Network Configuration Tool on Linux -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan.jpg?itok=Gu_ZfNGa) - -For years Linux admins and users have configured their network interfaces in the same way. For instance, if you’re a Ubuntu user, you could either configure the network connection via the desktop GUI or from within the /etc/network/interfaces file. The configuration was incredibly easy and never failed to work. The configuration within that file looked something like this: - -``` -auto enp10s0 - -iface enp10s0 inet static - -address 192.168.1.162 - -netmask 255.255.255.0 - -gateway 192.168.1.100 - -dns-nameservers 1.0.0.1,1.1.1.1 - -``` - -Save and close that file. Restart networking with the command: - -``` -sudo systemctl restart networking - -``` - -Or, if you’re not using a non-systemd distribution, you could restart networking the old fashioned way like so: - -``` -sudo /etc/init.d/networking restart - -``` - -Your network will restart and the newly configured interface is good to go. - -That’s how it’s been done for years. Until now. With certain distributions (such as Ubuntu Linux 18.04), the configuration and control of networking has changed considerably. Instead of that interfaces file and using the /etc/init.d/networking script, we now turn to [Netplan][1]. Netplan is a command line utility for the configuration of networking on certain Linux distributions. 
Netplan uses YAML description files to configure network interfaces and, from those descriptions, will generate the necessary configuration options for any given renderer tool. - -I want to show you how to use Netplan on Linux, to configure a static IP address and a DHCP address. I’ll be demonstrating on Ubuntu Server 18.04. I will give you one word of warning, the .yaml files you create for Netplan must be consistent in spacing, otherwise they’ll fail to work. You don’t have to use a specific spacing for each line, it just has to remain consistent. - -### The new configuration files - -Open a terminal window (or log into your Ubuntu Server via SSH). You will find the new configuration files for Netplan in the /etc/netplan directory. Change into that directory with the command cd /etc/netplan. Once in that directory, you will probably only see a single file: - -``` -01-netcfg.yaml - -``` - -You can create a new file or edit the default. If you opt to edit the default, I suggest making a copy with the command: - -``` -sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak - -``` - -With your backup in place, you’re ready to configure. - -### Network Device Name - -Before you configure your static IP address, you’ll need to know the name of device to be configured. To do that, you can issue the command ip a and find out which device is to be used (Figure 1). - -![netplan][3] - -Figure 1: Finding our device name with the ip a command. - -[Used with permission][4] - -I’ll be configuring ens5 for a static IP address. 
- -### Configuring a Static IP Address - -Open the original .yaml file for editing with the command: - -``` -sudo nano /etc/netplan/01-netcfg.yaml - -``` - -The layout of the file looks like this: - -network: - -Version: 2 - -Renderer: networkd - -ethernets: - -DEVICE_NAME: - -Dhcp4: yes/no - -Addresses: [IP/NETMASK] - -Gateway: GATEWAY - -Nameservers: - -Addresses: [NAMESERVER, NAMESERVER] - -Where: - - * DEVICE_NAME is the actual device name to be configured. - - * yes/no is an option to enable or disable dhcp4. - - * IP is the IP address for the device. - - * NETMASK is the netmask for the IP address. - - * GATEWAY is the address for your gateway. - - * NAMESERVER is the comma-separated list of DNS nameservers. - - - - -Here’s a sample .yaml file: - -``` -network: - - version: 2 - - renderer: networkd - - ethernets: - - ens5: - - dhcp4: no - - addresses: [192.168.1.230/24] - - gateway4: 192.168.1.254 - - nameservers: - - addresses: [8.8.4.4,8.8.8.8] - -``` - -Edit the above to fit your networking needs. Save and close that file. - -Notice the netmask is no longer configured in the form 255.255.255.0. Instead, the netmask is added to the IP address. - -### Testing the Configuration - -Before we apply the change, let’s test the configuration. To do that, issue the command: - -``` -sudo netplan try - -``` - -The above command will validate the configuration before applying it. If it succeeds, you will see Configuration accepted. In other words, Netplan will attempt to apply the new settings to a running system. Should the new configuration file fail, Netplan will automatically revert to the previous working configuration. Should the new configuration work, it will be applied. - -### Applying the New Configuration - -If you are certain of your configuration file, you can skip the try option and go directly to applying the new options. 
The command for this is: - -``` -sudo netplan apply - -``` - -At this point, you can issue the command ip a to see that your new address configurations are in place. - -### Configuring DHCP - -Although you probably won’t be configuring your server for DHCP, it’s always good to know how to do this. For example, you might not know what static IP addresses are currently available on your network. You could configure the device for DHCP, get an IP address, and then reconfigure that address as static. - -To use DHCP with Netplan, the configuration file would look something like this: - -``` -network: - - version: 2 - - renderer: networkd - - ethernets: - - ens5: - - Addresses: [] - - dhcp4: true - - optional: true - -``` - -Save and close that file. Test the file with: - -``` -sudo netplan try - -``` - -Netplan should succeed and apply the DHCP configuration. You could then issue the ip a command, get the dynamically assigned address, and then reconfigure a static address. Or, you could leave it set to use DHCP (but seeing as how this is a server, you probably won’t want to do that). - -Should you have more than one interface, you could name the second .yaml configuration file 02-netcfg.yaml. Netplan will apply the configuration files in numerical order, so 01 will be applied before 02. Create as many configuration files as needed for your server. - -### That’s All There Is - -Believe it or not, that’s all there is to using Netplan. Although it is a significant change to how we’re accustomed to configuring network addresses, it’s not all that hard to get used to. But this style of configuration is here to stay… so you will need to get used to it. - -Learn more about Linux through the free ["Introduction to Linux" ][5]course from The Linux Foundation and edX. 
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-configuration-tool-linux - -作者:[Jack Wallen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/jlwallen -[1]: https://netplan.io/ -[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan_1.jpg?itok=XuIsXWbV (netplan) -[4]: /licenses/category/used-permission -[5]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/translated/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md new file mode 100644 index 0000000000..0027aafb6f --- /dev/null +++ b/translated/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md @@ -0,0 +1,215 @@ +如何在 Linux 上使用网络配置工具 Netplan +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan.jpg?itok=Gu_ZfNGa) + +多年以来 Linux 管理员和用户们使用相同的方式配置他们的网络接口。例如,如果你是 Ubuntu 用户,你能够用桌面 GUI 配置网络连接,也可以在 /etc/network/interfaces 文件里配置。配置相当简单且从未失败。在文件中配置看起来就像这样: + +``` +auto enp10s0 + +iface enp10s0 inet static + +address 192.168.1.162 + +netmask 255.255.255.0 + +gateway 192.168.1.100 + +dns-nameservers 1.0.0.1,1.1.1.1 +``` + +保存并关闭文件。使用命令重启网络: + +``` +sudo systemctl restart networking +``` + +或者,如果你使用不带systemd 的发行版,你可以通过老办法来重启网络: + +``` +sudo /etc/init.d/networking restart +``` + +你的网络将会重新启动,新的配置将会生效。 + +这就是多年以来的做法。但是现在,在某些发行版上(例如 Ubuntu Linux 18.04),网络的配置与控制发生了很大的变化。不需要那个 interfaces 文件和 /etc/init.d/networking 脚本,我们现在转向使用 [Netplan][1]。Netplan 是一个在某些 Linux 发行版上配置网络连接的命令行工具。Netplan 使用 YAML 描述文件来配置网络接口,然后,通过这些描述为任何给定的呈现工具生成必要的配置选项。 + +我将向你展示如何在 Linux 
上使用 Netplan 配置静态 IP 地址和 DHCP 地址。我会在 Ubuntu Server 18.04 上演示。有句忠告,你创建的 .yaml 文件中的缩进必须保持一致,否则将会失败。你不用为每行使用特定的缩进,只需保持一致就行了。 + +### 新的配置文件 + +打开终端窗口(或者通过 SSH 登录进 Ubuntu 服务器)。你会在 /etc/netplan 文件夹下发现 Netplan 的新配置文件。使用 cd /etc/netplan 命令进入到那个文件夹下。一旦进到了那个文件夹,也许你就能够看到一个文件: + +``` +01-netcfg.yaml +``` + +你可以创建一个新的文件或者是编辑默认文件。如果你打算修改默认文件,我建议你先做一个备份: + +``` +sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak +``` + +备份好后,就可以开始配置了。 + +### 网络设备名称 + +在你开始配置静态 IP 之前,你需要知道设备名称。要做到这一点,你可以使用命令 ip a,然后找出哪一个设备将会被用到(图 1)。 + +![netplan][3] + +图 1:使用 ip a 命令找出设备名称 + +[Used with permission][4] + +我将为 ens5 配置一个静态的 IP。 + +### 配置静态 IP 地址 + +使用命令打开原来的 .yaml 文件: + +``` +sudo nano /etc/netplan/01-netcfg.yaml +``` + +文件的布局看起来就像这样(注意:Netplan 的 YAML 键区分大小写,且全部为小写): + +``` +network: + +version: 2 + +renderer: networkd + +ethernets: + +DEVICE_NAME: + +dhcp4: yes/no + +addresses: [IP/NETMASK] + +gateway4: GATEWAY + +nameservers: + +addresses: [NAMESERVER, NAMESERVER] +``` + +其中: + + * DEVICE_NAME 是需要配置设备的实际名称。 + + * yes/no 代表是否启用 dhcp4。 + + * IP 是设备的 IP 地址。 + + * NETMASK 是 IP 地址的掩码。 + + * GATEWAY 是网关的地址。 + + * NAMESERVER 是由逗号分开的 DNS 服务器列表。 + +这是一份 .yaml 文件的样例: + +``` +network: + + version: 2 + + renderer: networkd + + ethernets: + + ens5: + + dhcp4: no + + addresses: [192.168.1.230/24] + + gateway4: 192.168.1.254 + + nameservers: + + addresses: [8.8.4.4,8.8.8.8] +``` + +编辑上面的文件以达到你想要的效果。保存并关闭文件。 + +注意,掩码已经不用再配置为 255.255.255.0 这种形式。取而代之的是,掩码已被添加进了 IP 地址中。 + +### 测试配置 + +在应用改变之前,让我们测试一下配置。为此,使用命令: + +``` +sudo netplan try +``` + +上面的命令会在应用配置之前验证其是否有效。如果成功,你就会看到配置被接受。换句话说,Netplan 会尝试将新的配置应用到运行的系统上。如果新的配置失败了,Netplan 会自动地恢复到之前使用的配置。成功后,新的配置就会被使用。 + +### 应用新的配置 + +如果你确信配置文件没有问题,你就可以跳过测试环节并且直接使用新的配置。它的命令是: + +``` +sudo netplan apply +``` + +此时,你可以使用 ip a 看看新的地址是否正确。 + +### 配置 DHCP + +虽然你可能不会把服务器配置为使用 DHCP,但知道如何配置总是好的。例如,你也许不知道网络上当前可用的静态 IP 地址是多少。你可以为设备配置 DHCP,获取到 IP 地址,然后将那个地址重新配置为静态地址。 + +在 Netplan 上使用 DHCP,配置文件看起来就像这样: + +``` +network: + + version: 2 + + renderer: networkd + + ethernets: + + ens5: + + addresses: [] + + dhcp4:
true + + optional: true +``` + +保存并退出。用下面命令来测试文件: + +``` +sudo netplan try +``` + +Netplan 应该会成功应用 DHCP 配置。这时你可以使用 ip a 命令得到动态分配的地址,然后重新配置静态地址。或者,你可以直接使用 DHCP 分配的地址(但考虑到这是一台服务器,你可能不会想这样做)。 + +如果你有不止一个网络接口,你可以把第二个 .yaml 配置文件命名为 02-netcfg.yaml。Netplan 会按照数字顺序应用配置文件,因此 01 会在 02 之前应用。根据你的需要创建多个配置文件。 + +### 就是这些了 + +不管你信不信,以上就是使用 Netplan 的全部内容了。虽然它相对于我们习惯的网络地址配置方式是一个相当大的改变,但习惯起来并没有那么难。不过这种配置方式已是大势所趋……因此你需要适应它。 + +通过 Linux Foundation 和 edX 的免费 ["Introduction to Linux"][5] 课程学习更多关于 Linux 的内容。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-configuration-tool-linux + +作者:[Jack Wallen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[LuuMing](https://github.com/LuuMing) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[1]: https://netplan.io/ +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan_1.jpg?itok=XuIsXWbV (netplan) +[4]: /licenses/category/used-permission +[5]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From 99f3ecc2bbc25472b0d64fe3b0d60c690f1103ee Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 1 Oct 2018 11:28:43 +0800 Subject: [PATCH 159/736] PRF:20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md @GraveAccent --- ...
Live By by Brian Christian - Tom Griffiths.md | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/translated/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md b/translated/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md index eec0d29397..808da9a3d3 100644 --- a/translated/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md +++ b/translated/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md @@ -1,18 +1,19 @@ -书评|算法之美 +书评:《算法之美( Algorithms to Live By )》 ====== + ![](https://www.eyrie.org/~eagle/reviews/covers/1-62779-037-3.jpg) 又一次为了工作图书俱乐部而读书。除了其它我亲自推荐的书,这是我至今最喜爱的书。 -作为计算机科学基础之一的研究领域是算法:我们如何高效地用计算机程序解决问题?这基本上属于数学领域,但是这很少关于理想的或理论上的解决方案,而是更在于最高效地利用有限的资源获得一个充足(如果不能完美)的答案。其中许多问题要么是日常的生活问题,要么与人们密切相关。毕竟,计算机科学的目的是为了用计算机解决实际问题。《算法之美》提出的问题是:“我们可以反过来吗”--我们可以通过学习计算机科学解决问题的方式来帮助我们做出日常决定吗? +作为计算机科学基础之一的研究领域是算法:我们如何高效地用计算机程序解决问题?这基本上属于数学领域,但是这很少关于理想的或理论上的解决方案,而是更在于最高效地利用有限的资源获得一个充分(如果不能完美)的答案。其中许多问题要么是日常的生活问题,要么与人们密切相关。毕竟,计算机科学的目的是为了用计算机解决实际问题。《算法之美Algorithms to Live By》提出的问题是:“我们可以反过来吗”——我们可以通过学习计算机科学解决问题的方式来帮助我们做出日常决定吗? 
本书的十一个章节有很多有趣的内容,但也有一个有趣的主题:人类早已擅长这一点。很多章节以一个算法研究和对问题的数学分析作为开始,接着深入到探讨如何利用这些结果做出更好的决策,然后讨论关于人类真正会做出的决定的研究,之后,考虑到典型生活情境的限制,会发现人类早就在应用我们提出的最佳算法的特殊版本了。这往往会破坏本书的既定目标,值得庆幸的是,它决不会破坏对一般问题的有趣讨论,即计算机科学如何解决它们,以及我们对这些问题的数学和技术形态的了解。我认为这本书的自助效用比作者打算的少一些,但有很多可供思考的东西。 -(也就是说,值得考虑这种一致性是否太少了,因为人类已经擅长这方面了,更因为我们的算法是根据人类直觉设计的。可能我们的最佳算法只是反映了人类的思想。在某些情况下,我们发现我们的方案和数学上的典范不一样, 但是在另一些情况下,它们仍然是我们当下最好的猜想。) +(也就是说,值得考虑这种一致性是否太少了,因为人类已经擅长这方面了,更因为我们的算法是根据人类直觉设计的。可能我们的最佳算法只是反映了人类的思想。在某些情况下,我们发现我们的方案和数学上的典范不一样,但是在另一些情况下,它们仍然是我们当下最好的猜想。) -这是那种章节列表是书评里重要部分的书。这里讨论的算法领域有最优停止、探索和利用决策(什么时候带着你发现的最好东西走以及什么时候寻觅更好的东西),以及排序、缓存、调度、贝叶斯定理(一般还有预测)、创建模型时的过拟合、放松(解决容易的问题而不是你的实际问题)、随机算法、一系列网络算法,最后还有游戏理论。其中每一项都有有用的见解和发人深省的讨论--这些有时显得十分理论化的概念令人吃惊地很好地映射到了日常生活。这本书以一段关于“可计算的善意”的讨论结束:鼓励减少你自己和你交往的人所需的计算和复杂性惩罚。 +这是那种章节列表是书评里重要部分的书。这里讨论的算法领域有最优停止、探索和利用决策(什么时候带着你发现的最好东西走,以及什么时候寻觅更好的东西),以及排序、缓存、调度、贝叶斯定理(一般还有预测)、创建模型时的过拟合、放松(解决容易的问题而不是你的实际问题)、随机算法、一系列网络算法,最后还有游戏理论。其中每一项都有有用的见解和发人深省的讨论——这些有时显得十分理论化的概念令人吃惊地很好地映射到了日常生活。这本书以一段关于“可计算的善意”的讨论结束:鼓励减少你自己和你交往的人所需的计算和复杂性惩罚。 -如果你有计算机科学背景(就像我一样),其中许多都是熟悉的概念,而且你因为被普及了很多新东西或许会有疑惑。然而,请给这本书一个机会,类比法没你担忧的那么令人紧张。作者既小心又聪明地应用了这些原则。这本书令人惊喜地通过了一个重要的合理性检查:涉及到我知道或反复思考过的主题的章节很少有或没有明显的错误,而且能讲出有用和重要的事情。比如,调度的那一章节毫不令人吃惊地和时间管理有关,通过直接跳到时间管理问题的核心而胜过了半数时间管理类书籍:如果你要做一个清单上的所有事情,你做这些事情的顺序很少要紧,所以最难的调度问题是决定不做哪些事情而不是做这些事情的顺序。 +如果你有计算机科学背景(就像我一样),其中许多都是熟悉的概念,而且你因为被普及了很多新东西或许会有疑惑。然而,请给这本书一个机会,类比法没你担忧的那么令人紧张。作者既小心又聪明地应用了这些原则。这本书令人惊喜地通过了一个重要的合理性检查:涉及到我知道或反复思考过的主题的章节很少有或没有明显的错误,而且能讲出有用和重要的事情。比如,调度的那一章节毫不令人吃惊地和时间管理有关,通过直接跳到时间管理问题的核心而胜过了半数的时间管理类书籍:如果你要做一个清单上的所有事情,你做这些事情的顺序很少要紧,所以最难的调度问题是决定不做哪些事情而不是做这些事情的顺序。 作者在贝叶斯定理这一章节中的观点完全赢得了我的心。本章的许多内容都是关于贝叶斯先验的,以及一个人对过去事件的了解为什么对分析未来的概率很重要。作者接着讨论了著名的棉花糖实验。即给了儿童一个棉花糖以后,儿童被研究者告知如果他们能够克制自己不吃这个棉花糖,等到研究者回来时,会给他们两个棉花糖。克制自己不吃棉花糖(在心理学文献中叫作“延迟满足”)被发现与未来几年更好的生活有关。这个实验多年来一直被引用和滥用于各种各样的宣传,关于选择未来的收益放弃即时的快乐从而拥有成功的生活,以及生活中的失败是因为无法延迟满足。更多的邪恶分析(当然)将这种能力与种族联系在一起,带有可想而知的种族主义结论。 @@ -20,7 +21,7 @@ 
《算法之美》是我读过的唯一提到了棉花糖实验并应用了我认为更有说服力的分析的书。这不是一个关于儿童天赋的实验,这是一个关于他们的贝叶斯先验的实验。什么时候立即吃棉花糖而不是等待奖励是完全合理的?当他们过去的经历告诉他们成年人不可靠,不可信任,会在不可预测的时间内消失并且撒谎的时候。而且,更好的是,作者用我之前没有听说过的后续研究和观察支持了这一分析,观察到的内容是,一些孩子会等待一段时间然后“放弃”。如果他们下意识地使用具有较差先验的贝叶斯模型,这就完全合情合理。 -这是一本很好的书。它可能在某些地方的尝试有点太勉强(数学上最优停止对于日常生活的适用性比我认为作者想要表现的更加偶然和牵强附会),如果你学过算法,其中一些内容会感到熟悉,但是它的行文思路清晰,简洁,而且编辑得非常好。这本书没有哪一部分对不起它所受的欢迎,书中的讨论贯穿始终。如果你发现自己“已经知道了这一切”,你可能还会在接下来几页中遇到一个新的概念或一个简洁的解释。有时作者会做一些我从没想到但是回想起来正确的联系,比如将网络协议中的指数退避和司法系统中的选择惩罚联系起来。还有意识到我们的现代通信世界并不是一直联系的,它是不断缓冲的,我们中的许多人正深受缓冲膨胀这一独特现象的苦恼。 +这是一本很好的书。它可能在某些地方的尝试有点太勉强(数学上最优停止对于日常生活的适用性比我认为作者想要表现的更加偶然和牵强附会),如果你学过算法,其中一些内容会感到熟悉,但是它的行文思路清晰,简洁,而且编辑得非常好。这本书没有哪一部分对不起它所受到的欢迎,书中的讨论贯穿始终。如果你发现自己“已经知道了这一切”,你可能还会在接下来几页中遇到一个新的概念或一个简洁的解释。有时作者会做一些我从没想到但是回想起来正确的联系,比如将网络协议中的指数退避和司法系统中的选择惩罚联系起来。还有意识到我们的现代通信世界并不是一直联系的,它是不断缓冲的,我们中的许多人正深受缓冲膨胀这一独特现象的苦恼。 我认为你并不必须是计算机科学专业或者精通数学才能读这本书。如果你想深入,每章的结尾都有许多数学上的细节,但是正文总是易读而清晰,至少就我所知是这样(作为一个以计算机科学为专业并学到了很多数学知识的人,你至少可以有保留地相信我)。即使你已经钻研了多年的算法,这本书仍然可以提供很多东西。 @@ -36,7 +37,7 @@ via: https://www.eyrie.org/~eagle/reviews/books/1-62779-037-3.html 作者:[Brian Christian;Tom Griffiths][a] 译者:[GraveAccent](https://github.com/GraveAccent) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 58dee474babcd99be1a038fec85eed553b13f257 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 1 Oct 2018 11:29:04 +0800 Subject: [PATCH 160/736] PUB:20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md @GraveAccent https://linux.cn/article-10068-1.html --- ...w- Algorithms to Live By by Brian Christian - Tom Griffiths.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md (100%) diff --git a/translated/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md b/published/20171022 Review- Algorithms to Live By by Brian 
Christian - Tom Griffiths.md similarity index 100% rename from translated/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md rename to published/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md From ae49adf39e8485fea0310279adf8958f52354e27 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Mon, 1 Oct 2018 11:46:53 +0800 Subject: [PATCH 161/736] hankchow translating --- ...ow To Limit Network Bandwidth In Linux Using Wondershaper.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md b/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md index 11d266e163..bb133d6103 100644 --- a/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md +++ b/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md @@ -1,3 +1,5 @@ +HankChow translating + How To Limit Network Bandwidth In Linux Using Wondershaper ====== From be9f3119b21996fe371285702b5da521f0cdad8d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 1 Oct 2018 12:12:27 +0800 Subject: [PATCH 162/736] PRF:20180522 Free Resources for Securing Your Open Source Code.md @sd886393 --- ...rces for Securing Your Open Source Code.md | 58 ++++++++----------- 1 file changed, 24 insertions(+), 34 deletions(-) diff --git a/translated/tech/20180522 Free Resources for Securing Your Open Source Code.md b/translated/tech/20180522 Free Resources for Securing Your Open Source Code.md index 4e63a64e43..285a49c6a4 100644 --- a/translated/tech/20180522 Free Resources for Securing Your Open Source Code.md +++ b/translated/tech/20180522 Free Resources for Securing Your Open Source Code.md @@ -1,53 +1,43 @@ -一些提高你开源源码安全性的工具 +一些提高开源代码安全性的工具 ====== +> 开源软件的迅速普及带来了对健全安全实践的需求。 + ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-security.jpg?itok=R3M5LDrb) 
-虽然目前开源依然发展势头较好,并被广大的厂商所采用,然而最近由 Black Duck 和 Synopsys 发布的[2018开源安全与风险评估报告][1]指出了一些存在的风险并重点阐述了对于健全安全措施的需求。这份报告的分析资料素材来自经过脱敏后的 1100 个商业代码库,这些代码所涉及:自动化、大数据、企业级软件、金融服务业、健康医疗、物联网、制造业等多个领域。 +虽然目前开源依然发展势头较好,并被广大的厂商所采用,然而最近由 Black Duck 和 Synopsys 发布的 [2018 开源安全与风险评估报告][1]指出了一些存在的风险,并重点阐述了对于健全安全措施的需求。这份报告的分析资料素材来自经过脱敏后的 1100 个商业代码库,这些代码所涉及:自动化、大数据、企业级软件、金融服务业、健康医疗、物联网、制造业等多个领域。 -这份报告强调开源软件正在被大量的使用,扫描结果中有 96% 的应用都使用了开源组件。然而,报告还指出许多其中存在很多漏洞。具体在 [这里][2]: +这份报告强调开源软件正在被大量的使用,扫描结果中有 96% 的应用都使用了开源组件。然而,报告还指出许多其中存在很多漏洞。具体在 [这里][2]: * 令人担心的是扫描的所有结果中,有 78% 的代码库存在至少一个开源的漏洞,平均每个代码库有 64 个漏洞。 - * 在经过代码审计过后代码库中,发现超过 54% 的漏洞经验证是高危漏洞。 - * 17% 的代码库包括一种已经早已公开的漏洞,包括:Heartbleed、Logjam、Freak、Drown、Poddle。 +Synopsys 旗下 Black Duck 的技术负责人 Tim Mackey 称,“这份报告清楚的阐述了:随着开源软件正在被企业广泛的使用,企业与组织也应当使用一些工具来检测可能出现在这些开源软件中的漏洞,以及管理其所使用的开源软件的方式是否符合相应的许可证规则。” +确实,随着越来越具有影响力的安全威胁出现,历史上从未有过我们目前对安全工具和实践的需求。大多数的组织已经意识到网络与系统管理员需要具有相应的较强的安全技能和安全证书。[在一篇文章中][3],我们给出一些具有较大影响力的工具、认证和实践。 +Linux 基金会已经在安全方面提供了许多关于安全的信息与教育资源。比如,Linux 社区提供了许多针对特定平台的免费资源,其中 [Linux 工作站安全检查清单][4] 其中提到了很多有用的基础信息。线上的一些发表刊物也可以提升用户针对某些平台对于漏洞的保护,如:[Fedora 安全指南][5]、[Debian 安全手册][6]。 -Tim Mackey,Synopsys 旗下 Black Duck 的技术负责人称,"这份报告清楚的阐述了:随着开源软件正在被企业广泛的使用,企业与组织也应当使用一些工具来检测可能出现在这些开源软件中的漏洞,并且管理其所使用的开源软件的方式是否符合相应的许可证规则" +目前被广泛使用的私有云平台 OpenStack 也加强了关于基于云的智能安全需求。根据 Linux 基金会发布的 [公有云指南][7]:“据 Gartner 的调研结果,尽管公有云的服务商在安全审查和提升透明度方面做的都还不错,安全问题仍然是企业考虑向公有云转移的最重要的考量之一。” -确实,随着越来越具有影响力的安全威胁出现,历史上从未有过我们目前对安全工具和实践的需求。大多数的组织已经意识到网络与系统管理员需要具有相应的较强的安全技能和安全证书。[在这篇文章中,][3] 我们给出一些具有较大影响力的工具、认证和实践。 +无论是对于组织还是个人,千里之堤毁于蚁穴,这些“蚁穴”无论是来自路由器、防火墙、VPN 或虚拟机都可能导致灾难性的后果。以下是一些免费的工具可能对于检测这些漏洞提供帮助: -Linux 基金会已经在安全方面提供了许多关于安全的信息与教育资源。比如,Linux 社区提供许多免费的用来针对一些平台的工具,其中[Linux 服务器安全检查表][4] 其中提到了很多有用的基础信息。线上的一些发表刊物也可以提升用户针对某些平台对于漏洞的保护,如:[Fedora 安全指南][5],[Debian 安全手册][6]。 + * [Wireshark][8],流量包分析工具 + * [KeePass Password Safe][9],自由开源的密码管理器 + * [Malwarebytes][10],免费的反病毒和勒索软件工具 + * [NMAP][11],安全扫描器 + * [NIKTO][12],开源的 web 服务器扫描器 + * [Ansible][13],自动化的配置运维工具,可以辅助做安全基线 + * 
[Metasploit][14],渗透测试工具,可辅助理解攻击向量 -目前被广泛使用的私有云平台 OpenStack 也加强了关于基于云的智能安全需求。根据 Linux 基金会发布的 [公有云指南][7]:“据 Gartner 的调研结果,尽管公有云的服务商在安全和审查方面做的都还不错,安全问题是企业考虑向公有云转移的最重要的考量之一” +这里有一些对上面工具讲解的视频。比如 [Metasploit 教学][15]、[Wireshark 教学][16]。还有一些传授安全技能的免费电子书,比如:由 Ibrahim Haddad 博士和 Linux 基金会共同出版的[并购过程中的开源审计][17],里面阐述了多条在技术平台合并过程中,因没有较好的进行开源审计,从而引发的安全问题。当然,书中也记录了如何在这一过程中进行代码合规检查、准备以及文档编写。 -无论是对于组织还是个人,千里之堤毁于蚁穴,这些“蚁穴”无论是来自路由器、防火墙、VPNs或虚拟机都可能导致灾难性的后果。以下是一些免费的工具可能对于检测这些漏洞提供帮助: - - * [Wireshark][8], 流量包分析工具 - - * [KeePass Password Safe][9], 免费开源的密码管理器 - - * [Malwarebytes][10], 免费的反病毒和勒索软件工具 - - * [NMAP][11], 安全扫描器 - - * [NIKTO][12], 开源 web 扫描器 - - * [Ansible][13], 自动化的配置运维工具,可以辅助做安全基线 - - * [Metasploit][14], 渗透测试工具,可辅助理解攻击向量 - - - -这里有一些对上面工具讲解的视频。比如[Metasploit 教学][15]、[Wireshark 教学][16]。还有一些传授安全技能的免费电子书,比如:由 Ibrahim Haddad 博士和 Linux 基金会共同出版的[并购过程中的开源审计][17],里面阐述了多条在技术平台合并过程中,因没有较好的进行开源审计,从而引发的安全问题。当然,书中也记录了如何在这一过程中进行代码合规检查、准备以及文档编写。 - -同时,我们 [之前提到的一个免费的电子书][18], 由来自[The New Stack][19] 编写的“Docker与容器中的网络、安全和存储”,里面也提到了关于加强容器网络安全的最新技术,以及Docker本身可提供的关于,提升其网络的安全与效率的最佳实践。这本电子书还记录了关于如何构建安全容器集群的最佳实践。 +同时,我们 [之前提到的一个免费的电子书][18], 由来自 [The New Stack][19] 编写的“Docker 与容器中的网络、安全和存储”,里面也提到了关于加强容器网络安全的最新技术,以及 Docker 本身可提供的关于提升其网络的安全与效率的最佳实践。这本电子书还记录了关于如何构建安全容器集群的最佳实践。 所有这些工具和资源,可以在很大的程度上预防安全问题,正如人们所说的未雨绸缪,考虑到一直存在的安全问题,现在就应该开始学习这些安全合规资料与工具。 -想要了解更多的安全、合规以及开源项目问题,点击[这里][20] + +想要了解更多的安全、合规以及开源项目问题,点击[这里][20]。 -------------------------------------------------------------------------------- @@ -55,8 +45,8 @@ via: https://www.linux.com/blog/2018/5/free-resources-securing-your-open-source- 作者:[Sam Dean][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/sd886393) -校对:[校对者ID](https://github.com/校对者ID) +译者:[sd886393](https://github.com/sd886393) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -64,7 +54,7 @@ via: https://www.linux.com/blog/2018/5/free-resources-securing-your-open-source- 
[1]:https://www.blackducksoftware.com/open-source-security-risk-analysis-2018 [2]:https://www.prnewswire.com/news-releases/synopsys-report-finds-majority-of-software-plagued-by-known-vulnerabilities-and-license-conflicts-as-open-source-adoption-soars-300648367.html [3]:https://www.linux.com/blog/sysadmin-ebook/2017/8/future-proof-your-sysadmin-career-locking-down-security -[4]:http://go.linuxfoundation.org/ebook_workstation_security +[4]:https://linux.cn/article-6753-1.html [5]:https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html [6]:https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html [7]:https://www.linux.com/publications/2016-guide-open-cloud From 2ccaac422b306deaf95233046a06863d516a2743 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 1 Oct 2018 12:12:48 +0800 Subject: [PATCH 163/736] PUB:20180522 Free Resources for Securing Your Open Source Code.md @sd886393 https://linux.cn/article-10069-1.html --- .../20180522 Free Resources for Securing Your Open Source Code.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180522 Free Resources for Securing Your Open Source Code.md (100%) diff --git a/translated/tech/20180522 Free Resources for Securing Your Open Source Code.md b/published/20180522 Free Resources for Securing Your Open Source Code.md similarity index 100% rename from translated/tech/20180522 Free Resources for Securing Your Open Source Code.md rename to published/20180522 Free Resources for Securing Your Open Source Code.md From 98fc86faa4a5076dd1ee6ca6e35cae43e9605e57 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Mon, 1 Oct 2018 13:55:41 +0800 Subject: [PATCH 164/736] translated --- ...k Bandwidth In Linux Using Wondershaper.md | 198 ------------------ ...k Bandwidth In Linux Using Wondershaper.md | 196 +++++++++++++++++ 2 files changed, 196 insertions(+), 198 deletions(-) delete mode 100644 sources/tech/20180906 How To Limit Network Bandwidth 
In Linux Using Wondershaper.md create mode 100644 translated/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md diff --git a/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md b/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md deleted file mode 100644 index bb133d6103..0000000000 --- a/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md +++ /dev/null @@ -1,198 +0,0 @@ -HankChow translating - -How To Limit Network Bandwidth In Linux Using Wondershaper -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Wondershaper-1-720x340.jpg) - -This tutorial will help you to easily limit network bandwidth and shape your network traffic in Unix-like operating systems. By limiting the network bandwidth usage, you can save unnecessary bandwidth consumption’s by applications, such as package managers (pacman, yum, apt), web browsers, torrent clients, download managers etc., and prevent the bandwidth abuse by a single or multiple users in the network. For the purpose of this tutorial, we will be using a command line utility named **Wondershaper**. Trust me, it is not that hard as you may think. It is one of the easiest and quickest way ever I have come across to limit the Internet or local network bandwidth usage in your own Linux system. Read on. - -Please be mindful that the aforementioned utility can only limit the incoming and outgoing traffic of your local network interfaces, not the interfaces of your router or modem. In other words, Wondershaper will only limit the network bandwidth in your local system itself, not any other systems in the network. These utility is mainly designed for limiting the bandwidth of one or more network adapters in your local system. Hope you got my point. - -Let us see how to use Wondershaper to shape the network traffic. 
- -### Limit Network Bandwidth In Linux Using Wondershaper - -**Wondershaper** is simple script used to limit the bandwidth of your system’s network adapter(s). It limits the bandwidth iproute’s tc command, but greatly simplifies its operation. - -**Installing Wondershaper** - -To install the latest version, git clone wondershaoer repository: - -``` -$ git clone https://github.com/magnific0/wondershaper.git - -``` - -Go to the wondershaper directory and install it as show below - -``` -$ cd wondershaper - -$ sudo make install - -``` - -And, run the following command to start wondershaper service automatically on every reboot. - -``` -$ sudo systemctl enable wondershaper.service - -$ sudo systemctl start wondershaper.service - -``` - -You can also install using your distribution’s package manager (official or non-official) if you don’t mind the latest version. - -Wondershaper is available in [**AUR**][1], so you can install it in Arch-based systems using AUR helper programs such as [**Yay**][2]. - -``` -$ yay -S wondershaper-git - -``` - -On Debian, Ubuntu, Linux Mint: - -``` -$ sudo apt-get install wondershaper - -``` - -On Fedora: - -``` -$ sudo dnf install wondershaper - -``` - -On RHEL, CentOS, enable EPEL repository and install wondershaper as shown below. - -``` -$ sudo yum install epel-release - -$ sudo yum install wondershaper - -``` - -Finally, start wondershaper service automatically on every reboot. - -``` -$ sudo systemctl enable wondershaper.service - -$ sudo systemctl start wondershaper.service - -``` - -**Usage** - -First, find the name of your network interface. Here are some common ways to find the details of a network card. - -``` -$ ip addr - -$ route - -$ ifconfig - -``` - -Once you find the network card name, you can limit the bandwidth rate as shown below. 
- -``` -$ sudo wondershaper -a -d -u - -``` - -For instance, if your network card name is **enp0s8** and you wanted to limit the bandwidth to **1024 Kbps** for **downloads** and **512 kbps** for **uploads** , the command would be: - -``` -$ sudo wondershaper -a enp0s8 -d 1024 -u 512 - -``` - -Where, - - * **-a** : network card name - * **-d** : download rate - * **-u** : upload rate - - - -To clear the limits from a network adapter, simply run: - -``` -$ sudo wondershaper -c -a enp0s8 - -``` - -Or - -``` -$ sudo wondershaper -c enp0s8 - -``` - -Just in case, there are more than one network card available in your system, you need to manually set the download/upload rates for each network interface card as described above. - -If you have installed Wondershaper by cloning its GitHub repository, there is a configuration named **wondershaper.conf** exists in **/etc/conf.d/** location. Make sure you have set the download or upload rates by modifying the appropriate values(network card name, download/upload rate) in this file. - -``` -$ sudo nano /etc/conf.d/wondershaper.conf - -[wondershaper] -# Adapter -# -IFACE="eth0" - -# Download rate in Kbps -# -DSPEED="2048" - -# Upload rate in Kbps -# -USPEED="512" - -``` - -Here is the sample before Wondershaper: - -After enabling Wondershaper: - -As you can see, the download rate has been tremendously reduced after limiting the bandwidth using WOndershaper in my Ubuntu 18.o4 LTS server. - -For more details, view the help section by running the following command: - -``` -$ wondershaper -h - -``` - -Or, refer man pages. - -``` -$ man wondershaper - -``` - -As far as tested, Wondershaper worked just fine as described above. Give it a try and let us know what do you think about this utility. - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned. - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://aur.archlinux.org/packages/wondershaper-git/ -[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ diff --git a/translated/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md b/translated/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md new file mode 100644 index 0000000000..746e664228 --- /dev/null +++ b/translated/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md @@ -0,0 +1,196 @@ +在 Linux 中使用 Wondershaper 限制网络带宽 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/Wondershaper-1-720x340.jpg) + +以下内容将向你介绍如何轻松对网络带宽做出限制,并在类 Unix 操作系统中对网络流量进行优化。通过限制网络带宽,可以节省应用程序不必要的带宽消耗,包括软件包管理器(pacman、yum、apt)、web 浏览器、torrent 客户端、下载管理器等,并防止单个或多个用户滥用网络带宽。在本文当中,将会介绍 Wondershaper 这一个实用的命令行程序,这是我认为限制 Linux 系统 Internet 或本地网络带宽的最简单、最快捷的方式之一。 + +请注意,Wondershaper 只能限制本地网络接口的传入和传出流量,而不能限制路由器或调制解调器的接口。换句话说,Wondershaper 只会限制本地系统本身的网络带宽,而不会限制网络中的其它系统。因此 Wondershaper 主要用于限制本地系统中一个或多个网卡的带宽。 + +下面来看一下 Wondershaper 是如何优化网络流量的。 + +### 在 Linux 中使用 Wondershaper 限制网络带宽 + +`wondershaper` 是用于显示系统网卡网络带宽的简单脚本。它使用了 `iproute` 和 `tc` 命令,但大大简化了操作过程。 + +**安装 Wondershaper** + +使用 `git clone` 克隆 Wondershaper 的版本库就可以安装最新版本: + +``` +$ git clone https://github.com/magnific0/wondershaper.git + +``` + +按照以下命令进入 `wondershaper` 目录并安装: + +``` +$ cd wondershaper + +$ sudo make install + +``` + +然后执行以下命令,可以让 `wondershaper` 在每次系统启动时都自动开始服务: + +``` +$ sudo systemctl enable wondershaper.service + +$ sudo systemctl start wondershaper.service + +``` + 
+如果你不强求安装最新版本,也可以使用软件包管理器(官方和非官方均可)来进行安装。 + +`wondershaper` 在 [Arch 用户软件仓库][1](Arch User Repository, AUR)中可用,所以可以使用类似 [`yay`][2] 这些 AUR 辅助软件在基于 Arch 的系统中安装 `wondershaper`。 + +``` +$ yay -S wondershaper-git + +``` + +对于 Debian、Ubuntu 和 Linux Mint 可以使用以下命令安装: + +``` +$ sudo apt-get install wondershaper + +``` + +对于 Fedora 可以使用以下命令安装: + +``` +$ sudo dnf install wondershaper + +``` + +对于 RHEL、CentOS,只需要启用 EPEL 仓库,就可以使用以下命令安装: + +``` +$ sudo yum install epel-release + +$ sudo yum install wondershaper + +``` + +最后,执行以下命令让 `wondershaper` 服务在每次系统启动时自动启动: + +``` +$ sudo systemctl enable wondershaper.service + +$ sudo systemctl start wondershaper.service + +``` + +**用法** + +首先需要找到网络接口的名称,通过以下几个命令都可以查询到网卡的详细信息: + +``` +$ ip addr + +$ route + +$ ifconfig + +``` + +在确定网卡名称以后,就可以按照以下的命令限制网络带宽: + +``` +$ sudo wondershaper -a <网卡名称> -d <下行速率> -u <上行速率> + +``` + +例如,如果网卡名称是 `enp0s8`,并且需要把下行、上行速率分别限制为 1024 Kbps 和 512 Kbps,就可以执行以下命令: + +``` +$ sudo wondershaper -a enp0s8 -d 1024 -u 512 + +``` + +其中参数的含义是: + + * `-a`:网卡名称 + * `-d`:下行带宽 + * `-u`:上行带宽 + + + +如果要对网卡解除网络带宽的限制,只需要执行: + +``` +$ sudo wondershaper -c -a enp0s8 + +``` + +或者: + +``` +$ sudo wondershaper -c enp0s8 + +``` + +如果系统中有多个网卡,为确保稳妥,需要按照上面的方法手动设置每个网卡的上行、下行速率。 + +如果你是通过 `git clone` 克隆 GitHub 版本库的方式安装 Wondershaper,那么在 `/etc/conf.d/` 目录中会存在一个名为 `wondershaper.conf` 的配置文件,修改这个配置文件中的相应值(包括网卡名称、上行速率、下行速率),也可以设置上行或下行速率。 + +``` +$ sudo nano /etc/conf.d/wondershaper.conf + +[wondershaper] +# Adapter +# +IFACE="eth0" + +# Download rate in Kbps +# +DSPEED="2048" + +# Upload rate in Kbps +# +USPEED="512" + +``` + +Wondershaper 使用前: +![](https://www.ostechnix.com/wp-content/uploads/2018/09/wondershaper-1.png) + +Wondershaper 使用后: +![](https://www.ostechnix.com/wp-content/uploads/2018/09/wondershaper-2.png) + +可以看到,使用 Wondershaper 限制网络带宽之后,下行速率与限制之前相比已经大幅下降。 + +执行以下命令可以查看更多相关信息。 + +``` +$ wondershaper -h + +``` + +也可以查看 Wondershaper 的用户手册: + +``` +$ man wondershaper + +``` +
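顺带说明一个容易混淆的地方:wondershaper 的速率单位是 Kbps(千比特每秒),而许多下载工具显示的是 KB/s(千字节每秒),两者相差 8 倍。下面是一个简单的换算示意(按 1 字节 = 8 比特计算):

```shell
# 将 wondershaper 使用的 Kbps 换算为下载工具常见的 KB/s(1 字节 = 8 比特)
kbps_to_kBps() {
  echo $(( $1 / 8 ))
}

kbps_to_kBps 1024   # 输出 128,即 1024 Kbps 相当于 128 KB/s
kbps_to_kBps 512    # 输出 64
```

因此把下行限制为 1024 Kbps 后,在下载工具里看到的速度大约是 128 KB/s,这是正常现象,并非限速失效。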
+根据测试,Wondershaper 按照上面的方式可以有很好的效果。你可以试用一下,然后发表你的看法。 + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://aur.archlinux.org/packages/wondershaper-git/ +[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ + From 0fd32d0c4fb2a50564ef60a52957973d67ab200b Mon Sep 17 00:00:00 2001 From: z52527 Date: Mon, 1 Oct 2018 14:57:41 +0800 Subject: [PATCH 165/736] Update 20180831 Publishing Markdown to HTML with MDwiki.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译中 --- sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md index c25239b7ba..769f9ba420 100644 --- a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md +++ b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md @@ -1,3 +1,4 @@ +Translating by z52527 Publishing Markdown to HTML with MDwiki ====== From d2742031082b89c68882af2430042b4d1e1e4dd1 Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Mon, 1 Oct 2018 18:09:14 +0800 Subject: [PATCH 166/736] translated --- ...ntu 18.04 and Other Linux Distributions.md | 233 ------------------ ...ntu 18.04 and Other Linux Distributions.md | 230 +++++++++++++++++ 2 files changed, 230 insertions(+), 233 deletions(-) delete mode 100644 sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux
Distributions.md create mode 100644 translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md diff --git a/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md b/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md deleted file mode 100644 index 578624aba4..0000000000 --- a/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md +++ /dev/null @@ -1,233 +0,0 @@ -Translating by dianbanjiu How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions -====== -**Brief: This tutorial shows you how to install Popcorn Time on Ubuntu and other Linux distributions. Some handy Popcorn Time tips have also been discussed.** - -[Popcorn Time][1] is an open source [Netflix][2] inspired [torrent][3] streaming application for Linux, Mac and Windows. - -With the regular torrents, you have to wait for the download to finish before you could watch the videos. - -[Popcorn Time][4] is different. It uses torrent underneath but allows you to start watching the videos (almost) immediately. It’s like you are watching videos on streaming websites like YouTube or Netflix. You don’t have to wait for the download to finish here. - -![Popcorn Time in Ubuntu Linux][5] -Popcorn Time - -If you want to watch movies online without those creepy ads, Popcorn Time is a good alternative. Keep in mind that the streaming quality depends on the number of available seeds. - -Popcorn Time also provides a nice user interface where you can browse through available movies, tv-series and other contents. If you ever used [Netflix on Linux][6], you will find it’s somewhat a similar experience. - -Using torrent to download movies is illegal in several countries where there are strict laws against piracy. In countries like the USA, UK and West European you may even get legal notices. 
That said, it’s up to you to decide if you want to use it or not. You have been warned. -(If you still want to take the risk and use Popcorn Time, you should use a VPN service like [Ivacy][7] that has been specifically designed for using Torrents and protecting your identity. Even then it’s not always easy to avoid the snooping authorities.) - -Some of the main features of Popcorn Time are: - - * Watch movies and TV Series online using Torrent - * A sleek user interface lets you browse the available movies and TV series - * Change streaming quality - * Bookmark content for watching later - * Download content for offline viewing - * Ability to enable subtitles by default, change the subtitles size etc - * Keyboard shortcuts to navigate through Popcorn Time - - - -### How to install Popcorn Time on Ubuntu and other Linux Distributions - -I am using Ubuntu 18.04 in this tutorial but you can use the same instructions for other Linux distributions such as Linux Mint, Debian, Manjaro, Deepin etc. - -Let’s see how to install Popcorn time on Linux. It’s really easy actually. Simply follow the instructions and copy paste the commands I have mentioned. - -#### Step 1: Download Popcorn Time - -You can download Popcorn Time from its official website. The download link is present on the homepage itself. - -[Get Popcorn Time](https://popcorntime.sh/) - -#### Step 2: Install Popcorn Time - -Once you have downloaded Popcorn Time, it’s time to use it. The downloaded file is a tar file that consists of an executable among other files. While you can extract this tar file anywhere, the [Linux convention is to install additional software in][8] /[opt directory.][8] - -Create a new directory in /opt: - -``` -sudo mkdir /opt/popcorntime -``` - -Now go to the Downloads directory. - -``` -cd ~/Downloads -``` - -Extract the downloaded Popcorn Time files into the newly created /opt/popcorntime directory. 
- -``` -sudo tar Jxf Popcorn-Time-* -C /opt/popcorntime -``` - -#### Step 3: Make Popcorn Time accessible for everyone - -You would want every user on your system to be able to run Popcorn Time without sudo access, right? To do that, you need to create a [symbolic link][9] to the executable in /usr/bin directory. - -``` -ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time -``` - -#### Step 4: Create desktop launcher for Popcorn Time - -So far so good. But you would also like to see Popcorn Time in the application menu, add it to your favorite application list etc. - -For that, you need to create a desktop entry. - -Open a terminal and create a new file named popcorntime.desktop in /usr/share/applications. - -You can use any [command line based text editor][10]. Ubuntu has [Nano][11] installed by default so you can use that. - -``` -sudo nano /usr/share/applications/popcorntime.desktop -``` - -Insert the following lines here: - -``` -[Desktop Entry] -Version = 1.0 -Type = Application -Terminal = false -Name = Popcorn Time -Exec = /usr/bin/Popcorn-Time -Icon = /opt/popcorntime/popcorn.png -Categories = Application; -``` - -If you used Nano editor, save it using shortcut Ctrl+X. When asked for saving, enter Y and then press enter again to save and exit. - -We are almost there. One last thing to do here is to have the correct icon for Popcorn Time. For that, you can download a Popcorn Time icon and save it as popcorn.png in /opt/popcorntime directory. - -You can do that using the command below: - -``` -sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia/commons/d/df/Pctlogo.png - -``` - -That’s it. Now you can search for Popcorn Time and click on it to launch it. - -![Popcorn Time installed on Ubuntu][12] -Search for Popcorn Time in Menu - -On the first launch, you’ll have to accept the terms and conditions. - -![Popcorn Time in Ubuntu Linux][13] -Accept the Terms of Service - -Once you do that, you can enjoy the movies and TV shows. 
- -![Watch movies on Popcorn Time][14] - -Well, that’s all you need to install Popcorn Time on Ubuntu or any other Linux distribution. You can start watching your favorite movies straightaway. - -However, if you are interested, I would suggest reading these Popcorn Time tips to get more out of it. - -[![][15]][16] -![][17] - -### 7 Tips for using Popcorn Time effectively - -Now that you have installed Popcorn Time, I am going to tell you some nifty Popcorn Time tricks. I assure you that they will enhance your experience with Popcorn Time manifold. - -#### 1\. Use advanced settings - -Always have the advanced settings enabled. They give you more options to tweak Popcorn Time. Go to the top right corner and click on the gear symbol, then check the advanced settings on the next screen. - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tricks.jpeg) - -#### 2\. Watch the movies in VLC or other players - -Did you know that you can choose to watch a file in your preferred media player instead of the default Popcorn Time player? Of course, that media player should already be installed on the system. - -Now you may ask why one would want to use another player. My answer is that other players like VLC have hidden features which you might not find in the Popcorn Time player. - -For example, if a file has very low volume, you can use VLC to enhance the audio by 400 percent. You can also [synchronize incoherent subtitles with VLC][18]. You can switch between media players before you start to play a file: - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks_1.png) - -#### 3\. Bookmark movies and watch them later - -Just browsing through movies and TV series but don’t have the time or mood to watch them? No problem. You can bookmark the movies and access these bookmarked videos from the Favorites tab. This enables you to create a list of movies you would watch later. 
- -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks2.png) - -#### 4\. Check torrent health and seed information - -As I mentioned earlier, your viewing experience in Popcorn Time depends on the torrent speed. The good thing is that Popcorn Time shows the health of the torrent file so that you can be aware of the streaming speed. - -You will see a green/yellow/red dot on the file. Green means there are plenty of seeds and the file will stream easily. Yellow means a medium number of seeds, so streaming should be okay. Red means there are very few seeds available and the streaming will be poor or won’t work at all. - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks3.jpg) - -#### 5\. Add custom subtitles - -If you need subtitles and they are not available in your preferred language, you can add custom subtitles downloaded from external websites. Get the .srt files and use them inside Popcorn Time: - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocporn_Time_Tricks5.png) - -This is where VLC comes in handy, as you can [download subtitles automatically with VLC][19]. - - -#### 6\. Save the files for offline viewing - -When Popcorn Time streams content, it downloads and stores it temporarily. When you close the app, it’s cleaned out. You can change this behavior so that the downloaded file remains there for your future use. - -In the advanced settings, scroll down a bit. Look for Cache directory. You can change this to some other directory like Downloads. This way, even if you close Popcorn Time, the file will be available for viewing. - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tips.jpg) - -#### 7\. Drag and drop external torrent files to play immediately - -I bet you did not know about this one. If you don’t find a certain movie on Popcorn Time, download the torrent file from your favorite torrent website. 
Open Popcorn Time and just drag and drop the torrent file in Popcorn Time. It will start playing the file, depending upon the seeds. This way, you don’t need to download the entire file before watching it. - -When you drag and drop the torrent file in Popcorn Time, it will give you the option to choose which video file it should play. If there are subtitles in it, they will play automatically; otherwise, you can add external subtitles. - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks4.png) - -There are plenty of other features in Popcorn Time. But I’ll stop with my list here and let you explore Popcorn Time on Ubuntu Linux. I hope you find these Popcorn Time tips and tricks useful. - -I repeat: using torrents is illegal in many countries. If you do that, take precautions and use a VPN service. If you are looking for my recommendation, you can go for [Swiss-based privacy company ProtonVPN][20] (of [ProtonMail][21] fame). Singapore-based [Ivacy][7] is another good option. If you think these are expensive, you can look for [cheap VPN deals on It’s FOSS Shop][22]. - -Note: This article contains affiliate links. Please read our [affiliate policy][23]. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/popcorn-time-ubuntu-linux/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: https://popcorntime.sh/ -[2]: https://netflix.com/ -[3]: https://en.wikipedia.org/wiki/Torrent_file -[4]: https://en.wikipedia.org/wiki/Popcorn_Time -[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-linux.jpeg -[6]: https://itsfoss.com/netflix-firefox-linux/ -[7]: https://billing.ivacy.com/page/23628 -[8]: http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html -[9]: https://en.wikipedia.org/wiki/Symbolic_link -[10]: https://itsfoss.com/command-line-text-editors-linux/ -[11]: https://itsfoss.com/nano-3-release/ -[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-menu.jpg -[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-license.jpeg -[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-watch-movies.jpeg -[15]: https://ivacy.postaffiliatepro.com/accounts/default1/vdegzkxbw/7f82d531.png -[16]: https://billing.ivacy.com/page/23628/7f82d531 -[17]: http://ivacy.postaffiliatepro.com/scripts/vdegzkxiw?aff=23628&a_bid=7f82d531 -[18]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/ -[19]: https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/ -[20]: https://protonvpn.net/?aid=chmod777 -[21]: https://itsfoss.com/protonmail/ -[22]: https://shop.itsfoss.com/search?utf8=%E2%9C%93&query=vpn -[23]: https://itsfoss.com/affiliate-policy/ diff --git a/translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux 
Distributions.md b/translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md new file mode 100644 index 0000000000..5d7b3f772b --- /dev/null +++ b/translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md @@ -0,0 +1,230 @@ +# 如何在 Ubuntu 以及其他 Linux 发行版上安装 Popcorn Time + +**简要:这篇教程展示给你如何在 Ubuntu 和其他 Linux 发行版上安装 Popcorn Time,也会讨论一些 Popcorn Time 的便捷操作** + +[Popcorn Time][1] 是一个受开源 [Netflix][2] 启发的 [torrent][3] 流媒体应用,可以在 Linux,Mac上Windows 上运行。 + +传统的 torrents,在你看影片之前必须等待它下载完成。 + +[Popcorn Time][4] 有所不同。它的使用基于 torrent,但是允许你(几乎)立即开始观看影片。他跟你在 Youtube 或者 Netflix 等流媒体网页上看影片一样,无需等待它下载完成。 + +![Popcorn Time in Ubuntu Linux][5] +Popcorn Time + +如果你不想在看在线电影时被突如其来的广告吓倒的话,Popcorn Time 是一个不错的选择。不过要记得,它的播放质量依赖于当前网络中可用的种子(seeds)数。 + +Popcorn Time 还提供了一个不错的用户界面,让你能够浏览可用的电影,电视剧和其他视频内容。如果你曾经[在 Linux 上使用过 Netflix][6],你会发现两者有一些相似之处。 + +有些国家严格打击盗版,所以使用 torrent 下载电影是违法行为。在类似美国,英国和西欧等一些国家,你或许曾经收到过法律声明。也就是说,是否使用取决于你。已经警告过你了。 +(如果你仍想要冒险使用 Popcorn Time,你应该使用像 [Ivacy][7] 这样的 VPN 服务,它为使用 Torrents 和保护隐私有特别的设计。即便这样,也不能完全避免被查到。) + +Popcorn Time 一些主要的特点: + + * 使用 Torrent 在线观看电影和电视剧 + * 有一个时尚的用户界面让你浏览可用的电影和电视剧资源 + * 调整流媒体的质量 + * 标记为稍后观看 + * 下载为离线观看 + * 可以默认开启字幕,改变字母尺寸等 + * 使用键盘快捷键浏览 + + +### 如何在 Ubuntu 和其它 Linux 发行版上安装 Popcorn Time + +这篇教程以 Ubuntu 18.04 为例,但是你可以使用类似的结构,在例如 Linux Mint,Debian,Manjaro,Deepin等 Linux 发行版上安装。 + +接下来我们看该如何在 Linux 上安装 Popcorn Time。事实上,这个过程非常简单。只需要按照说明操作复制粘贴我提到的这些命令即可。 + +#### 第一步:下载 Popcorn Time + +你可以从它的官网上安装 Popcorn Time。它主页上的下载链接是。 +[Get Popcorn Time](https://popcorntime.sh/) + +#### 第二步:安装 Popcorn Time + +下载完成之后,就该使用它了。下载下来的是一个 tar 文件,在这些文件里面包含有一个可执行文件。你可以把 tar 文件提取在任何位置,[Linux 常把附加软件安装在][8] /[opt 目录。][8] + +在 /opt 下创建一个新的目录: + +``` +sudo mkdir /opt/popcorntime +``` + +现在进入你下载文件的文件夹中,比如我把 Popcorn Time 下载到了主目录的 Downloads目录下。 + +``` +cd ~/Downloads +``` + +提取下载好的 Popcorn Time 文件到新创建的 /opt/popcorntime 目录下 + +``` +sudo tar Jxf Popcorn-Time-* -C /opt/popcorntime +``` + +#### 第三步:让所有用户可以使用 
Popcorn Time + +如果你想要系统中所有的用户无需经过 sudo 就可以运行 Popcorn Time,你需要在 /usr/bin 目录下创建一个[符号链接(软链接)][9]指向这个可执行文件。 + +``` +sudo ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time +``` + +#### 第四步:为 Popcorn Time 创建桌面启动器 + +到目前为止,一切顺利,但是你也许想要在应用菜单里看到 Popcorn Time,又或是想把它添加到最喜欢的应用列表里等。 + +为此,你需要创建一个桌面入口。 + +打开一个终端窗口,在 /usr/share/applications 目录下创建一个名为 popcorntime.desktop 的文件。 + +你可以使用任何[基于命令行的文本编辑器][10]。Ubuntu 默认安装了 [Nano][11],所以你可以直接使用这个。 + +``` +sudo nano /usr/share/applications/popcorntime.desktop +``` + +在里面插入以下内容: + +``` +[Desktop Entry] +Version = 1.0 +Type = Application +Terminal = false +Name = Popcorn Time +Exec = /usr/bin/Popcorn-Time +Icon = /opt/popcorntime/popcorn.png +Categories = Application; +``` + +如果你使用的是 Nano 编辑器,使用 Ctrl+X 保存输入的内容,当询问是否保存时,输入 Y,然后按回车保存并退出。 + +就快要完成了。最后一件事就是为 Popcorn Time 设置一个正确的图标。你可以下载一个 Popcorn Time 图标到 /opt/popcorntime 目录下,并命名为 popcorn.png。 + +你可以使用以下命令: + +``` +sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia/commons/d/df/Pctlogo.png +``` + +这样就 OK 了。现在你可以搜索 Popcorn Time 然后点击启动它了。 + +![Popcorn Time installed on Ubuntu][12] +在菜单里搜索 Popcorn Time + +第一次启动时,你必须接受这些条款和条件。 + +![Popcorn Time in Ubuntu][13] +接受这些服务条款 + +一旦你完成这些,你就可以享受你的电影和电视节目了。 + +![Watch movies on Popcorn Time][14] + +好了,这就是所有你在 Ubuntu 或者其他 Linux 发行版上安装 Popcorn Time 所需要的了。你可以直接开始看你最喜欢的影视节目了。 + +当然,如果你有兴趣的话,我建议你阅读以下关于 Popcorn Time 的小贴士,可以学到更多。 + +[![][15]][16] +![][17] + +### 高效使用 Popcorn Time 的七个小贴士 + +现在你已经安装好了 Popcorn Time 了,我接下来将要告诉你一些有用的 Popcorn Time 技巧。我保证它会增强你使用 Popcorn Time 的体验。 + +#### 1\. 使用高级设置 + +始终启用高级设置。它给了你更多的选项去调整 Popcorn Time。点击右上角的齿轮图标,查看其中的高级设置。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tricks.jpeg) + +#### 2\. 
在 VLC 或者其他播放器里观看影片 + +你知道你可以选择自己喜欢的播放器而不是 Popcorn Time 默认的播放器观看一个视频吗?当然,这个播放器必须已经安装在你的系统上了。 + +现在你可能会问为什么要使用其他的播放器。我的回答是:其他播放器可以弥补 Popcorn Time 默认播放器上的一些不足。 + +例如,如果一个文件的声音非常小,你可以使用 VLC 将音频声音增强 400%,你还可以[使用 VLC 同步不连贯的字幕][18]。你可以在播放文件之前在不同的媒体播放器之间进行切换。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks_1.png) + +#### 3\. 将影片标记为稍后观看 + +只是浏览电影和电视节目,但是却没有时间和精力去看?这不是问题。你可以添加这些影片到书签里面,稍后可以在 Favorites 标签里面访问这些影片。这可以让你创建一个你想要稍后观看的列表。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks2.png) + +#### 4\. 检查 torrent 的健康状况和种子信息 + +像我之前提到的,你在 Popcorn Time 的观看体验依赖于 torrent 的速度。好消息是 Popcorn Time 显示了 torrent 的健康状况,因此你可以知道流媒体的速度。 + +你可以在文件上看到一个绿色 / 黄色 / 红色的点。绿色意味着有足够的种子,文件很容易播放。黄色意味着有中等数量的种子,应该可以播放。红色意味着只有非常少可用的种子,播放的速度会很慢甚至无法观看。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks3.jpg) + +#### 5\. 添加自定义字幕 + +如果你需要字幕,而且没有你想要的语言,你可以从外部网站下载自定义字幕。得到 .srt 文件,然后就可以在 Popcorn Time 中使用它: + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocporn_Time_Tricks5.png) + +这也是 VLC 的方便之处,你可以[用 VLC 自动下载字幕][19]。 + +#### 6\. 保存文件离线观看 + +用 Popcorn Time 播放内容时,它会下载并暂时存储这些内容。当你关闭 APP 时,缓存会被清理干净。你可以更改这个操作,使得下载的文件可以保存下来供你未来使用。 + +在高级设置里面,向下滚动一点。找到缓存目录,你可以把它更改到其他像是 Downloads 目录,这下你即便关闭了 Popcorn Time,这些文件依旧可以观看。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tips.jpg) + +#### 7\. 
拖放外部 torrent 文件立即播放 + +我猜你不知道这个操作。如果你没有在 Popcorn Time 发现某些影片,从你最喜欢的 torrent 网站下载 torrent 文件,打开 Popcorn Time,然后拖放这个 torrent 文件到 Popcorn Time 里面。它将会立即播放文件,当然这个取决于种子。这样你就不需要在观看前下载整个文件了。 + +当你拖放文件到 Popcorn Time 后,它会给你选项,让你选择要播放哪个视频文件。如果里面有字幕,它会自动播放,否则你需要添加外部字幕。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks4.png) + +在 Popcorn Time 里面有很多的功能,但是我决定就此打住,剩下的就由你自己来探索吧。我希望你能发现更多 Popcorn Time 有用的功能和技巧。 + +我再提醒一遍,使用 torrent 在很多国家是违法的。如果你还是这样做了,请做好防护措施,并使用 VPN 服务。如果你想要我的建议,你可以去看一下(让 [ProtonMail][21] 成名的)[瑞士的隐私公司 ProtonVPN][20]。新加坡的 [Ivacy][7] 也是一个不错的选择。如果你觉得这些都太贵了,你可以看一下 [It’s FOSS 商店上的廉价 VPN 优惠][22]。 + +注意:这篇文章里包含了推广链接,请阅读我们的[推广政策][23]。 + + +----------------------------------- + +via: https://itsfoss.com/popcorn-time-ubuntu-linux/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://popcorntime.sh/ +[2]: https://netflix.com/ +[3]: https://en.wikipedia.org/wiki/Torrent_file +[4]: https://en.wikipedia.org/wiki/Popcorn_Time +[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-linux.jpeg +[6]: https://itsfoss.com/netflix-firefox-linux/ +[7]: https://billing.ivacy.com/page/23628 +[8]: http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html +[9]: https://en.wikipedia.org/wiki/Symbolic_link +[10]: https://itsfoss.com/command-line-text-editors-linux/ +[11]: https://itsfoss.com/nano-3-release/ +[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-menu.jpg +[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-license.jpeg +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-watch-movies.jpeg +[15]: 
https://ivacy.postaffiliatepro.com/accounts/default1/vdegzkxbw/7f82d531.png +[16]: https://billing.ivacy.com/page/23628/7f82d531 +[17]: http://ivacy.postaffiliatepro.com/scripts/vdegzkxiw?aff=23628&a_bid=7f82d531 +[18]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/ +[19]: https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/ +[20]: https://protonvpn.net/?aid=chmod777 +[21]: https://itsfoss.com/protonmail/ +[22]: https://shop.itsfoss.com/search?utf8=%E2%9C%93&query=vpn +[23]: https://itsfoss.com/affiliate-policy/ From a66fc2babc22e5bdfc580cae6c09d7e9482e1fc1 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Mon, 1 Oct 2018 20:42:23 +0800 Subject: [PATCH 167/736] hankchow translating --- ...180823 How To Easily And Safely Manage Cron Jobs In Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md b/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md index aa4ec0a655..3f65ac7825 100644 --- a/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md +++ b/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md @@ -1,3 +1,5 @@ +HankChow translating + How To Easily And Safely Manage Cron Jobs In Linux ====== From 2b39f9d568e29a7c0d2af208f8b3403f7414abfb Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 1 Oct 2018 21:16:27 +0800 Subject: [PATCH 168/736] PRF:20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md @HankChow --- ...ke Screenshot in Linux GUI and Terminal.md | 100 ++++++++---------- 1 file changed, 47 insertions(+), 53 deletions(-) diff --git a/translated/tech/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md b/translated/tech/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md index b8872981fe..c6618b9a52 100644 --- a/translated/tech/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md +++ 
b/translated/tech/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md @@ -1,6 +1,7 @@ -5 种在 Linux 图形界面或命令行界面截图的方法 +在 Linux 下截屏并编辑的最佳工具 ====== -下面介绍几种获取屏幕截图并对其编辑的方法,而且其中的屏幕截图工具在 Ubuntu 和其它主流 Linux 发行版中都能够使用。 + +> 有几种获取屏幕截图并对其进行添加文字、箭头等编辑的方法,这里提及的的屏幕截图工具在 Ubuntu 和其它主流 Linux 发行版中都能够使用。 ![在 Ubuntu Linux 中如何获取屏幕截图][1] @@ -8,26 +9,26 @@ 本文将会介绍在不适用第三方工具的情况下,如何通过系统自带的方法和工具获取屏幕截图,另外还会介绍一些可用于 Linux 的最佳截图工具。 -### 方法 1: 在 Linux 中截图的默认方式 +### 方法 1:在 Linux 中截图的默认方式 -你是否需要截取整个屏幕?屏幕中的某个区域?某个特定的窗口? +你想要截取整个屏幕?屏幕中的某个区域?某个特定的窗口? 如果只需要获取一张屏幕截图,不对其进行编辑的话,那么键盘的默认快捷键就可以满足要求了。而且不仅仅是 Ubuntu ,绝大部分的 Linux 发行版和桌面环境都支持以下这些快捷键: -**PrtSc** – 获取整个屏幕的截图并保存到 Pictures 目录。 -**Shift + PrtSc** – 获取屏幕的某个区域截图并保存到 Pictures 目录。 -**Alt + PrtSc** –获取当前窗口的截图并保存到 Pictures 目录。 -**Ctrl + PrtSc** – 获取整个屏幕的截图并存放到剪贴板。 -**Shift + Ctrl + PrtSc** – 获取屏幕的某个区域截图并存放到剪贴板。 -**Ctrl + Alt + PrtSc** – 获取当前窗口的 截图并存放到剪贴板。 +- `PrtSc` – 获取整个屏幕的截图并保存到 Pictures 目录。 +- `Shift + PrtSc` – 获取屏幕的某个区域截图并保存到 Pictures 目录。 +- `Alt + PrtSc` –获取当前窗口的截图并保存到 Pictures 目录。 +- `Ctrl + PrtSc` – 获取整个屏幕的截图并存放到剪贴板。 +- `Shift + Ctrl + PrtSc` – 获取屏幕的某个区域截图并存放到剪贴板。 +- `Ctrl + Alt + PrtSc` – 获取当前窗口的 截图并存放到剪贴板。 如上所述,在 Linux 中使用默认的快捷键获取屏幕截图是相当简单的。但如果要在不把屏幕截图导入到其它应用程序的情况下对屏幕截图进行编辑,还是使用屏幕截图工具比较方便。 -#### **方法 2: 在 Linux 中使用 Flameshot 获取屏幕截图并编辑** +### 方法 2:在 Linux 中使用 Flameshot 获取屏幕截图并编辑 ![flameshot][2] -功能概述 +功能概述: * 注释 (高亮、标示、添加文本、框选) * 图片模糊 @@ -35,66 +36,63 @@ * 上传到 Imgur * 用另一个应用打开截图 +Flameshot 在去年发布到 [GitHub][3],并成为一个引人注目的工具。 - -Flameshot 在去年发布到 [GitHub][3],并成为一个引人注目的工具。如果你需要的是一个能够用于标注、模糊、上传到 imgur 的新式截图工具,那么 Flameshot 是一个好的选择。 +如果你需要的是一个能够用于标注、模糊、上传到 imgur 的新式截图工具,那么 Flameshot 是一个好的选择。 下面将会介绍如何安装 Flameshot 并根据你的偏好进行配置。 如果你用的是 Ubuntu,那么只需要在 Ubuntu 软件中心上搜索,就可以找到 Flameshot 进而完成安装了。要是你想使用终端来安装,可以执行以下命令: + ``` sudo apt install flameshot - ``` -如果你在安装过程中遇到问题,可以按照[官方的安装说明][4]进行操作。安装完成后,你还需要进行配置。尽管可以通过搜索来随时启动 Flameshot,但如果想使用 PrtSc 键触发启动,则需要指定对应的键盘快捷键。以下是相关配置步骤: +如果你在安装过程中遇到问题,可以按照[官方的安装说明][4]进行操作。安装完成后,你还需要进行配置。尽管可以通过搜索来随时启动 Flameshot,但如果想使用 `PrtSc` 
键触发启动,则需要指定对应的键盘快捷键。以下是相关配置步骤: - * 进入系统设置中的键盘设置 - * 页面中会列出所有现有的键盘快捷键,拉到底部就会看见一个 **+** 按钮 + * 进入系统设置中的“键盘设置” + * 页面中会列出所有现有的键盘快捷键,拉到底部就会看见一个 “+” 按钮 * 点击 “+” 按钮添加自定义快捷键并输入以下两个字段: -**名称:** 任意名称均可 -**命令:** /usr/bin/flameshot gui - * 最后将这个快捷操作绑定到 **PrtSc** 键上,可能会提示与系统的截图功能相冲突,但可以忽略掉这个警告。 - - + * “名称”: 任意名称均可。 + * “命令”: `/usr/bin/flameshot gui` + * 最后将这个快捷操作绑定到 `PrtSc` 键上,可能会提示与系统的截图功能相冲突,但可以忽略掉这个警告。 配置之后,你的自定义快捷键页面大概会是以下这样: ![][5] -将键盘快捷键映射到 Flameshot -### **方法 3: 在 Linux 中使用 Shutter 获取屏幕截图并编辑** +*将键盘快捷键映射到 Flameshot* + +### 方法 3:在 Linux 中使用 Shutter 获取屏幕截图并编辑 ![][6] -功能概述: +功能概述: * 注释 (高亮、标示、添加文本、框选) * 图片模糊 * 图片裁剪 * 上传到图片网站 - - [Shutter][7] 是一个对所有主流 Linux 发行版都适用的屏幕截图工具。尽管最近已经不太更新了,但仍然是操作屏幕截图的一个优秀工具。 -在使用过程中可能会遇到这个工具的一些缺陷。Shutter 在任何一款最新的 Linux 发行版上最常见的问题就是由于缺少了任务栏上的程序图标,导致默认禁用了编辑屏幕截图的功能。 对于这个缺陷,还是有解决方案的。下面介绍一下如何[在 Shutter 中重新打开这个功能并将程序图标在任务栏上显示出来][8]。问题修复后,就可以使用 Shutter 来快速编辑屏幕截图了。 +在使用过程中可能会遇到这个工具的一些缺陷。Shutter 在任何一款最新的 Linux 发行版上最常见的问题就是由于缺少了任务栏上的程序图标,导致默认禁用了编辑屏幕截图的功能。 对于这个缺陷,还是有解决方案的。你只需要跟随我们的教程[在 Shutter 中修复这个禁止编辑选项并将程序图标在任务栏上显示出来][8]。问题修复后,就可以使用 Shutter 来快速编辑屏幕截图了。 同样地,在软件中心搜索也可以找到进而安装 Shutter,也可以在基于 Ubuntu 的发行版中执行以下命令使用命令行安装: + ``` sudo apt install shutter - ``` -类似 Flameshot,你可以通过搜索 Shutter 手动启动它,也可以按照相似的方式设置自定义快捷方式以 **PrtSc** 键唤起 Shutter。 +类似 Flameshot,你可以通过搜索 Shutter 手动启动它,也可以按照相似的方式设置自定义快捷方式以 `PrtSc` 键唤起 Shutter。 如果要指定自定义键盘快捷键,只需要执行以下命令: + ``` shutter -f - ``` -### 方法 4: 在 Linux 中使用 GIMP 获取屏幕截图 +### 方法 4:在 Linux 中使用 GIMP 获取屏幕截图 ![][9] @@ -103,83 +101,79 @@ shutter -f * 高级图像编辑功能(缩放、添加滤镜、颜色校正、添加图层、裁剪等) * 截取某一区域的屏幕截图 - - 如果需要对屏幕截图进行一些预先编辑,GIMP 是一个不错的选择。 通过软件中心可以安装 GIMP。如果在安装时遇到问题,可以参考其[官方网站的安装说明][10]。 -要使用 GIMP 获取屏幕截图,需要先启动程序,然后通过 **File-> Create-> Screenshot** 导航。 +要使用 GIMP 获取屏幕截图,需要先启动程序,然后通过 “File-> Create-> Screenshot” 导航。 -打开 Screenshot 选项后,会看到几个控制点来控制屏幕截图范围。点击 **Snap** 截取屏幕截图,图像将自动显示在 GIMP 中可供编辑。 +打开 Screenshot 选项后,会看到几个控制点来控制屏幕截图范围。点击 “Snap” 截取屏幕截图,图像将自动显示在 GIMP 中可供编辑。 -### 方法 5: 在 Linux 中使用命令行工具获取屏幕截图 +### 方法 5:在 Linux 中使用命令行工具获取屏幕截图 
-这一节内容仅适用于终端爱好者。如果你也喜欢使用终端,可以使用 **GNOME 截图工具**或 **ImageMagick** 或 **Deepin Scrot**,大部分流行的 Linux 发行版中都自带这些工具。 +这一节内容仅适用于终端爱好者。如果你也喜欢使用终端,可以使用 “GNOME 截图工具”或 “ImageMagick” 或 “Deepin Scrot”,大部分流行的 Linux 发行版中都自带这些工具。 要立即获取屏幕截图,可以执行以下命令: -#### GNOME Screenshot(可用于 GNOME 桌面) +#### GNOME 截图工具(可用于 GNOME 桌面) + ``` gnome-screenshot - ``` -GNOME Screenshot 是使用 GNOME 桌面的 Linux 发行版中都自带的一个默认工具。如果需要延时获取屏幕截图,可以执行以下命令(这里的 **5** 是需要延迟的秒数): +GNOME 截图工具是使用 GNOME 桌面的 Linux 发行版中都自带的一个默认工具。如果需要延时获取屏幕截图,可以执行以下命令(这里的 `5` 是需要延迟的秒数): ``` gnome-screenshot -d -5 - ``` #### ImageMagick 如果你的操作系统是 Ubuntu、Mint 或其它流行的 Linux 发行版,一般会自带 [ImageMagick][11] 这个工具。如果没有这个工具,也可以按照[官方安装说明][12]使用安装源来安装。你也可以在终端中执行这个命令: + ``` sudo apt-get install imagemagick - ``` 安装完成后,执行下面的命令就可以获取到屏幕截图(截取整个屏幕): ``` import -window root image.png - ``` -这里的“image.png”就是屏幕截图文件保存的名称。 +这里的 “image.png” 就是屏幕截图文件保存的名称。 要获取屏幕一个区域的截图,可以执行以下命令: + ``` import image.png - ``` #### Deepin Scrot Deepin Scrot 是基于终端的一个较新的截图工具。和前面两个工具类似,一般自带于 Linux 发行版中。如果需要自行安装,可以执行以下命令: + ``` sudo apt-get install scrot - ``` 安装完成后,使用下面这些命令可以获取屏幕截图。 获取整个屏幕的截图: + ``` scrot myimage.png - ``` 获取屏幕某一区域的截图: + ``` scrot -s myimage.png - ``` ### 总结 -以上是一些在 Linux 上的优秀截图工具。当然还有很多截图工具没有提及(例如 [Spectacle][13] for KDE-distros),但相比起来还是上面几个工具更为好用。 +以上是一些在 Linux 上的优秀截图工具。当然还有很多截图工具没有提及(例如用于 KDE 发行版的 [Spectacle][13]),但相比起来还是上面几个工具更为好用。 如果你有比文章中提到的更好的截图工具,欢迎讨论! 
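上面介绍的几个命令行截图工具不一定都预装在同一台机器上。下面是一个简单的示意脚本(输出文件名 shot.png 只是假设),按顺序探测哪一个工具在 PATH 中可用,并打印对应的整屏截图命令:

```shell
#!/bin/sh
# 依次探测候选工具,返回第一个在 PATH 中可用的工具名;
# 纯示意脚本,工具名取自上文介绍的三种命令行截图工具。
pick_screenshot_cmd() {
    for tool in "$@"; do
        if command -v "$tool" > /dev/null 2>&1; then
            printf '%s\n' "$tool"
            return 0
        fi
    done
    return 1
}

if tool=$(pick_screenshot_cmd gnome-screenshot import scrot); then
    # 注:shot.png 只是示例文件名
    case "$tool" in
        gnome-screenshot) echo "可执行:gnome-screenshot -f shot.png" ;;
        import)           echo "可执行:import -window root shot.png" ;;
        scrot)            echo "可执行:scrot shot.png" ;;
    esac
else
    echo "本机没有安装上述任何截图工具"
fi
```

把 case 分支里 echo 引号内的命令直接执行即可真正截图;其中 gnome-screenshot 的 `-f` 选项用于指定输出文件名。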
@@ -189,8 +183,8 @@ via: https://itsfoss.com/take-screenshot-linux/ 作者:[Ankush Das][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0a57a6ddac60f2f27e3b7fd59b07ecbdaae91756 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 1 Oct 2018 21:17:50 +0800 Subject: [PATCH 169/736] PUB:20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md @HankChow https://linux.cn/article-10070-1.html --- ...0180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md (100%) diff --git a/translated/tech/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md b/published/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md similarity index 100% rename from translated/tech/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md rename to published/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md From 2e08815b78381e9c561c194c1d45061f8a54b121 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 1 Oct 2018 21:23:21 +0800 Subject: [PATCH 170/736] =?UTF-8?q?=E5=BD=92=E6=A1=A3=20201809?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20140805 How to Install Cinnamon Desktop on Ubuntu.md | 0 ...loud Commander - A Web File Manager With Console And Editor.md | 0 ...20170706 Docker Guide Dockerizing Python Django Application.md | 0 .../20170709 The Extensive Guide to Creating Streams in RxJS.md | 0 ...ow To Set Up PF Firewall on FreeBSD to Protect a Web Server.md | 0 ... 
Trash-Cli - A Command Line Interface For Trashcan On Linux.md | 0 published/{ => 201809}/20171010 Operating a Kubernetes network.md | 0 published/{ => 201809}/20171124 How do groups work on Linux.md | 0 .../20171202 Scrot Linux command-line screen grabs made simple.md | 0 ... Top 7 open source project management tools for agile teams.md | 0 .../20180131 What I Learned from Programming Interviews.md | 0 ...some amazing advantages of Go that you dont hear much about.md | 0 ...0180203 API Star- Python 3 API Framework - Polyglot.Ninja().md | 0 .../20180226 Linux Virtual Machines vs Linux Live Images.md | 0 .../{ => 201809}/20180308 What is open source programming.md | 0 .../20180316 How to Encrypt Files From Within a File Manager.md | 0 .../20180324 How To Compress And Decompress Files In Linux.md | 0 ...30 minutes with Hugo, a static site generator written in Go.md | 0 .../20180402 Understanding Linux filesystems- ext4 and beyond.md | 0 .../{ => 201809}/20180424 A gentle introduction to FreeDOS.md | 0 ...tanding metrics and monitoring with Python - Opensource.com.md | 0 .../20180427 An Official Introduction to the Go Compiler.md | 0 published/{ => 201809}/20180516 How Graphics Cards Work.md | 0 .../{ => 201809}/20180516 Manipulating Directories in Linux.md | 0 published/{ => 201809}/20180518 Mastering CI-CD at OpenDev.md | 0 .../20180525 Getting started with the Python debugger.md | 0 ...29 How To Add Additional IP (Secondary IP) In Ubuntu System.md | 0 .../20180618 Twitter Sentiment Analysis using NodeJS.md | 0 ...w to build a professional network when you work in a bazaar.md | 0 ...ents Of An Archive Or Compressed File Without Extracting It.md | 0 .../20180703 Understanding Python Dataclasses — Part 1.md | 0 .../20180706 Anatomy of a Linux DNS Lookup - Part III.md | 0 ...0 How To View Detailed Information About A Package In Linux.md | 0 published/{ => 201809}/20180717 Getting started with Etcher.io.md | 0 published/{ => 201809}/20180720 An Introduction to Using 
Git.md | 0 ...o Install 2048 Game in Ubuntu and Other Linux Distributions.md | 0 .../20180720 How to build a URL shortener with Apache.md | 0 .../20180725 How do private keys work in PKI and cryptography.md | 0 .../20180730 7 Python libraries for more maintainable code.md | 0 .../20180730 How to use VS Code for your Python projects.md | 0 ...lps Linux Beginners To Choose A Suitable Linux Distribution.md | 0 ...03 10 Popular Windows Apps That Are Also Available on Linux.md | 0 .../{ => 201809}/20180804 Installing Andriod on VirtualBox.md | 0 .../20180806 Anatomy of a Linux DNS Lookup - Part IV.md | 0 ...nd using Git and GitHub on Ubuntu Linux- A beginner-s guide.md | 0 ...20180808 5 applications to manage your to-do list on Fedora.md | 0 .../20180808 5 open source role-playing games for Linux.md | 0 .../20180810 6 Reasons Why Linux Users Switch to BSD.md | 0 ...Themes Based On Sunrise And Sunset Times With AutomaThemely.md | 0 .../20180813 MPV Player- A Minimalist Video Player for Linux.md | 0 ...elerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md | 0 ...To Switch Between TTYs Without Using Function Keys In Linux.md | 0 .../20180822 What is a Makefile and how does it work.md | 0 .../20180823 An introduction to pipes and named pipes in Linux.md | 0 ...w to publish a WordPress blog to a static GitLab Pages site.md | 0 ...0180824 How to install software from the Linux command line.md | 0 ...180824 Steam Makes it Easier to Play Windows Games on Linux.md | 0 ...cess usr bin dpkg returned an error code 1- Error in Ubuntu.md | 0 ...o capture and analyze packets with tcpdump command on Linux.md | 0 .../{ => 201809}/20180827 An introduction to diffs and patches.md | 0 .../20180828 15 command-line aliases to save you time.md | 0 ...28 A Cat Clone With Syntax Highlighting And Git Integration.md | 0 ...828 How to Play Windows-only Games on Linux with Steam Play.md | 0 ...d GUIs to your programs and scripts easily with PySimpleGUI.md | 0 .../20180830 How To Reset 
MySQL Or MariaDB Root Password.md | 0 .../20180830 How to Update Firmware on Ubuntu 18.04.md | 0 .../20180831 6 open source tools for making your own VPN.md | 0 ...how of Photos in Ubuntu 18.04 and other Linux Distributions.md | 0 ...20180903 Turn your vi editor into a productivity powerhouse.md | 0 .../20180904 8 Linux commands for effective process management.md | 0 published/{ => 201809}/20180904 Why I love Xonsh.md | 0 .../20180905 5 tips to improve productivity with zsh.md | 0 .../20180905 8 great Python libraries for side projects.md | 0 .../20180905 Find your systems easily on a LAN with mDNS.md | 0 .../20180906 3 top open source JavaScript chart libraries.md | 0 .../20180906 Two open source alternatives to Flash Player.md | 0 ...trash - A CLI Tool To Automatically Purge Old Trashed Files.md | 0 .../20180907 What do open source and cooking have in common.md | 0 ... What is ZFS- Why People Use ZFS- [Explained for Beginners].md | 0 ...10 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md | 0 .../{ => 201809}/20180910 3 open source log aggregation tools.md | 0 .../20180910 Randomize your MAC address using NetworkManager.md | 0 .../20180911 Visualize Disk Usage On Your Linux System.md | 0 ...2 How To Configure Mouse Support For Linux Virtual Consoles.md | 0 .../20180917 Linux tricks that can save you time and trouble.md | 0 ...ow To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md | 0 .../20180919 Understand Fedora memory usage with top.md | 0 ...Make The Output Of Ping Command Prettier And Easier To Read.md | 0 ...0180929 Getting started with the i3 window manager on Linux.md | 0 89 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 201809}/20140805 How to Install Cinnamon Desktop on Ubuntu.md (100%) rename published/{ => 201809}/20160503 Cloud Commander - A Web File Manager With Console And Editor.md (100%) rename published/{ => 201809}/20170706 Docker Guide Dockerizing Python Django Application.md (100%) rename published/{ 
=> 201809}/20170709 The Extensive Guide to Creating Streams in RxJS.md (100%) rename published/{ => 201809}/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md (100%) rename published/{ => 201809}/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md (100%) rename published/{ => 201809}/20171010 Operating a Kubernetes network.md (100%) rename published/{ => 201809}/20171124 How do groups work on Linux.md (100%) rename published/{ => 201809}/20171202 Scrot Linux command-line screen grabs made simple.md (100%) rename published/{ => 201809}/20180102 Top 7 open source project management tools for agile teams.md (100%) rename published/{ => 201809}/20180131 What I Learned from Programming Interviews.md (100%) rename published/{ => 201809}/20180201 Here are some amazing advantages of Go that you dont hear much about.md (100%) rename published/{ => 201809}/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md (100%) rename published/{ => 201809}/20180226 Linux Virtual Machines vs Linux Live Images.md (100%) rename published/{ => 201809}/20180308 What is open source programming.md (100%) rename published/{ => 201809}/20180316 How to Encrypt Files From Within a File Manager.md (100%) rename published/{ => 201809}/20180324 How To Compress And Decompress Files In Linux.md (100%) rename published/{ => 201809}/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md (100%) rename published/{ => 201809}/20180402 Understanding Linux filesystems- ext4 and beyond.md (100%) rename published/{ => 201809}/20180424 A gentle introduction to FreeDOS.md (100%) rename published/{ => 201809}/20180425 Understanding metrics and monitoring with Python - Opensource.com.md (100%) rename published/{ => 201809}/20180427 An Official Introduction to the Go Compiler.md (100%) rename published/{ => 201809}/20180516 How Graphics Cards Work.md (100%) rename published/{ => 201809}/20180516 Manipulating Directories in Linux.md 
(100%) rename published/{ => 201809}/20180518 Mastering CI-CD at OpenDev.md (100%) rename published/{ => 201809}/20180525 Getting started with the Python debugger.md (100%) rename published/{ => 201809}/20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md (100%) rename published/{ => 201809}/20180618 Twitter Sentiment Analysis using NodeJS.md (100%) rename published/{ => 201809}/20180626 How to build a professional network when you work in a bazaar.md (100%) rename published/{ => 201809}/20180702 View The Contents Of An Archive Or Compressed File Without Extracting It.md (100%) rename published/{ => 201809}/20180703 Understanding Python Dataclasses — Part 1.md (100%) rename published/{ => 201809}/20180706 Anatomy of a Linux DNS Lookup - Part III.md (100%) rename published/{ => 201809}/20180710 How To View Detailed Information About A Package In Linux.md (100%) rename published/{ => 201809}/20180717 Getting started with Etcher.io.md (100%) rename published/{ => 201809}/20180720 An Introduction to Using Git.md (100%) rename published/{ => 201809}/20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md (100%) rename published/{ => 201809}/20180720 How to build a URL shortener with Apache.md (100%) rename published/{ => 201809}/20180725 How do private keys work in PKI and cryptography.md (100%) rename published/{ => 201809}/20180730 7 Python libraries for more maintainable code.md (100%) rename published/{ => 201809}/20180730 How to use VS Code for your Python projects.md (100%) rename published/{ => 201809}/20180802 Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution.md (100%) rename published/{ => 201809}/20180803 10 Popular Windows Apps That Are Also Available on Linux.md (100%) rename published/{ => 201809}/20180804 Installing Andriod on VirtualBox.md (100%) rename published/{ => 201809}/20180806 Anatomy of a Linux DNS Lookup - Part IV.md (100%) rename published/{ => 201809}/20180806 Installing and using 
Git and GitHub on Ubuntu Linux- A beginner-s guide.md (100%) rename published/{ => 201809}/20180808 5 applications to manage your to-do list on Fedora.md (100%) rename published/{ => 201809}/20180808 5 open source role-playing games for Linux.md (100%) rename published/{ => 201809}/20180810 6 Reasons Why Linux Users Switch to BSD.md (100%) rename published/{ => 201809}/20180810 Automatically Switch To Light - Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely.md (100%) rename published/{ => 201809}/20180813 MPV Player- A Minimalist Video Player for Linux.md (100%) rename published/{ => 201809}/20180815 How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md (100%) rename published/{ => 201809}/20180822 How To Switch Between TTYs Without Using Function Keys In Linux.md (100%) rename published/{ => 201809}/20180822 What is a Makefile and how does it work.md (100%) rename published/{ => 201809}/20180823 An introduction to pipes and named pipes in Linux.md (100%) rename published/{ => 201809}/20180823 How to publish a WordPress blog to a static GitLab Pages site.md (100%) rename published/{ => 201809}/20180824 How to install software from the Linux command line.md (100%) rename published/{ => 201809}/20180824 Steam Makes it Easier to Play Windows Games on Linux.md (100%) rename published/{ => 201809}/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md (100%) rename published/{ => 201809}/20180826 How to capture and analyze packets with tcpdump command on Linux.md (100%) rename published/{ => 201809}/20180827 An introduction to diffs and patches.md (100%) rename published/{ => 201809}/20180828 15 command-line aliases to save you time.md (100%) rename published/{ => 201809}/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md (100%) rename published/{ => 201809}/20180828 How to Play Windows-only Games on Linux with Steam Play.md (100%) rename published/{ => 
201809}/20180829 Add GUIs to your programs and scripts easily with PySimpleGUI.md (100%) rename published/{ => 201809}/20180830 How To Reset MySQL Or MariaDB Root Password.md (100%) rename published/{ => 201809}/20180830 How to Update Firmware on Ubuntu 18.04.md (100%) rename published/{ => 201809}/20180831 6 open source tools for making your own VPN.md (100%) rename published/{ => 201809}/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md (100%) rename published/{ => 201809}/20180903 Turn your vi editor into a productivity powerhouse.md (100%) rename published/{ => 201809}/20180904 8 Linux commands for effective process management.md (100%) rename published/{ => 201809}/20180904 Why I love Xonsh.md (100%) rename published/{ => 201809}/20180905 5 tips to improve productivity with zsh.md (100%) rename published/{ => 201809}/20180905 8 great Python libraries for side projects.md (100%) rename published/{ => 201809}/20180905 Find your systems easily on a LAN with mDNS.md (100%) rename published/{ => 201809}/20180906 3 top open source JavaScript chart libraries.md (100%) rename published/{ => 201809}/20180906 Two open source alternatives to Flash Player.md (100%) rename published/{ => 201809}/20180907 Autotrash - A CLI Tool To Automatically Purge Old Trashed Files.md (100%) rename published/{ => 201809}/20180907 What do open source and cooking have in common.md (100%) rename published/{ => 201809}/20180909 What is ZFS- Why People Use ZFS- [Explained for Beginners].md (100%) rename published/{ => 201809}/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md (100%) rename published/{ => 201809}/20180910 3 open source log aggregation tools.md (100%) rename published/{ => 201809}/20180910 Randomize your MAC address using NetworkManager.md (100%) rename published/{ => 201809}/20180911 Visualize Disk Usage On Your Linux System.md (100%) rename published/{ => 201809}/20180912 How To Configure Mouse Support For Linux 
Virtual Consoles.md (100%) rename published/{ => 201809}/20180917 Linux tricks that can save you time and trouble.md (100%) rename published/{ => 201809}/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md (100%) rename published/{ => 201809}/20180919 Understand Fedora memory usage with top.md (100%) rename published/{ => 201809}/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md (100%) rename published/{ => 201809}/20180929 Getting started with the i3 window manager on Linux.md (100%) diff --git a/published/20140805 How to Install Cinnamon Desktop on Ubuntu.md b/published/201809/20140805 How to Install Cinnamon Desktop on Ubuntu.md similarity index 100% rename from published/20140805 How to Install Cinnamon Desktop on Ubuntu.md rename to published/201809/20140805 How to Install Cinnamon Desktop on Ubuntu.md diff --git a/published/20160503 Cloud Commander - A Web File Manager With Console And Editor.md b/published/201809/20160503 Cloud Commander - A Web File Manager With Console And Editor.md similarity index 100% rename from published/20160503 Cloud Commander - A Web File Manager With Console And Editor.md rename to published/201809/20160503 Cloud Commander - A Web File Manager With Console And Editor.md diff --git a/published/20170706 Docker Guide Dockerizing Python Django Application.md b/published/201809/20170706 Docker Guide Dockerizing Python Django Application.md similarity index 100% rename from published/20170706 Docker Guide Dockerizing Python Django Application.md rename to published/201809/20170706 Docker Guide Dockerizing Python Django Application.md diff --git a/published/20170709 The Extensive Guide to Creating Streams in RxJS.md b/published/201809/20170709 The Extensive Guide to Creating Streams in RxJS.md similarity index 100% rename from published/20170709 The Extensive Guide to Creating Streams in RxJS.md rename to published/201809/20170709 The Extensive Guide to Creating Streams in RxJS.md diff --git 
a/published/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md b/published/201809/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md similarity index 100% rename from published/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md rename to published/201809/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md diff --git a/published/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md b/published/201809/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md similarity index 100% rename from published/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md rename to published/201809/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md diff --git a/published/20171010 Operating a Kubernetes network.md b/published/201809/20171010 Operating a Kubernetes network.md similarity index 100% rename from published/20171010 Operating a Kubernetes network.md rename to published/201809/20171010 Operating a Kubernetes network.md diff --git a/published/20171124 How do groups work on Linux.md b/published/201809/20171124 How do groups work on Linux.md similarity index 100% rename from published/20171124 How do groups work on Linux.md rename to published/201809/20171124 How do groups work on Linux.md diff --git a/published/20171202 Scrot Linux command-line screen grabs made simple.md b/published/201809/20171202 Scrot Linux command-line screen grabs made simple.md similarity index 100% rename from published/20171202 Scrot Linux command-line screen grabs made simple.md rename to published/201809/20171202 Scrot Linux command-line screen grabs made simple.md diff --git a/published/20180102 Top 7 open source project management tools for agile teams.md b/published/201809/20180102 Top 7 open source project management tools for agile teams.md similarity index 100% rename from published/20180102 Top 7 open source project management tools for agile 
teams.md rename to published/201809/20180102 Top 7 open source project management tools for agile teams.md diff --git a/published/20180131 What I Learned from Programming Interviews.md b/published/201809/20180131 What I Learned from Programming Interviews.md similarity index 100% rename from published/20180131 What I Learned from Programming Interviews.md rename to published/201809/20180131 What I Learned from Programming Interviews.md diff --git a/published/20180201 Here are some amazing advantages of Go that you dont hear much about.md b/published/201809/20180201 Here are some amazing advantages of Go that you dont hear much about.md similarity index 100% rename from published/20180201 Here are some amazing advantages of Go that you dont hear much about.md rename to published/201809/20180201 Here are some amazing advantages of Go that you dont hear much about.md diff --git a/published/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md b/published/201809/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md similarity index 100% rename from published/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md rename to published/201809/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md diff --git a/published/20180226 Linux Virtual Machines vs Linux Live Images.md b/published/201809/20180226 Linux Virtual Machines vs Linux Live Images.md similarity index 100% rename from published/20180226 Linux Virtual Machines vs Linux Live Images.md rename to published/201809/20180226 Linux Virtual Machines vs Linux Live Images.md diff --git a/published/20180308 What is open source programming.md b/published/201809/20180308 What is open source programming.md similarity index 100% rename from published/20180308 What is open source programming.md rename to published/201809/20180308 What is open source programming.md diff --git a/published/20180316 How to Encrypt Files From Within a File Manager.md b/published/201809/20180316 How to Encrypt 
Files From Within a File Manager.md similarity index 100% rename from published/20180316 How to Encrypt Files From Within a File Manager.md rename to published/201809/20180316 How to Encrypt Files From Within a File Manager.md diff --git a/published/20180324 How To Compress And Decompress Files In Linux.md b/published/201809/20180324 How To Compress And Decompress Files In Linux.md similarity index 100% rename from published/20180324 How To Compress And Decompress Files In Linux.md rename to published/201809/20180324 How To Compress And Decompress Files In Linux.md diff --git a/published/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md b/published/201809/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md similarity index 100% rename from published/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md rename to published/201809/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md diff --git a/published/20180402 Understanding Linux filesystems- ext4 and beyond.md b/published/201809/20180402 Understanding Linux filesystems- ext4 and beyond.md similarity index 100% rename from published/20180402 Understanding Linux filesystems- ext4 and beyond.md rename to published/201809/20180402 Understanding Linux filesystems- ext4 and beyond.md diff --git a/published/20180424 A gentle introduction to FreeDOS.md b/published/201809/20180424 A gentle introduction to FreeDOS.md similarity index 100% rename from published/20180424 A gentle introduction to FreeDOS.md rename to published/201809/20180424 A gentle introduction to FreeDOS.md diff --git a/published/20180425 Understanding metrics and monitoring with Python - Opensource.com.md b/published/201809/20180425 Understanding metrics and monitoring with Python - Opensource.com.md similarity index 100% rename from published/20180425 Understanding metrics and monitoring with Python - 
Opensource.com.md rename to published/201809/20180425 Understanding metrics and monitoring with Python - Opensource.com.md diff --git a/published/20180427 An Official Introduction to the Go Compiler.md b/published/201809/20180427 An Official Introduction to the Go Compiler.md similarity index 100% rename from published/20180427 An Official Introduction to the Go Compiler.md rename to published/201809/20180427 An Official Introduction to the Go Compiler.md diff --git a/published/20180516 How Graphics Cards Work.md b/published/201809/20180516 How Graphics Cards Work.md similarity index 100% rename from published/20180516 How Graphics Cards Work.md rename to published/201809/20180516 How Graphics Cards Work.md diff --git a/published/20180516 Manipulating Directories in Linux.md b/published/201809/20180516 Manipulating Directories in Linux.md similarity index 100% rename from published/20180516 Manipulating Directories in Linux.md rename to published/201809/20180516 Manipulating Directories in Linux.md diff --git a/published/20180518 Mastering CI-CD at OpenDev.md b/published/201809/20180518 Mastering CI-CD at OpenDev.md similarity index 100% rename from published/20180518 Mastering CI-CD at OpenDev.md rename to published/201809/20180518 Mastering CI-CD at OpenDev.md diff --git a/published/20180525 Getting started with the Python debugger.md b/published/201809/20180525 Getting started with the Python debugger.md similarity index 100% rename from published/20180525 Getting started with the Python debugger.md rename to published/201809/20180525 Getting started with the Python debugger.md diff --git a/published/20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md b/published/201809/20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md similarity index 100% rename from published/20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md rename to published/201809/20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md 
diff --git a/published/20180618 Twitter Sentiment Analysis using NodeJS.md b/published/201809/20180618 Twitter Sentiment Analysis using NodeJS.md similarity index 100% rename from published/20180618 Twitter Sentiment Analysis using NodeJS.md rename to published/201809/20180618 Twitter Sentiment Analysis using NodeJS.md diff --git a/published/20180626 How to build a professional network when you work in a bazaar.md b/published/201809/20180626 How to build a professional network when you work in a bazaar.md similarity index 100% rename from published/20180626 How to build a professional network when you work in a bazaar.md rename to published/201809/20180626 How to build a professional network when you work in a bazaar.md diff --git a/published/20180702 View The Contents Of An Archive Or Compressed File Without Extracting It.md b/published/201809/20180702 View The Contents Of An Archive Or Compressed File Without Extracting It.md similarity index 100% rename from published/20180702 View The Contents Of An Archive Or Compressed File Without Extracting It.md rename to published/201809/20180702 View The Contents Of An Archive Or Compressed File Without Extracting It.md diff --git a/published/20180703 Understanding Python Dataclasses — Part 1.md b/published/201809/20180703 Understanding Python Dataclasses — Part 1.md similarity index 100% rename from published/20180703 Understanding Python Dataclasses — Part 1.md rename to published/201809/20180703 Understanding Python Dataclasses — Part 1.md diff --git a/published/20180706 Anatomy of a Linux DNS Lookup - Part III.md b/published/201809/20180706 Anatomy of a Linux DNS Lookup - Part III.md similarity index 100% rename from published/20180706 Anatomy of a Linux DNS Lookup - Part III.md rename to published/201809/20180706 Anatomy of a Linux DNS Lookup - Part III.md diff --git a/published/20180710 How To View Detailed Information About A Package In Linux.md b/published/201809/20180710 How To View Detailed Information About A 
Package In Linux.md similarity index 100% rename from published/20180710 How To View Detailed Information About A Package In Linux.md rename to published/201809/20180710 How To View Detailed Information About A Package In Linux.md diff --git a/published/20180717 Getting started with Etcher.io.md b/published/201809/20180717 Getting started with Etcher.io.md similarity index 100% rename from published/20180717 Getting started with Etcher.io.md rename to published/201809/20180717 Getting started with Etcher.io.md diff --git a/published/20180720 An Introduction to Using Git.md b/published/201809/20180720 An Introduction to Using Git.md similarity index 100% rename from published/20180720 An Introduction to Using Git.md rename to published/201809/20180720 An Introduction to Using Git.md diff --git a/published/20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md b/published/201809/20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md similarity index 100% rename from published/20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md rename to published/201809/20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md diff --git a/published/20180720 How to build a URL shortener with Apache.md b/published/201809/20180720 How to build a URL shortener with Apache.md similarity index 100% rename from published/20180720 How to build a URL shortener with Apache.md rename to published/201809/20180720 How to build a URL shortener with Apache.md diff --git a/published/20180725 How do private keys work in PKI and cryptography.md b/published/201809/20180725 How do private keys work in PKI and cryptography.md similarity index 100% rename from published/20180725 How do private keys work in PKI and cryptography.md rename to published/201809/20180725 How do private keys work in PKI and cryptography.md diff --git a/published/20180730 7 Python libraries for more maintainable code.md 
b/published/201809/20180730 7 Python libraries for more maintainable code.md similarity index 100% rename from published/20180730 7 Python libraries for more maintainable code.md rename to published/201809/20180730 7 Python libraries for more maintainable code.md diff --git a/published/20180730 How to use VS Code for your Python projects.md b/published/201809/20180730 How to use VS Code for your Python projects.md similarity index 100% rename from published/20180730 How to use VS Code for your Python projects.md rename to published/201809/20180730 How to use VS Code for your Python projects.md diff --git a/published/20180802 Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution.md b/published/201809/20180802 Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution.md similarity index 100% rename from published/20180802 Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution.md rename to published/201809/20180802 Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution.md diff --git a/published/20180803 10 Popular Windows Apps That Are Also Available on Linux.md b/published/201809/20180803 10 Popular Windows Apps That Are Also Available on Linux.md similarity index 100% rename from published/20180803 10 Popular Windows Apps That Are Also Available on Linux.md rename to published/201809/20180803 10 Popular Windows Apps That Are Also Available on Linux.md diff --git a/published/20180804 Installing Andriod on VirtualBox.md b/published/201809/20180804 Installing Andriod on VirtualBox.md similarity index 100% rename from published/20180804 Installing Andriod on VirtualBox.md rename to published/201809/20180804 Installing Andriod on VirtualBox.md diff --git a/published/20180806 Anatomy of a Linux DNS Lookup - Part IV.md b/published/201809/20180806 Anatomy of a Linux DNS Lookup - Part IV.md similarity index 100% rename from published/20180806 Anatomy of a Linux DNS Lookup - Part IV.md rename to 
published/201809/20180806 Anatomy of a Linux DNS Lookup - Part IV.md diff --git a/published/20180806 Installing and using Git and GitHub on Ubuntu Linux- A beginner-s guide.md b/published/201809/20180806 Installing and using Git and GitHub on Ubuntu Linux- A beginner-s guide.md similarity index 100% rename from published/20180806 Installing and using Git and GitHub on Ubuntu Linux- A beginner-s guide.md rename to published/201809/20180806 Installing and using Git and GitHub on Ubuntu Linux- A beginner-s guide.md diff --git a/published/20180808 5 applications to manage your to-do list on Fedora.md b/published/201809/20180808 5 applications to manage your to-do list on Fedora.md similarity index 100% rename from published/20180808 5 applications to manage your to-do list on Fedora.md rename to published/201809/20180808 5 applications to manage your to-do list on Fedora.md diff --git a/published/20180808 5 open source role-playing games for Linux.md b/published/201809/20180808 5 open source role-playing games for Linux.md similarity index 100% rename from published/20180808 5 open source role-playing games for Linux.md rename to published/201809/20180808 5 open source role-playing games for Linux.md diff --git a/published/20180810 6 Reasons Why Linux Users Switch to BSD.md b/published/201809/20180810 6 Reasons Why Linux Users Switch to BSD.md similarity index 100% rename from published/20180810 6 Reasons Why Linux Users Switch to BSD.md rename to published/201809/20180810 6 Reasons Why Linux Users Switch to BSD.md diff --git a/published/20180810 Automatically Switch To Light - Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely.md b/published/201809/20180810 Automatically Switch To Light - Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely.md similarity index 100% rename from published/20180810 Automatically Switch To Light - Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely.md rename to published/201809/20180810 
Automatically Switch To Light - Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely.md diff --git a/published/20180813 MPV Player- A Minimalist Video Player for Linux.md b/published/201809/20180813 MPV Player- A Minimalist Video Player for Linux.md similarity index 100% rename from published/20180813 MPV Player- A Minimalist Video Player for Linux.md rename to published/201809/20180813 MPV Player- A Minimalist Video Player for Linux.md diff --git a/published/20180815 How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md b/published/201809/20180815 How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md similarity index 100% rename from published/20180815 How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md rename to published/201809/20180815 How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md diff --git a/published/20180822 How To Switch Between TTYs Without Using Function Keys In Linux.md b/published/201809/20180822 How To Switch Between TTYs Without Using Function Keys In Linux.md similarity index 100% rename from published/20180822 How To Switch Between TTYs Without Using Function Keys In Linux.md rename to published/201809/20180822 How To Switch Between TTYs Without Using Function Keys In Linux.md diff --git a/published/20180822 What is a Makefile and how does it work.md b/published/201809/20180822 What is a Makefile and how does it work.md similarity index 100% rename from published/20180822 What is a Makefile and how does it work.md rename to published/201809/20180822 What is a Makefile and how does it work.md diff --git a/published/20180823 An introduction to pipes and named pipes in Linux.md b/published/201809/20180823 An introduction to pipes and named pipes in Linux.md similarity index 100% rename from published/20180823 An introduction to pipes and named pipes in Linux.md rename to 
published/201809/20180823 An introduction to pipes and named pipes in Linux.md diff --git a/published/20180823 How to publish a WordPress blog to a static GitLab Pages site.md b/published/201809/20180823 How to publish a WordPress blog to a static GitLab Pages site.md similarity index 100% rename from published/20180823 How to publish a WordPress blog to a static GitLab Pages site.md rename to published/201809/20180823 How to publish a WordPress blog to a static GitLab Pages site.md diff --git a/published/20180824 How to install software from the Linux command line.md b/published/201809/20180824 How to install software from the Linux command line.md similarity index 100% rename from published/20180824 How to install software from the Linux command line.md rename to published/201809/20180824 How to install software from the Linux command line.md diff --git a/published/20180824 Steam Makes it Easier to Play Windows Games on Linux.md b/published/201809/20180824 Steam Makes it Easier to Play Windows Games on Linux.md similarity index 100% rename from published/20180824 Steam Makes it Easier to Play Windows Games on Linux.md rename to published/201809/20180824 Steam Makes it Easier to Play Windows Games on Linux.md diff --git a/published/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md b/published/201809/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md similarity index 100% rename from published/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md rename to published/201809/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md diff --git a/published/20180826 How to capture and analyze packets with tcpdump command on Linux.md b/published/201809/20180826 How to capture and analyze packets with tcpdump command on Linux.md similarity index 100% rename from published/20180826 How to capture and analyze packets with tcpdump 
command on Linux.md rename to published/201809/20180826 How to capture and analyze packets with tcpdump command on Linux.md diff --git a/published/20180827 An introduction to diffs and patches.md b/published/201809/20180827 An introduction to diffs and patches.md similarity index 100% rename from published/20180827 An introduction to diffs and patches.md rename to published/201809/20180827 An introduction to diffs and patches.md diff --git a/published/20180828 15 command-line aliases to save you time.md b/published/201809/20180828 15 command-line aliases to save you time.md similarity index 100% rename from published/20180828 15 command-line aliases to save you time.md rename to published/201809/20180828 15 command-line aliases to save you time.md diff --git a/published/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/published/201809/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md similarity index 100% rename from published/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md rename to published/201809/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md diff --git a/published/20180828 How to Play Windows-only Games on Linux with Steam Play.md b/published/201809/20180828 How to Play Windows-only Games on Linux with Steam Play.md similarity index 100% rename from published/20180828 How to Play Windows-only Games on Linux with Steam Play.md rename to published/201809/20180828 How to Play Windows-only Games on Linux with Steam Play.md diff --git a/published/20180829 Add GUIs to your programs and scripts easily with PySimpleGUI.md b/published/201809/20180829 Add GUIs to your programs and scripts easily with PySimpleGUI.md similarity index 100% rename from published/20180829 Add GUIs to your programs and scripts easily with PySimpleGUI.md rename to published/201809/20180829 Add GUIs to your programs and scripts easily with PySimpleGUI.md diff --git a/published/20180830 How To Reset MySQL Or 
MariaDB Root Password.md b/published/201809/20180830 How To Reset MySQL Or MariaDB Root Password.md similarity index 100% rename from published/20180830 How To Reset MySQL Or MariaDB Root Password.md rename to published/201809/20180830 How To Reset MySQL Or MariaDB Root Password.md diff --git a/published/20180830 How to Update Firmware on Ubuntu 18.04.md b/published/201809/20180830 How to Update Firmware on Ubuntu 18.04.md similarity index 100% rename from published/20180830 How to Update Firmware on Ubuntu 18.04.md rename to published/201809/20180830 How to Update Firmware on Ubuntu 18.04.md diff --git a/published/20180831 6 open source tools for making your own VPN.md b/published/201809/20180831 6 open source tools for making your own VPN.md similarity index 100% rename from published/20180831 6 open source tools for making your own VPN.md rename to published/201809/20180831 6 open source tools for making your own VPN.md diff --git a/published/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md b/published/201809/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md similarity index 100% rename from published/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md rename to published/201809/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md diff --git a/published/20180903 Turn your vi editor into a productivity powerhouse.md b/published/201809/20180903 Turn your vi editor into a productivity powerhouse.md similarity index 100% rename from published/20180903 Turn your vi editor into a productivity powerhouse.md rename to published/201809/20180903 Turn your vi editor into a productivity powerhouse.md diff --git a/published/20180904 8 Linux commands for effective process management.md b/published/201809/20180904 8 Linux commands for effective process management.md similarity index 100% rename from 
published/20180904 8 Linux commands for effective process management.md rename to published/201809/20180904 8 Linux commands for effective process management.md diff --git a/published/20180904 Why I love Xonsh.md b/published/201809/20180904 Why I love Xonsh.md similarity index 100% rename from published/20180904 Why I love Xonsh.md rename to published/201809/20180904 Why I love Xonsh.md diff --git a/published/20180905 5 tips to improve productivity with zsh.md b/published/201809/20180905 5 tips to improve productivity with zsh.md similarity index 100% rename from published/20180905 5 tips to improve productivity with zsh.md rename to published/201809/20180905 5 tips to improve productivity with zsh.md diff --git a/published/20180905 8 great Python libraries for side projects.md b/published/201809/20180905 8 great Python libraries for side projects.md similarity index 100% rename from published/20180905 8 great Python libraries for side projects.md rename to published/201809/20180905 8 great Python libraries for side projects.md diff --git a/published/20180905 Find your systems easily on a LAN with mDNS.md b/published/201809/20180905 Find your systems easily on a LAN with mDNS.md similarity index 100% rename from published/20180905 Find your systems easily on a LAN with mDNS.md rename to published/201809/20180905 Find your systems easily on a LAN with mDNS.md diff --git a/published/20180906 3 top open source JavaScript chart libraries.md b/published/201809/20180906 3 top open source JavaScript chart libraries.md similarity index 100% rename from published/20180906 3 top open source JavaScript chart libraries.md rename to published/201809/20180906 3 top open source JavaScript chart libraries.md diff --git a/published/20180906 Two open source alternatives to Flash Player.md b/published/201809/20180906 Two open source alternatives to Flash Player.md similarity index 100% rename from published/20180906 Two open source alternatives to Flash Player.md rename to 
published/201809/20180906 Two open source alternatives to Flash Player.md diff --git a/published/20180907 Autotrash - A CLI Tool To Automatically Purge Old Trashed Files.md b/published/201809/20180907 Autotrash - A CLI Tool To Automatically Purge Old Trashed Files.md similarity index 100% rename from published/20180907 Autotrash - A CLI Tool To Automatically Purge Old Trashed Files.md rename to published/201809/20180907 Autotrash - A CLI Tool To Automatically Purge Old Trashed Files.md diff --git a/published/20180907 What do open source and cooking have in common.md b/published/201809/20180907 What do open source and cooking have in common.md similarity index 100% rename from published/20180907 What do open source and cooking have in common.md rename to published/201809/20180907 What do open source and cooking have in common.md diff --git a/published/20180909 What is ZFS- Why People Use ZFS- [Explained for Beginners].md b/published/201809/20180909 What is ZFS- Why People Use ZFS- [Explained for Beginners].md similarity index 100% rename from published/20180909 What is ZFS- Why People Use ZFS- [Explained for Beginners].md rename to published/201809/20180909 What is ZFS- Why People Use ZFS- [Explained for Beginners].md diff --git a/published/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md b/published/201809/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md similarity index 100% rename from published/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md rename to published/201809/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md diff --git a/published/20180910 3 open source log aggregation tools.md b/published/201809/20180910 3 open source log aggregation tools.md similarity index 100% rename from published/20180910 3 open source log aggregation tools.md rename to published/201809/20180910 3 open source log aggregation tools.md diff --git a/published/20180910 Randomize your MAC address using 
NetworkManager.md b/published/201809/20180910 Randomize your MAC address using NetworkManager.md similarity index 100% rename from published/20180910 Randomize your MAC address using NetworkManager.md rename to published/201809/20180910 Randomize your MAC address using NetworkManager.md diff --git a/published/20180911 Visualize Disk Usage On Your Linux System.md b/published/201809/20180911 Visualize Disk Usage On Your Linux System.md similarity index 100% rename from published/20180911 Visualize Disk Usage On Your Linux System.md rename to published/201809/20180911 Visualize Disk Usage On Your Linux System.md diff --git a/published/20180912 How To Configure Mouse Support For Linux Virtual Consoles.md b/published/201809/20180912 How To Configure Mouse Support For Linux Virtual Consoles.md similarity index 100% rename from published/20180912 How To Configure Mouse Support For Linux Virtual Consoles.md rename to published/201809/20180912 How To Configure Mouse Support For Linux Virtual Consoles.md diff --git a/published/20180917 Linux tricks that can save you time and trouble.md b/published/201809/20180917 Linux tricks that can save you time and trouble.md similarity index 100% rename from published/20180917 Linux tricks that can save you time and trouble.md rename to published/201809/20180917 Linux tricks that can save you time and trouble.md diff --git a/published/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md b/published/201809/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md similarity index 100% rename from published/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md rename to published/201809/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md diff --git a/published/20180919 Understand Fedora memory usage with top.md b/published/201809/20180919 Understand Fedora memory usage with top.md similarity index 100% rename from published/20180919 Understand Fedora memory usage 
with top.md rename to published/201809/20180919 Understand Fedora memory usage with top.md diff --git a/published/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md b/published/201809/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md similarity index 100% rename from published/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md rename to published/201809/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md diff --git a/published/20180929 Getting started with the i3 window manager on Linux.md b/published/201809/20180929 Getting started with the i3 window manager on Linux.md similarity index 100% rename from published/20180929 Getting started with the i3 window manager on Linux.md rename to published/201809/20180929 Getting started with the i3 window manager on Linux.md From 43566086175494dd1f2d5c41ccb37b42e8ee286a Mon Sep 17 00:00:00 2001 From: pityonline Date: Sat, 22 Sep 2018 00:37:16 +0800 Subject: [PATCH 171/736] =?UTF-8?q?PRF:=20#10300=20=E6=A0=A1=E5=AF=B9?= =?UTF-8?q?=E5=B9=B6=E5=88=9D=E6=AD=A5=E8=B0=83=E6=95=B4=E6=A0=BC=E5=BC=8F?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../talk/20180117 How to get into DevOps.md | 123 ++++++++---------- 1 file changed, 57 insertions(+), 66 deletions(-) diff --git a/translated/talk/20180117 How to get into DevOps.md b/translated/talk/20180117 How to get into DevOps.md index ec169be76f..bd9172b468 100644 --- a/translated/talk/20180117 How to get into DevOps.md +++ b/translated/talk/20180117 How to get into DevOps.md @@ -1,43 +1,40 @@ - DevOps 实践指南 ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E) -在去年大概一年的时间里,我注意到对“Devops 实践”感兴趣的开发人员和系统管理员突然有了明显的增加。这样的变化也合理:现在开发者只要花很少的钱,调用一些 API, 就能单枪匹马地在一整套分布式基础设施上运行自己的应用, 在这个时代,开发和运维的紧密程度前所未有。我看过许多博客和文章介绍很酷的 DevOps 工具和相关思想,但是给那些希望践行 DevOps 
的人以指导和建议的内容,我却很少看到。 +在去年大概一年的时间里,我注意到对“Devops 实践”感兴趣的开发人员和系统管理员突然有了明显的增加。这样的变化也合理:现在开发者只要花很少的钱,调用一些 API,就能单枪匹马地在一整套分布式基础设施上运行自己的应用,在这个时代,开发和运维的紧密程度前所未有。我看过许多博客和文章介绍很酷的 DevOps 工具和相关思想,但是给那些希望践行 DevOps 的人以指导和建议的内容,我却很少看到。 -这篇文章的目的就是描述一下如何去实践。我的想法基于 Reddit 上 [devops][1] 的一些访谈、聊天和深夜讨论,还有一些随机谈话,一般都发生在享受啤酒和美食的时候。如果你已经开始这样实践,我对你的反馈很感兴趣,请通过 [我的博客][2] 或者 [Twitter][3] 联系我,也可以直接在下面评论。我很乐意听到你们的想法和故事。 +这篇文章的目的就是描述一下如何去实践。我的想法基于 Reddit 上 [devops][1] 的一些访谈、聊天和深夜讨论,还有一些随机谈话,一般都发生在享受啤酒和美食的时候。如果你已经开始这样实践,我对你的反馈很感兴趣,请通过[我的博客][2]或者 [Twitter][3] 联系我,也可以直接在下面评论。我很乐意听到你们的想法和故事。 ### 古代的 IT 了解历史是搞清楚未来的关键,DevOps 也不例外。想搞清楚 DevOps 运动的普及和流行,去了解一下上世纪 90 年代后期和 21 世纪前十年 IT 的情况会有帮助。这是我的经验。 -我的第一份工作是在一家大型跨国金融服务公司做 Windows 系统管理员。当时给计算资源扩容需要给 Dell 打电话 (或者像我们公司那样打给 CDW ),并下一个价值数十万美元的订单,包含服务器、网络设备、电缆和软件,所有这些都要运到在线或离线的数据中心去。虽然 VMware 仍在尝试说服企业使用虚拟机运行他们的“性能敏感”型程序是更划算的,但是包括我们在内的很多公司都还忠于使用他们的物理机运行应用。 +我的第一份工作是在一家大型跨国金融服务公司做 Windows 系统管理员。当时给计算资源扩容需要给 Dell 打电话(或者像我们公司那样打给 CDW),并下一个价值数十万美元的订单,包含服务器、网络设备、电缆和软件,所有这些都要运到在线或离线的数据中心去。虽然 VMware 仍在尝试说服企业使用虚拟机运行他们的“性能敏感”型程序是更划算的,但是包括我们在内的很多公司都还忠于使用他们的物理机运行应用。 在我们技术部门,有一个专门做数据中心工程和操作的完整团队,他们的工作包括价格谈判,让荒唐的租赁月费能够下降一点点,还包括保证我们的系统能够正常冷却(如果设备太多,这个事情的难度会呈指数增长)。如果这个团队足够幸运足够有钱,境外数据中心的工作人员对我们所有的服务器型号又都有足够的了解,就能避免在盘后交易中不小心扯错东西。那时候亚马逊 AWS 和 Rackspace 逐渐开始加速扩张,但还远远没到临界规模。 -当时我们还有专门的团队来保证硬件上运行着的操作系统和软件能够按照预期工作。这些工程师负责设计可靠的架构以方便给系统打补丁,监控和报警,还要定义基础镜像 (gold image) 的内容。这些大都是通过很多手工实验完成的,很多手工实验是为了编写一个运行说明书 (runbook) 来描述要做的事情,并确保按照它执行后的结果确实在预期内。在我们这么大的组织里,这样做很重要,因为一线和二线的技术支持都是境外的,而他们的培训内容只覆盖到了这些运行说明而已。 +当时我们还有专门的团队来保证硬件上运行着的操作系统和软件能够按照预期工作。这些工程师负责设计可靠的架构以方便给系统打补丁,监控和报警,还要定义基础镜像gold image的内容。这些大都是通过很多手工实验完成的,很多手工实验是为了编写一个运行说明书runbook来描述要做的事情,并确保按照它执行后的结果确实在预期内。在我们这么大的组织里,这样做很重要,因为一线和二线的技术支持都是境外的,而他们的培训内容只覆盖到了这些运行说明而已。 (这是我职业生涯前三年的世界。我那时候的梦想是成为制定金本位制的人!) 
软件发布则完全是另外一头怪兽。无可否认,我在这方面并没有积累太多经验。但是,从我收集的故事(和最近的经历)来看,当时大部分软件开发的日常大概是这样: - * 开发人员按照技术和功能需求来编写代码,这些需求来自于业务分析人员的会议,但是会议并没有邀请开发人员参加。 - * 开发人员可以选择为他们的代码编写单元测试,以确保在代码里没有任何明显的疯狂行为,比如除以 0 但不抛出异常。 - * 然后开发者会把他们的代码标记为 "Ready for QA."(准备好了接受测试),质量保障的成员会把这个版本的代码发布到他们自己的环境中,这个环境和生产环境可能相似,也可能不相似,甚至和开发环境相比也不一定相似。 - * 故障会在几天或者几个星期内反馈到开发人员那里,这个时长取决于其他业务活动和优先事项。 +* 开发人员按照技术和功能需求来编写代码,这些需求来自于业务分析人员的会议,但是会议并没有邀请开发人员参加。 +* 开发人员可以选择为他们的代码编写单元测试,以确保在代码里没有任何明显的疯狂行为,比如除以 0 但不抛出异常。 +* 然后开发者会把他们的代码标记为“Ready for QA”(准备好了接受测试),质量保障的成员会把这个版本的代码发布到他们自己的环境中,这个环境和生产环境可能相似,也可能不相似,甚至和开发环境相比也不一定相似。 +* 故障会在几天或者几个星期内反馈到开发人员那里,这个时长取决于其他业务活动和优先事项。 - - -虽然系统管理员和开发人员经常有不一致的意见,但是对“变更管理”的痛恨却是一致的。变更管理由高度规范的(就我当时的雇主而言)和非常有必要的规则和程序组成,用来管理一家公司应该什么时候做技术变更,以及如何做。很多公司都按照 [ITIL][4] 来操作, 简单的说,ITIL 问了很多和事情发生的原因、时间、地点和方式相关的问题,而且提供了一个过程,对产生最终答案的决定做审计跟踪。 +虽然系统管理员和开发人员经常有不一致的意见,但是对“变更管理”的痛恨却是一致的。变更管理由高度规范的(就我当时的雇主而言)和非常有必要的规则和程序组成,用来管理一家公司应该什么时候做技术变更,以及如何做。很多公司都按照 [ITIL][4] 来操作,简单的说,ITIL 问了很多和事情发生的原因、时间、地点和方式相关的问题,而且提供了一个过程,对产生最终答案的决定做审计跟踪。 你可能从我的简短历史课上了解到,当时 IT 的很多很多事情都是手工完成的。这导致了很多错误。错误又导致了很多财产损失。变更管理的工作就是尽量减少这些损失,它常常以这样的形式出现:不管变更的影响和规模大小,每两周才能发布部署一次。周五下午 4 点到周一早上 5 点 59 分这段时间,需要排队等候发布窗口。(讽刺的是,这种流程导致了更多错误,通常还是更严重的那种错误) ### DevOps 不是专家团 -你可能在想 "Carlos 你在讲啥啊,什么时候才能说到 Ansible playbooks? ",我热爱 Ansible, 但是请再等一会;下面这些很重要。 +你可能在想 “Carlos 你在讲啥啊,什么时候才能说到 Ansible playbooks?”,我热爱 Ansible,但是请再等一会;下面这些很重要。 -你有没有过被分配到过需要跟"DevOps"小组打交道的项目?你有没有依赖过“配置管理”或者“持续集成/持续交付”小组来保证业务流水线设置正确?你有没有在代码开发完的数周之后才参加发布部署的会议? +你有没有过被分配到过需要跟 DevOps 小组打交道的项目?你有没有依赖过“配置管理”或者“持续集成/持续交付”小组来保证业务流水线设置正确?你有没有在代码开发完的数周之后才参加发布部署的会议? 如果有过,那么你就是在重温历史,这个历史是由上面所有这些导致的。 @@ -45,13 +42,13 @@ DevOps 实践指南 在我工作过的很多公司里,系统管理员和开发人员不仅像这样形成了天然的筒仓,而且彼此还有激烈的对抗。开发人员的环境出问题了或者他们的权限太小了,就会对系统管理员很恼火。系统管理员怪开发者无时不刻的不在用各种方式破坏他们的环境,怪开发人员申请的计算资源严重超过他们的需要。双方都不理解对方,更糟糕的是,双方都不愿意去理解对方。 -大部分开发人员对操作系统,内核或计算机硬件都不感兴趣。同样的,大部分系统管理员,即使是 Linux 的系统管理员,也都不愿意学习编写代码,他们在大学期间学过一些 C 语言,然后就痛恨它,并且永远都不想再碰 IDE. 
所以,开发人员把运行环境的问题甩给围墙外的系统管理员,系统管理员把这些问题和甩过来的其他上百个问题放在一起,做一个优先级安排。每个人都很忙,心怀怨恨的等待着。DevOps 的目的就是解决这种矛盾。 +大部分开发人员对操作系统,内核或计算机硬件都不感兴趣。同样的,大部分系统管理员,即使是 Linux 的系统管理员,也都不愿意学习编写代码,他们在大学期间学过一些 C 语言,然后就痛恨它,并且永远都不想再碰 IDE。所以,开发人员把运行环境的问题甩给围墙外的系统管理员,系统管理员把这些问题和甩过来的其他上百个问题放在一起,做一个优先级安排。每个人都很忙,心怀怨恨的等待着。DevOps 的目的就是解决这种矛盾。 DevOps 不是一个团队,CI/CD 也不是 Jira 系统的一个用户组。DevOps 是一种思考方式。根据这个运动来看,在理想的世界里,开发人员、系统管理员和业务相关人将作为一个团队工作。虽然他们可能不完全了解彼此的世界,可能没有足够的知识去了解彼此的积压任务,但他们在大多数情况下能有一致的看法。 -把所有基础设施和业务逻辑都代码化,再串到一个发布部署流水线里,就像是运行在这之上的应用一样。这个理念的基础就是 DevOps. 因为大家都理解彼此,所以人人都是赢家。聊天机器人和易用的监控工具、可视化工具的兴起,背后的基础也是 DevOps. +把所有基础设施和业务逻辑都代码化,再串到一个发布部署流水线里,就像是运行在这之上的应用一样。这个理念的基础就是 DevOps。因为大家都理解彼此,所以人人都是赢家。聊天机器人和易用的监控工具、可视化工具的兴起,背后的基础也是 DevOps。 -[Adam Jacob][6] 说的最好:"DevOps 就是企业往软件导向型过渡时我们用来描述操作的词" +[Adam Jacob][6] 说的最好:“DevOps 就是企业往软件导向型过渡时我们用来描述操作的词。” ### 要实践 DevOps 我需要知道些什么 @@ -61,22 +58,20 @@ DevOps 不是一个团队,CI/CD 也不是 Jira 系统的一个用户组。DevO 也就是说,我们一般是在找对深入学习以下内容感兴趣的工程师: - * 如何管理和设计安全、可扩展的云上的平台(通常是在 AWS 上,不过微软的 Azure, 谷歌的 Cloud Platform,还有 DigitalOcean 和 Heroku 这样的 PaaS 提供商,也都很流行) - * 如何用流行的 [CI/CD][8] 工具,比如 Jenkins,Gocd,还有基于云的 Travis CI 或者 CircleCI,来构造一条优化的发布部署流水线,和发布部署策略。 - * 如何在你的系统中使用基于时间序列的工具,比如 Kibana,Grafana,Splunk,Loggly 或者 Logstash,来监控,记录,并在变化的时候报警,还有 - * 如何使用配置管理工具,例如 Chef,Puppet 或者 Ansible 做到“基础设施即代码”,以及如何使用像 Terraform 或 CloudFormation 的工具发布这些基础设施。 - - +* 如何管理和设计安全、可扩展的云上的平台(通常是在 AWS 上,不过微软的 Azure,谷歌的 Cloud Platform,还有 DigitalOcean 和 Heroku 这样的 PaaS 提供商,也都很流行) +* 如何用流行的 [CI/CD][8] 工具,比如 Jenkins,Gocd,还有基于云的 Travis CI 或者 CircleCI,来构造一条优化的发布部署流水线,和发布部署策略。 +* 如何在你的系统中使用基于时间序列的工具,比如 Kibana,Grafana,Splunk,Loggly 或者 Logstash,来监控,记录,并在变化的时候报警,还有 +* 如何使用配置管理工具,例如 Chef,Puppet 或者 Ansible 做到“基础设施即代码”,以及如何使用像 Terraform 或 CloudFormation 的工具发布这些基础设施。 容器也变得越来越受欢迎。尽管有人对大规模使用 Docker 的现状[表示不满][9],但容器正迅速地成为一种很好的方式来实现在更少的操作系统上运行超高密度的服务和应用,同时提高它们的可靠性。(像 Kubernetes 或者 Mesos 这样的容器编排工具,能在宿主机故障的时候,几秒钟之内重新启动新的容器。)考虑到这些,掌握 Docker 或者 rkt 以及容器编排平台的知识会对你大有帮助。 -如果你是希望做 DevOps 实践的系统管理员,你还需要知道如何写代码。Python 和 Ruby 是 DevOps 
领域的流行语言,因为他们是可移植的(也就是说可以在任何操作系统上运行),快速的,而且易读易学。它们还支撑着这个行业最流行的配置管理工具(Ansible 是使用 Python 写的,Chef 和 Puppet 是使用 Ruby 写的)以及云平台的 API 客户端(亚马逊 AWS, 微软 Azure, 谷歌 Cloud Platform 的客户端通常会提供 Python 和 Ruby 语言的版本)。 +如果你是希望做 DevOps 实践的系统管理员,你还需要知道如何写代码。Python 和 Ruby 是 DevOps 领域的流行语言,因为他们是可移植的(也就是说可以在任何操作系统上运行),快速的,而且易读易学。它们还支撑着这个行业最流行的配置管理工具(Ansible 是使用 Python 写的,Chef 和 Puppet 是使用 Ruby 写的)以及云平台的 API 客户端(亚马逊 AWS,微软 Azure,谷歌 Cloud Platform 的客户端通常会提供 Python 和 Ruby 语言的版本)。 如果你是开发人员,也希望做 DevOps 的实践,我强烈建议你去学习 Unix,Windows 操作系统以及网络基础知识。虽然云计算把很多系统管理的难题抽象化了,但是对慢应用的性能做 debug 的时候,你知道操作系统如何工作的就会有很大的帮助。下文包含了一些这个主题的图书。 -如果你觉得这些东西听起来内容太多,大家都是这么想的。幸运的是,有很多小项目可以让你开始探索。其中一个启动项目是 Gary Stafford 的[选举服务](https://github.com/garystafford/voter-service), 一个基于 Java 的简单投票平台。我们要求面试候选人通过一个流水线将该服务从 GitHub 部署到生产环境基础设施上。你可以把这个服务与 Rob Mile 写的了不起的 DevOps [入门教程](https://github.com/maxamg/cd-office-hours)结合起来,学习如何编写流水线。 +如果你觉得这些东西听起来内容太多,大家都是这么想的。幸运的是,有很多小项目可以让你开始探索。其中一个启动项目是 Gary Stafford 的[选举服务](https://github.com/garystafford/voter-service),一个基于 Java 的简单投票平台。我们要求面试候选人通过一个流水线将该服务从 GitHub 部署到生产环境基础设施上。你可以把这个服务与 Rob Mile 写的了不起的 DevOps [入门教程](https://github.com/maxamg/cd-office-hours)结合起来,学习如何编写流水线。 -还有一个熟悉这些工具的好方法,找一个流行的服务,然后只使用 AWS 和配置管理工具来搭建这个服务所需要的基础设施。第一次先手动搭建,了解清楚要做的事情,然后只用 CloudFormation (或者 Terraform) 和 Ansible 重写刚才的手动操作。令人惊讶的是,这就是我们基础设施开发人员为客户所做的大部分日常工作,我们的客户认为这样的工作非常有意义! +还有一个熟悉这些工具的好方法,找一个流行的服务,然后只使用 AWS 和配置管理工具来搭建这个服务所需要的基础设施。第一次先手动搭建,了解清楚要做的事情,然后只用 CloudFormation(或者 Terraform)和 Ansible 重写刚才的手动操作。令人惊讶的是,这就是我们基础设施开发人员为客户所做的大部分日常工作,我们的客户认为这样的工作非常有意义! 
### 需要读的书 @@ -84,27 +79,23 @@ DevOps 不是一个团队,CI/CD 也不是 Jira 系统的一个用户组。DevO #### 理论书籍 - * Gene Kim 写的 [The Phoenix Project (凤凰项目)][10]。这是一本很不错的书,内容涵盖了我上文解释过的历史(写的更生动形象),描述了一个运行在敏捷和 DevOps 之上的公司向精益前进的过程。 - * Terrance Ryan 写的 [Driving Technical Change (布道之道)][11]。非常好的一小本书,讲了大多数技术型组织内的常见性格特点以及如何和他们打交道。这本书对我的帮助比我想象的更多。 - * Tom DeMarco 和 Tim Lister 合著的 [Peopleware (人件)][12]。管理工程师团队的经典图书,有一点过时,但仍然很有价值。 - * Tom Limoncelli 写的 [Time Management for System Administrators (时间管理: 给系统管理员)][13]。这本书主要面向系统管理员,它对很多大型组织内的系统管理员生活做了深入的展示。如果你想了解更多系统管理员和开发人员之间的冲突,这本书可能解释了更多。 - * Eric Ries 写的 [The Lean Startup (精益创业)][14]。描述了 Eric 自己的 3D 虚拟形象公司,IMVU, 发现了如何精益工作,快速失败和更快盈利。 - * Jez Humble 和他的朋友写的[Lean Enterprise (精益企业)][15]。这本书是对精益创业做的改编,以更适应企业,两本书都很棒,都很好的解释了 DevOps 背后的商业动机。 - * Kief Morris 写的 [Infrastructure As Code (基础设施即代码)][16]。关于 "基础设施即代码" 的非常好的入门读物!很好的解释了为什么所有公司都有必要采纳这种做法。 - * Betsy Beyer, Chris Jones, Jennifer Petoff 和 Niall Richard Murphy 合著的 [Site Reliability Engineering (站点可靠性工程师)][17]。一本解释谷歌 SRE 实践的书,也因为是 "DevOps 诞生之前的 DevOps" 被人熟知。在如何处理运行时间、时延和保持工程师快乐方面提供了有趣的看法。 - - +* Gene Kim 写的 [The Phoenix Project(凤凰项目)][10]。这是一本很不错的书,内容涵盖了我上文解释过的历史(写的更生动形象),描述了一个运行在敏捷和 DevOps 之上的公司向精益前进的过程。 +* Terrance Ryan 写的 [Driving Technical Change(布道之道)][11]。非常好的一小本书,讲了大多数技术型组织内的常见性格特点以及如何和他们打交道。这本书对我的帮助比我想象的更多。 +* Tom DeMarco 和 Tim Lister 合著的 [Peopleware(人件)][12]。管理工程师团队的经典图书,有一点过时,但仍然很有价值。 +* Tom Limoncelli 写的 [Time Management for System Administrators(时间管理:给系统管理员)][13]。这本书主要面向系统管理员,它对很多大型组织内的系统管理员生活做了深入的展示。如果你想了解更多系统管理员和开发人员之间的冲突,这本书可能解释了更多。 +* Eric Ries 写的 [The Lean Startup(精益创业)][14]。描述了 Eric 自己的 3D 虚拟形象公司,IMVU,发现了如何精益工作,快速失败和更快盈利。 +* Jez Humble 和他的朋友写的[Lean Enterprise(精益企业)][15]。这本书是对精益创业做的改编,以更适应企业,两本书都很棒,都很好的解释了 DevOps 背后的商业动机。 +* Kief Morris 写的 [Infrastructure As Code(基础设施即代码)][16]。关于“基础设施即代码”的非常好的入门读物!很好的解释了为什么所有公司都有必要采纳这种做法。 +* Betsy Beyer、Chris Jones、Jennifer Petoff 和 Niall Richard Murphy 合著的 [Site Reliability Engineering(站点可靠性工程师)][17]。一本解释谷歌 SRE 实践的书,也因为是“DevOps 诞生之前的 
DevOps”被人熟知。在如何处理运行时间、时延和保持工程师快乐方面提供了有趣的看法。 #### 技术书籍 如果你想找的是让你直接跟代码打交道的书,看这里就对了。 - * W. Richard Stevens 的 [TCP/IP Illustrated (TCP/IP 详解)][18]。这是一套经典的(也可以说是最全面的)讲解基本网络协议的巨著,重点介绍了 TCP/IP 协议族。如果你听说过 1,2, 3,4 层网络,而且对深入学习他们感兴趣,那么你需要这本书。 - * Evi Nemeth, Trent Hein 和 Ben Whaley 合著的 [UNIX and Linux System Administration Handbook (UNIX/Linux 系统管理员手册)][19]。一本很好的入门书,介绍 Linux/Unix 如何工作以及如何使用。 - * Don Jones 和 Jeffrey Hicks 合著的 [Learn Windows Powershell In A Month of Lunches (Windows PowerShell实战指南)][20]. 如果你在 Windows 系统下做自动化任务,你需要学习怎么使用 Powershell。这本书能够帮助你。Don Jones 是这方面著名的 MVP。 - * 几乎所有 [James Turnbull][21] 写的东西,针对流行的 DevOps 工具,他发表了很好的技术入门读物。 - - +* W. Richard Stevens 的 [TCP/IP Illustrated(TCP/IP 详解)][18]。这是一套经典的(也可以说是最全面的)讲解基本网络协议的巨著,重点介绍了 TCP/IP 协议族。如果你听说过 1,2, 3,4 层网络,而且对深入学习他们感兴趣,那么你需要这本书。 +* Evi Nemeth、Trent Hein 和 Ben Whaley 合著的 [UNIX and Linux System Administration Handbook(UNIX/Linux 系统管理员手册)][19]。一本很好的入门书,介绍 Linux/Unix 如何工作以及如何使用。 +* Don Jones 和 Jeffrey Hicks 合著的 [Learn Windows Powershell In A Month of Lunches(Windows PowerShell实战指南)][20]。如果你在 Windows 系统下做自动化任务,你需要学习怎么使用 Powershell。这本书能够帮助你。Don Jones 是这方面著名的 MVP。 +* 几乎所有 [James Turnbull][21] 写的东西,针对流行的 DevOps 工具,他发表了很好的技术入门读物。 不管是在那些把所有应用都直接部署在物理机上的公司,(现在很多公司仍然有充分的理由这样做)还是在那些把所有应用都做成 serverless 的先驱公司,DevOps 都很可能会持续下去。这部分工作很有趣,产出也很有影响力,而且最重要的是,它搭起桥梁衔接了技术和业务之间的缺口。DevOps 是一个值得期待的美好事物。 @@ -116,30 +107,30 @@ via: https://opensource.com/article/18/1/getting-devops 作者:[Carlos Nunez][a] 译者:[belitex](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) +校对:[pityonline](https://github.com/pityonline) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:https://opensource.com/users/carlosonunez -[1]:https://www.reddit.com/r/devops/ -[2]:https://carlosonunez.wordpress.com/ -[3]:https://twitter.com/easiestnameever -[4]:https://en.wikipedia.org/wiki/ITIL -[5]:https://www.psychologytoday.com/blog/time-out/201401/getting-out-your-silo 
-[6]:https://twitter.com/adamhjk/status/572832185461428224 -[7]:https://landing.google.com/sre/interview/ben-treynor.html -[8]:https://en.wikipedia.org/wiki/CI/CD -[9]:https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/ -[10]:https://itrevolution.com/book/the-phoenix-project/ -[11]:https://pragprog.com/book/trevan/driving-technical-change -[12]:https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams -[13]:http://shop.oreilly.com/product/9780596007836.do -[14]:http://theleanstartup.com/ -[15]:https://info.thoughtworks.com/lean-enterprise-book.html -[16]:http://infrastructure-as-code.com/book/ -[17]:https://landing.google.com/sre/book.html -[18]:https://en.wikipedia.org/wiki/TCP/IP_Illustrated -[19]:http://www.admin.com/ -[20]:https://www.manning.com/books/learn-windows-powershell-in-a-month-of-lunches-third-edition -[21]:https://jamesturnbull.net/ -[22]:https://carlosonunez.wordpress.com/2017/03/02/getting-into-devops/ +[a]: https://opensource.com/users/carlosonunez +[1]: https://www.reddit.com/r/devops/ +[2]: https://carlosonunez.wordpress.com/ +[3]: https://twitter.com/easiestnameever +[4]: https://en.wikipedia.org/wiki/ITIL +[5]: https://www.psychologytoday.com/blog/time-out/201401/getting-out-your-silo +[6]: https://twitter.com/adamhjk/status/572832185461428224 +[7]: https://landing.google.com/sre/interview/ben-treynor.html +[8]: https://en.wikipedia.org/wiki/CI/CD +[9]: https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/ +[10]: https://itrevolution.com/book/the-phoenix-project/ +[11]: https://pragprog.com/book/trevan/driving-technical-change +[12]: https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams +[13]: http://shop.oreilly.com/product/9780596007836.do +[14]: http://theleanstartup.com/ +[15]: https://info.thoughtworks.com/lean-enterprise-book.html +[16]: http://infrastructure-as-code.com/book/ +[17]: https://landing.google.com/sre/book.html +[18]: 
https://en.wikipedia.org/wiki/TCP/IP_Illustrated +[19]: http://www.admin.com/ +[20]: https://www.manning.com/books/learn-windows-powershell-in-a-month-of-lunches-third-edition +[21]: https://jamesturnbull.net/ +[22]: https://carlosonunez.wordpress.com/2017/03/02/getting-into-devops/ From 7873f4666803dc92997d2df0cd8643fb2e11786c Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Tue, 2 Oct 2018 08:21:27 +0800 Subject: [PATCH 172/736] translated --- ...ly And Safely Manage Cron Jobs In Linux.md | 133 ------------------ ...ly And Safely Manage Cron Jobs In Linux.md | 126 +++++++++++++++++ 2 files changed, 126 insertions(+), 133 deletions(-) delete mode 100644 sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md create mode 100644 translated/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md diff --git a/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md b/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md deleted file mode 100644 index 3f65ac7825..0000000000 --- a/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md +++ /dev/null @@ -1,133 +0,0 @@ -HankChow translating - -How To Easily And Safely Manage Cron Jobs In Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/Crontab-UI-720x340.jpg) - -When it comes to schedule tasks in Linux, which utility comes to your mind first? Yeah, you guessed it right. **Cron!** The cron utility helps you to schedule commands/tasks at specific time in Unix-like operating systems. We already published a [**beginners guides to Cron jobs**][1]. I have a few years experience in Linux, so setting up cron jobs is no big deal for me. But, it is not piece of cake for newbies. The noobs may unknowingly do small mistakes while editing plain text crontab and bring down all cron jobs. Just in case, if you think you might mess up with your cron jobs, there is a good alternative way. 
Say hello to **Crontab UI** , a web-based tool to easily and safely manage cron jobs in Unix-like operating systems. - -You don’t need to manually edit the crontab file to create, delete and manage cron jobs. Everything can be done via a web browser with a couple mouse clicks. Crontab UI allows you to easily create, edit, pause, delete, backup cron jobs, and even import, export and deploy jobs on other machines without much hassle. Error log, mailing and hooks support also possible. It is free, open source and written using NodeJS. - -### Installing Crontab UI - -Installing Crontab UI is just a one-liner command. Make sure you have installed NPM. If you haven’t install npm yet, refer the following link. - -Next, run the following command to install Crontab UI. -``` -$ npm install -g crontab-ui - -``` - -It’s that simple. Let us go ahead and see how to manage cron jobs using Crontab UI. - -### Easily And Safely Manage Cron Jobs In Linux - -To launch Crontab UI, simply run: -``` -$ crontab-ui - -``` - -You will see the following output: -``` -Node version: 10.8.0 -Crontab UI is running at http://127.0.0.1:8000 - -``` - -Now, open your web browser and navigate to ****. Make sure the port no 8000 is allowed in your firewall/router. - -Please note that you can only access Crontab UI web dashboard within the local system itself. - -If you want to run Crontab UI with your system’s IP and custom port (so you can access it from any remote system in the network), use the following command instead: -``` -$ HOST=0.0.0.0 PORT=9000 crontab-ui -Node version: 10.8.0 -Crontab UI is running at http://0.0.0.0:9000 - -``` - -Now, Crontab UI can be accessed from the any system in the nework using URL – **http:// :9000**. - -This is how Crontab UI dashboard looks like. - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard.png) - -As you can see in the above screenshot, Crontab UI dashbaord is very simply. All options are self-explanatory. 
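Under the hood, every job the dashboard manages is an ordinary crontab entry. As a point of reference, here is a hedged shell sketch of staging the same kind of entry by hand — it uses the pacman-cache cleanup that serves as this article's example job; the staging-file approach is an illustration of manual crontab editing, not part of Crontab UI:

```shell
# Build the crontab line for the article's example job: clear the
# pacman cache once a month. "@monthly" is the standard cron shortcut
# for "0 0 1 * *".
entry='@monthly rm -rf /var/cache/pacman'

# Stage the change in a temporary file first, so a typo cannot clobber
# the live crontab (crontab -l fails harmlessly if no crontab exists).
tmpfile=$(mktemp)
crontab -l 2>/dev/null > "$tmpfile" || true
echo "$entry" >> "$tmpfile"

# Review the staged file; loading it would be: crontab "$tmpfile"
cat "$tmpfile"
```

This careful stage-review-load dance is exactly the kind of manual editing the dashboard spares you from.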
- -To exit Crontab UI, press **CTRL+C**. - -**Create, edit, run, stop, delete a cron job** - -To create a new cron job, click on “New” button. Enter your cron job details and click Save. - - 1. Name the cron job. It is optional. - 2. The full command you want to run. - 3. Choose schedule time. You can either choose the quick schedule time, (such as Startup, Hourly, Daily, Weekly, Monthly, Yearly) or set the exact time to run the command. After you choosing the schedule time, the syntax of the cron job will be shown in **Jobs** field. - 4. Choose whether you want to enable error logging for the particular job. - - - -Here is my sample cron job. - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/create-new-cron-job.png) - -As you can see, I have setup a cron job to clear pacman cache at every month. - -Similarly, you can create any number of jobs as you want. You will see all cron jobs in the dashboard. - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard-1.png) - -If you wanted to change any parameter in a cron job, just click on the **Edit** button below the job and modify the parameters as you wish. To run a job immediately, click on the button that says **Run**. To stop a job, click **Stop** button. You can view the log details of any job by clicking on the **Log** button. If the job is no longer required, simply press **Delete** button. - -**Backup cron jobs** - -To backup all cron jobs, press the Backup from main dashboard and choose OK to confirm the backup. - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/backup-cron-jobs.png) - -You can use this backup in case you messed with the contents of the crontab file. - -**Import/Export cron jobs to other systems** - -Another notable feature of Crontab UI is you can import, export and deploy cron jobs to other systems. If you have multiple systems on your network that requires the same cron jobs, just press **Export** button and choose the location to save the file. 
All contents of crontab file will be saved in a file named **crontab.db**. - -Here is the contents of the crontab.db file. -``` -$ cat Downloads/crontab.db -{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"} - -``` - -Then you can transfer the entire crontab.db file to some other system and import its to the new system. You don’t need to manually create cron jobs in all systems. Just create them in one system and export and import all of them to every system on the network. - -**Get the contents from or save to existing crontab file** - -There are chances that you might have already created some cron jobs using **crontab** command. If so, you can retrieve contents of the existing crontab file by click on the **“Get from crontab”** button in main dashboard. - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/get-from-crontab.png) - -Similarly, you can save the newly created jobs using Crontab UI utility to existing crontab file in your system. To do so, just click **Save to crontab** option in the dashboard. - -See? Managing cron jobs is not that complicated. Any newbie user can easily maintain any number of jobs without much hassle using Crontab UI. Give it a try and let us know what do you think about this tool. I am all ears! - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! 
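The exported `crontab.db` file is easy to sanity-check before copying it to another machine. A hedged sketch follows — the one-JSON-object-per-line layout is inferred from the sample record shown above (reproduced here for illustration), and these checks are our own addition, not a Crontab UI feature:

```shell
# crontab.db stores one JSON object per job, one per line. Recreate the
# article's sample record so the checks below have something to inspect.
cat > crontab.db <<'EOF'
{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"}
EOF

# Sanity checks before importing on another machine: every line should
# look like a JSON object and carry a "command" field.
while IFS= read -r line; do
  case $line in
    '{'*'}') ;;                               # looks like a JSON object
    *) echo "malformed line: $line" >&2; exit 1 ;;
  esac
done < crontab.db

grep -c '"command"' crontab.db                # number of jobs in the backup
```

A quick pass like this catches a truncated transfer before a bad import, which is cheaper than debugging missing jobs on the target machine.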
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/ diff --git a/translated/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md b/translated/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md new file mode 100644 index 0000000000..c556d485c3 --- /dev/null +++ b/translated/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md @@ -0,0 +1,126 @@ +在 Linux 中安全轻松地管理 Cron 定时任务 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/Crontab-UI-720x340.jpg) + +在 Linux 中遇到计划任务的时候,你首先会想到的大概就是 Cron 定时任务了。Cron 定时任务能帮助你在类 Unix 操作系统中计划性地执行命令或者任务。也可以参考一下我们之前的一篇[《关于 Cron 定时任务的新手指导》][1]。对于有一定 Linux 经验的人来说,设置 Cron 定时任务不是什么难事,但对于新手来说就不一定了,他们在编辑 Crontab 文件的时候不知不觉中犯的一些小错误,也有可能把整个 Cron 定时任务搞挂了。如果你在处理 Cron 定时任务的时候为了以防万一,可以尝试使用 **Crontab UI**,它是一个可以在类 Unix 操作系统上安全轻松管理 Cron 定时任务的页面工具。 + +Crontab UI 是使用 NodeJS 编写的免费开源软件。有了 Crontab UI,你在创建、删除和修改 Cron 定时任务的时候就不需要手工编辑 Crontab 文件了,只需要打开浏览器稍微操作一下,就能完成上面这些工作。你可以用 Crontab UI 轻松创建、编辑、暂停、删除、备份 Cron 定时任务,甚至还可以简单做到导入、导出、部署其它机器上的 Cron 定时任务,它还支持错误日志、邮件发送和钩子。 + + +### 安装 Crontab UI + +只需要一条命令就可以安装好 Crontab UI,但前提是已经安装好 NPM。如果还没有安装 NPM,可以参考[《如何在 Linux 上安装 NodeJS》][2]这篇文章。 + +执行这一条命令来安装 Crontab UI。 +``` +$ npm install -g crontab-ui + +``` + +就是这么简单,下面继续来看看在 Crontab UI 上如何管理 Cron 定时任务。 + +### 在 Linux 上安全轻松管理 Cron 定时任务 + +执行这一条命令启动 Crontab UI: +``` +$ crontab-ui + +``` + +你会看到这样的输出: +``` +Node version: 10.8.0 +Crontab UI is running at http://127.0.0.1:8000 + +``` + +首先在你的防火墙和路由器上放开 8000 端口,然后打开浏览器访问 ****。 + +注意,默认只有在本地才能访问到 Crontab UI 的控制台页面。但如果你想让 
Crontab UI 使用系统的 IP 地址和自定义端口,也就是想让其它机器也访问到本地的 Crontab UI,你需要使用以下这个命令: +``` +$ HOST=0.0.0.0 PORT=9000 crontab-ui +Node version: 10.8.0 +Crontab UI is running at http://0.0.0.0:9000 + +``` + +Crontab UI 就能够通过 :9000 这样的 URL 被远程机器访问到了。 + +Crontab UI 的控制台页面长这样: + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard.png) + +从上面的截图就可以看到,Crontab UI 的界面非常简洁,所有选项的含义都能不言自明。 + +输入 `Ctrl + C` 就可以关闭 Crontab UI。 + +**创建、编辑、运行、停止、删除 Cron 定时任务** + +点击“New”,输入 Cron 定时任务的信息并点击“Save”保存,就可以创建一个新的 Cron 定时任务了。 + + 1. 为 Cron 定时任务命名,这是可选的; + 2. 你想要执行的完整命令; + 3. 设定计划执行的时间。你可以按照启动、每时、每日、每周、每月、每年这些指标快速指定计划任务,也可以明确指定任务执行的具体时间。指定好计划时间后,**Jobs** 区域就会显示 Cron 定时任务的句式。 + 4. 选择是否为某个 Cron 定时任务记录错误日志。 + + + +这是我的一个 Cron 定时任务样例。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/create-new-cron-job.png) + +如你所见,我设置了一个每月清理 `pacman` 缓存的 Cron 定时任务。你也可以设置多个 Cron 定时任务,都能在控制台页面看到。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard-1.png) + +如果你需要更改 Cron 定时任务中的某些参数,只需要点击 **Edit** 按钮并按照你的需求更改对应的参数。点击 **Run** 按钮可以立即执行 Cron 定时任务,点击 **Stop** 则可以立即停止 Cron 定时任务。如果想要查看某个 Cron 定时任务的详细日志,可以点击 **Log** 按钮。对于不再需要的 Cron 定时任务,就可以按 **Delete** 按钮删除。 + +**备份 Cron 定时任务** + +点击控制台页面的 **Backup** 按钮并确认,就可以备份所有 Cron 定时任务。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/backup-cron-jobs.png) + +备份之后,一旦 Crontab 文件出现了错误,就可以使用备份来恢复了。 + +**导入/导出其它机器上的 Cron 定时任务** + +Crontab UI 还有一个令人注目的功能,就是导入、导出、部署其它机器上的 Cron 定时任务。如果同一个网络里的多台机器都需要执行同样的 Cron 定时任务,只需要点击 **Export** 按钮并选择文件的保存路径,所有的 Cron 定时任务都会导出到 `crontab.db` 文件中。 + +以下是 `crontab.db` 文件的内容: +``` +$ cat Downloads/crontab.db +{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"} + +``` + +导出成文件以后,你就可以把这个 `crontab.db` 文件放置到其它机器上并导入成 Cron 定时任务,而不需要在每一台主机上手动设置 Cron 定时任务。总之,在一台机器上设置完,导出,再导入到其他机器,就完事了。 + +**在 
Crontab 文件获取/保存 Cron 定时任务** + +你可能在使用 Crontab UI 之前就已经使用 `crontab` 命令创建过 Cron 定时任务。如果是这样,你可以点击控制台页面上的 **Get from crontab** 按钮来获取已有的 Cron 定时任务。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/get-from-crontab.png) + +同样地,你也可以使用 Crontab UI 来将新的 Cron 定时任务保存到 Crontab 文件中,只需要点击 **Save to crontab** 按钮就可以了。 + +管理 Cron 定时任务并没有想象中那么难,即使是新手使用 Crontab UI 也能轻松管理 Cron 定时任务。赶快开始尝试并发表一下你的看法吧。 + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/ +[2]:https://www.ostechnix.com/install-node-js-linux/ + From 84b78f07ef67243ad46548fc1491e384c20ec274 Mon Sep 17 00:00:00 2001 From: pityonline Date: Tue, 25 Sep 2018 23:47:00 +0800 Subject: [PATCH 173/736] =?UTF-8?q?PRF:=20#10300=20=E5=AE=8C=E6=88=90?= =?UTF-8?q?=E6=A0=A1=E5=AF=B9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../talk/20180117 How to get into DevOps.md | 52 +++++++++---------- 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/translated/talk/20180117 How to get into DevOps.md b/translated/talk/20180117 How to get into DevOps.md index bd9172b468..6efd6976d5 100644 --- a/translated/talk/20180117 How to get into DevOps.md +++ b/translated/talk/20180117 How to get into DevOps.md @@ -11,11 +11,11 @@ DevOps 实践指南 了解历史是搞清楚未来的关键,DevOps 也不例外。想搞清楚 DevOps 运动的普及和流行,去了解一下上世纪 90 年代后期和 21 世纪前十年 IT 的情况会有帮助。这是我的经验。 -我的第一份工作是在一家大型跨国金融服务公司做 Windows 系统管理员。当时给计算资源扩容需要给 Dell 打电话(或者像我们公司那样打给 CDW),并下一个价值数十万美元的订单,包含服务器、网络设备、电缆和软件,所有这些都要运到在线或离线的数据中心去。虽然 VMware 
仍在尝试说服企业使用虚拟机运行他们的“性能敏感”型程序是更划算的,但是包括我们在内的很多公司都还忠于使用他们的物理机运行应用。 +我的第一份工作是在一家大型跨国金融服务公司做 Windows 系统管理员。当时给计算资源扩容需要给 Dell 打电话(或者像我们公司那样打给 CDW),并下一个价值数十万美元的订单,包含服务器、网络设备、电缆和软件,所有这些都要运到生产或线下的数据中心去。虽然 VMware 仍在尝试说服企业使用虚拟机运行他们的“性能敏感”型程序是更划算的,但是包括我们在内的很多公司都还忠于使用他们的物理机运行应用。 -在我们技术部门,有一个专门做数据中心工程和操作的完整团队,他们的工作包括价格谈判,让荒唐的租赁月费能够下降一点点,还包括保证我们的系统能够正常冷却(如果设备太多,这个事情的难度会呈指数增长)。如果这个团队足够幸运足够有钱,境外数据中心的工作人员对我们所有的服务器型号又都有足够的了解,就能避免在盘后交易中不小心扯错东西。那时候亚马逊 AWS 和 Rackspace 逐渐开始加速扩张,但还远远没到临界规模。 +在我们技术部门,有一个专门做数据中心工程和运营的团队,他们的工作包括价格谈判,让荒唐的月租能够降一点点,还包括保证我们的系统能够正常冷却(如果设备太多,这个事情的难度会呈指数增长)。如果这个团队足够幸运足够有钱,境外数据中心的工作人员对我们所有的服务器型号又都有足够的了解,就能避免在盘后交易中不小心搞错东西。那时候亚马逊 AWS 和 Rackspace 逐渐开始加速扩张,但还远远没到临界规模。 -当时我们还有专门的团队来保证硬件上运行着的操作系统和软件能够按照预期工作。这些工程师负责设计可靠的架构以方便给系统打补丁,监控和报警,还要定义基础镜像gold image的内容。这些大都是通过很多手工实验完成的,很多手工实验是为了编写一个运行说明书runbook来描述要做的事情,并确保按照它执行后的结果确实在预期内。在我们这么大的组织里,这样做很重要,因为一线和二线的技术支持都是境外的,而他们的培训内容只覆盖到了这些运行说明而已。 +当时我们还有专门的团队来保证硬件上运行着的操作系统和软件能够按照预期工作。这些工程师负责设计可靠的架构以方便给系统打补丁,监控和报警,还要定义基础镜像gold image的内容。这些大都是通过很多手工实验完成的,很多手工实验是为了编写一个运行说明书runbook来描述要做的事情,并确保按照它执行后的结果确实在预期内。在我们这么大的组织里,这样做很重要,因为一线和二线的技术支持都是境外的,而他们的培训内容只覆盖到了这些运行说明而已。 (这是我职业生涯前三年的世界。我那时候的梦想是成为制定金本位制的人!) 
@@ -23,28 +23,28 @@ DevOps 实践指南 * 开发人员按照技术和功能需求来编写代码,这些需求来自于业务分析人员的会议,但是会议并没有邀请开发人员参加。 * 开发人员可以选择为他们的代码编写单元测试,以确保在代码里没有任何明显的疯狂行为,比如除以 0 但不抛出异常。 -* 然后开发者会把他们的代码标记为“Ready for QA”(准备好了接受测试),质量保障的成员会把这个版本的代码发布到他们自己的环境中,这个环境和生产环境可能相似,也可能不相似,甚至和开发环境相比也不一定相似。 -* 故障会在几天或者几个星期内反馈到开发人员那里,这个时长取决于其他业务活动和优先事项。 +* 然后开发者会把他们的代码标记为“Ready for QA”(准备好了接受测试),质量保障的成员会把这个版本的代码发布到他们自己的环境中,这个环境和生产环境可能相似,也可能不,甚至和开发环境相比也不一定相似。 +* 故障会在几天或者几个星期内反馈到开发人员那里,这个时长取决于其它业务活动和优先事项。 -虽然系统管理员和开发人员经常有不一致的意见,但是对“变更管理”的痛恨却是一致的。变更管理由高度规范的(就我当时的雇主而言)和非常有必要的规则和程序组成,用来管理一家公司应该什么时候做技术变更,以及如何做。很多公司都按照 [ITIL][4] 来操作,简单的说,ITIL 问了很多和事情发生的原因、时间、地点和方式相关的问题,而且提供了一个过程,对产生最终答案的决定做审计跟踪。 +虽然系统管理员和开发人员经常有不一致的意见,但是对“变更管理”却一致痛恨。变更管理由高度规范的(就我当时的雇主而言)和非常必要的规则和程序组成,用来管理一家公司应该什么时候做技术变更,以及如何做。很多公司都按照 [ITIL][4] 来操作,简单的说,ITIL 问了很多和事情发生的原因、时间、地点和方式相关的问题,而且提供了一个过程,对产生最终答案的决定做审计跟踪。 你可能从我的简短历史课上了解到,当时 IT 的很多很多事情都是手工完成的。这导致了很多错误。错误又导致了很多财产损失。变更管理的工作就是尽量减少这些损失,它常常以这样的形式出现:不管变更的影响和规模大小,每两周才能发布部署一次。周五下午 4 点到周一早上 5 点 59 分这段时间,需要排队等候发布窗口。(讽刺的是,这种流程导致了更多错误,通常还是更严重的那种错误) ### DevOps 不是专家团 -你可能在想 “Carlos 你在讲啥啊,什么时候才能说到 Ansible playbooks?”,我热爱 Ansible,但是请再等一会;下面这些很重要。 +你可能在想 “Carlos 你在讲啥啊,什么时候才能说到 Ansible playbooks?”,我喜欢 Ansible,但是请稍等 —— 下面这些很重要。 -你有没有过被分配到过需要跟 DevOps 小组打交道的项目?你有没有依赖过“配置管理”或者“持续集成/持续交付”小组来保证业务流水线设置正确?你有没有在代码开发完的数周之后才参加发布部署的会议? +你有没有过被分配到需要跟 DevOps 小组打交道的项目?你有没有依赖过“配置管理”或者“持续集成/持续交付”小组来保证业务流水线设置正确?你有没有在代码开发完的数周之后才参加发布部署的会议? 
如果有过,那么你就是在重温历史,这个历史是由上面所有这些导致的。 -出于本能,我们喜欢和像自己的人一起工作,这会导致[筒仓][5]的行成。很自然,这种人类特质也会在工作场所表现出来是不足为奇的。我甚至在一个 250 人的创业公司里见到过这样的现象,当时我在那里工作。刚开始的时候,开发人员都在聚在一起工作,彼此深度协作。随着代码变得复杂,开发相同功能的人自然就坐到了一起,解决他们自己的复杂问题。然后按功能划分的小组很快就正式形成了。 +出于本能,我们喜欢和像自己的人一起工作,这会导致[壁垒][5]的形成。很自然,这种人类特质也会在工作场所表现出来是不足为奇的。我甚至在曾经工作过的一个 250 人的创业公司里见到过这样的现象。刚开始的时候,开发人员都在聚在一起工作,彼此深度协作。随着代码变得复杂,开发相同功能的人自然就坐到了一起,解决他们自己的复杂问题。然后按功能划分的小组很快就正式形成了。 -在我工作过的很多公司里,系统管理员和开发人员不仅像这样形成了天然的筒仓,而且彼此还有激烈的对抗。开发人员的环境出问题了或者他们的权限太小了,就会对系统管理员很恼火。系统管理员怪开发者无时不刻的不在用各种方式破坏他们的环境,怪开发人员申请的计算资源严重超过他们的需要。双方都不理解对方,更糟糕的是,双方都不愿意去理解对方。 +在我工作过的很多公司里,系统管理员和开发人员不仅像这样形成了天然的壁垒,而且彼此还有激烈的对抗。开发人员的环境出问题了或者他们的权限太小了,就会对系统管理员很恼火。系统管理员怪开发人员无时无刻地在用各种方式破坏他们的环境,怪开发人员申请的计算资源严重超过他们的需要。双方都不理解对方,更糟糕的是,双方都不愿意去理解对方。 -大部分开发人员对操作系统,内核或计算机硬件都不感兴趣。同样的,大部分系统管理员,即使是 Linux 的系统管理员,也都不愿意学习编写代码,他们在大学期间学过一些 C 语言,然后就痛恨它,并且永远都不想再碰 IDE。所以,开发人员把运行环境的问题甩给围墙外的系统管理员,系统管理员把这些问题和甩过来的其他上百个问题放在一起,做一个优先级安排。每个人都很忙,心怀怨恨的等待着。DevOps 的目的就是解决这种矛盾。 +大部分开发人员对操作系统,内核或计算机硬件都不感兴趣。同样,大部分系统管理员,即使是 Linux 的系统管理员,也都不愿意学习编写代码,他们在大学期间学过一些 C 语言,然后就痛恨它,并且永远都不想再碰 IDE。所以,开发人员把运行环境的问题甩给围墙外的系统管理员,系统管理员把这些问题和甩过来的其它上百个问题放在一起安排优先级。每个人都忙于怨恨对方。DevOps 的目的就是解决这种矛盾。 -DevOps 不是一个团队,CI/CD 也不是 Jira 系统的一个用户组。DevOps 是一种思考方式。根据这个运动来看,在理想的世界里,开发人员、系统管理员和业务相关人将作为一个团队工作。虽然他们可能不完全了解彼此的世界,可能没有足够的知识去了解彼此的积压任务,但他们在大多数情况下能有一致的看法。 +DevOps 不是一个团队,CI/CD 也不是 JIRA 系统的一个用户组。DevOps 是一种思考方式。根据这个运动来看,在理想的世界里,开发人员、系统管理员和业务相关人将作为一个团队工作。虽然他们可能不完全了解彼此的世界,可能没有足够的知识去了解彼此的积压任务,但他们在大多数情况下能有一致的看法。 把所有基础设施和业务逻辑都代码化,再串到一个发布部署流水线里,就像是运行在这之上的应用一样。这个理念的基础就是 DevOps。因为大家都理解彼此,所以人人都是赢家。聊天机器人和易用的监控工具、可视化工具的兴起,背后的基础也是 DevOps。 @@ -52,30 +52,30 @@ DevOps 不是一个团队,CI/CD 也不是 Jira 系统的一个用户组。DevO ### 要实践 DevOps 我需要知道些什么 -我经常被问到这个问题,它的答案,和同属于开放式的其他大部分问题一样:视情况而定。 +我经常被问到这个问题,它的答案和同属于开放式的其它大部分问题一样:视情况而定。 -现在“DevOps 工程师”在不同的公司有不同的含义。在软件开发人员比较多但是很少有人懂基础设施的小公司,他们很可能是在找有更多系统管理经验的人。而其他公司,通常是大公司或老公司或又大又老的公司,已经有一个稳固的系统管理团队了,他们在向类似于谷歌 [SRE][7] 的方向做优化,也就是“设计操作功能的软件工程师”。但是,这并不是金科玉律,就像其他技术类工作一样,这个决定很大程度上取决于他的招聘经理。 +现在“DevOps 
工程师”在不同的公司有不同的含义。在软件开发人员比较多但是很少有人懂基础设施的小公司,他们很可能是在找有更多系统管理经验的人。而其他公司,通常是大公司或老公司,已经有一个稳固的系统管理团队了,他们在向类似于谷歌 [SRE][7] 的方向做优化,也就是“设计操作功能的软件工程师”。但是,这并不是金科玉律,就像其它技术类工作一样,这个决定很大程度上取决于他的招聘经理。 也就是说,我们一般是在找对深入学习以下内容感兴趣的工程师: -* 如何管理和设计安全、可扩展的云上的平台(通常是在 AWS 上,不过微软的 Azure,谷歌的 Cloud Platform,还有 DigitalOcean 和 Heroku 这样的 PaaS 提供商,也都很流行) -* 如何用流行的 [CI/CD][8] 工具,比如 Jenkins,Gocd,还有基于云的 Travis CI 或者 CircleCI,来构造一条优化的发布部署流水线,和发布部署策略。 -* 如何在你的系统中使用基于时间序列的工具,比如 Kibana,Grafana,Splunk,Loggly 或者 Logstash,来监控,记录,并在变化的时候报警,还有 -* 如何使用配置管理工具,例如 Chef,Puppet 或者 Ansible 做到“基础设施即代码”,以及如何使用像 Terraform 或 CloudFormation 的工具发布这些基础设施。 +* 如何管理和设计安全、可扩展的云平台(通常是在 AWS 上,不过微软的 Azure,Google Cloud Platform,还有 DigitalOcean 和 Heroku 这样的 PaaS 提供商,也都很流行)。 +* 如何用流行的 [CI/CD][8] 工具,比如 Jenkins,GoCD,还有基于云的 Travis CI 或者 CircleCI,来构造一条优化的发布部署流水线和发布部署策略。 +* 如何在你的系统中使用基于时间序列的工具,比如 Kibana,Grafana,Splunk,Loggly 或者 Logstash 来监控,记录,并在变化的时候报警。 +* 如何使用配置管理工具,例如 Chef,Puppet 或者 Ansible 做到“基础设施即代码”,以及如何使用像 Terraform 或 CloudFormation 的工具发布这些基础设施。 容器也变得越来越受欢迎。尽管有人对大规模使用 Docker 的现状[表示不满][9],但容器正迅速地成为一种很好的方式来实现在更少的操作系统上运行超高密度的服务和应用,同时提高它们的可靠性。(像 Kubernetes 或者 Mesos 这样的容器编排工具,能在宿主机故障的时候,几秒钟之内重新启动新的容器。)考虑到这些,掌握 Docker 或者 rkt 以及容器编排平台的知识会对你大有帮助。 -如果你是希望做 DevOps 实践的系统管理员,你还需要知道如何写代码。Python 和 Ruby 是 DevOps 领域的流行语言,因为他们是可移植的(也就是说可以在任何操作系统上运行),快速的,而且易读易学。它们还支撑着这个行业最流行的配置管理工具(Ansible 是使用 Python 写的,Chef 和 Puppet 是使用 Ruby 写的)以及云平台的 API 客户端(亚马逊 AWS,微软 Azure,谷歌 Cloud Platform 的客户端通常会提供 Python 和 Ruby 语言的版本)。 +如果你是希望做 DevOps 实践的系统管理员,你还需要知道如何写代码。Python 和 Ruby 是 DevOps 领域的流行语言,因为它们是可移植的(也就是说可以在任何操作系统上运行),快速的,而且易读易学。它们还支撑着这个行业最流行的配置管理工具(Ansible 是使用 Python 写的,Chef 和 Puppet 是使用 Ruby 写的)以及云平台的 API 客户端(亚马逊 AWS,微软 Azure,Google Cloud Platform 的客户端通常会提供 Python 和 Ruby 语言的版本)。 -如果你是开发人员,也希望做 DevOps 的实践,我强烈建议你去学习 Unix,Windows 操作系统以及网络基础知识。虽然云计算把很多系统管理的难题抽象化了,但是对慢应用的性能做 debug 的时候,你知道操作系统如何工作的就会有很大的帮助。下文包含了一些这个主题的图书。 +如果你是开发人员,也希望做 DevOps 的实践,我强烈建议你去学习 Unix,Windows 操作系统以及网络基础知识。虽然云计算把很多系统管理的难题抽象化了,但是对应用的性能做 debug 的时候,如果你知道操作系统如何工作的就会有很大的帮助。下文包含了一些这个主题的图书。 
-如果你觉得这些东西听起来内容太多,大家都是这么想的。幸运的是,有很多小项目可以让你开始探索。其中一个启动项目是 Gary Stafford 的[选举服务](https://github.com/garystafford/voter-service),一个基于 Java 的简单投票平台。我们要求面试候选人通过一个流水线将该服务从 GitHub 部署到生产环境基础设施上。你可以把这个服务与 Rob Mile 写的了不起的 DevOps [入门教程](https://github.com/maxamg/cd-office-hours)结合起来,学习如何编写流水线。 +如果你觉得这些东西听起来内容太多,没关系,大家都是这么想的。幸运的是,有很多小项目可以让你开始探索。其中一个项目是 Gary Stafford 的[选举服务](https://github.com/garystafford/voter-service),一个基于 Java 的简单投票平台。我们要求面试候选人通过一个流水线将该服务从 GitHub 部署到生产环境基础设施上。你可以把这个服务与 Rob Mile 写的了不起的 DevOps [入门教程](https://github.com/maxamg/cd-office-hours)结合起来学习。 还有一个熟悉这些工具的好方法,找一个流行的服务,然后只使用 AWS 和配置管理工具来搭建这个服务所需要的基础设施。第一次先手动搭建,了解清楚要做的事情,然后只用 CloudFormation(或者 Terraform)和 Ansible 重写刚才的手动操作。令人惊讶的是,这就是我们基础设施开发人员为客户所做的大部分日常工作,我们的客户认为这样的工作非常有意义! ### 需要读的书 -如果你在找 DevOps 的其他资源,下面这些理论和技术书籍值得一读。 +如果你在找 DevOps 的其它资源,下面这些理论和技术书籍值得一读。 #### 理论书籍 @@ -84,17 +84,17 @@ DevOps 不是一个团队,CI/CD 也不是 Jira 系统的一个用户组。DevO * Tom DeMarco 和 Tim Lister 合著的 [Peopleware(人件)][12]。管理工程师团队的经典图书,有一点过时,但仍然很有价值。 * Tom Limoncelli 写的 [Time Management for System Administrators(时间管理:给系统管理员)][13]。这本书主要面向系统管理员,它对很多大型组织内的系统管理员生活做了深入的展示。如果你想了解更多系统管理员和开发人员之间的冲突,这本书可能解释了更多。 * Eric Ries 写的 [The Lean Startup(精益创业)][14]。描述了 Eric 自己的 3D 虚拟形象公司,IMVU,发现了如何精益工作,快速失败和更快盈利。 -* Jez Humble 和他的朋友写的[Lean Enterprise(精益企业)][15]。这本书是对精益创业做的改编,以更适应企业,两本书都很棒,都很好的解释了 DevOps 背后的商业动机。 +* Jez Humble 和他的朋友写的 [Lean Enterprise(精益企业)][15]。这本书是对精益创业做的改编,以更适应企业,两本书都很棒,都很好地解释了 DevOps 背后的商业动机。 * Kief Morris 写的 [Infrastructure As Code(基础设施即代码)][16]。关于“基础设施即代码”的非常好的入门读物!很好的解释了为什么所有公司都有必要采纳这种做法。 -* Betsy Beyer、Chris Jones、Jennifer Petoff 和 Niall Richard Murphy 合著的 [Site Reliability Engineering(站点可靠性工程师)][17]。一本解释谷歌 SRE 实践的书,也因为是“DevOps 诞生之前的 DevOps”被人熟知。在如何处理运行时间、时延和保持工程师快乐方面提供了有趣的看法。 +* Betsy Beyer、Chris Jones、Jennifer Petoff 和 Niall Richard Murphy 合著的 [Site Reliability Engineering(站点可靠性工程师)][17]。一本解释谷歌 SRE 实践的书,也因为是“DevOps 诞生之前的 DevOps”被人熟知。在如何处理运行时间、时延和保持工程师快乐方面提供了有意思的看法。 #### 技术书籍 如果你想找的是让你直接跟代码打交道的书,看这里就对了。 -* W. 
Richard Stevens 的 [TCP/IP Illustrated(TCP/IP 详解)][18]。这是一套经典的(也可以说是最全面的)讲解基本网络协议的巨著,重点介绍了 TCP/IP 协议族。如果你听说过 1,2, 3,4 层网络,而且对深入学习他们感兴趣,那么你需要这本书。 +* W. Richard Stevens 的 [TCP/IP Illustrated(TCP/IP 详解)][18]。这是一套经典的(也可以说是最全面的)讲解网络协议基础的巨著,重点介绍了 TCP/IP 协议族。如果你听说过 1,2,3,4 层网络,而且对深入学习它们感兴趣,那么你需要这本书。 * Evi Nemeth、Trent Hein 和 Ben Whaley 合著的 [UNIX and Linux System Administration Handbook(UNIX/Linux 系统管理员手册)][19]。一本很好的入门书,介绍 Linux/Unix 如何工作以及如何使用。 -* Don Jones 和 Jeffrey Hicks 合著的 [Learn Windows Powershell In A Month of Lunches(Windows PowerShell实战指南)][20]。如果你在 Windows 系统下做自动化任务,你需要学习怎么使用 Powershell。这本书能够帮助你。Don Jones 是这方面著名的 MVP。 +* Don Jones 和 Jeffrey Hicks 合著的 [Learn Windows Powershell In A Month of Lunches(Windows PowerShell 实战指南)][20]。如果你在 Windows 系统下做自动化任务,你需要学习怎么使用 Powershell。这本书能够帮助你。Don Jones 是这方面著名的 MVP。 * 几乎所有 [James Turnbull][21] 写的东西,针对流行的 DevOps 工具,他发表了很好的技术入门读物。 不管是在那些把所有应用都直接部署在物理机上的公司,(现在很多公司仍然有充分的理由这样做)还是在那些把所有应用都做成 serverless 的先驱公司,DevOps 都很可能会持续下去。这部分工作很有趣,产出也很有影响力,而且最重要的是,它搭起桥梁衔接了技术和业务之间的缺口。DevOps 是一个值得期待的美好事物。 From 8c297314426f11aa7411a099bc9adbf385ee8bd9 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 2 Oct 2018 12:16:46 +0800 Subject: [PATCH 174/736] PRF:20180917 Getting started with openmediavault- A home NAS solution.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @jamelouis 翻译完再润色一下更好 --- ...ith openmediavault- A home NAS solution.md | 45 ++++++++++--------- 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/translated/tech/20180917 Getting started with openmediavault- A home NAS solution.md b/translated/tech/20180917 Getting started with openmediavault- A home NAS solution.md index 833180811a..0d5d00ca74 100644 --- a/translated/tech/20180917 Getting started with openmediavault- A home NAS solution.md +++ b/translated/tech/20180917 Getting started with openmediavault- A home NAS solution.md @@ -1,52 +1,53 @@ -openmediavault入门:一个家庭NAS解决方案 +openmediavault 入门:一个家庭 NAS 解决方案 ====== 
-这个网络附加文件服务提供了一序列功能,并且易于安装和配置。 + +> 这个网络附属文件服务提供了一系列可靠的功能,并且易于安装和配置。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS) -面对许多可供选择的云存储方案,一些人可能会质疑一个家庭网络附加存储服务的价值。毕竟,当所有你的文件存储在云上,你不需要为你自己云服务的维护,更新,和安全担忧。 +面对许多可供选择的云存储方案,一些人可能会质疑一个家庭 NAS(网络附属存储network-attached storage)服务器的价值。毕竟,当所有你的文件存储在云上,你就不需要为你自己云服务的维护、更新和安全担忧。 -但是,这不完全对,是不是?你有一个家庭网络,所以你不得不负责维护网络的健康和安全。假定你已经维护一个家庭网络,那么[一个家庭NAS][1]并不会增加额外负担。反而你能从少量的工作中得到许多的好处。 +但是,这不完全对,是不是?你有一个家庭网络,所以你已经要负责维护网络的健康和安全。假定你已经维护一个家庭网络,那么[一个家庭 NAS][1]并不会增加额外负担。反而你能从少量的工作中得到许多的好处。 -你可以为你家里所有的计算机备份(你也可以备份离线网站).构架一个存储电影,音乐和照片的媒体服务器,无需担心网络连接是否连接。在家里的多台计算机处理大型文件,不需要等待从网络其他随机的计算机传输这些文件过来。另外,可以让NAS与其他服务一起进行双重任务,如托管本地邮件或者家庭Wiki。也许最重要的是,构架家庭NAS,数据完全是你的,始终在控制下和随时可访问的。 +你可以为你家里所有的计算机进行备份(你也可以备份到其它地方)。构架一个存储电影、音乐和照片的媒体服务器,无需担心互联网连接是否连通。在家里的多台计算机上处理大型文件,不需要等待从互联网某个其它计算机传输这些文件过来。另外,可以让 NAS 与其他服务配合工作,如托管本地邮件或者家庭 Wiki。也许最重要的是,构架家庭 NAS,数据完全是你的,它始终处于在控制下,随时可访问。 -接下来的问题是如何选择NAS方案。当然,你可以购买预先建立的解决方案,并在某一天打电话购买,但是这会有什么乐趣呢?实际上,尽管拥有一个能处理一切的设备很棒,但最好还是有一个可以修复和升级的钻机。这是一个我近期发现的解决方案。我选择安装和配置[openmediavault][2]。 +接下来的问题是如何选择 NAS 方案。当然,你可以购买预先搭建好的商品,并在一天内搞定,但是这会有什么乐趣呢?实际上,尽管拥有一个能为你搞定一切的设备很棒,但是有一个可以修复和升级的钻机平台更棒。这就我近期的需求,我选择安装和配置 [openmediavault][2]。 -### 为什么选择openmediavault? +### 为什么选择 openmediavault? 
-市面上有不少开源的NAS解决方案,其中有些无可争议的比openmediavault流行。当我询问周遭,例如,[freeNAS][3]最常被推荐给我。那么为什么我不采纳他们的建议呢?毕竟,它被大范围的使用,包含很多的功能,并且提供许多支持选项,[基于FreeNAS官网的一份对比数据][4]。当然这些全部是对的。但是openmediavault也不差。它是基于FreeNAS早期版本,虽然它在下载和功能方面的数量较低,但是对于我的需求而言,它已经相当足够了。 +市面上有不少开源的 NAS 解决方案,其中有些肯定比 openmediavault 流行。当我询问周遭,例如 [freeNAS][3] 这样的最常被推荐给我。那么为什么我不采纳他们的建议呢?毕竟,用它的人更多。[基于 FreeNAS 官网的一份对比数据][4],它包含了很多的功能,并且提供许多支持选项。这当然都对。但是 openmediavault 也不差。它实际上是基于 FreeNAS 早期版本的,虽然它在下载量和功能方面较少,但是对于我的需求而言,它已经相当足够了。 -另外一个因素是它让我感到很舒适。openmediavault的底层操作系统是[Debian][5],然而FreeNAS是[FreeBSD][6]。由于我个人对FressBSD不是很熟悉,因此如果我的NAS出现故障,必定会很难在FreeBSD上修复故障。同样的,也会让我觉得很难微调配置或添加服务到机器上。当然,我可以学习FreeBSD和更熟悉它,但是我已经在家里构架了这个NAS;我发现,如果限制给定自己完成构建NAS的“学习机会”的数量,构建NAS往往会更成功。 +另外一个因素是它让我感到很舒适。openmediavault 的底层操作系统是 [Debian][5],然而 FreeNAS 是 [FreeBSD][6]。由于我个人对 FreeBSD 不是很熟悉,因此如果我的 NAS 出现故障,必定难于在 FreeBSD 上修复故障。同样的,也会让我觉得难于优化或添加一些服务到这个机器上。当然,我可以学习 FreeBSD 以更熟悉它,但是我已经在家里构架了这个 NAS;我发现,如果完成它只需要较少的“学习机会”,那么构建 NAS 往往会更成功。 -当然,每个情况都不同,所以你要自己调研,然后作出最适合自己方案的决定。FreeNAS对于许多人似乎都是不错的解决方案。Openmediavault正是适合我的解决方案。 +当然,每个人情况都不同,所以你要自己调研,然后作出最适合自己方案的决定。FreeNAS 对于许多人似乎都是不错的解决方案。openmediavault 正是适合我的解决方案。 ### 安装与配置 -在[openmediavault文档]里详细记录了安装步骤,所以我不在这里重述了。如果你曾经安装过任何一个linux版本,大部分安装步骤都是很类似的(虽然在相对丑陋的[Ucurses][9]界面,不像你可能在现代版本的相对美观的安装界面)。我通过使用[专用驱动器][9]指令来安装它。然而,这些指令不但很好,而且相当精炼的。当你搞定这些指令,你安装了一个基本的系统,但是你还需要做很多才能真正构建好NAS来存储任何文件。例如,专用驱动器指令在硬盘驱动上安装openmediavault,但那是操作系统的驱动,而不是和网络上其他计算机共享空间的那个驱动。你需要自己把这些建立起来并且配置好。 +在 [openmediavault 文档][7]里详细记录了安装步骤,所以我不在这里重述了。如果你曾经安装过任何一个 Linux 发行版,大部分安装步骤都是很类似的(虽然是在相对丑陋的 [Ncurses][8] 界面,而不像你或许在现代发行版里见到的)。我按照 [专用的驱动器][9] 的说明来安装它。这些说明不但很好,而且相当精炼的。当你搞定这些步骤,就安装好了一个基本的系统,但是你还需要做更多才能真正构建好 NAS 来存储各种文件。例如,专用驱动器方式需要在硬盘驱动器上安装 openmediavault,但那是指你的操作系统的驱动器,而不是和网络上其他计算机共享的驱动器。你需要自己把这些建立起来并且配置好。 -你要做的第一件事是加载用来管理的网页界面和修改默认密码。这个密码和之前你安装过程设置的根密码是不同的。这是网页洁面的管理员账号,和默认的账户和密码分别是 `admin` 和 `openmediavault`,当你登入后自然而然地会修改这些配置属性。 +你要做的第一件事是加载用来管理的网页界面,并修改默认密码。这个密码和之前你安装过程设置的 root 密码是不同的。这是网页界面的管理员账号,默认的账户和密码分别是 `admin` 和 `openmediavault`,当你登入后要马上修改。 -#### 设置你的驱动 
+#### 设置你的驱动器 -一旦你安装好openmediavault,你需要它为你做一些工作。逻辑上的第一个步骤是设置好你即将用来作为存储的驱动。在这里,我假定你已经物理上安装好它们了,所以接下来你要做的就是让openmediavault识别和配置它们。第一步是确保这些磁盘是可见的。侧边栏菜单有很多选项,而且被精心的归类了。选择**存储 - > 磁盘**。一旦你点击该菜单,你应该能够看到所有你已经安装到该服务器的驱动,包括那个你已经用来安装openmediavault的驱动。如果你没有在那里看到所有驱动,点击扫描按钮去看它能够接载它们。通常,这不会是一个问题。 +一旦你安装好 openmediavault,你需要它为你做一些工作。逻辑上的第一个步骤是设置好你即将用来作为存储的驱动器。在这里,我假定你已经物理上安装好它们了,所以接下来你要做的就是让 openmediavault 识别和配置它们。第一步是确保这些磁盘是可见的。侧边栏菜单有很多选项,而且被精心的归类了。选择“Storage -> Disks”。一旦你点击该菜单,你应该能够看到所有你已经安装到该服务器的驱动,包括那个你已经用来安装 openmediavault 的驱动器。如果你没有在那里看到所有驱动器,点击“Scan”按钮去看是否能够挂载它们。通常,这不会是一个问题。 -当你的文件共享时,你可以独立的挂载和设置这些驱动,但是对于一个文件服务器,你将想要一些冗余驱动。你想要能够把很多驱动当作一个单一卷和能够在某一个驱动出现故障或者空间不足下安装新驱动的情况下恢复你的数据。这意味你将需要一个[RAID][10]。你想要的什么特定类型的RAID的主题是一个深深的兔子洞,是一个值得另写一片文章专门来讲述它(而且已经有很多关于该主题的文章了),但是简而言之是你将需要不仅仅一个驱动和最好的情况下,你的所有驱动都存储一样数量的数据。 +你可以独立的挂载和设置这些驱动器用于文件共享,但是对于一个文件服务器,你会想要一些冗余。你想要能够把很多驱动器当作一个单一卷,并能够在某一个驱动器出现故障时恢复你的数据,或者空间不足时安装新驱动器。这意味你将需要一个 [RAID][10]。你想要的什么特定类型的 RAID 的这个主题是一个大坑,值得另写一篇文章专门来讲述它(而且已经有很多关于该主题的文章了),但是简而言之是你将需要不止一个驱动器,最好的情况下,你所有的驱动都存储一样的容量。 -openmedia支持所有标准的RAID级别,所以多了解RAID对你很有好处的。可以在**存储 - > RAID管理**配置你的RAID。配置是相当简单:点击创建按钮,在你的RAID阵列里选择你想要的磁盘和你想要使用的RAID级别,和给这个阵列一个名字。openmediavault为你处理剩下的工作。没有混乱的命令行,试图记住‘mdadm'命令的一些标志参数。在我特别的例子,我有六个2TB驱动,并被设置为RAID 10. 
+openmediavault 支持所有标准的 RAID 级别,所以这里很简单。可以在“Storage -> RAID Management”里配置你的 RAID。配置是相当简单的:点击“Create”按钮,在你的 RAID 阵列里选择你想要的磁盘和你想要使用的 RAID 级别,并给这个阵列一个名字。openmediavault 为你处理剩下的工作。这里没有复杂的命令行,也不需要试图记住 `mdadm` 命令的一些选项参数。在我的例子,我有六个 2TB 驱动器,设置成了 RAID 10。 -当你的RAID构建好了,基本上你已经有一个地方可以存储东西了。你仅仅需要设置一个文件系统。正如你的桌面系统,一个硬盘驱动在没有格式化情况下是没什么用处的。所以下一个你要去的地方的是位于openmediavault控制面板里的 **存储 - > 文件系统**。和配置你的RAID一样,点击创建按钮,然后跟着提示操作。如果你只有一个RAID在你的服务器上,你应该可以看到一个像 `md0`的东西。你也需要选择文件系统的类别。如果你不能确定,选择标准的ext4类型即可。 +当你的 RAID 构建好了,基本上你已经有一个地方可以存储东西了。你仅仅需要设置一个文件系统。正如你的桌面系统,一个硬盘驱动器在没有格式化的情况下是没什么用处的。所以下一个你要去的地方的是位于 openmediavault 控制面板里的“Storage -> File Systems”。和配置你的 RAID 一样,点击“Create”按钮,然后跟着提示操作。如果你在你的服务器上只有一个 RAID ,你应该可以看到一个像 `md0` 的东西。你也需要选择文件系统的类别。如果你不能确定,选择标准的 ext4 类型即可。 #### 定义你的共享 -亲爱的!你有个地方可以存储文件了。现在你只需要让它在你的家庭网络中可见。可以从在openmediavault控制面板上的**服务**部分上配置。当谈到在网络上设置文件共享,有两个主要的选择:NFS或者SMB/CIFS. 根据以往经验,如果你网络上的所有计算机都是Linux系统,那么你使用NFS会更好。然而,当你家庭网络是一个混合环境,是一个包含Linux,Windows,苹果系统和嵌入式设备的组合,那么SMB/CIF可能会是你合适的选择。 +亲爱的!你有个地方可以存储文件了。现在你只需要让它在你的家庭网络中可见。可以从在 openmediavault 控制面板上的“Services”部分上配置。当谈到在网络上设置文件共享,主要有两个选择:NFS 或者 SMB/CIFS. 
根据以往经验,如果你网络上的所有计算机都是 Linux 系统,那么你使用 NFS 会更好。然而,当你家庭网络是一个混合环境,是一个包含Linux、Windows、苹果系统和嵌入式设备的组合,那么 SMB/CIFS 可能会是你合适的选择。 -这些选项不是互斥的。实际上,你可以在服务器上运行这些服务和同时拥有这些服务的好处。或者你可以混合起来,如果你有一个特定的设备做特定的任务。不管你的使用场景是怎样,配置这些服务是相当简单。点击你想要的服务,从它配置中激活它,和在网络中设定你想要的共享文件夹为可见。在基于SMB/CIFS共享的情况下,相对于NFS多了一些可用的配置,但是一般用默认配置就挺好的,接着可以在默认基础上修改配置。最酷的事情是它很容易配置,同时也很容易在需要的时候修改配置。 +这些选项不是互斥的。实际上,你可以在服务器上运行这两个服务,同时拥有这些服务的好处。或者你可以混合起来,如果你有一个特定的设备做特定的任务。不管你的使用场景是怎样,配置这些服务是相当简单。点击你想要的服务,从它配置中激活它,和在网络中设定你想要的共享文件夹为可见。在基于 SMB/CIFS 共享的情况下,相对于 NFS 多了一些可用的配置,但是一般用默认配置就挺好的,接着可以在默认基础上修改配置。最酷的事情是它很容易配置,同时也很容易在需要的时候修改配置。 #### 用户配置 -基本上已将完成了。你已经在RAID配置你的驱动。你已经用一种文件系统格式化了RAID。和你已经在格式化的RAID上设定了共享文件夹。剩下来的一件事情是配置那些人可以访问这些共享和可以访问多少。这个可以在 **访问权限管理** 配置区设置。使用 **用户** 和 **群组** 选项来设定可以连接到你共享文件加的用户和设定这些共享文件的访问权限。 +基本上已将完成了。你已经在 RAID 中配置了你的驱动器。你已经用一种文件系统格式化了 RAID,并且你已经在格式化的 RAID 上设定了共享文件夹。剩下来的一件事情是配置那些人可以访问这些共享和可以访问多少。这个可以在“Access Rights Management”配置里设置。使用“User”和“Group”选项来设定可以连接到你共享文件夹的用户,并设定这些共享文件的访问权限。 -一旦你完成用户配置,你几乎准备好了。你需要从不同客户端机器访问你的共享,但是这是另外一个可以单独写个文章的话题了。 +一旦你完成用户配置,就几乎准备好了。你需要从不同客户端机器访问你的共享,但是这是另外一个可以单独写个文章的话题了。 玩得开心! 
@@ -57,7 +58,7 @@ via: https://opensource.com/article/18/9/openmediavault 作者:[Jason van Gumster][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[jamelouis](https://github.com/jamelouis) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From ecb0f1341a054b86aee0be0fb7b0763604c4995e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 2 Oct 2018 12:17:13 +0800 Subject: [PATCH 175/736] PUB: 20180917 Getting started with openmediavault- A home NAS solution.md @jamelouis https://linux.cn/article-10071-1.html --- ...17 Getting started with openmediavault- A home NAS solution.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180917 Getting started with openmediavault- A home NAS solution.md (100%) diff --git a/translated/tech/20180917 Getting started with openmediavault- A home NAS solution.md b/published/20180917 Getting started with openmediavault- A home NAS solution.md similarity index 100% rename from translated/tech/20180917 Getting started with openmediavault- A home NAS solution.md rename to published/20180917 Getting started with openmediavault- A home NAS solution.md From 3530561194139e1a7b201bc319e681d5da8cbc11 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 2 Oct 2018 12:25:37 +0800 Subject: [PATCH 176/736] PRF:20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md @geekpi --- ...estore Them On Freshly Installed Ubuntu.md | 29 +++++++++---------- 1 file changed, 13 insertions(+), 16 deletions(-) diff --git a/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md b/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md index 1b21607ee9..b5a74c0ea9 100644 --- a/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md +++ 
b/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md @@ -1,23 +1,21 @@ -备份安装包并在全新安装的 Ubuntu 上恢复它们 +备份安装的包并在全新安装的 Ubuntu 上恢复它们 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/09/apt-clone-720x340.png) -在多个 Ubuntu 系统上安装同一组软件包是一项耗时且无聊的任务。你不会想花时间在多个系统上反复安装相同的软件包。在类似架构的 Ubuntu 系统上安装软件包时,有许多方法可以使这项任务更容易。你可以方便地通过 [**Aptik**][1] 并点击几次鼠标将以前的 Ubuntu 系统的应用程序、设置和数据迁移到新安装的系统中。或者,你可以使用软件包管理器(例如 APT)获取[**备份的已安装软件包的完整列表**][2],然后在新安装的系统上安装它们。今天,我了解到还有另一个专用工具可以完成这项工作。来看一下 **apt-clone**,这是一个简单的工具,可以让你为 Debian/Ubuntu 系统创建一个已安装的软件包列表,这些软件包可以在新安装的系统或容器上或目录中恢复。 +在多个 Ubuntu 系统上安装同一组软件包是一项耗时且无聊的任务。你不会想花时间在多个系统上反复安装相同的软件包。在类似架构的 Ubuntu 系统上安装软件包时,有许多方法可以使这项任务更容易。你可以方便地通过 [Aptik][1] 并点击几次鼠标将以前的 Ubuntu 系统的应用程序、设置和数据迁移到新安装的系统中。或者,你可以使用软件包管理器(例如 APT)获取[备份的已安装软件包的完整列表][2],然后在新安装的系统上安装它们。今天,我了解到还有另一个专用工具可以完成这项工作。来看一下 `apt-clone`,这是一个简单的工具,可以让你为 Debian/Ubuntu 系统创建一个已安装的软件包列表,这些软件包可以在新安装的系统或容器上或目录中恢复。 -Apt-clone 会帮助你处理你想要的情况, +`apt-clone` 会帮助你处理你想要的情况, - * 在运行类似 Ubuntu(及衍生版)的多个系统上安装一致的应用程序。 -  * 经常在多个系统上安装相同的软件包。 -  * 备份已安装的应用程序的完整列表,并在需要时随时随地恢复它们。 +* 在运行类似 Ubuntu(及衍生版)的多个系统上安装一致的应用程序。 +* 经常在多个系统上安装相同的软件包。 +* 备份已安装的应用程序的完整列表,并在需要时随时随地恢复它们。 - - -在本简要指南中,我们将讨论如何在基于 Debian 的系统上安装和使用 Apt-clone。我在 Ubuntu 18.04 LTS 上测试了这个程序,但它应该适用于所有基于 Debian 和 Ubuntu 的系统。 +在本简要指南中,我们将讨论如何在基于 Debian 的系统上安装和使用 `apt-clone`。我在 Ubuntu 18.04 LTS 上测试了这个程序,但它应该适用于所有基于 Debian 和 Ubuntu 的系统。 ### 备份已安装的软件包并在新安装的 Ubuntu 上恢复它们 -Apt-clone 在默认仓库中有。要安装它,只需在终端输入以下命令: +`apt-clone` 在默认仓库中有。要安装它,只需在终端输入以下命令: ``` $ sudo apt install apt-clone @@ -27,11 +25,10 @@ $ sudo apt install apt-clone ``` $ mkdir ~/mypackages - $ sudo apt-clone clone ~/mypackages ``` -上面的命令将我的 Ubuntu 中所有已安装的软件包保存在 **~/mypackages** 目录下名为 **apt-clone-state-ubuntuserver.tar.gz** 的文件中。 +上面的命令将我的 Ubuntu 中所有已安装的软件包保存在 `~/mypackages` 目录下名为 `apt-clone-state-ubuntuserver.tar.gz` 的文件中。 要查看备份文件的详细信息,请运行: @@ -53,7 +50,7 @@ Date: Sat Sep 15 10:23:05 2018 $ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz ``` 
-请注意,此命令将覆盖你现有的 **/etc/apt/sources.list** 并将安装/删除软件包。警告过你了!此外,只需确保目标系统是相同的架构和操作系统。例如,如果源系统是 18.04 LTS 64位,那么目标系统必须也是相同的。 +请注意,此命令将覆盖你现有的 `/etc/apt/sources.list` 并将安装/删除软件包。警告过你了!此外,只需确保目标系统是相同的 CPU 架构和操作系统。例如,如果源系统是 18.04 LTS 64 位,那么目标系统必须也是相同的。 如果你不想在系统上恢复软件包,可以使用 `--destination /some/location` 选项将克隆复制到这个文件夹中。 @@ -61,7 +58,7 @@ $ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz $ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz --destination ~/oldubuntu ``` -在此例中,上面的命令将软件包恢复到 **~/oldubuntu** 中。 +在此例中,上面的命令将软件包恢复到 `~/oldubuntu` 中。 有关详细信息,请参阅帮助部分: @@ -75,7 +72,7 @@ $ apt-clone -h $ man apt-clone ``` -**建议阅读:** +建议阅读: + [Systemback - 将 Ubuntu 桌面版和服务器版恢复到以前的状态][3] + [Cronopete - Linux 下的苹果时间机器][4] @@ -94,7 +91,7 @@ via: https://www.ostechnix.com/backup-installed-packages-and-restore-them-on-fre 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 65057bb7212a52f394a220cdff2221e30e42ed0c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 2 Oct 2018 12:25:58 +0800 Subject: [PATCH 177/736] PUB:20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md @geekpi https://linux.cn/article-10072-1.html --- ...alled Packages And Restore Them On Freshly Installed Ubuntu.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md (100%) diff --git a/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md b/published/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md similarity index 100% rename from translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md rename to published/20180915 Backup 
Installed Packages And Restore Them On Freshly Installed Ubuntu.md From 815d5423eddf0ed0969ede9f2cf26718c51fd6d3 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 2 Oct 2018 12:38:53 +0800 Subject: [PATCH 178/736] PRF:20180924 How To Find Out Which Port Number A Process Is Using In Linux.md @HankChow --- ...Port Number A Process Is Using In Linux.md | 53 +++++++++---------- 1 file changed, 25 insertions(+), 28 deletions(-) diff --git a/translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md b/translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md index ed3402e0fa..a77ee1ad62 100644 --- a/translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md +++ b/translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md @@ -1,22 +1,22 @@ 如何在 Linux 中查看进程占用的端口号 ====== + 对于 Linux 系统管理员来说,清楚某个服务是否正确地绑定或监听某个端口,是至关重要的。如果你需要处理端口相关的问题,这篇文章可能会对你有用。 端口是 Linux 系统上特定进程之间逻辑连接的标识,包括物理端口和软件端口。由于 Linux 操作系统是一个软件,因此本文只讨论软件端口。软件端口始终与主机的 IP 地址和相关的通信协议相关联,因此端口常用于区分应用程序。大部分涉及到网络的服务都必须打开一个套接字来监听传入的网络请求,而每个服务都使用一个独立的套接字。 **推荐阅读:** -**(#)** [在 Linux 上查看进程 ID 的 4 种方法][1] -**(#)** [在 Linux 上终止进程的 3 种方法][2] -套接字是和 IP 地址,软件端口和协议结合起来使用的,而端口号对传输控制协议(Transmission Control Protocol, TCP)和 用户数据报协议(User Datagram Protocol, UDP)协议都适用,TCP 和 UDP 都可以使用0到65535之间的端口号进行通信。 +- [在 Linux 上查看进程 ID 的 4 种方法][1] +- [在 Linux 上终止进程的 3 种方法][2] + +套接字是和 IP 地址、软件端口和协议结合起来使用的,而端口号对传输控制协议(TCP)和用户数据报协议(UDP)协议都适用,TCP 和 UDP 都可以使用 0 到 65535 之间的端口号进行通信。 以下是端口分配类别: - * `0-1023:` 常用端口和系统端口 - * `1024-49151:` 软件的注册端口 - * `49152-65535:` 动态端口或私有端口 - - + * 0 - 1023: 常用端口和系统端口 + * 1024 - 49151: 软件的注册端口 + * 49152 - 65535: 动态端口或私有端口 在 Linux 上的 `/etc/services` 文件可以查看到更多关于保留端口的信息。 @@ -74,29 +74,25 @@ telnet 23/udp # 24 - private mail system lmtp 24/tcp # LMTP Mail Delivery lmtp 24/udp # LMTP Mail Delivery - ``` 可以使用以下六种方法查看端口信息。 - * `ss:` ss 可以用于转储套接字统计信息。 - * `netstat:` netstat 可以显示打开的套接字列表。 - * `lsof:` lsof 
可以列出打开的文件。 - * `fuser:` fuser 可以列出那些打开了文件的进程的进程 ID。 - * `nmap:` nmap 是网络检测工具和端口扫描程序。 - * `systemctl:` systemctl 是 systemd 系统的控制管理器和服务管理器。 - - + * `ss`:可以用于转储套接字统计信息。 + * `netstat`:可以显示打开的套接字列表。 + * `lsof`:可以列出打开的文件。 + * `fuser`:可以列出那些打开了文件的进程的进程 ID。 + * `nmap`:是网络检测工具和端口扫描程序。 + * `systemctl`:是 systemd 系统的控制管理器和服务管理器。 以下我们将找出 `sshd` 守护进程所使用的端口号。 -### 方法1:使用 ss 命令 +### 方法 1:使用 ss 命令 `ss` 一般用于转储套接字统计信息。它能够输出类似于 `netstat` 输出的信息,但它可以比其它工具显示更多的 TCP 信息和状态信息。 它还可以显示所有类型的套接字统计信息,包括 PACKET、TCP、UDP、DCCP、RAW、Unix 域等。 - ``` # ss -tnlp | grep ssh LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3)) @@ -111,7 +107,7 @@ LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3)) LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4)) ``` -### 方法2:使用 netstat 命令 +### 方法 2:使用 netstat 命令 `netstat` 能够显示网络连接、路由表、接口统计信息、伪装连接以及多播成员。 @@ -131,7 +127,7 @@ tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1208/sshd tcp6 0 0 :::22 :::* LISTEN 1208/sshd ``` -### 方法3:使用 lsof 命令 +### 方法 3:使用 lsof 命令 `lsof` 能够列出打开的文件,并列出系统上被进程打开的文件的相关信息。 @@ -153,7 +149,7 @@ sshd 1208 root 4u IPv6 20921 0t0 TCP *:ssh (LISTEN) sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED) ``` -### 方法4:使用 fuser 命令 +### 方法 4:使用 fuser 命令 `fuser` 工具会将本地系统上打开了文件的进程的进程 ID 显示在标准输出中。 @@ -165,7 +161,7 @@ sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 root 49339 F.... sshd ``` -### 方法5:使用 nmap 命令 +### 方法 5:使用 nmap 命令 `nmap`(“Network Mapper”)是一款用于网络检测和安全审计的开源工具。它最初用于对大型网络进行快速扫描,但它对于单个主机的扫描也有很好的表现。 @@ -185,13 +181,14 @@ Service detection performed. Please report any incorrect results at http://nmap. 
Nmap done: 1 IP address (1 host up) scanned in 0.44 seconds ``` -### 方法6:使用 systemctl 命令 +### 方法 6:使用 systemctl 命令 -`systemctl` 是 systemd 系统的控制管理器和服务管理器。它取代了旧的 SysV init 系统管理,目前大多数现代 Linux 操作系统都采用了 systemd。 +`systemctl` 是 systemd 系统的控制管理器和服务管理器。它取代了旧的 SysV 初始化系统管理,目前大多数现代 Linux 操作系统都采用了 systemd。 **推荐阅读:** -**(#)** [chkservice – Linux 终端上的 systemd 单元管理工具][3] -**(#)** [如何查看 Linux 系统上正在运行的服务][4] + +- [chkservice – Linux 终端上的 systemd 单元管理工具][3] +- [如何查看 Linux 系统上正在运行的服务][4] ``` # systemctl status sshd @@ -258,7 +255,7 @@ via: https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-usi 作者:[Prakash Subramanian][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c1d26bacc5f5b8c27703775ad6e2a9f58bb38967 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 2 Oct 2018 12:39:14 +0800 Subject: [PATCH 179/736] PUB:20180924 How To Find Out Which Port Number A Process Is Using In Linux.md @HankChow https://linux.cn/article-10073-1.html --- ...w To Find Out Which Port Number A Process Is Using In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md (100%) diff --git a/translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md b/published/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md similarity index 100% rename from translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md rename to published/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md From b4d429503b45b72e0eb65d83368206a064719ba9 Mon Sep 17 00:00:00 2001 From: brifuture Date: Tue, 2 Oct 2018 13:28:28 +0800 Subject: [PATCH 180/736] pick article --- ...lt our first 
full-stack JavaScript web app in three weeks.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md b/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md index e423386d85..566d29d768 100644 --- a/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md +++ b/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md @@ -1,3 +1,5 @@ +BriFuture is translating this article + The user’s home dashboard in our app, AlignHow we built our first full-stack JavaScript web app in three weeks ============================================================ From 25cc2fa2c12f6b9509c9bbbbe07a28aba7cb175f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 2 Oct 2018 23:14:39 +0800 Subject: [PATCH 181/736] PRF:20180531 How to create shortcuts in vi.md @sd886393 --- .../20180531 How to create shortcuts in vi.md | 96 ++++++++----------- 1 file changed, 38 insertions(+), 58 deletions(-) diff --git a/translated/tech/20180531 How to create shortcuts in vi.md b/translated/tech/20180531 How to create shortcuts in vi.md index 8616013e96..ec51ab53f7 100644 --- a/translated/tech/20180531 How to create shortcuts in vi.md +++ b/translated/tech/20180531 How to create shortcuts in vi.md @@ -1,120 +1,103 @@ 如何在 vi 中创建快捷键 ====== +> 那些常见编辑任务的快捷键可以使 Vi 编辑器更容易使用,更有效率。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documentation-type-keys-yearbook.png?itok=Q-ELM2rn) -学习使用 [vi 文本编辑器][1] 确实得花点功夫,不过 vi 的老手们都知道,经过一小会的锻炼,就可以将基本的 vi 操作融汇贯通。我们都知道“肌肉记忆”,那么学习 vi 的过程可以称之为“手指记忆”。 +学习使用 [vi 文本编辑器][1] 确实得花点功夫,不过 vi 的老手们都知道,经过一小会儿的锻炼,就可以将基本的 vi 操作融汇贯通。我们都知道“肌肉记忆”,那么学习 vi 的过程可以称之为“手指记忆”。 -当你抓住了基础的操作窍门之后,你就可以定制化地配置 vi 的快捷键,从而让其处理的功能更为强大、流畅。 +当你抓住了基础的操作窍门之后,你就可以定制化地配置 vi 的快捷键,从而让其处理的功能更为强大、流畅。我希望下面描述的技术可以加速您的协作、编程和数据操作。 -在开始之前,我想先感谢下 Chris 
Hermansen(他雇佣我写了这篇文章)仔细地检查了我的另一篇关于使用 vi 增强版本[Vim][2]的文章。当然还有他那些我未采纳的建议。 +在开始之前,我想先感谢下 Chris Hermansen(是他雇佣我写了这篇文章)仔细地检查了我的另一篇关于使用 vi 增强版本 [Vim][2] 的文章。当然还有他那些我未采纳的建议。 -首先,我们来说明下面几个惯例设定。我会使用符号来代表按下 RETURN 或者 ENTER 键, 代表按下空格键,CTRL-x 表示一起按下 Control 键和 x 键 +首先,我们来说明下面几个惯例设定。我会使用符号 `` 来代表按下回车,`` 代表按下空格键,`CTRL-x` 表示一起按下 `Control` 键和 `x` 键(`x` 可以是需要的某个键)。 使用 `map` 命令来进行按键的映射。第一个例子是 `write` 命令,通常你之前保存使用这样的命令: ``` :w - ``` -虽然这里只有三个键,不过考虑到我用这个命令实在是太频繁了,我更想“一键”搞定它。在这里我选择逗号键,比如这样: +虽然这里只有三个键,不过考虑到我用这个命令实在是太频繁了,我更想“一键”搞定它。在这里我选择逗号键,它不是标准的 vi 命令集的一部分。这样设置: + ``` :map , :wCTRL-v - ``` -这里的 CTRL-v 事实上是对 做了转义的操作,如果不加这个的话,默认 会作为这条映射指令的结束信号,而非映射中的一个操作。 CTRL-v 后面所跟的操作会翻译为用户的实际操作,而非该按键平常的操作。 +这里的 `CTRL-v` 事实上是对 `` 做了转义的操作,如果不加这个的话,默认 `` 会作为这条映射指令的结束信号,而非映射中的一个操作。 `CTRL-v` 后面所跟的操作会翻译为用户的实际操作,而非该按键平常的操作。 -在上面的映射中,右边的部分会在屏幕中显示为 `:w^M`,其中 `^` 字符就是指代 `control`,完整的意思就是 CTRL-m,表示就是系统中一行的结尾 +在上面的映射中,右边的部分会在屏幕中显示为 `:w^M`,其中 `^` 字符就是指代 `control`,完整的意思就是 `CTRL-m`,表示就是系统中一行的结尾。 - -目前来说,就很不错了。如果我编辑、创建了十二次文件,这个键位映射就可以省掉了 2*12 次按键。不过这里没有计算你建立这个键位映射所花费的 11次按键(计算CTRL-v 和 冒号均为一次按键)。虽然这样已经省了很多次,但是每次打开 vi 都要重新建立这个映射也会觉得非常麻烦。 +目前来说,就很不错了。如果我编辑、创建了十二次文件,这个键位映射就可以省掉了 2*12 次按键。不过这里没有计算你建立这个键位映射所花费的 11 次按键(计算 `CTRL-v` 和 `:` 均为一次按键)。虽然这样已经省了很多次,但是每次打开 vi 都要重新建立这个映射也会觉得非常麻烦。 幸运的是,这里可以将这些键位映射放到 vi 的启动配置文件中,让其在每次启动的时候自动读取:文件为 `.exrc`,对于 vim 是 `.vimrc`。只需要将这些文件放在你的用户根目录中即可,并在文件中每行写入一个键位映射,之后就会在每次启动 vi 生效直到你删除对应的配置。 在继续说明 `map` 其他用法以及其他的缩写机制之前,这里在列举几个我常用提高文本处理效率的 map 设置: -``` -                                        Displays as +| 映射 | 显示为 | +|------|-------| +| `:map X :xCTRL-v` | `:x^M` | +| `:map X ,:qCTRL-v` | `,:q^M` | +上面的 `map` 指令的意思是写入并关闭当前的编辑文件。其中 `:x` 是 vi 原本的命令,而下面的版本说明之前的 `map` 配置可以继续用作第二个 `map` 键位映射。 -:map X :xCTRL-v                    :x^M +| 映射 | 显示为 | +|------|-------| +| `:map v :e` | `:e` | +上面的指令意思是在 vi 编辑器内部切换文件,使用这个时候,只需要按 `v` 并跟着输入文件名,之后按 `` 键。 - -or - - - -:map X ,:qCTRL-v                   ,:q^M - -``` - -上面的 map 指令的意思是写入并关闭当前的编辑文件。其中 `:x` 是 vi 原本的命令,而下面的版本说明之前的 map 配置可以继续用作第二个 map 键位映射。 
-``` -:map v :e                   :e - -``` - -上面的指令意思是在 vi 编辑器内部 切换文件,使用这个时候,只需要按 `v` 并跟着输入文件名,之后按 `` 键。 -``` -:map CTRL-vCTRL-e :e#CTRL-v    :e #^M - -``` +| 映射 | 显示为 | +|------|-------| +| `:map CTRL-vCTRL-e :e#CTRL-v` | `:e #^M` | `#` 在这里是 vi 中标准的符号,意思是最后使用的文件名。所以切换当前与上一个文件的方法就使用上面的映射。 -``` -map CTRL-vCTRL-r :!spell %>err &CTRL-v     :!spell %>err&^M -``` +| 映射 | 显示为 | +|------|-------| +| `map CTRL-vCTRL-r :!spell %>err &CTRL-v` | `:!spell %>err&^M` | -(注意:在两个例子中出现的第一个 CRTL-v 在某些 vi 的版本中是不需要的)其中,`:!` 用来运行一个外部的(非 vi 内部的)命令。在这个拼写检查的例子中,`%` 是 vi 中的符号用来只带目前的文件, `>` 用来重定向拼写检查中的输出到 `err` 文件中,之后跟上 `&` 说明该命令是一个后台运行的任务,这样可以保证在拼写检查的同时还可以进行编辑文件的工作。这里我可以键入 `verr`(使用我之前定义的快捷键 `v` 跟上 `err`),进入 `spell` 输出结果的文件,之后再输入 `CTRL-e` 来回到刚才编辑的文件中。这样我就可以在拼写检查之后,使用 CTRL-r 来查看检查的错误,再通过 CTRL-e 返回刚才编辑的文件。 +(注意:在两个例子中出现的第一个 `CRTL-v` 在某些 vi 的版本中是不需要的)其中,`:!` 用来运行一个外部的(非 vi 内部的)命令。在这个拼写检查的例子中,`%` 是 vi 中的符号用来指代目前的文件, `>` 用来重定向拼写检查中的输出到 `err` 文件中,之后跟上 `&` 说明该命令是一个后台运行的任务,这样可以保证在拼写检查的同时还可以进行编辑文件的工作。这里我可以键入 `verr`(使用我之前定义的快捷键 `v` 跟上 `err`),进入 `spell` 输出结果的文件,之后再输入 `CTRL-e` 来回到刚才编辑的文件中。这样我就可以在拼写检查之后,使用 `CTRL-r` 来查看检查的错误,再通过 `CTRL-e` 返回刚才编辑的文件。 + +还用很多字符串输入的缩写,也使用了各种 `map` 命令,比如: -还用很多字符串输入的缩写,也使用了各种 map 命令,比如: ``` :map! CTRL-o \fI - :map! CTRL-k \fP - ``` -这个映射允许你使用 CTRL-o 作为 `groff` 命令的缩写,从而让让接下来书写的单词有斜体的效果,并使用 CTRL-k 进行恢复 +这个映射允许你使用 `CTRL-o` 作为 `groff` 命令的缩写,从而让让接下来书写的单词有斜体的效果,并使用 `CTRL-k` 进行恢复。 还有两个类似的映射: + ``` :map! rh rhinoceros - :map! hi hippopotamus - ``` -上面的也可以使用 `ab` 命令来替换,就像下面这样(如果想这么用的话,需要首先按顺序运行 1. `unmap! rh` 2. `umap! hi`): +上面的也可以使用 `ab` 命令来替换,就像下面这样(如果想这么用的话,需要首先按顺序运行: 1、 `unmap! rh`,2、`umap! 
hi`): + ``` :ab rh rhinoceros - :ab hi hippopotamus - ``` -在上面 `map!` 的命令中,缩写会马上的展开成原有的单词,而在 `ab` 命令中,单词展开的操作会在输入了空格和标点之后才展开(不过在Vim 和 本机使用的 vi中,展开的形式与 `map!` 类似) +在上面 `map!` 的命令中,缩写会马上的展开成原有的单词,而在 `ab` 命令中,单词展开的操作会在输入了空格和标点之后才展开(不过在 Vim 和我的 vi 中,展开的形式与 `map!` 类似)。 -想要取消刚才设定的按键映射,可以对应的输入 `:unmap`, `unmap!`, `:unab` +想要取消刚才设定的按键映射,可以对应的输入 `:unmap`、 `unmap!` 或 `:unab`。 -在我使用的 vi 版本中,比较好用的候选映射按键包括 `g, K, q, v, V, Z`,控制字符包括:`CTRL-a, CTRL-c, CTRL-k, CTRL-n, CTRL-p, CTRL-x`;还有一些其他的字符如`#, *`,当然你也可以使用那些已经在 vi 中有过定义但不经常使用的字符,比如本文选择`X`和`I`,其中`X`表示删除左边的字符,并立刻左移当前字符。 +在我使用的 vi 版本中,比较好用的候选映射按键包括 `g`、`K`、`q`、 `v`、 `V`、 `Z`,控制字符包括:`CTRL-a`、`CTRL-c`、 `CTRL-k`、`CTRL-n`、`CTRL-p`、`CTRL-x`;还有一些其他的字符如 `#`、 `*`,当然你也可以使用那些已经在 vi 中有过定义但不经常使用的字符,比如本文选择 `X` 和 `I`,其中 `X` 表示删除左边的字符,并立刻左移当前字符。 最后,下面的命令 + ``` :map - :map! - :ab - ``` 将会显示,目前所有的缩写和键位映射。 -will show all the currently defined mappings and abbreviations. 希望上面的技巧能够更好地更高效地帮助你使用 vi。 @@ -122,10 +105,7 @@ will show all the currently defined mappings and abbreviations. via: https://opensource.com/article/18/5/shortcuts-vi-text-editor -作者:[Dan Sonnenschein][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/sd886393) -校对:[校对者ID](https://github.com/校对者ID) +作者:[Dan Sonnenschein][a] 
选题:[lujun9972](https://github.com/lujun9972) 
译者:[sd886393](https://github.com/sd886393) 
校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From da615e60f1007e15a4e6f62cbf16d6574fcee891 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 2 Oct 2018 23:15:06 +0800 Subject: [PATCH 182/736] PUB:20180531 How to create shortcuts in vi.md @sd886393 https://linux.cn/article-10074-1.html --- .../tech => published}/20180531 How to create shortcuts in vi.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180531 How to create shortcuts in vi.md (100%) diff --git a/translated/tech/20180531 How to create shortcuts in vi.md b/published/20180531 How to create shortcuts in vi.md similarity index 100% rename from translated/tech/20180531 How to create shortcuts in vi.md rename to published/20180531 How to create shortcuts in vi.md From 7542ac2b769feed277f220af16d1d491e9f27665 Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Tue, 2 Oct 2018 23:17:27 +0800 Subject: [PATCH 183/736] dianbanjiu translating --- sources/tech/20170926 Managing users on Linux systems.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20170926 Managing users on Linux systems.md b/sources/tech/20170926 Managing users on Linux systems.md index e47fc572df..cc4db1e693 100644 --- a/sources/tech/20170926 Managing users on Linux systems.md +++ b/sources/tech/20170926 Managing users on Linux systems.md @@ -1,4 +1,4 @@ -Managing users on Linux systems +translating by dianbanjiu Managing users on Linux systems ====== Your Linux users may not be raging bulls, but keeping them happy is always a challenge as it involves managing their accounts, monitoring their access rights, tracking down the solutions to problems they run into, and keeping them informed about important changes on the systems they use. Here are some of the tasks and tools that make the job a little easier. 
From ebc6520b72fb04b1da6bd5f53846c9b5f109787a Mon Sep 17 00:00:00 2001 From: Baron Hou Date: Tue, 2 Oct 2018 23:55:01 +0800 Subject: [PATCH 184/736] Translating By houbaron --- sources/tech/20140607 Five things that make Go fast.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20140607 Five things that make Go fast.md b/sources/tech/20140607 Five things that make Go fast.md index 88db93011c..65add9552d 100644 --- a/sources/tech/20140607 Five things that make Go fast.md +++ b/sources/tech/20140607 Five things that make Go fast.md @@ -1,3 +1,5 @@ +Translating By houbaron + Five things that make Go fast ============================================================ From e4930740f6fda2399d0cac813ff4d1094ef60237 Mon Sep 17 00:00:00 2001 From: Andy Luo Date: Wed, 3 Oct 2018 00:03:08 +0800 Subject: [PATCH 185/736] Delete 20180927 How To Find And Delete Duplicate Files In Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 删除原文 --- ...ind And Delete Duplicate Files In Linux.md | 442 ------------------ 1 file changed, 442 deletions(-) delete mode 100644 sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md diff --git a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md b/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md deleted file mode 100644 index 2b9c610f1d..0000000000 --- a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md +++ /dev/null @@ -1,442 +0,0 @@ -Translating by pygmalion666 -How To Find And Delete Duplicate Files In Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Find-And-Delete-Duplicate-Files-720x340.png) - -I always backup the configuration files or any old files to somewhere in my hard disk before edit or modify them, so I can restore them from the backup if I accidentally did something wrong. 
But the problem is I forgot to clean up those files and my hard disk is filled with a lot of duplicate files after a certain period of time. I feel either too lazy to clean the old files or afraid that I may delete an important files. If you’re anything like me and overwhelming with multiple copies of same files in different backup directories, you can find and delete duplicate files using the tools given below in Unix-like operating systems. - -**A word of caution:** - -Please be careful while deleting duplicate files. If you’re not careful, it will lead you to [**accidental data loss**][1]. I advice you to pay extra attention while using these tools. - -### Find And Delete Duplicate Files In Linux - -For the purpose of this guide, I am going to discuss about three utilities namely, - - 1. Rdfind, - 2. Fdupes, - 3. FSlint. - - - -These three utilities are free, open source and works on most Unix-like operating systems. - -##### 1. Rdfind - -**Rdfind** , stands for **r** edundant **d** ata **find** , is a free and open source utility to find duplicate files across and/or within directories and sub-directories. It compares files based on their content, not on their file names. Rdfind uses **ranking** algorithm to classify original and duplicate files. If you have two or more equal files, Rdfind is smart enough to find which is original file, and consider the rest of the files as duplicates. Once it found the duplicates, it will report them to you. You can decide to either delete them or replace them with [**hard links** or **symbolic (soft) links**][2]. - -**Installing Rdfind** - -Rdfind is available in [**AUR**][3]. So, you can install it in Arch-based systems using any AUR helper program like [**Yay**][4] as shown below. 
- -``` -$ yay -S rdfind - -``` - -On Debian, Ubuntu, Linux Mint: - -``` -$ sudo apt-get install rdfind - -``` - -On Fedora: - -``` -$ sudo dnf install rdfind - -``` - -On RHEL, CentOS: - -``` -$ sudo yum install epel-release - -$ sudo yum install rdfind - -``` - -**Usage** - -Once installed, simply run Rdfind command along with the directory path to scan for the duplicate files. - -``` -$ rdfind ~/Downloads - -``` - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/rdfind-1.png) - -As you see in the above screenshot, Rdfind command will scan ~/Downloads directory and save the results in a file named **results.txt** in the current working directory. You can view the name of the possible duplicate files in results.txt file. - -``` -$ cat results.txt -# Automatically generated -# duptype id depth size device inode priority name -DUPTYPE_FIRST_OCCURRENCE 1469 8 9 2050 15864884 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test5.regex -DUPTYPE_WITHIN_SAME_TREE -1469 8 9 2050 15864886 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test6.regex -[...] -DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf -DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf -# end of file - -``` - -By reviewing the results.txt file, you can easily find the duplicates. You can remove the duplicates manually if you want to. - -Also, you can **-dryrun** option to find all duplicates in a given directory without changing anything and output the summary in your Terminal: - -``` -$ rdfind -dryrun true ~/Downloads - -``` - -Once you found the duplicates, you can replace them with either hardlinks or symlinks. 
- -To replace all duplicates with hardlinks, run: - -``` -$ rdfind -makehardlinks true ~/Downloads - -``` - -To replace all duplicates with symlinks/soft links, run: - -``` -$ rdfind -makesymlinks true ~/Downloads - -``` - -You may have some empty files in a directory and want to ignore them. If so, use **-ignoreempty** option like below. - -``` -$ rdfind -ignoreempty true ~/Downloads - -``` - -If you don’t want the old files anymore, just delete duplicate files instead of replacing them with hard or soft links. - -To delete all duplicates, simply run: - -``` -$ rdfind -deleteduplicates true ~/Downloads - -``` - -If you do not want to ignore empty files and delete them along with all duplicates, run: - -``` -$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads - -``` - -For more details, refer the help section: - -``` -$ rdfind --help - -``` - -And, the manual pages: - -``` -$ man rdfind - -``` - -##### 2. Fdupes - -**Fdupes** is yet another command line utility to identify and remove the duplicate files within specified directories and the sub-directories. It is free, open source utility written in **C** programming language. Fdupes identifies the duplicates by comparing file sizes, partial MD5 signatures, full MD5 signatures, and finally performing a byte-by-byte comparison for verification. - -Similar to Rdfind utility, Fdupes comes with quite handful of options to perform operations, such as: - - * Recursively search duplicate files in directories and sub-directories - * Exclude empty files and hidden files from consideration - * Show the size of the duplicates - * Delete duplicates immediately as they encountered - * Exclude files with different owner/group or permission bits as duplicates - * And a lot more. - - - -**Installing Fdupes** - -Fdupes is available in the default repositories of most Linux distributions. - -On Arch Linux and its variants like Antergos, Manjaro Linux, install it using Pacman like below. 
- -``` -$ sudo pacman -S fdupes - -``` - -On Debian, Ubuntu, Linux Mint: - -``` -$ sudo apt-get install fdupes - -``` - -On Fedora: - -``` -$ sudo dnf install fdupes - -``` - -On RHEL, CentOS: - -``` -$ sudo yum install epel-release - -$ sudo yum install fdupes - -``` - -**Usage** - -Fdupes usage is pretty simple. Just run the following command to find out the duplicate files in a directory, for example **~/Downloads**. - -``` -$ fdupes ~/Downloads - -``` - -Sample output from my system: - -``` -/home/sk/Downloads/Hyperledger.pdf -/home/sk/Downloads/Hyperledger(1).pdf - -``` - -As you can see, I have a duplicate file in **/home/sk/Downloads/** directory. It shows the duplicates from the parent directory only. How to view the duplicates from sub-directories? Just use **-r** option like below. - -``` -$ fdupes -r ~/Downloads - -``` - -Now you will see the duplicates from **/home/sk/Downloads/** directory and its sub-directories as well. - -Fdupes can also be able to find duplicates from multiple directories at once. - -``` -$ fdupes ~/Downloads ~/Documents/ostechnix - -``` - -You can even search multiple directories, one recursively like below: - -``` -$ fdupes ~/Downloads -r ~/Documents/ostechnix - -``` - -The above commands searches for duplicates in “~/Downloads” directory and “~/Documents/ostechnix” directory and its sub-directories. - -Sometimes, you might want to know the size of the duplicates in a directory. If so, use **-S** option like below. - -``` -$ fdupes -S ~/Downloads -403635 bytes each: -/home/sk/Downloads/Hyperledger.pdf -/home/sk/Downloads/Hyperledger(1).pdf - -``` - -Similarly, to view the size of the duplicates in parent and child directories, use **-Sr** option. - -We can exclude empty and hidden files from consideration using **-n** and **-A** respectively. 
- -``` -$ fdupes -n ~/Downloads - -$ fdupes -A ~/Downloads - -``` - -The first command will exclude zero-length files from consideration and the latter will exclude hidden files from consideration while searching for duplicates in the specified directory. - -To summarize duplicate files information, use **-m** option. - -``` -$ fdupes -m ~/Downloads -1 duplicate files (in 1 sets), occupying 403.6 kilobytes - -``` - -To delete all duplicates, use **-d** option. - -``` -$ fdupes -d ~/Downloads - -``` - -Sample output: - -``` -[1] /home/sk/Downloads/Hyperledger Fabric Installation.pdf -[2] /home/sk/Downloads/Hyperledger Fabric Installation(1).pdf - -Set 1 of 1, preserve files [1 - 2, all]: - -``` - -This command will prompt you for files to preserve and delete all other duplicates. Just enter any number to preserve the corresponding file and delete the remaining files. Pay more attention while using this option. You might delete original files if you’re not be careful. - -If you want to preserve the first file in each set of duplicates and delete the others without prompting each time, use **-dN** option (not recommended). - -``` -$ fdupes -dN ~/Downloads - -``` - -To delete duplicates as they are encountered, use **-I** flag. - -``` -$ fdupes -I ~/Downloads - -``` - -For more details about Fdupes, view the help section and man pages. - -``` -$ fdupes --help - -$ man fdupes - -``` - -##### 3. FSlint - -**FSlint** is yet another duplicate file finder utility that I use from time to time to get rid of the unnecessary duplicate files and free up the disk space in my Linux system. Unlike the other two utilities, FSlint has both GUI and CLI modes. So, it is more user-friendly tool for newbies. FSlint not just finds the duplicates, but also bad symlinks, bad names, temp files, bad IDS, empty directories, and non stripped binaries etc. - -**Installing FSlint** - -FSlint is available in [**AUR**][5], so you can install it using any AUR helpers. 
- -``` -$ yay -S fslint - -``` - -On Debian, Ubuntu, Linux Mint: - -``` -$ sudo apt-get install fslint - -``` - -On Fedora: - -``` -$ sudo dnf install fslint - -``` - -On RHEL, CentOS: - -``` -$ sudo yum install epel-release - -``` - -$ sudo yum install fslint - -Once it is installed, launch it from menu or application launcher. - -This is how FSlint GUI looks like. - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-1.png) - -As you can see, the interface of FSlint is user-friendly and self-explanatory. In the **Search path** tab, add the path of the directory you want to scan and click **Find** button on the lower left corner to find the duplicates. Check the recurse option to recursively search for duplicates in directories and sub-directories. The FSlint will quickly scan the given directory and list out them. - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/fslint-2.png) - -From the list, choose the duplicates you want to clean and select any one of them given actions like Save, Delete, Merge and Symlink. - -In the **Advanced search parameters** tab, you can specify the paths to exclude while searching for duplicates. 
- -![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-3.png) - -**FSlint command line options** - -FSlint provides a collection of the following CLI utilities to find duplicates in your filesystem: - - * **findup** — find DUPlicate files - * **findnl** — find Name Lint (problems with filenames) - * **findu8** — find filenames with invalid utf8 encoding - * **findbl** — find Bad Links (various problems with symlinks) - * **findsn** — find Same Name (problems with clashing names) - * **finded** — find Empty Directories - * **findid** — find files with dead user IDs - * **findns** — find Non Stripped executables - * **findrs** — find Redundant Whitespace in files - * **findtf** — find Temporary Files - * **findul** — find possibly Unused Libraries - * **zipdir** — Reclaim wasted space in ext2 directory entries - - - -All of these utilities are available under **/usr/share/fslint/fslint/fslint** location. - -For example, to find duplicates in a given directory, do: - -``` -$ /usr/share/fslint/fslint/findup ~/Downloads/ - -``` - -Similarly, to find empty directories, the command would be: - -``` -$ /usr/share/fslint/fslint/finded ~/Downloads/ - -``` - -To get more details on each utility, for example **findup** , run: - -``` -$ /usr/share/fslint/fslint/findup --help - -``` - -For more details about FSlint, refer the help section and man pages. - -``` -$ /usr/share/fslint/fslint/fslint --help - -$ man fslint - -``` - -##### Conclusion - -You know now about three tools to find and delete unwanted duplicate files in Linux. Among these three tools, I often use Rdfind. It doesn’t mean that the other two utilities are not efficient, but I am just happy with Rdfind so far. Well, it’s your turn. Which is your favorite tool and why? Let us know them in the comment section below. - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! 
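As a closing aside: the core trick all three of these tools share, namely hashing every file and then grouping equal digests, can be sketched with nothing but GNU coreutils. The `find_dup_groups` helper below is a name made up for this illustration; unlike the tools above, it only reports duplicate groups and never deletes anything.

```shell
#!/usr/bin/env bash
# Toy version of the checksum comparison Rdfind and Fdupes perform:
# hash every file, sort by hash, then print groups whose first 32
# characters (the MD5 digest) repeat. Reporting only; no deletion.
find_dup_groups() {
    find "$1" -type f -exec md5sum {} + \
        | sort \
        | uniq -w32 --all-repeated=separate
}

# Demonstration on a throwaway directory with one duplicate pair:
demo=$(mktemp -d)
echo "backup me" > "$demo/notes.txt"
cp "$demo/notes.txt" "$demo/notes(1).txt"
echo "something else" > "$demo/other.txt"
find_dup_groups "$demo"
rm -rf "$demo"
```

Like Rdfind’s **-dryrun** mode, this only lists candidates; review each group yourself before removing anything.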
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/ -[2]: https://www.ostechnix.com/explaining-soft-link-and-hard-link-in-linux-with-examples/ -[3]: https://aur.archlinux.org/packages/rdfind/ -[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[5]: https://aur.archlinux.org/packages/fslint/ From ae65ec1eed5a16a3af40cf064b61682f434e3d40 Mon Sep 17 00:00:00 2001 From: Andy Luo Date: Wed, 3 Oct 2018 00:03:32 +0800 Subject: [PATCH 186/736] Create 20180927 How To Find And Delete Duplicate Files In Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完毕 --- ...ind And Delete Duplicate Files In Linux.md | 446 ++++++++++++++++++ 1 file changed, 446 insertions(+) create mode 100644 translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md diff --git a/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md b/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md new file mode 100644 index 0000000000..c831a8bc5d --- /dev/null +++ b/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md @@ -0,0 +1,446 @@ +如何在 Linux 中找到并删除重复文件 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/Find-And-Delete-Duplicate-Files-720x340.png) + + +在编辑或修改配置文件或旧文件前,我经常会把它们备份到硬盘的某个地方,因此我如果意外地改错了这些文件,我可以从备份中恢复它们。但问题是如果我忘记清理备份文件,一段时间之后,我的磁盘会被这些大量重复文件填满。我觉得要么是懒得清理这些旧文件,要么是担心可能会删掉重要文件。如果你们像我一样,在类 Unix 操作系统中,大量多版本的相同文件放在不同的备份目录,你可以使用下面的工具找到并删除重复文件。 + +**提醒一句:** + 
+在删除重复文件的时请尽量小心。如果你不小心,也许会导致[**意外丢失数据**][1]。我建议你在使用这些工具的时候要特别注意。 + +### 在 Linux 中找到并删除重复文件 + + +出于本指南的目的,我将讨论下面的三个工具: + + 1. Rdfind + 2. Fdupes + 3. FSlint + + + +这三个工具是免费的、开源的,且运行在大多数类 Unix 系统中。 + +##### 1. Rdfind + +**Rdfind** 代表找到找到冗余数据,是一个通过访问目录和子目录来找出重复文件的免费、开源的工具。它是基于文件内容而不是文件名来比较。Rdfind 使用**排序**算法来区分原始文件和重复文件。如果你有两个或者更多的相同文件,Rdfind 会很智能的找到原始文件并认定剩下的文件为重复文件。一旦找到副本文件,它会向你报告。你可以决定是删除还是使用[**硬链接**或者**符号(软)链接**][2]代替它们。 + +**安装 Rdfind** + +Rdfind is available in [**AUR**][3]. So, you can install it in Arch-based systems using any AUR helper program like [**Yay**][4] as shown below. + +Rdfind 存在于 [**AUR**][3] 中。因此,在基于 Arch 的系统中,你可以像下面一样使用任一如 [**Yay**][4] AUR 程序助手安装它。 + +``` +$ yay -S rdfind + +``` + +在 Debian、Ubuntu、Linux Mint 上: + +``` +$ sudo apt-get install rdfind + +``` + +在 Fedora 上: + +``` +$ sudo dnf install rdfind + +``` + +在 RHEL、CentOS 上: + +``` +$ sudo yum install epel-release + +$ sudo yum install rdfind + +``` + +**用法** + +一旦安装完成,仅带上目录路径运行 Rdfind 命令就可以扫描重复文件。 + +``` +$ rdfind ~/Downloads + +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/rdfind-1.png) + +As you see in the above screenshot, Rdfind command will scan ~/Downloads directory and save the results in a file named **results.txt** in the current working directory. You can view the name of the possible duplicate files in results.txt file. + +正如你看到上面的截屏,Rdfind 命令将扫描 ~/Downloads 目录,并将结果存储到当前工作目录下一个名为 **results.txt** 的文件中。你可以在 results.txt 文件中看到可能是重复文件的名字。 + +``` +$ cat results.txt +# Automatically generated +# duptype id depth size device inode priority name +DUPTYPE_FIRST_OCCURRENCE 1469 8 9 2050 15864884 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test5.regex +DUPTYPE_WITHIN_SAME_TREE -1469 8 9 2050 15864886 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test6.regex +[...] 
+DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf +DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf +# end of file + +``` + +通过检查 results.txt 文件,你可以很容易的找到那些重复文件。如果愿意你可以手动的删除它们。 + +此外,你可在不修改其他事情情况下使用 **-dryrun** 选项找出所有重复文件,并在终端上输出汇总信息。 + +``` +$ rdfind -dryrun true ~/Downloads + +``` + +一旦找到重复文件,你可以使用硬链接或符号链接代替他们。 + +使用硬链接代替所有重复文件,运行: + +``` +$ rdfind -makehardlinks true ~/Downloads + +``` + +使用符号链接/软链接代替所有重复文件,运行: + +``` +$ rdfind -makesymlinks true ~/Downloads + +``` + +目录中有一些空文件,也许你想忽略他们,你可以像下面一样使用 **-ignoreempty** 选项: + +``` +$ rdfind -ignoreempty true ~/Downloads + +``` + +如果你不再想要这些旧文件,删除重复文件,而不是使用硬链接或软链接代替它们。 + +删除重复文件,就运行: + +``` +$ rdfind -deleteduplicates true ~/Downloads + +``` + +如果你不想忽略空文件,并且和所哟重复文件一起删除。运行: + +``` +$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads + +``` + +更多细节,参照帮助部分: + +``` +$ rdfind --help + +``` + +手册页: + +``` +$ man rdfind + +``` + +##### 2. Fdupes + +**Fdupes** 是另一个在指定目录以及子目录中识别和移除重复文件的命令行工具。这是一个使用 **C** 语言编写的免费、开源工具。Fdupes 通过对比文件大小、部分 MD5 签名、全部 MD5 签名,最后执行逐个字节对比校验来识别重复文件。 + +与 Rdfind 工具类似,Fdupes 附带非常少的选项来执行操作,如: + + * 在目录和子目录中递归的搜索重复文件 + * 从计算中排除空文件和隐藏文件 + * 显示重复文件大小 + * 出现重复文件时立即删除 + * 使用不同的拥有者/组或权限位来排除重复文件 + * 更多 + + + +**安装 Fdupes** + +Fdupes 存在于大多数 Linux 发行版的默认仓库中。 + +On Arch Linux and its variants like Antergos, Manjaro Linux, install it using Pacman like below. + +在 Arch Linux 和它的变种如 Antergos、Manjaro Linux 上,如下使用 Pacman 安装它。 + +``` +$ sudo pacman -S fdupes + +``` + +在 Debian、Ubuntu、Linux Mint 上: + +``` +$ sudo apt-get install fdupes + +``` + +在 Fedora 上: + +``` +$ sudo dnf install fdupes + +``` + +在 RHEL、CentOS 上: + +``` +$ sudo yum install epel-release + +$ sudo yum install fdupes + +``` + +**用法** + +Fdupes 用法非常简单。仅运行下面的命令就可以在目录中找到重复文件,如:**~/Downloads**. 
+ +``` +$ fdupes ~/Downloads + +``` + +我系统中的样例输出: + +``` +/home/sk/Downloads/Hyperledger.pdf +/home/sk/Downloads/Hyperledger(1).pdf + +``` +你可以看到,在 **/home/sk/Downloads/** 目录下有一个重复文件。它仅显示了父级目录中的重复文件。如何显示子目录中的重复文件?像下面一样,使用 **-r** 选项。 + +``` +$ fdupes -r ~/Downloads + +``` + +现在你将看到 **/home/sk/Downloads/** 目录以及子目录中的重复文件。 + +Fdupes 也可用来从多个目录中迅速查找重复文件。 + +``` +$ fdupes ~/Downloads ~/Documents/ostechnix + +``` + +你甚至可以搜索多个目录,递归搜索其中一个目录,如下: + +``` +$ fdupes ~/Downloads -r ~/Documents/ostechnix + +``` + +上面的命令将搜索 “~/Downloads” 目录,“~/Documents/ostechnix” 目录和它的子目录中的重复文件。 + +有时,你可能想要知道一个目录中重复文件的大小。你可以使用 **-S** 选项,如下: + +``` +$ fdupes -S ~/Downloads +403635 bytes each: +/home/sk/Downloads/Hyperledger.pdf +/home/sk/Downloads/Hyperledger(1).pdf + +``` + +类似的,为了显示父目录和子目录中重复文件的大小,使用 **-Sr** 选项。 + +我们可以在计算时分别使用 **-n** 和 **-A** 选项排除空白文件以及排除隐藏文件。 + +``` +$ fdupes -n ~/Downloads + +$ fdupes -A ~/Downloads + +``` + +在搜索指定目录的重复文件时,第一个命令将排除零长度文件,后面的命令将排除隐藏文件。 + +汇总重复文件信息,使用 **-m** 选项。 + +``` +$ fdupes -m ~/Downloads +1 duplicate files (in 1 sets), occupying 403.6 kilobytes + +``` + +删除所有重复文件,使用 **-d** 选项。 + +``` +$ fdupes -d ~/Downloads + +``` + +样例输出: + +``` +[1] /home/sk/Downloads/Hyperledger Fabric Installation.pdf +[2] /home/sk/Downloads/Hyperledger Fabric Installation(1).pdf + +Set 1 of 1, preserve files [1 - 2, all]: + +``` + +这个命令将提示你保留还是删除所有其他重复文件。输入任一号码保留相应的文件,并删除剩下的文件。当使用这个选项的时候需要更加注意。如果不小心,你可能会删除原文件。 + +如果你想要每次保留每个重复文件集合的第一个文件,且无提示的删除其他文件,使用 **-dN** 选项(不推荐)。 + +``` +$ fdupes -dN ~/Downloads + +``` + +当遇到重复文件时删除它们,使用 **-I** 标志。 + +``` +$ fdupes -I ~/Downloads + +``` + +关于 Fdupes 的更多细节,查看帮助部分和 man 页面。 + +``` +$ fdupes --help + +$ man fdupes + +``` + +##### 3. 
FSlint + +**FSlint** 是另外一个查找重复文件的工具,有时我用它去掉 Linux 系统中不需要的重复文件并释放磁盘空间。不像另外两个工具,FSlint 有 GUI 和 CLI 两种模式。因此对于新手来说它更友好。FSlint 不仅仅找出重复文件,也找出坏符号链接、坏名字文件、临时文件、坏 IDS、空目录和非剥离二进制文件等等。 + +**安装 FSlint** + +FSlint 存在于 [**AUR**][5],因此你可以使用任一 AUR 助手安装它。 + +``` +$ yay -S fslint + +``` + +在 Debian、Ubuntu、Linux Mint 上: + +``` +$ sudo apt-get install fslint + +``` + +在 Fedora 上: + +``` +$ sudo dnf install fslint + +``` + +在 RHEL,CentOS 上: + +``` +$ sudo yum install epel-release +$ sudo yum install fslint + +``` + +一旦安装完成,从菜单或者应用程序启动器启动它。 + +FSlint GUI 展示如下: + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-1.png) + +如你所见,FSlint 接口友好、一目了然。在 **Search path** 栏,添加你要扫描的目录路径,点击左下角 **Find** 按钮查找重复文件。验证递归选项可以在目录和子目录中递归的搜索重复文件。FSlint 将快速的扫描给定的目录并列出重复文件。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/fslint-2.png) + +从列表中选择那些要清理的重复文件,也可以选择 Save、Delete、Merge 和 Symlink 操作他们。 + +在 **Advanced search parameters** 栏,你可以在搜索重复文件的时候指定排除的路径。 + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-3.png) + +**FSlint 命令行选项** + +FSlint 提供下面的 CLI 工具集在你的文件系统中查找重复文件。 + + * **findup** — 查找重复文件 + * **findnl** — 查找 Lint 名称文件(有问题的文件名) + * **findu8** — 查找非法的 utf8 编码文件 + * **findbl** — 查找坏链接(有问题的符号链接) + * **findsn** — 查找同名文件(可能有冲突的文件名) + * **finded** — 查找空目录 + * **findid** — 查找死用户的文件 + * **findns** — 查找非剥离的可执行文件 + * **findrs** — 查找文件中多于的空白 + * **findtf** — 查找临时文件 + * **findul** — 查找可能未使用的库 + * **zipdir** — 回收 ext2 目录实体下浪费的空间 + + + +所有这些工具位于 **/usr/share/fslint/fslint/fslint** 下面。 + + +例如,在给定的目录中查找重复文件,运行: + +``` +$ /usr/share/fslint/fslint/findup ~/Downloads/ + +``` + +类似的,找出空目录命令是: + +``` +$ /usr/share/fslint/fslint/finded ~/Downloads/ + +``` + +获取每个工具更多细节,例如:**findup**,运行: + +``` +$ /usr/share/fslint/fslint/findup --help + +``` + +关于 FSlint 的更多细节,参照帮助部分和 man 页。 + +``` +$ /usr/share/fslint/fslint/fslint --help + +$ man fslint + +``` + +##### 总结 + +现在你知道在 Linux 中,使用三个工具来查找和删除不需要的重复文件。这三个工具中,我经常使用 Rdfind。这并不意味着其他的两个工具效率低下,因为到目前为止我更喜欢 
Rdfind。好了,到你了。你的最喜欢哪一个工具呢?为什么?在下面的评论区留言让我们知道吧。 + +就到这里吧。希望这篇文章对你有帮助。更多的好东西就要来了,敬请期待。 + +谢谢! + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[pygmalion666](https://github.com/pygmalion666) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/ +[2]: https://www.ostechnix.com/explaining-soft-link-and-hard-link-in-linux-with-examples/ +[3]: https://aur.archlinux.org/packages/rdfind/ +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[5]: https://aur.archlinux.org/packages/fslint/ From 9895b71cd2601806ccdd649308e4801465d0249a Mon Sep 17 00:00:00 2001 From: Andy Luo Date: Wed, 3 Oct 2018 00:09:26 +0800 Subject: [PATCH 187/736] Create 20180927 How To Find And Delete Duplicate Files In Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完毕 --- ...0927 How To Find And Delete Duplicate Files In Linux.md | 7 ------- 1 file changed, 7 deletions(-) diff --git a/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md b/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md index c831a8bc5d..c1b637bf2f 100644 --- a/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md +++ b/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md @@ -3,7 +3,6 @@ ![](https://www.ostechnix.com/wp-content/uploads/2018/09/Find-And-Delete-Duplicate-Files-720x340.png) - 在编辑或修改配置文件或旧文件前,我经常会把它们备份到硬盘的某个地方,因此我如果意外地改错了这些文件,我可以从备份中恢复它们。但问题是如果我忘记清理备份文件,一段时间之后,我的磁盘会被这些大量重复文件填满。我觉得要么是懒得清理这些旧文件,要么是担心可能会删掉重要文件。如果你们像我一样,在类 Unix 
操作系统中,大量多版本的相同文件放在不同的备份目录,你可以使用下面的工具找到并删除重复文件。 **提醒一句:** @@ -29,8 +28,6 @@ **安装 Rdfind** -Rdfind is available in [**AUR**][3]. So, you can install it in Arch-based systems using any AUR helper program like [**Yay**][4] as shown below. - Rdfind 存在于 [**AUR**][3] 中。因此,在基于 Arch 的系统中,你可以像下面一样使用任一如 [**Yay**][4] AUR 程序助手安装它。 ``` @@ -72,8 +69,6 @@ $ rdfind ~/Downloads ![](https://www.ostechnix.com/wp-content/uploads/2018/09/rdfind-1.png) -As you see in the above screenshot, Rdfind command will scan ~/Downloads directory and save the results in a file named **results.txt** in the current working directory. You can view the name of the possible duplicate files in results.txt file. - 正如你看到上面的截屏,Rdfind 命令将扫描 ~/Downloads 目录,并将结果存储到当前工作目录下一个名为 **results.txt** 的文件中。你可以在 results.txt 文件中看到可能是重复文件的名字。 ``` @@ -170,8 +165,6 @@ $ man rdfind Fdupes 存在于大多数 Linux 发行版的默认仓库中。 -On Arch Linux and its variants like Antergos, Manjaro Linux, install it using Pacman like below. - 在 Arch Linux 和它的变种如 Antergos、Manjaro Linux 上,如下使用 Pacman 安装它。 ``` From 32646794d50f1a75b83c898d49cc5dc160031b9a Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 3 Oct 2018 13:28:47 +0800 Subject: [PATCH 188/736] translate done: 20180827 4 tips for better tmux sessions.md --- ...0180827 4 tips for better tmux sessions.md | 89 ------------------- ...0180827 4 tips for better tmux sessions.md | 88 ++++++++++++++++++ 2 files changed, 88 insertions(+), 89 deletions(-) delete mode 100644 sources/tech/20180827 4 tips for better tmux sessions.md create mode 100644 translated/tech/20180827 4 tips for better tmux sessions.md diff --git a/sources/tech/20180827 4 tips for better tmux sessions.md b/sources/tech/20180827 4 tips for better tmux sessions.md deleted file mode 100644 index b6d6a3e4fe..0000000000 --- a/sources/tech/20180827 4 tips for better tmux sessions.md +++ /dev/null @@ -1,89 +0,0 @@ -translating by lujun9972 -4 tips for better tmux sessions -====== - 
-![](https://fedoramagazine.org/wp-content/uploads/2018/08/tmux-4-tips-816x345.jpg) - -The tmux utility, a terminal multiplexer, lets you treat your terminal as a multi-paned window into your system. You can arrange the configuration, run different processes in each, and generally make better use of your screen. We introduced some readers to this powerful tool [in this earlier article][1]. Here are some tips that will help you get more out of tmux if you’re getting started. - -This article assumes your current prefix key is Ctrl+b. If you’ve remapped that prefix, simply substitute your prefix in its place. - -### Set your terminal to automatically use tmux - -One of the biggest benefits of tmux is being able to disconnect and reconnect to sessions at will. This makes remote login sessions more powerful. Have you ever lost a connection and wished you could get back the work you were doing on the remote system? With tmux this problem is solved. - -However, you may sometimes find yourself doing work on a remote system, and realize you didn’t start a session. One way to avoid this is to have tmux start or attach every time you login to a system with an interactive shell. - -Add this to your remote system’s ~/.bash_profile file: - -``` -if [ -z "$TMUX" ]; then - tmux attach -t default || tmux new -s default -fi -``` - -Then log out of the remote system, and log back in with SSH. You’ll find you’re in a tmux session named default. This session will be regenerated at next login if you exit it. But more importantly, if you detach from it as normal, your work is waiting for you next time you login — especially useful if your connection is interrupted. - -Of course you can add this to your local system as well. Note that terminals inside most GUIs won’t use the default session automatically, because they aren’t login shells. While you can change that behavior, it may result in nesting that makes the session less usable, so proceed with caution.
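If your local OpenSSH is version 7.6 or newer, an alternative to the shell-profile approach is to let ssh itself run tmux on connect. This is only a sketch, with a made-up host entry for your local ~/.ssh/config; substitute your own host name and address:

```
# Hypothetical host entry; adjust "myserver" and HostName to taste.
# RemoteCommand requires OpenSSH 7.6+, and RequestTTY gives tmux a terminal.
Host myserver
    HostName server.example.com
    RequestTTY yes
    RemoteCommand tmux attach -t default || tmux new -s default
```

With this in place, ssh myserver lands you directly in the default session without touching the remote ~/.bash_profile.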
- -### Use zoom to focus on a single process - -While the point of tmux is to offer multiple windows, panes, and processes in a single session, sometimes you need to focus. If you’re in a process and need more space, or to focus on a single task, the zoom command works well. It expands the current pane to take up the entire current window space. - -Zoom can be useful in other situations too. For instance, imagine you’re using a terminal window in a graphical desktop. Panes can make it harder to copy and paste multiple lines from inside your tmux session. If you zoom the pane, you can do a clean copy/paste of multiple lines of data with ease. - -To zoom into the current pane, hit Ctrl+b, z. When you’re finished with the zoom function, hit the same key combo to unzoom the pane. - -### Bind some useful commands - -By default tmux has numerous commands available. But it’s helpful to have some of the more common operations bound to keys you can easily remember. Here are some examples you can add to your ~/.tmux.conf file to make sessions more enjoyable: - -``` -bind r source-file ~/.tmux.conf \; display "Reloaded config" -``` - -This command rereads the commands and bindings in your config file. Once you add this binding, exit any tmux sessions and then restart one. Now after you make any other future changes, simply run Ctrl+b, r and the changes will be part of your existing session. - -``` -bind V split-window -h -bind H split-window -``` - -These commands make it easier to split the current window across a vertical axis (note that’s Shift+V) or across a horizontal axis (Shift+H). - -If you want to see how all keys are bound, use Ctrl+B, ? to see a list. You may see keys bound in copy-mode first, for when you’re working with copy and paste inside tmux. The prefix mode bindings are where you’ll see ones you’ve added above. Feel free to experiment with your own! 
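Putting this section's examples together, the relevant fragment of a ~/.tmux.conf might look like the following; the key choices are simply the ones suggested above, so swap in whatever you'll actually remember:

```
# Reload this file in place with prefix + r
bind r source-file ~/.tmux.conf \; display "Reloaded config"

# Quicker splits: Shift+V for a vertical split, Shift+H for a horizontal one
bind V split-window -h
bind H split-window
```

Remember that a running session only picks these up after you reload with Ctrl+b, r (or start a fresh session).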
-
-### Use powerline for great justice
-
-[As reported in a previous Fedora Magazine article][2], the powerline utility is a fantastic addition to your shell. But it also has capabilities when used with tmux. Because tmux takes over the entire terminal space, the powerline window can provide more than just a better shell prompt.
-
- [![Screenshot of tmux powerline in git folder](https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53-1024x690.png)][3]
-
-If you haven’t already, follow the instructions in the [Magazine’s powerline article][4] to install that utility. Then, install the addon [using sudo][5]:
-
-```
-sudo dnf install tmux-powerline
-```
-
-Now restart your session, and you’ll see a spiffy new status line at the bottom. Depending on the terminal width, the default status line now shows your current session ID, open windows, system information, date and time, and hostname. If you change directory into a git-controlled project, you’ll see the branch and color-coded status as well.
-
-Of course, this status bar is highly configurable as well. Enjoy your new supercharged tmux session, and have fun experimenting with it.
-
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/4-tips-better-tmux-sessions/
-
-作者:[Paul W. 
Frields][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://fedoramagazine.org/author/pfrields/
-[1]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/
-[2]:https://fedoramagazine.org/add-power-terminal-powerline/
-[3]:https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53.png
-[4]:https://fedoramagazine.org/add-power-terminal-powerline/
-[5]:https://fedoramagazine.org/howto-use-sudo/
diff --git a/translated/tech/20180827 4 tips for better tmux sessions.md b/translated/tech/20180827 4 tips for better tmux sessions.md
new file mode 100644
index 0000000000..5e507985fb
--- /dev/null
+++ b/translated/tech/20180827 4 tips for better tmux sessions.md
@@ -0,0 +1,88 @@
+更好利用 tmux 会话的 4 个技巧
+======
+
+![](https://fedoramagazine.org/wp-content/uploads/2018/08/tmux-4-tips-816x345.jpg)
+
+tmux 是一个终端多路复用工具,它可以让你系统上的终端支持多面板。你可以安排好面板配置,在每个面板中运行不同的进程,这通常可以让你更好地利用你的屏幕。我们在 [这篇早期的文章 ][1] 中向读者介绍过这一强大的工具。如果你已经开始使用 tmux 了,那么这里有一些技巧可以帮你更好地使用它。
+
+本文假设你当前的前缀键是 `Ctrl+b`。如果你已重新映射该前缀,只需在相应位置替换为你定义的前缀即可。
+
+### 设置终端为自动使用 tmux
+
+使用 tmux 的一个最大好处就是可以随意地从会话中断开和重连。这使得远程登录会话更加强大。你有没有遇到过丢失了与远程系统的连接,然后希望能够恢复在远程系统上做过的那些工作的情况?tmux 能够解决这一问题。
+
+然而,有时在远程系统上工作时,你可能会忘记开启一个会话。避免出现这一情况的一个方法就是每次通过交互式 shell 登录系统时都让 tmux 启动或附加上一个会话。
+
+在你远程系统上的 ~/.bash_profile 文件中加入下面内容:
+
+```
+if [ -z "$TMUX" ]; then
+ tmux attach -t default || tmux new -s default
+fi
+```
+
+然后注销远程系统,并使用 SSH 重新登录。你会发现你处在一个名为 default 的 tmux 会话中了。如果退出该会话,则下次登录时还会重新生成此会话。但更重要的是,若您正常地从会话中分离,那么下次登录时你会发现之前的工作并没有丢失 - 这在连接中断时非常有用。
+
+你当然也可以将这段配置加入本地系统中。需要注意的是,大多数 GUI 界面的终端并不会自动使用这个 default 会话,因为它们并不是登录 shell。虽然你可以修改这一行为,但它可能会导致终端嵌套执行附加到 tmux 会话这一动作从而导致会话不太可用,因此当进行此操作时请一定小心。
+
+### 使用 zoom 使注意力专注于单个进程
+
+虽然 tmux 的目的就是在单个会话中提供多窗口、多面板和多进程的能力,但有时候你需要专注。如果你正在与一个进程进行交互并且需要更多空间,或需要专注于某个任务,则可以使用 zoom
命令。该命令会将当前面板扩展,占据整个当前窗口的空间。
+
+Zoom 在其他情况下也很有用。比如,想象你在图形桌面上运行一个终端窗口。面板会使得从 tmux 会话中拷贝和粘贴多行内容变得相对困难。但若你用 zoom 对面板进行了缩放,就可以很容易地对多行数据进行拷贝/粘贴。
+
+要对当前面板进行缩放,按下 `Ctrl+b,z`。需要恢复的话,按下相同的按键组合即可取消面板的缩放。
+
+### 绑定一些有用的命令
+
+tmux 默认有大量的命令可用。但将一些更常用的操作绑定到容易记忆的快捷键会很有用。下面一些例子可以让会话变得更好用,你可以添加到 ~/.tmux.conf 文件中:
+
+```
+bind r source-file ~/.tmux.conf \; display "Reloaded config"
+```
+
+该命令重新读取你配置文件中的命令和键绑定。添加该条绑定后,退出所有的 tmux 会话然后重启一个会话。现在你做了任何更改后,只需要简单地按下 `Ctrl+b,r` 就能将修改的内容应用到现有的会话中了。
+
+```
+bind V split-window -h
+bind H split-window
+```
+
+这些命令可以很方便地将当前窗口沿纵轴切分(注意是 Shift+V)或沿横轴切分(Shift+H)。
+
+若你想查看所有绑定的快捷键,按下 `Ctrl+b,?` 可以看到一个列表。你首先看到的应该是复制模式下的快捷键绑定,表示的是当你在 tmux 中进行复制粘贴时对应的快捷键。你添加的那两个键绑定会在前缀模式 (prefix mode) 中看到。请随意把玩吧!
+
+### 使用 powerline 增强体验
+
+[如前文所示 ][2],powerline 工具是对 shell 的绝佳补充。而且它也可以在 tmux 中使用。由于 tmux 接管了整个终端空间,powerline 能提供的可不仅仅是更好的 shell 提示符那么简单。
+
+ [![Screenshot of tmux powerline in git folder](https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53-1024x690.png)][3]
+
+如果你还没有这么做,按照 [本文 ][4] 中的指示来安装该工具。然后[使用 sudo][5] 来安装附加组件:
+
+```
+sudo dnf install tmux-powerline
+```
+
+然后重启会话,就会在底部看到一个漂亮的新状态栏。根据终端的宽度,默认的状态栏会显示你当前的会话 ID、打开的窗口、系统信息、日期和时间,以及主机名。若你进入了使用 git 进行版本控制的项目目录,还能看到分支名和用色彩标注的版本库状态。
+
+当然,这个状态栏具有很好的可配置性。享受你新增强的 tmux 会话吧,玩得开心点。
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/4-tips-better-tmux-sessions/
+
+作者:[Paul W. 
Frields][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fedoramagazine.org/author/pfrields/ +[1]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/ +[2]:https://fedoramagazine.org/add-power-terminal-powerline/ +[3]:https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53.png +[4]:https://fedoramagazine.org/add-power-terminal-powerline/ +[5]:https://fedoramagazine.org/howto-use-sudo/ From 4f271e8006b067a42292ebec73545c2f59e81ca2 Mon Sep 17 00:00:00 2001 From: DavidChenLiang Date: Wed, 3 Oct 2018 16:03:13 +0800 Subject: [PATCH 189/736] Update 20180823 CLI- improved.md --- sources/tech/20180823 CLI- improved.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180823 CLI- improved.md b/sources/tech/20180823 CLI- improved.md index d06bb1b2aa..52edaa28c8 100644 --- a/sources/tech/20180823 CLI- improved.md +++ b/sources/tech/20180823 CLI- improved.md @@ -1,3 +1,5 @@ +Translating by DavidChenLiang + CLI: improved ====== I'm not sure many web developers can get away without visiting the command line. As for me, I've been using the command line since 1997, first at university when I felt both super cool l33t-hacker and simultaneously utterly out of my depth. 
From a28d2cc31318ec4ca5cfd5ca80ce5bf62e3171ca Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 3 Oct 2018 17:18:16 +0800 Subject: [PATCH 190/736] PRF:20180918 Linux firewalls- What you need to know about iptables and firewalld.md @heguangzhi --- ...ed to know about iptables and firewalld.md | 84 +++++++++---------- 1 file changed, 38 insertions(+), 46 deletions(-) diff --git a/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md b/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md index c3ecb7b1d3..2e52cabba0 100644 --- a/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md +++ b/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md @@ -1,96 +1,90 @@ -Linux 防火墙: 关于 iptables 和 firewalld,你需要知道些什么 +Linux 防火墙:关于 iptables 和 firewalld 的那些事 ====== -以下是如何使用 iptables 和 firewalld 工具来管理 Linux 防火墙规则。 +> 以下是如何使用 iptables 和 firewalld 工具来管理 Linux 防火墙规则。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab) -这篇文章摘自我的书[Linux in Action][1],第二 Manning project 尚未发布。 + +这篇文章摘自我的书《[Linux in Action][1]》,尚未发布的第二个曼宁出版项目。 ### 防火墙 - -防火墙是一组规则。当数据包进出受保护的网络时,进出内容(特别是关于其来源、目标和使用的协议等信息)会根据防火墙规则进行检测,以确定是否允许其通过。下面是一个简单的例子: - +防火墙是一组规则。当数据包进出受保护的网络区域时,进出内容(特别是关于其来源、目标和使用的协议等信息)会根据防火墙规则进行检测,以确定是否允许其通过。下面是一个简单的例子: ![防火墙过滤请求] [3] -防火墙可以根据协议或基于目标的规则过滤请求。 +*防火墙可以根据协议或基于目标的规则过滤请求。* 一方面, [iptables][4] 是 Linux 机器上管理防火墙规则的工具。 -另一方面,[firewalld][5]也是 Linux 机器上管理防火墙规则的工具。 +另一方面,[firewalld][5] 也是 Linux 机器上管理防火墙规则的工具。 -你有什么问题吗?如果我告诉你还有另外一种工具,叫做 [nftables][6],这会不会糟蹋你的一天呢? +你有什么问题吗?如果我告诉你还有另外一种工具,叫做 [nftables][6],这会不会糟蹋你的美好一天呢? 
-好吧,我承认整件事确实有点好笑,所以让我解释一下了。这一切都从 Netfilter 开始,在 Linux 内核模块级别, Netfilter 控制访问网络栈。几十年来,管理 Netfilter 钩子的主要命令行工具是 iptables 规则集。 +好吧,我承认整件事确实有点好笑,所以让我来解释一下。这一切都从 Netfilter 开始,它在 Linux 内核模块级别控制访问网络栈。几十年来,管理 Netfilter 钩子的主要命令行工具是 iptables 规则集。 -因为调用这些规则所需的语法看起来有点晦涩难懂,所以各种用户友好的实现方式,如[ufw][7] 和 firewalld 被引入作,并为更高级别的 Netfilter 解释器。然而,Ufw 和 firewalld 主要是为解决独立计算机面临的各种问题而设计的。构建全方面的网络解决方案通常需要 iptables,或者从2014年起,它的替代品 nftables (nft 命令行工具)。 - - -iptables 没有消失,仍然被广泛使用着。事实上,在未来的许多年里,作为一名管理员,你应该会使用 iptables 来保护的网络。但是nftables 通过操作经典的 Netfilter 工具集带来了一些重要的崭新的功能。 +因为调用这些规则所需的语法看起来有点晦涩难懂,所以各种用户友好的实现方式,如 [ufw][7] 和 firewalld 被引入,作为更高级别的 Netfilter 解释器。然而,ufw 和 firewalld 主要是为解决单独的计算机所面临的各种问题而设计的。构建全方面的网络解决方案通常需要 iptables,或者从 2014 年起,它的替代品 nftables (nft 命令行工具)。 +iptables 没有消失,仍然被广泛使用着。事实上,在未来的许多年里,作为一名管理员,你应该会使用 iptables 来保护的网络。但是 nftables 通过操作经典的 Netfilter 工具集带来了一些重要的崭新的功能。 从现在开始,我将通过示例展示 firewalld 和 iptables 如何解决简单的连接问题。 ### 使用 firewalld 配置 HTTP 访问 -正如你能从它的名字中猜到的,firewalld 是 [systemd][8] 家族的一部分。Firewalld 可以安装在 Debian/Ubuntu 机器上,不过, 它默认安装在 RedHat 和 CentOS 上。如果您的计算机上运行着像 Apache 这样的 web 服务器,您可以通过浏览服务器的 web 根目录来确认防火墙是否正在工作。如果网站不可访问,那么 firewalld 正在工作。 +正如你能从它的名字中猜到的,firewalld 是 [systemd][8] 家族的一部分。firewalld 可以安装在 Debian/Ubuntu 机器上,不过,它默认安装在 RedHat 和 CentOS 上。如果您的计算机上运行着像 Apache 这样的 web 服务器,您可以通过浏览服务器的 web 根目录来确认防火墙是否正在工作。如果网站不可访问,那么 firewalld 正在工作。 -你可以使用 `firewall-cmd` 工具从命令行管理 firewalld 设置。添加 `–state` 参数将返回当前防火墙的状态: +你可以使用 `firewall-cmd` 工具从命令行管理 firewalld 设置。添加 `–state` 参数将返回当前防火墙的状态: ``` # firewall-cmd --state running ``` -默认情况下,firewalld 将处于运行状态,并将拒绝所有传入流量,但有几个例外,如 SSH。这意味着你的网站不会有太多的访问者,这无疑会为你节省大量的数据传输成本。然而,这不是你对 web 服务器的要求,你希望打开 HTTP 和 HTTPS 端口,按照惯例,这两个端口分别被指定为80和443。firewalld 提供了两种方法来实现这个功能。一个是通过 `–add-port` 参数,该参数直接引用端口号及其将使用的网络协议(在本例中为TCP )。 另外一个是通过`–permanent` 参数,它告诉 firewalld 在每次服务器启动时加载此规则: - +默认情况下,firewalld 处于运行状态,并拒绝所有传入流量,但有几个例外,如 SSH。这意味着你的网站不会有太多的访问者,这无疑会为你节省大量的数据传输成本。然而,这不是你对 web 服务器的要求,你希望打开 HTTP 和 HTTPS 端口,按照惯例,这两个端口分别被指定为 80 和 443。firewalld 提供了两种方法来实现这个功能。一个是通过 
`–add-port` 参数,该参数直接引用端口号及其将使用的网络协议(在本例中为TCP)。 另外一个是通过 `–permanent` 参数,它告诉 firewalld 在每次服务器启动时加载此规则: ``` # firewall-cmd --permanent --add-port=80/tcp # firewall-cmd --permanent --add-port=443/tcp ``` - `–reload` 参数将这些规则应用于当前会话: +`–reload` 参数将这些规则应用于当前会话: ``` # firewall-cmd --reload ``` -查看当前防火墙上的设置, 运行 `–list-services` : +查看当前防火墙上的设置,运行 `–list-services`: ``` # firewall-cmd --list-services dhcpv6-client http https ssh ``` -假设您已经如前所述添加了浏览器访问,那么 HTTP、HTTPS 和 SSH 端口现在都应该是开放的—— `dhcpv6-client` ,它允许 Linux 从本地 DHCP 服务器请求 IPv6 IP地址。 +假设您已经如前所述添加了浏览器访问,那么 HTTP、HTTPS 和 SSH 端口现在都应该是和 `dhcpv6-client` 一样开放的 —— 它允许 Linux 从本地 DHCP 服务器请求 IPv6 IP 地址。 ### 使用 iptables 配置锁定的客户信息亭 -我相信你已经看到了信息亭——它们是放在机场、图书馆和商务场所的盒子里的平板电脑、触摸屏和ATM类电脑,邀请顾客和路人浏览内容。大多数信息亭的问题是,你通常不希望用户像在自己家一样,把他们当成自己的设备。它们通常不是用来浏览、观看 YouTube 视频或对五角大楼发起拒绝服务攻击的。因此,为了确保它们没有被滥用,你需要锁定它们。 +我相信你已经看到了信息亭——它们是放在机场、图书馆和商务场所的盒子里的平板电脑、触摸屏和 ATM 类电脑,邀请顾客和路人浏览内容。大多数信息亭的问题是,你通常不希望用户像在自己家一样,把他们当成自己的设备。它们通常不是用来浏览、观看 YouTube 视频或对五角大楼发起拒绝服务攻击的。因此,为了确保它们没有被滥用,你需要锁定它们。 +一种方法是应用某种信息亭模式,无论是通过巧妙使用 Linux 显示管理器还是控制在浏览器级别。但是为了确保你已经堵塞了所有的漏洞,你可能还想通过防火墙添加一些硬性的网络控制。在下一节中,我将讲解如何使用iptables 来完成。 -一种方法是应用某种信息亭模式,无论是通过巧妙使用Linux显示管理器还是在浏览器级别。但是为了确保你已经堵塞了所有的漏洞,你可能还想通过防火墙添加一些硬网络控制。在下一节中,我将讲解如何使用iptables 来完成。 - - -关于使用iptables,有两件重要的事情需要记住:你给规则的顺序非常关键,iptables 规则本身在重新启动后将无法存活。我会一次一个地在解释这些。 +关于使用 iptables,有两件重要的事情需要记住:你给出的规则的顺序非常关键;iptables 规则本身在重新启动后将无法保持。我会一次一个地在解释这些。 ### 信息亭项目 -为了说明这一切,让我们想象一下,我们为一家名为 BigMart 的大型连锁商店工作。它们已经存在了几十年;事实上,我们想象中的祖父母可能是在那里购物并长大的。但是这些天,BigMart 公司总部的人可能只是在数着亚马逊将他们永远赶下去的时间。 +为了说明这一切,让我们想象一下,我们为一家名为 BigMart 的大型连锁商店工作。它们已经存在了几十年;事实上,我们想象中的祖父母可能是在那里购物并长大的。但是如今,BigMart 公司总部的人可能只是在数着亚马逊将他们永远赶下去的时间。 -尽管如此,BigMart 的IT部门正在尽他们最大努力提供解决方案,他们向你发放了一些具有 WiFi 功能信息亭设备,你在整个商店的战略位置使用这些设备。其想法是,登录到 BigMart.com 产品页面,允许查找商品特征、过道位置和库存水平。信息亭还允许进入 bigmart-data.com,那里储存着许多图像和视频媒体信息。 +尽管如此,BigMart 的 IT 部门正在尽他们最大努力提供解决方案,他们向你发放了一些具有 WiFi 功能信息亭设备,你在整个商店的战略位置使用这些设备。其想法是,登录到 BigMart.com 产品页面,允许查找商品特征、过道位置和库存水平。信息亭还允许进入 bigmart-data.com,那里储存着许多图像和视频媒体信息。 
-除此之外,您还需要允许下载软件包更新。最后,您还希望只允许从本地工作站访问SSH,并阻止其他人登录。下图说明了它将如何工作: +除此之外,您还需要允许下载软件包更新。最后,您还希望只允许从本地工作站访问 SSH,并阻止其他人登录。下图说明了它将如何工作: ![信息亭流量IP表] [10] -信息亭业务流由 iptables 控制。 +*信息亭业务流由 iptables 控制。 * ### 脚本 -以下是 Bash 脚本内容: +以下是 Bash 脚本内容: ``` #!/bin/bash @@ -104,24 +98,24 @@ iptables -A INPUT -p tcp -s 10.0.3.1 --dport 22 -j ACCEPT iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 22 -j DROP ``` -我们从基本规则 `-A` 开始分析,它告诉iptables 我们要添加规则。`OUTPUT` 意味着这条规则应该成为输出的一部分。`-p` 表示该规则仅使用TCP协议的数据包,正如`-d` 告诉我们的,目的地址是 [bigmart.com][11]。`-j` 参数作用为数据包符合规则时要采取的操作是 `ACCEPT`。第一条规则表示允许或接受请求。但,最后一条规则表示删除或拒绝的请求。 +我们从基本规则 `-A` 开始分析,它告诉 iptables 我们要添加规则。`OUTPUT` 意味着这条规则应该成为输出链的一部分。`-p` 表示该规则仅使用 TCP 协议的数据包,正如 `-d` 告诉我们的,目的地址是 [bigmart.com][11]。`-j` 参数的作用是当数据包符合规则时要采取的操作是 `ACCEPT`。第一条规则表示允许(或接受)请求。但,往下的规则你能看到丢弃(或拒绝)的请求。 -规则顺序是很重要的。iptables 仅仅允许匹配规则的内容请求通过。一个向外发出的浏览器请求,比如访问[youtube.com][12] 是会通过的,因为这个请求匹配第四条规则,但是当它到达“dport 80”或“dport 443”规则时——取决于是HTTP还是HTTPS请求——它将被删除。iptables不再麻烦检查了,因为那是一场比赛。 +规则顺序是很重要的。因为 iptables 会对一个请求遍历每个规则,直到遇到匹配的规则。一个向外发出的浏览器请求,比如访问 bigmart.com 是会通过的,因为这个请求匹配第一条规则,但是当它到达 `dport 80` 或 `dport 443` 规则时——取决于是 HTTP 还是 HTTPS 请求——它将被丢弃。当遇到匹配时,iptables 不再继续往下检查了。(LCTT 译注:此处原文有误,径改。) -另一方面,向ubuntu.com 发出软件升级的系统请求,只要符合其适当的规则,就会通过。显然,我们在这里做的是,只允许向我们的 BigMart 或 Ubuntu 发送 HTTP 或 HTTPS 请求,而不允许向其他目的地发送。 +另一方面,向 ubuntu.com 发出软件升级的系统请求,只要符合其适当的规则,就会通过。显然,我们在这里做的是,只允许向我们的 BigMart 或 Ubuntu 发送 HTTP 或 HTTPS 请求,而不允许向其他目的地发送。 -最后两条规则将处理 SSH 请求。因为它不使用端口80或443端口,而是使用22端口,所以之前的两个丢弃规则不会拒绝它。在这种情况下,来自我的工作站的登录请求将被接受,但是对其他任何地方的请求将被拒绝。这一点很重要:确保用于端口22规则的IP地址与您用来登录的机器的地址相匹配——如果不这样做,将立即被锁定。当然,这没什么大不了的,因为按照目前的配置方式,只需重启服务器,iptables 规则就会全部丢失。如果使用 LXC 容器作为服务器并从 LXC 主机登录,则使用主机 IP 地址连接容器,而不是其公共地址。 +最后两条规则将处理 SSH 请求。因为它不使用端口 80 或 443 端口,而是使用 22 端口,所以之前的两个丢弃规则不会拒绝它。在这种情况下,来自我的工作站的登录请求将被接受,但是对其他任何地方的请求将被拒绝。这一点很重要:确保用于端口 22 规则的 IP 地址与您用来登录的机器的地址相匹配——如果不这样做,将立即被锁定。当然,这没什么大不了的,因为按照目前的配置方式,只需重启服务器,iptables 规则就会全部丢失。如果使用 LXC 容器作为服务器并从 LXC 主机登录,则使用主机 IP 地址连接容器,而不是其公共地址。 -如果机器的IP发生变化,请记住更新这个规则;否则,你会被拒绝访问。 +如果机器的 IP 
发生变化,请记住更新这个规则;否则,你会被拒绝访问。 -在家玩(是在某种性虚拟机上)?太好了。创建自己的脚本。现在我可以保存脚本,使用`chmod` 使其可执行,并以`sudo` 的形式运行它。不要担心 `igmart-data.com没找到`错误——当然没找到;它不存在。 +在家玩(是在某种一次性虚拟机上)?太好了。创建自己的脚本。现在我可以保存脚本,使用 `chmod` 使其可执行,并以 `sudo` 的形式运行它。不要担心“igmart-data.com 没找到”之类的错误 —— 当然没找到;它不存在。 ``` chmod +X scriptname.sh sudo ./scriptname.sh ``` -你可以使用`cURL` 命令行测试防火墙。请求 ubuntu.com 奏效,但请求 [manning.com][13]是失败的 。 +你可以使用 `cURL` 命令行测试防火墙。请求 ubuntu.com 奏效,但请求 [manning.com][13] 是失败的 。 ``` @@ -131,24 +125,22 @@ curl manning.com ### 配置 iptables 以在系统启动时加载 -现在,我如何让这些规则在每次 kiosk 启动时自动加载?第一步是将当前规则保存。使用`iptables-save` 工具保存规则文件。将在根目录中创建一个包含规则列表的文件。管道后面跟着 tee 命令,是将我的`sudo` 权限应用于字符串的第二部分:将文件实际保存到否则受限的根目录。 +现在,我如何让这些规则在每次信息亭启动时自动加载?第一步是将当前规则保存。使用 `iptables-save` 工具保存规则文件。这将在根目录中创建一个包含规则列表的文件。管道后面跟着 `tee` 命令,是将我的`sudo` 权限应用于字符串的第二部分:将文件实际保存到否则受限的根目录。 -然后我可以告诉系统每次启动时运行一个相关的工具,叫做`iptables-restore` 。我们在上一模块中看到的常规cron 作业,因为它们在设定的时间运行,但是我们不知道什么时候我们的计算机可能会决定崩溃和重启。 +然后我可以告诉系统每次启动时运行一个相关的工具,叫做 `iptables-restore` 。我们在上一章节(LCTT 译注:指作者的书)中看到的常规 cron 任务并不适用,因为它们在设定的时间运行,但是我们不知道什么时候我们的计算机可能会决定崩溃和重启。 -有许多方法来处理这个问题。这里有一个: +有许多方法来处理这个问题。这里有一个: - -在我的 Linux 机器上,我将安装一个名为 [anacron][14] 的程序,该程序将在 /etc/ 目录中为我们提供一个名为anacrondab 的文件。我将编辑该文件并添加这个 `iptables-restore` 命令,告诉它加载该文件的当前值。引导后一分钟,规则每天(必要时)加载到 iptables 中。我会给作业一个标识符( `iptables-restore` ),然后添加命令本身。如果你在家和我一起这样,你应该通过重启系统来测试一下。 +在我的 Linux 机器上,我将安装一个名为 [anacron][14] 的程序,该程序将在 `/etc/` 目录中为我们提供一个名为 `anacrontab` 的文件。我将编辑该文件并添加这个 `iptables-restore` 命令,告诉它加载那个 .rule 文件的当前内容。当引导后,规则每天(必要时)01:01 时加载到 iptables 中(LCTT 译注:anacron 会补充执行由于机器没有运行而错过的 cron 任务,因此,即便 01:01 时机器没有启动,也会在机器启动会尽快执行该任务)。我会给该任务一个标识符(`iptables-restore`),然后添加命令本身。如果你在家和我一起这样,你应该通过重启系统来测试一下。 ``` sudo iptables-save | sudo tee /root/my.active.firewall.rules sudo apt install anacron sudo nano /etc/anacrontab 1 1 iptables-restore iptables-restore < /root/my.active.firewall.rules - ``` -我希望这些实际例子已经说明了如何使用 iptables 和 firewalld 来管理基于Linux的防火墙上的连接问题。 +我希望这些实际例子已经说明了如何使用 iptables 和 firewalld 来管理基于 Linux 的防火墙上的连接问题。 
-------------------------------------------------------------------------------- @@ -157,7 +149,7 @@ via: https://opensource.com/article/18/9/linux-iptables-firewalld 作者:[David Clinton][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0dc692ee68d59d5856a8dee957d91abe942398b0 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 3 Oct 2018 17:18:59 +0800 Subject: [PATCH 191/736] PUB:20180918 Linux firewalls- What you need to know about iptables and firewalld.md @heguangzhi https://linux.cn/article-10075-1.html --- ...rewalls- What you need to know about iptables and firewalld.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180918 Linux firewalls- What you need to know about iptables and firewalld.md (100%) diff --git a/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md b/published/20180918 Linux firewalls- What you need to know about iptables and firewalld.md similarity index 100% rename from translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md rename to published/20180918 Linux firewalls- What you need to know about iptables and firewalld.md From d00d9d4739ba473ca1ea9904761c4c936c927dfc Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 3 Oct 2018 20:55:52 +0800 Subject: [PATCH 192/736] PRF:20180813 5 of the Best Linux Educational Software and Games for Kids.md @qhwdw --- ...Educational Software and Games for Kids.md | 39 ++++++++++--------- 1 file changed, 20 insertions(+), 19 deletions(-) diff --git a/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md b/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md index 3a1981f0bc..029c70b675 100644 
--- a/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md +++ b/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md @@ -1,4 +1,5 @@ -# 5 个给孩子的非常好的 Linux 教育软件和游戏 +5 个给孩子的非常好的 Linux 游戏和教育软件 +================= ![](https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-programs-for-kids-featured.jpg) @@ -8,39 +9,39 @@ Linux 是一个非常强大的操作系统,因此因特网上的大多数服 **相关阅读**:[使用一个 Linux 发行版的新手指南][1] -### 1. GCompris +### 1、GCompris -如果你正在为你的孩子寻找一款最佳的教育软件,[GCompris][2] 将是你的最好的开端。这款软件专门为 2 到 10 岁的孩子所设计。作为所有的 Linux 教育软件套装的巅峰之作,GCompris 为孩子们提供了大约 100 项活动。它囊括了你期望你的孩子学习的所有内容,从阅读材料到科学、地理、绘画、代数、测验、等等。 +如果你正在为你的孩子寻找一款最佳的教育软件,[GCompris][2] 将是你的最好的开端。这款软件专门为 2 到 10 岁的孩子所设计。作为所有的 Linux 教育软件套装的巅峰之作,GCompris 为孩子们提供了大约 100 项活动。它囊括了你期望你的孩子学习的所有内容,从阅读材料到科学、地理、绘画、代数、测验等等。 ![Linux educational software and games][3] -GCompris 甚至有一项活动可以帮你的孩子学习计算机的相关知识。如果你的孩子还很小,你希望他去学习字母、颜色、和形状,GCompris 也有这方面的相关内容。更重要的是,它也为孩子们准备了一些益智类游戏,比如国际象棋、井字棋、好记性、以及猜词游戏。GCompris 并不是一个仅在 Linux 上可运行的游戏。它也可以运行在 Windows 和 Android 上。 +GCompris 甚至有一项活动可以帮你的孩子学习计算机的相关知识。如果你的孩子还很小,你希望他去学习字母、颜色和形状,GCompris 也有这方面的相关内容。更重要的是,它也为孩子们准备了一些益智类游戏,比如国际象棋、井字棋、好记性、以及猜词游戏。GCompris 并不是一个仅在 Linux 上可运行的游戏。它也可以运行在 Windows 和 Android 上。 -### 2. TuxMath +### 2、TuxMath -很多学生认为数学是们非常难的课程。你可以通过 Linux 教育软件如 [TuxMath][4] 来让你的孩子了解数学技能,从而来改变这种看法。TuxMath 是为孩子开发的顶级的数学教育辅助游戏。在这个游戏中,你的角色是在如雨点般下降的数学问题中帮助 Linux 企鹅 Tux 来保护它的星球。 +很多学生认为数学是门非常难的课程。你可以通过 Linux 教育软件如 [TuxMath][4] 来让你的孩子了解数学技能,从而来改变这种看法。TuxMath 是为孩子开发的顶级的数学教育辅助游戏。在这个游戏中,你的角色是在如雨点般下降的数学问题中帮助 Linux 企鹅 Tux 来保护它的星球。 ![linux-educational-software-tuxmath-1][5] 在它们落下来毁坏 Tux 的星球之前,找到问题的答案,就可以使用你的激光去帮助 Tux 拯救它的星球。数字问题的难度每过一关就会提升一点。这个游戏非常适合孩子,因为它可以让孩子们去开动脑筋解决问题。而且还有助他们学好数学,以及帮助他们开发智力。 -### 3. 
Sugar on a Stick +### 3、Sugar on a Stick -[Sugar on a Stick][6] 是献给孩子们的学习程序 —— 一个广受好评的全新教学法。这个程序为你的孩子提供一个成熟的教学平台,在那里,他们可以收获创造、探索、发现和思考方面的技能。和 GCompris 一样,Sugar on a Stick 为孩子们带来了包括游戏和谜题在内的大量学习资源。 +[Sugar on a Stick][6] 是献给孩子们的学习程序 —— 一个广受好评的全新教学法。这个程序为你的孩子提供一个成熟的教学平台,在那里,他们可以收获创造、探索、发现和思考方面的技能。和 GCompris 一样,Sugar on a Stick 为孩子们带来了包括游戏和谜题在内的大量学习资源。 ![linux-educational-software-sugar-on-a-stick][7] 关于 Sugar on a Stick 最大的一个好处是你可以将它配置在一个 U 盘上。你只要有一台 X86 的 PC,插入那个 U 盘,然后就可以从 U 盘引导这个发行版。Sugar on a Stick 是由 Sugar 实验室提供的一个项目 —— 这个实验室是一个由志愿者运作的非盈利组织。 -### 4. KDE Edu Suite +### 4、KDE Edu Suite -[KDE Edu Suite][8] 是一个用途与众不同的软件包。带来了大量不同领域的应用程序,KDE 社区已经证实,它不仅是一系列成年人授权的问题;它还关心年青一代如何适应他们周围的一切。它囊括了一系列孩子们使用的应用程序,从科学到数学、地理等等。 +[KDE Edu Suite][8] 是一个用途与众不同的软件包。带来了大量不同领域的应用程序,KDE 社区已经证实,它不仅可以给成年人授权;它还关心年青一代如何适应他们周围的一切。它囊括了一系列孩子们使用的应用程序,从科学到数学、地理等等。 ![linux-educational-software-kde-1][9] KDE Edu 套件根据长大后所必需的知识为基础,既能够用作学校的教学软件,也能够作为孩子们的学习 APP。它提供了大量的可免费下载的软件包。KDE Edu 套件在主流的 GNU/Linux 发行版都能安装。 -### 5. Tux Paint +### 5、Tux Paint ![linux-educational-software-tux-paint-2][10] @@ -61,20 +62,20 @@ via: https://www.maketecheasier.com/5-best-linux-software-packages-for-kids/ 作者:[Kenneth Kimari][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://www.maketecheasier.com/author/kennkimari/ -[1]: https://www.maketecheasier.com/beginner-guide-to-using-linux-distro/ "The Beginner’s Guide to Using a Linux Distro" +[1]: https://www.maketecheasier.com/beginner-guide-to-using-linux-distro/ [2]: http://www.gcompris.net/downloads-en.html -[3]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-gcompris.jpg "Linux educational software and games" +[3]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-gcompris.jpg [4]: 
https://tuxmath.en.uptodown.com/ubuntu -[5]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tuxmath-1.jpg "linux-educational-software-tuxmath-1" +[5]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tuxmath-1.jpg [6]: http://wiki.sugarlabs.org/go/Sugar_on_a_Stick/Downloads -[7]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-sugar-on-a-stick.png "linux-educational-software-sugar-on-a-stick" +[7]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-sugar-on-a-stick.png [8]: https://edu.kde.org/ -[9]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-kde-1.jpg "linux-educational-software-kde-1" -[10]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tux-paint-2.jpg "linux-educational-software-tux-paint-2" +[9]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-kde-1.jpg +[10]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tux-paint-2.jpg [11]: http://www.tuxpaint.org/ -[12]: http://edubuntu.org/ \ No newline at end of file +[12]: http://edubuntu.org/ From 35b0acca5d201b87fcc6c8e3e0a06b60c634a78b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 3 Oct 2018 20:56:29 +0800 Subject: [PATCH 193/736] PUB:20180813 5 of the Best Linux Educational Software and Games for Kids.md @qhwdw https://linux.cn/article-10076-1.html --- ...5 of the Best Linux Educational Software and Games for Kids.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180813 5 of the Best Linux Educational Software and Games for Kids.md (100%) diff --git a/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md b/published/20180813 5 of the Best Linux Educational Software and Games for Kids.md similarity index 100% rename from translated/tech/20180813 5 of the Best Linux 
Educational Software and Games for Kids.md rename to published/20180813 5 of the Best Linux Educational Software and Games for Kids.md From 95ed6758ab4a2b82cac59d979827b8e93f530c00 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 3 Oct 2018 21:13:40 +0800 Subject: [PATCH 194/736] translate done: 20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md --- ...tion (conflicting files)- In Arch Linux.md | 50 ------------------- ...tion (conflicting files)- In Arch Linux.md | 49 ++++++++++++++++++ 2 files changed, 49 insertions(+), 50 deletions(-) delete mode 100644 sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md create mode 100644 translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md diff --git a/sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md b/sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md deleted file mode 100644 index bb0479e7fe..0000000000 --- a/sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md +++ /dev/null @@ -1,50 +0,0 @@ -translating by lujun9972 -Solve "error: failed to commit transaction (conflicting files)" In Arch Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/06/arch_linux_wallpaper-720x340.png) - -It’s been a month since I upgraded my Arch Linux desktop. Today, I tried to update my Arch Linux system, and ran into an error that said **“error: failed to commit transaction (conflicting files) stfl: /usr/lib/libstfl.so.0 exists in filesystem”**. It looks like one library (/usr/lib/libstfl.so.0) that exists on my filesystem and pacman can’t upgrade it. If you’re encountered with the same error, here is a quick fix to resolve it. 
- -### Solve “error: failed to commit transaction (conflicting files)” In Arch Linux - -You have three options. - -1. Simply ignore the problematic **stfl** library from being upgraded and try to update the system again. Refer this guide to know [**how to ignore package from being upgraded**][1]. - -2. Overwrite the package using command: -``` -$ sudo pacman -Syu --overwrite /usr/lib/libstfl.so.0 -``` - -3. Remove stfl library file manually and try to upgrade the system again. Please make sure the intended package is not a dependency to any important package. and check the archlinux.org whether are mentions of this conflict. -``` -$ sudo rm /usr/lib/libstfl.so.0 -``` - -Now, try to update the system: -``` -$ sudo pacman -Syu -``` - -I chose the third option and just deleted the file and upgraded my Arch Linux system. It works now! - -Hope this helps. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-conflicting-files-in-arch-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/safely-ignore-package-upgraded-arch-linux/ diff --git a/translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md b/translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md new file mode 100644 index 0000000000..d77a63be3d --- /dev/null +++ b/translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md @@ -0,0 +1,49 @@ +解决 Arch Linux 中出现的 “error:failed to commit transaction (conflicting files)” +====== + 
![](https://www.ostechnix.com/wp-content/uploads/2018/06/arch_linux_wallpaper-720x340.png)
+
+自上次升级我的 Arch Linux 桌面以来已经有一个月了。今天我试着更新我的 Arch Linux 系统,然后遇到一个错误 **“error:failed to commit transaction (conflicting files) stfl:/usr/lib/libstfl.so.0 exists in filesystem”**。看起来是 pacman 无法更新一个已经存在于文件系统上的库 (/usr/lib/libstfl.so.0)。如果你也遇到了同样的问题,下面是一个快速解决方案。
+
+### 解决 Arch Linux 中出现的 “error:failed to commit transaction (conflicting files)”
+
+有三种方法。
+
+1. 在升级时忽略导致问题的 **stfl** 库并尝试再次更新系统。请参阅此指南以了解 [**如何在更新时忽略软件包**][1]。
+
+2. 使用命令覆盖这个包:
+```
+$ sudo pacman -Syu --overwrite /usr/lib/libstfl.so.0
+```
+
+3. 手工删掉 stfl 库,然后再次升级系统。请确保目标包不被其他任何重要的包所依赖。可以到 archlinux.org 查看是否有提到这种冲突。
+```
+$ sudo rm /usr/lib/libstfl.so.0
+```
+
+现在,尝试更新系统:
+```
+$ sudo pacman -Syu
+```
+
+我选择第三种方法,直接删除该文件然后升级 Arch Linux 系统。很有效!
+
+希望本文对你有所帮助。还有更多好东西。敬请期待!
+
+干杯!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-conflicting-files-in-arch-linux/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/safely-ignore-package-upgraded-arch-linux/
From 19fef64b60bd3bfbe9a9a929b5679bcbe54ae130 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang" <xingyu.wang@gmail.com>
Date: Wed, 3 Oct 2018 21:29:52 +0800
Subject: [PATCH 195/736] PRF:20180823 How To Easily And Safely Manage Cron Jobs In Linux.md

@HankChow

---
 ...ly And Safely Manage Cron Jobs In Linux.md | 54 +++++++++----------
 1 file changed, 25 insertions(+), 29 deletions(-)

diff --git a/translated/tech/20180823 How To 
Easily And Safely Manage Cron Jobs In Linux.md +++ b/translated/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md @@ -1,21 +1,20 @@ -在 Linux 中安全轻松地管理 Cron 定时任务 +在 Linux 中安全且轻松地管理 Cron 定时任务 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/08/Crontab-UI-720x340.jpg) -在 Linux 中遇到计划任务的时候,你首先会想到的大概就是 Cron 定时任务了。Cron 定时任务能帮助你在类 Unix 操作系统中计划性地执行命令或者任务。也可以参考一下我们之前的一篇[《关于 Cron 定时任务的新手指导》][1]。对于有一定 Linux 经验的人来说,设置 Cron 定时任务不是什么难事,但对于新手来说就不一定了,他们在编辑 Crontab 文件的时候不知不觉中犯的一些小错误,也有可能把整个 Cron 定时任务搞挂了。如果你在处理 Cron 定时任务的时候为了以防万一,可以尝试使用 **Crontab UI**,它是一个可以在类 Unix 操作系统上安全轻松管理 Cron 定时任务的页面工具。 - -Crontab UI 是使用 NodeJS 编写的免费开源软件。有了 Crontab UI,你在创建、删除和修改 Cron 定时任务的时候就不需要手工编辑 Crontab 文件了,只需要打开浏览器稍微操作一下,就能完成上面这些工作。你可以用 Crontab UI 轻松创建、编辑、暂停、删除、备份 Cron 定时任务,甚至还可以简单做到导入、导出、部署其它机器上的 Cron 定时任务,它还支持错误日志、邮件发送和钩子。 +在 Linux 中遇到计划任务的时候,你首先会想到的大概就是 Cron 定时任务了。Cron 定时任务能帮助你在类 Unix 操作系统中计划性地执行命令或者任务。也可以参考一下我们之前的一篇《[关于 Cron 定时任务的新手指导][1]》。对于有一定 Linux 经验的人来说,设置 Cron 定时任务不是什么难事,但对于新手来说就不一定了,他们在编辑 crontab 文件的时候不知不觉中犯的一些小错误,也有可能把整个 Cron 定时任务搞挂了。如果你在处理 Cron 定时任务的时候为了以防万一,可以尝试使用 **Crontab UI**,它是一个可以在类 Unix 操作系统上安全轻松管理 Cron 定时任务的 Web 页面工具。 +Crontab UI 是使用 NodeJS 编写的自由开源软件。有了 Crontab UI,你在创建、删除和修改 Cron 定时任务的时候就不需要手工编辑 Crontab 文件了,只需要打开浏览器稍微操作一下,就能完成上面这些工作。你可以用 Crontab UI 轻松创建、编辑、暂停、删除、备份 Cron 定时任务,甚至还可以简单地做到导入、导出、部署其它机器上的 Cron 定时任务,它还支持错误日志、邮件发送和钩子。 ### 安装 Crontab UI -只需要一条命令就可以安装好 Crontab UI,但前提是已经安装好 NPM。如果还没有安装 NPM,可以参考[《如何在 Linux 上安装 NodeJS》][2]这篇文章。 +只需要一条命令就可以安装好 Crontab UI,但前提是已经安装好 NPM。如果还没有安装 NPM,可以参考《[如何在 Linux 上安装 NodeJS][2]》这篇文章。 执行这一条命令来安装 Crontab UI。 + ``` $ npm install -g crontab-ui - ``` 就是这么简单,下面继续来看看在 Crontab UI 上如何管理 Cron 定时任务。 @@ -23,29 +22,29 @@ $ npm install -g crontab-ui ### 在 Linux 上安全轻松管理 Cron 定时任务 执行这一条命令启动 Crontab UI: + ``` $ crontab-ui - ``` 你会看到这样的输出: + ``` Node version: 10.8.0 Crontab UI is running at http://127.0.0.1:8000 - ``` -首先在你的防火墙和路由器上放开 8000 端口,然后打开浏览器访问 ****。 +首先在你的防火墙和路由器上放开 8000 端口,然后打开浏览器访问 ``。 注意,默认只有在本地才能访问到 Crontab UI 的控制台页面。但如果你想让 
Crontab UI 使用系统的 IP 地址和自定义端口,也就是想让其它机器也访问到本地的 Crontab UI,你需要使用以下这个命令: + ``` $ HOST=0.0.0.0 PORT=9000 crontab-ui Node version: 10.8.0 Crontab UI is running at http://0.0.0.0:9000 - ``` -Crontab UI 就能够通过 :9000 这样的 URL 被远程机器访问到了。 +Crontab UI 就能够通过 `:9000` 这样的 URL 被远程机器访问到了。 Crontab UI 的控制台页面长这样: @@ -53,19 +52,17 @@ Crontab UI 的控制台页面长这样: 从上面的截图就可以看到,Crontab UI 的界面非常简洁,所有选项的含义都能不言自明。 -输入 `Ctrl + C` 就可以关闭 Crontab UI。 +在终端输入 `Ctrl + C` 就可以关闭 Crontab UI。 -**创建、编辑、运行、停止、删除 Cron 定时任务** - -点击“New”,输入 Cron 定时任务的信息并点击“Save”保存,就可以创建一个新的 Cron 定时任务了。 +#### 创建、编辑、运行、停止、删除 Cron 定时任务 + +点击 “New”,输入 Cron 定时任务的信息并点击 “Save” 保存,就可以创建一个新的 Cron 定时任务了。 1. 为 Cron 定时任务命名,这是可选的; 2. 你想要执行的完整命令; - 3. 设定计划执行的时间。你可以按照启动、每时、每日、每周、每月、每年这些指标快速指定计划任务,也可以明确指定任务执行的具体时间。指定好计划时间后,**Jobs** 区域就会显示 Cron 定时任务的句式。 + 3. 设定计划执行的时间。你可以按照启动、每时、每日、每周、每月、每年这些指标快速指定计划任务,也可以明确指定任务执行的具体时间。指定好计划时间后,“Jobs” 区域就会显示 Cron 定时任务的句式。 4. 选择是否为某个 Cron 定时任务记录错误日志。 - - 这是我的一个 Cron 定时任务样例。 ![](https://www.ostechnix.com/wp-content/uploads/2018/08/create-new-cron-job.png) @@ -74,41 +71,40 @@ Crontab UI 的控制台页面长这样: ![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard-1.png) -如果你需要更改 Cron 定时任务中的某些参数,只需要点击 **Edit** 按钮并按照你的需求更改对应的参数。点击 **Run** 按钮可以立即执行 Cron 定时任务,点击 **Stop** 则可以立即停止 Cron 定时任务。如果想要查看某个 Cron 定时任务的详细日志,可以点击 **Log** 按钮。对于不再需要的 Cron 定时任务,就可以按 **Delete** 按钮删除。 +如果你需要更改 Cron 定时任务中的某些参数,只需要点击 “Edit” 按钮并按照你的需求更改对应的参数。点击 “Run” 按钮可以立即执行 Cron 定时任务,点击 “Stop” 则可以立即停止 Cron 定时任务。如果想要查看某个 Cron 定时任务的详细日志,可以点击 “Log” 按钮。对于不再需要的 Cron 定时任务,就可以按 “Delete” 按钮删除。 -**备份 Cron 定时任务** +#### 备份 Cron 定时任务 -点击控制台页面的 **Backup** 按钮并确认,就可以备份所有 Cron 定时任务。 +点击控制台页面的 “Backup” 按钮并确认,就可以备份所有 Cron 定时任务。 ![](https://www.ostechnix.com/wp-content/uploads/2018/08/backup-cron-jobs.png) 备份之后,一旦 Crontab 文件出现了错误,就可以使用备份来恢复了。 -**导入/导出其它机器上的 Cron 定时任务** +#### 导入/导出其它机器上的 Cron 定时任务 -Crontab UI 还有一个令人注目的功能,就是导入、导出、部署其它机器上的 Cron 定时任务。如果同一个网络里的多台机器都需要执行同样的 Cron 定时任务,只需要点击 **Export** 按钮并选择文件的保存路径,所有的 Cron 定时任务都会导出到 `crontab.db` 文件中。 +Crontab UI 
还有一个令人注目的功能,就是导入、导出、部署其它机器上的 Cron 定时任务。如果同一个网络里的多台机器都需要执行同样的 Cron 定时任务,只需要点击 “Export” 按钮并选择文件的保存路径,所有的 Cron 定时任务都会导出到 `crontab.db` 文件中。 以下是 `crontab.db` 文件的内容: + ``` $ cat Downloads/crontab.db {"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"} - ``` 导出成文件以后,你就可以把这个 `crontab.db` 文件放置到其它机器上并导入成 Cron 定时任务,而不需要在每一台主机上手动设置 Cron 定时任务。总之,在一台机器上设置完,导出,再导入到其他机器,就完事了。 -**在 Crontab 文件获取/保存 Cron 定时任务** +#### 在 Crontab 文件获取/保存 Cron 定时任务 -你可能在使用 Crontab UI 之前就已经使用 `crontab` 命令创建过 Cron 定时任务。如果是这样,你可以点击控制台页面上的 **Get from crontab** 按钮来获取已有的 Cron 定时任务。 +你可能在使用 Crontab UI 之前就已经使用 `crontab` 命令创建过 Cron 定时任务。如果是这样,你可以点击控制台页面上的 “Get from crontab” 按钮来获取已有的 Cron 定时任务。 ![](https://www.ostechnix.com/wp-content/uploads/2018/08/get-from-crontab.png) -同样地,你也可以使用 Crontab UI 来将新的 Cron 定时任务保存到 Crontab 文件中,只需要点击 **Save to crontab** 按钮就可以了。 +同样地,你也可以使用 Crontab UI 来将新的 Cron 定时任务保存到 Crontab 文件中,只需要点击 “Save to crontab” 按钮就可以了。 管理 Cron 定时任务并没有想象中那么难,即使是新手使用 Crontab UI 也能轻松管理 Cron 定时任务。赶快开始尝试并发表一下你的看法吧。 - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/ @@ -116,7 +112,7 @@ via: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linu 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8b90c932c34a68173ab74841a5b9a45de7ffad6b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 3 Oct 2018 21:30:17 +0800 Subject: [PATCH 196/736] PUB:20180823 How To Easily And Safely Manage Cron Jobs In Linux.md @HankChow https://linux.cn/article-10077-1.html --- 
...20180823 How To Easily And Safely Manage Cron Jobs In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md (100%) diff --git a/translated/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md b/published/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md similarity index 100% rename from translated/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md rename to published/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md From d0792ed447bfb0571295673fa3b4541e3f4ebf17 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 3 Oct 2018 21:39:31 +0800 Subject: [PATCH 197/736] PRF:20180927 How to Use RAR files in Ubuntu Linux.md @HankChow --- ...27 How to Use RAR files in Ubuntu Linux.md | 22 ++++++++----------- 1 file changed, 9 insertions(+), 13 deletions(-) diff --git a/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md b/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md index 3521b21a8a..0a087de8be 100644 --- a/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md +++ b/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md @@ -1,40 +1,39 @@ 如何在 Ubuntu Linux 中使用 RAR 文件 ====== + [RAR][1] 是一种非常好的归档文件格式。但相比之下 7-zip 能提供了更好的压缩率,并且默认情况下还可以在多个平台上轻松支持 Zip 文件。不过 RAR 仍然是最流行的归档格式之一。然而 [Ubuntu][2] 自带的归档管理器却不支持提取 RAR 文件,也不允许创建 RAR 文件。 -方法总比问题多。只要安装 `unrar` 这款由 [RARLAB][3] 提供的免费软件,就能在 Ubuntu 上支持提取RAR文件了。你也可以试安装 `rar` 来创建和管理 RAR 文件。 +办法总比问题多。只要安装 `unrar` 这款由 [RARLAB][3] 提供的免费软件,就能在 Ubuntu 上支持提取 RAR 文件了。你也可以安装 `rar` 试用版来创建和管理 RAR 文件。 ![RAR files in Ubuntu Linux][4] ### 提取 RAR 文件 -在未安装 unrar 的情况下,提取 RAR 文件会报出“未能提取”错误,就像下面这样(以 [Ubuntu 18.04][5] 为例): +在未安装 `unrar` 的情况下,提取 RAR 文件会报出“未能提取”错误,就像下面这样(以 [Ubuntu 18.04][5] 为例): ![Error in RAR extraction in Ubuntu][6] -如果要解决这个错误并提取 RAR 文件,请按照以下步骤安装 unrar: +如果要解决这个错误并提取 RAR 文件,请按照以下步骤安装 `unrar`: 打开终端并输入: ``` - sudo apt-get install 
unrar - +sudo apt-get install unrar ``` -安装 unrar 后,直接输入 `unrar` 就可以看到它的用法以及如何使用这个工具处理 RAR 文件。 +安装 `unrar` 后,直接输入 `unrar` 就可以看到它的用法以及如何使用这个工具处理 RAR 文件。 最常用到的功能是提取 RAR 文件。因此,可以**通过右键单击 RAR 文件并执行提取**,也可以借助此以下命令通过终端执行操作: ``` unrar x FileName.rar - ``` 结果类似以下这样: ![Using unrar in Ubuntu][7] -如果家目录中不存在对应的文件,就必须使用 `cd` 命令移动到目标目录下。例如 RAR 文件如果在 `Music` 目录下,只需要使用 `cd Music` 就可以移动到相应的目录,然后提取 RAR 文件。 +如果压缩文件没放在家目录中,就必须使用 `cd` 命令移动到目标目录下。例如 RAR 文件如果在 `Music` 目录下,只需要使用 `cd Music` 就可以移动到相应的目录,然后提取 RAR 文件。 ### 创建和管理 RAR 文件 @@ -42,18 +41,16 @@ unrar x FileName.rar `unrar` 不允许创建 RAR 文件。因此还需要安装 `rar` 命令行工具才能创建 RAR 文件。 -要创建 RAR 文件,首先需要通过以下命令安装 rar: +要创建 RAR 文件,首先需要通过以下命令安装 `rar`: ``` sudo apt-get install rar - ``` 按照下面的命令语法创建 RAR 文件: ``` rar a ArchiveName File_1 File_2 Dir_1 Dir_2 - ``` 按照这个格式输入命令时,它会将目录中的每个文件添加到 RAR 文件中。如果需要某一个特定的文件,就要指定文件确切的名称或路径。 @@ -64,7 +61,6 @@ rar a ArchiveName File_1 File_2 Dir_1 Dir_2 ``` rar u ArchiveName Filename - ``` 在终端输入 `rar` 就可以列出 RAR 工具的相关命令。 @@ -82,7 +78,7 @@ via: https://itsfoss.com/use-rar-ubuntu-linux/ 作者:[Ankush Das][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f08020750a7165bb04b74f60f63cd9bf793c241e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 3 Oct 2018 21:39:51 +0800 Subject: [PATCH 198/736] PUB:20180927 How to Use RAR files in Ubuntu Linux.md @HankChow https://linux.cn/article-10078-1.html --- .../20180927 How to Use RAR files in Ubuntu Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180927 How to Use RAR files in Ubuntu Linux.md (100%) diff --git a/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md b/published/20180927 How to Use RAR files in Ubuntu Linux.md similarity index 100% rename from translated/tech/20180927 How to Use RAR files in Ubuntu 
Linux.md rename to published/20180927 How to Use RAR files in Ubuntu Linux.md From 0704d0a920a90278577715fdeb4121c7212d5910 Mon Sep 17 00:00:00 2001 From: sd886393 Date: Wed, 3 Oct 2018 21:45:09 +0800 Subject: [PATCH 199/736] translating --- ...28 What containers can teach us about DevOps.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/sources/tech/20180928 What containers can teach us about DevOps.md b/sources/tech/20180928 What containers can teach us about DevOps.md index 33f83fb0f7..cd4a985dd5 100644 --- a/sources/tech/20180928 What containers can teach us about DevOps.md +++ b/sources/tech/20180928 What containers can teach us about DevOps.md @@ -1,16 +1,16 @@ -认领:by sd886393 -What containers can teach us about DevOps +容器技术对指导我们 DevOps 的一些启发 ====== -The use of containers supports the three pillars of DevOps practices: flow, feedback, and continual experimentation and learning. +容器技术的使用支撑了目前 DevOps 三大主要实践:流水线,及时反馈,持续实验与学习以改进。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) -One can argue that containers and DevOps were made for one another. Certainly, the container ecosystem benefits from the skyrocketing popularity of DevOps practices, both in design choices and in DevOps’ use by teams developing container technologies. Because of this parallel evolution, the use of containers in production can teach teams the fundamentals of DevOps and its three pillars: [The Three Ways][1]. +容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 的设计理念愈发先进,容器生态系统在设计上与组件选择上也有相应发展。同时,由于容器技术在生产环境中的使用,反过来也促进了 DevOps 三大主要实践:[DevOps 三个实践][1]. -### Principles of flow -**Container flow** +### 流水线的原则 + +**容器流** A container can be seen as a silo, and from inside, it is easy to forget the rest of the system: the host node, the cluster, the underlying infrastructure. Inside the container, it might appear that everything is functioning in an acceptable manner. 
From the outside perspective, though, the application inside the container is a part of a larger ecosystem of applications that make up a service: the web API, the web app user interface, the database, the workers, and caching services and garbage collectors. Teams put constraints on the container to limit performance impact on infrastructure, and much has been done to provide metrics for measuring container performance because overloaded or slow container workloads have downstream impact on other services or customers. @@ -85,7 +85,7 @@ via: https://opensource.com/article/18/9/containers-can-teach-us-devops 作者:[Chris Hermansen][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/sd886393) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 3a9c594e66be79405636f15709f8aa6d3a373aeb Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 3 Oct 2018 22:12:33 +0800 Subject: [PATCH 200/736] translate done: 20180815 How to Create M3U Playlists in Linux [Quick Tip].md --- ...eate M3U Playlists in Linux [Quick Tip].md | 41 +++++++++---------- 1 file changed, 20 insertions(+), 21 deletions(-) diff --git a/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md b/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md index 3c0b63d63b..21f5bc61df 100644 --- a/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md +++ b/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md @@ -1,37 +1,36 @@ -translating by lujun9972 -How to Create M3U Playlists in Linux [Quick Tip] +Linux下如何创建 M3U 播放列表 [小建议] ====== -**Brief: A quick tip on how to create M3U playlists in Linux terminal from unordered files to play them in a sequence.** +**简介:关于如何在Linux终端中根据乱序文件创建M3U播放列表实现循序播放的小建议。** ![Create M3U playlists in Linux Terminal][1] -I am a fan of foreign tv series and it’s not always easy 
to get them on DVD or on streaming services like [Netflix][2]. Thankfully, you can find some of them on YouTube and [download them from YouTube][3]. +我是外国电视连续剧的粉丝,这些连续剧不太容易从 DVD 或像 [Netflix][2] 这样的流媒体上获得。好在,您可以在 YouTube 上找到一些内容并[从 YouTube 下载][3]。 -Now there comes a problem. Your files might not be sorted in a particular order. In GNU/Linux files are not naturally sort ordered by number sequencing so I had to make a .m3u playlist so [MPV video player][4] would play the videos in sequence and not out of sequence. +现在出现了一个问题。你的文件可能不是按顺序存储的。在 GNU/Linux 中,文件不是按数字顺序自然排序的,因此我必须创建 .m3u 播放列表,以便 [MPV 视频播放器][4] 可以按顺序播放视频而不是乱序播放。 -Also sometimes the numbers are in the middle or the end like ‘My Web Series S01E01.mkv’ as an example. The episode information here is in the middle of the filename, the ‘S01E01’ which tells us, humans, which is the first episode and which needs to come in next. +同样的,有时候表示第几集的数字是在文件名中间或结尾的,像这样 ‘My Web Series S01E01.mkv’。这里的剧集信息位于文件名的中间,‘S01E01’ 告诉我们人类,哪个是第一集,下一集是哪一个文件。 -So what I did was to generate an m3u playlist in the video directory and tell MPV to play the .m3u playlist and it would take care of playing them in the sequence. +因此我要做的事情就是在视频目录中创建一个 m3u 播放列表并告诉 MPV 播放这个 .m3u 播放列表,MPV 自然会按顺序播放这些视频。 -### What is an M3U file? +### 什么是 M3U 文件? -[M3U][5] is basically a text file that contains filenames in a specific order. When a player like MPV or VLC opens an M3U file, it tries to play the specified files in the given sequence. +[M3U][5] 基本上就是个按特定顺序包含文件名的文本文件。当类似 MPV 或 VLC 这样的播放器打开 M3U 文件时,它会尝试按给定的顺序播放指定文件。 -### Creating M3U to play audio/video files in a sequence +### 创建 M3U 来按顺序播放音频/视频文件 -In my case, I used the following command: +就我而言,我使用了下面的命令: ``` $/home/shirish/Videos/web-series-video/$ ls -1v |grep .mkv > /tmp/1.m3u && mv /tmp/1.m3u . ``` -Let’s break it down a bit and see each bit as to what it means – +让我们拆分一下,看看每个部分表示什么意思 – -**ls -1v** = This is using the plain ls or listing entries in the directory. The -1 means list one file per line. 
while -v natural sort of (version) numbers within text +**ls -1v** = 这就是用普通的 ls 来列出目录中的内容. 其中 `-1` 表示每行显示一个文件. 而 `-v` 表示根据文本中的数字(版本)进行自然排序 -**| grep .mkv** = It’s basically telling `ls` to look for files which are ending in .mkv . It could be .mp4 or any other media file format that you want. +**| grep .mkv** = 基本上就是告诉 `ls` 寻找那些以 `.mkv` 结尾的文件. 它也可以是 `.mp4` 或其他任何你想要的媒体文件格式. -It’s usually a good idea to do a dry run by running the command on the console: +通过在控制台上运行命令来进行试运行通常是个好主意: ``` ls -1v |grep .mkv My Web Series S01E01 [Episode 1 Name] Multi 480p WEBRip x264 - xRG.mkv @@ -45,25 +44,25 @@ My Web Series S01E08 [Episode 8 Name] Multi 480p WEBRip x264 - xRG.mkv ``` -This tells me that what I’m trying to do is correct. Now just have to make that the output is in the form of a .m3u playlist which is the next part. +结果显示我要做的事情是正确的. 现在下一步就是让输出以 `.m3u` 播放列表的格式输出. ``` ls -1v |grep .mkv > /tmp/web_playlist.m3u && mv /tmp/web_playlist.m3u . ``` -This makes the .m3u generate in the current directory. The .m3u playlist is nothing but a .txt file with the same contents as above with the .m3u extension. You can edit it manually as well and add the exact filenames in an order you desire. +这就在当前目录中创建了 `.m3u` 文件. 这个`.m3u`播放列表只不过是一个.txt文件,其内容与上面相同,扩展名为.m3u。 你也可以手动编辑它,并按照想要的顺序添加确切的文件名。 -After that you just have to do something like this: +之后你只需要这样做: ``` mpv web_playlist.m3u ``` -The great thing about MPV and the playlists, in general, is that you don’t have to binge-watch. You can see however much you want to do in one sitting and see the rest in the next session or the session after that. +一般来说,关于MPV和播放列表的好处在于你不需要一次性全部看完。 您可以一次看任意长时间,然后在下一次查看其余部分。 -I hope to do articles featuring MPV as well as how to make mkv files embedding subtitles in a media file but that’s in the future. +我希望写一些有关MPV的文章,以及如何制作在媒体文件中嵌入字幕的mkv文件,但这是将来的事情了。 -Note: It’s FOSS doesn’t encourage piracy. 
+注意:It’s FOSS 网站不鼓励盗版。 -------------------------------------------------------------------------------- From a166bd6dc3af46d410b10d852ea1b315ef79098e Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 3 Oct 2018 22:17:22 +0800 Subject: [PATCH 201/736] move to translted --- .../20180815 How to Create M3U Playlists in Linux [Quick Tip].md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md (100%) diff --git a/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md b/translated/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md similarity index 100% rename from sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md rename to translated/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md From 792b8112b5201492393a21ce244a4e384662a806 Mon Sep 17 00:00:00 2001 From: houbaron Date: Thu, 4 Oct 2018 00:54:54 +0800 Subject: [PATCH 202/736] houbaron translated --- .../20140607 Five things that make Go fast.md | 495 ------------------ .../20140607 Five things that make Go fast.md | 494 +++++++++++++++++ 2 files changed, 494 insertions(+), 495 deletions(-) delete mode 100644 sources/tech/20140607 Five things that make Go fast.md create mode 100644 translated/tech/20140607 Five things that make Go fast.md diff --git a/sources/tech/20140607 Five things that make Go fast.md b/sources/tech/20140607 Five things that make Go fast.md deleted file mode 100644 index 65add9552d..0000000000 --- a/sources/tech/20140607 Five things that make Go fast.md +++ /dev/null @@ -1,495 +0,0 @@ -Translating By houbaron -Five things that make Go fast -============================================================ - - _Anthony Starks has remixed my original Google Present based slides using his fantastic Deck presentation tool. 
You can check out his remix over on his blog, [mindchunk.blogspot.com.au/2014/06/remixing-with-deck][5]._ -* * * -I was recently invited to give a talk at Gocon, a fantastic Go conference held semi-annually in Tokyo, Japan. [Gocon 2014][6] was an entirely community-run one day event combining training and an afternoon of presentations surrounding the theme of Go in production. -The following is the text of my presentation. The original text was structured to force me to speak slowly and clearly, so I have taken the liberty of editing it slightly to be more readable. -I want to thank [Bill Kennedy][7], Minux Ma, and especially [Josh Bleecher Snyder][8], for their assistance in preparing this talk. -* * * -Good afternoon. -My name is David. -I am delighted to be here at Gocon today. I have wanted to come to this conference for two years and I am very grateful to the organisers for extending me the opportunity to present to you today. - [![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg)][9] -I want to begin my talk with a question. -Why are people choosing to use Go ? -When people talk about their decision to learn Go, or use it in their product, they have a variety of answers, but there are always three that are at the top of their list - [![Gocon 2014 ](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-2.jpg)][10] -These are the top three. -The first, Concurrency. -Go’s concurrency primitives are attractive to programmers who come from single threaded scripting languages like Nodejs, Ruby, or Python, or from languages like C++ or Java with their heavyweight threading model. -Ease of deployment. -We have heard today from experienced Gophers who appreciate the simplicity of deploying Go applications. - [![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg)][11] -This leaves Performance. 
- -I believe an important reason why people choose to use Go is because it is  _fast_ . - - [![Gocon 2014 (4)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg)][12] - -For my talk today I want to discuss five features that contribute to Go’s performance. - -I will also share with you the details of how Go implements these features. - - [![Gocon 2014 (5)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg)][13] - -The first feature I want to talk about is Go’s efficient treatment and storage of values. - - [![Gocon 2014 (6)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg)][14] - -This is an example of a value in Go. When compiled, `gocon` consumes exactly four bytes of memory. - -Let’s compare Go with some other languages - - [![Gocon 2014 (7)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg)][15] - -Due to the overhead of the way Python represents variables, storing the same value using Python consumes six times more memory. - -This extra memory is used by Python to track type information, do reference counting, etc - -Let’s look at another example: - - [![Gocon 2014 (8)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg)][16] - -Similar to Go, the Java `int` type consumes 4 bytes of memory to store this value. - -However, to use this value in a collection like a `List` or `Map`, the compiler must convert it into an `Integer` object. - - [![Gocon 2014 (9)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg)][17] - -So an integer in Java frequently looks more like this and consumes between 16 and 24 bytes of memory. - -Why is this important ? Memory is cheap and plentiful, why should this overhead matter ? - - [![Gocon 2014 (10)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg)][18] - -This is a graph showing CPU clock speed vs memory bus speed. - -Notice how the gap between CPU clock speed and memory bus speed continues to widen. 
- -The difference between the two is effectively how much time the CPU spends waiting for memory. - - [![Gocon 2014 (11)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg)][19] - -Since the late 1960’s CPU designers have understood this problem. - -Their solution is a cache, an area of smaller, faster memory which is inserted between the CPU and main memory. - - [![Gocon 2014 (12)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg)][20] - -This is a `Location` type which holds the location of some object in three dimensional space. It is written in Go, so each `Location` consumes exactly 24 bytes of storage. - -We can use this type to construct an array type of 1,000 `Location`s, which consumes exactly 24,000 bytes of memory. - -Inside the array, the `Location` structures are stored sequentially, rather than as pointers to 1,000 Location structures stored randomly. - -This is important because now all 1,000 `Location` structures are in the cache in sequence, packed tightly together. - - [![Gocon 2014 (13)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg)][21] - -Go lets you create compact data structures, avoiding unnecessary indirection. - -Compact data structures utilise the cache better. - -Better cache utilisation leads to better performance. - - [![Gocon 2014 (14)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg)][22] - -Function calls are not free. - - [![Gocon 2014 (15)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg)][23] - -Three things happen when a function is called. - -A new stack frame is created, and the details of the caller recorded. - -Any registers which may be overwritten during the function call are saved to the stack. - -The processor computes the address of the function and executes a branch to that new address. 
- - [![Gocon 2014 (16)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg)][24] - -Because function calls are very common operations, CPU designers have worked hard to optimise this procedure, but they cannot eliminate the overhead. - -Depending on what the function does, this overhead may be trivial or significant. - -A solution to reducing function call overhead is an optimisation technique called Inlining. - - [![Gocon 2014 (17)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg)][25] - -The Go compiler inlines a function by treating the body of the function as if it were part of the caller. - -Inlining has a cost; it increases binary size. - -It only makes sense to inline when the overhead of calling a function is large relative to the work the function does, so only simple functions are candidates for inlining. - -Complicated functions are usually not dominated by the overhead of calling them and are therefore not inlined. - - [![Gocon 2014 (18)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg)][26] - -This example shows the function `Double` calling `util.Max`. - -To reduce the overhead of the call to `util.Max`, the compiler can inline `util.Max` into `Double`, resulting in something like this - - [![Gocon 2014 (19)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg)][27] - -After inlining there is no longer a call to `util.Max`, but the behaviour of `Double` is unchanged. - -Inlining isn’t exclusive to Go. Almost every compiled or JITed language performs this optimisation. But how does inlining in Go work? - -The Go implementation is very simple. When a package is compiled, any small function that is suitable for inlining is marked and then compiled as usual. - -Then both the source of the function and the compiled version are stored. - - [![Gocon 2014 (20)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg)][28] - -This slide shows the contents of util.a. 
The source has been transformed a little to make it easier for the compiler to process quickly. -When the compiler compiles Double it sees that `util.Max` is inlinable, and the source of `util.Max` is available. -Rather than insert a call to the compiled version of `util.Max`, it can substitute the source of the original function. -Having the source of the function enables other optimizations. - [![Gocon 2014 (21)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg)][29] -In this example, although the function Test always returns false, Expensive cannot know that without executing it. -When `Test` is inlined, we get something like this - [![Gocon 2014 (22)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg)][30] -The compiler now knows that the expensive code is unreachable. -Not only does this save the cost of calling Test, it saves compiling or running any of the expensive code that is now unreachable. -The Go compiler can automatically inline functions across files and even across packages. This includes code that calls inlinable functions from the standard library. - [![Gocon 2014 (23)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg)][31] -Mandatory garbage collection makes Go a simpler and safer language. -This does not imply that garbage collection makes Go slow, or that garbage collection is the ultimate arbiter of the speed of your program. -What it does mean is memory allocated on the heap comes at a cost. It is a debt that costs CPU time every time the GC runs until that memory is freed. - [![Gocon 2014 (24)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg)][32]
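The inlining example can be tried directly. A minimal sketch — the body of `Max` is an assumption, since the slides show only the call — and building with `-gcflags=-m` will report the compiler's inlining decisions:

```go
package main

import "fmt"

// Max is a leaf function small enough that the compiler will
// inline it into its callers, removing the call overhead.
func Max(a, b int) int {
	if a > b {
		return a
	}
	return b
}

// After inlining, Double behaves as if the body of Max had been
// written out here by hand; the call disappears entirely.
func Double(a int) int {
	return Max(a*2, a)
}

func main() {
	fmt.Println(Double(3)) // 6
}
```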
- -Unlike C, which forces you to choose if a value will be stored on the heap, via `malloc`, or on the stack, by declaring it inside the scope of the function, Go implements an optimisation called  _escape analysis_ . - - [![Gocon 2014 (25)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg)][33] - -Escape analysis determines whether any references to a value escape the function in which the value is declared. - -If no references escape, the value may be safely stored on the stack. - -Values stored on the stack do not need to be allocated or freed. - -Lets look at some examples - - [![Gocon 2014 (26)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg)][34] - -`Sum` adds the numbers between 1 and 100 and returns the result. This is a rather unusual way to do this, but it illustrates how Escape Analysis works. - -Because the numbers slice is only referenced inside `Sum`, the compiler will arrange to store the 100 integers for that slice on the stack, rather than the heap. - -There is no need to garbage collect `numbers`, it is automatically freed when `Sum` returns. - - [![Gocon 2014 (27)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg)][35] - -This second example is also a little contrived. In `CenterCursor` we create a new `Cursor` and store a pointer to it in c. - -Then we pass `c` to the `Center()` function which moves the `Cursor` to the center of the screen. - -Then finally we print the X and Y locations of that `Cursor`. - -Even though `c` was allocated with the `new` function, it will not be stored on the heap, because no reference `c` escapes the `CenterCursor` function. - - [![Gocon 2014 (28)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg)][36] - -Go’s optimisations are always enabled by default. You can see the compiler’s escape analysis and inlining decisions with the `-gcflags=-m` switch. 
- -Because escape analysis is performed at compile time, not run time, stack allocation will always be faster than heap allocation, no matter how efficient your garbage collector is. - -I will talk more about the stack in the remaining sections of this talk. - - [![Gocon 2014 (30)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg)][37] - -Go has goroutines. These are the foundations for concurrency in Go. - -I want to step back for a moment and explore the history that leads us to goroutines. - -In the beginning computers ran one process at a time. Then in the 60’s the idea of multiprocessing, or time sharing became popular. - -In a time-sharing system the operating systems must constantly switch the attention of the CPU between these processes by recording the state of the current process, then restoring the state of another. - -This is called  _process switching_ . - - [![Gocon 2014 (29)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg)][38] - -There are three main costs of a process switch. - -First is the kernel needs to store the contents of all the CPU registers for that process, then restore the values for another process. - -The kernel also needs to flush the CPU’s mappings from virtual memory to physical memory as these are only valid for the current process. - -Finally there is the cost of the operating system context switch, and the overhead of the scheduler function to choose the next process to occupy the CPU. - - [![Gocon 2014 (31)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg)][39] - -There are a surprising number of registers in a modern processor. I have difficulty fitting them on one slide, which should give you a clue how much time it takes to save and restore them. - -Because a process switch can occur at any point in a process’ execution, the operating system needs to store the contents of all of these registers because it does not know which are currently in use. 
[![Gocon 2014 (32)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg)][40] -This led to the development of threads, which are conceptually the same as processes, but share the same memory space. -As threads share address space, they are lighter than processes so are faster to create and faster to switch between. - [![Gocon 2014 (33)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg)][41] -Goroutines take the idea of threads a step further. -Goroutines are cooperatively scheduled, rather than relying on the kernel to manage their time sharing. -The switch between goroutines only happens at well defined points, when an explicit call is made to the Go runtime scheduler. -The compiler knows the registers which are in use and saves them automatically. - [![Gocon 2014 (34)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg)][42] -While goroutines are cooperatively scheduled, this scheduling is handled for you by the runtime. -Places where Goroutines may yield to others are: -* Channel send and receive operations, if those operations would block. -* The Go statement, although there is no guarantee that the new goroutine will be scheduled immediately. -* Blocking syscalls like file and network operations. -* After being stopped for a garbage collection cycle. - [![Gocon 2014 (35)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg)][43] -This is an example to illustrate some of the scheduling points described in the previous slide. -The thread, depicted by the arrow, starts on the left in the `ReadFile` function. It encounters `os.Open`, which blocks the thread while waiting for the file operation to complete, so the scheduler switches the thread to the goroutine on the right hand side.
Execution continues until the read from the `c` chan blocks, and by this time the `os.Open` call has completed so the scheduler switches the thread back to the left hand side and continues to the `file.Read` function, which again blocks on file IO. -The scheduler switches the thread back to the right hand side for another channel operation, which has unblocked during the time the left hand side was running, but it blocks again on the channel send. -Finally the thread switches back to the left hand side as the `Read` operation has completed and data is available. - [![Gocon 2014 (36)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg)][44] -This slide shows the low level `runtime.Syscall` function which is the base for all functions in the os package. -Any time your code results in a call to the operating system, it will go through this function. -The call to `entersyscall` informs the runtime that this thread is about to block. -This allows the runtime to spin up a new thread which will service other goroutines while this current thread is blocked. -This results in relatively few operating system threads per Go process, with the Go runtime taking care of assigning a runnable Goroutine to a free operating system thread. - [![Gocon 2014 (37)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg)][45] -In the previous section I discussed how goroutines reduce the overhead of managing many, sometimes hundreds of thousands of concurrent threads of execution. -There is another side to the goroutine story, and that is stack management, which leads me to my final topic. - [![Gocon 2014 (39)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg)][46] -This is a diagram of the memory layout of a process. The key thing we are interested in is the location of the heap and the stack.
- -Traditionally inside the address space of a process, the heap is at the bottom of memory, just above the program (text), and grows upwards. - -The stack is located at the top of the virtual address space, and grows downwards. - - [![Gocon 2014 (40)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg)][47] - -Because the heap and stack overwriting each other would be catastrophic, the operating system usually arranges to place an area of unwritable memory between the stack and the heap to ensure that if they do collide, the program will abort. - -This is called a guard page, and it effectively limits the stack size of a process, usually to the order of several megabytes. - - [![Gocon 2014 (41)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg)][48] - -We’ve discussed that threads share the same address space, so each thread must have its own stack. - -Because it is hard to predict the stack requirements of a particular thread, a large amount of memory is reserved for each thread’s stack, along with a guard page. - -The hope is that this is more than will ever be needed and the guard page will never be hit. - -The downside is that as the number of threads in your program increases, the amount of available address space is reduced. - - [![Gocon 2014 (42)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg)][49] - -We’ve seen that the Go runtime schedules a large number of goroutines onto a small number of threads, but what about the stack requirements of those goroutines? - -Instead of using guard pages, the Go compiler inserts a check as part of every function call to test if there is sufficient stack for the function to run. If there is not, the runtime can allocate more stack space. - -Because of this check, a goroutine’s initial stack can be made much smaller, which in turn permits Go programmers to treat goroutines as cheap resources.
- - [![Gocon 2014 (43)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg)][50] - -This is a slide that shows how stacks are managed in Go 1.2. - -When `G` calls `H` there is not enough space for `H` to run, so the runtime allocates a new stack frame from the heap, then runs `H` on that new stack segment. When `H` returns, the stack area is returned to the heap before returning to `G`. - - [![Gocon 2014 (44)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg)][51] - -This method of managing the stack works well in general, but for certain types of code, usually recursive code, it can cause the inner loop of your program to straddle one of these stack boundaries. - -For example, in the inner loop of your program, function `G` may call `H` many times in a loop. - -Each time this will cause a stack split. This is known as the hot split problem. - - [![Gocon 2014 (45)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg)][52] - -To solve hot splits, Go 1.3 adopted a new stack management method. - -Instead of adding and removing additional stack segments, if the stack of a goroutine is too small, a new, larger stack will be allocated. - -The old stack’s contents are copied to the new stack, then the goroutine continues with its new, larger stack. - -After the first call to `H` the stack will be large enough that the check for available stack space will always succeed. - -This resolves the hot split problem. - - [![Gocon 2014 (46)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg)][53] - -Values, Inlining, Escape Analysis, Goroutines, and segmented/copying stacks. - -These are the five features that I chose to speak about today, but they are by no means the only things that make Go a fast programming language, just as there are more than three reasons that people cite as their reason to learn Go. - -As powerful as these five features are individually, they do not exist in isolation.
- -For example, the way the runtime multiplexes goroutines onto threads would not be nearly as efficient without growable stacks. - -Inlining reduces the cost of the stack size check by combining smaller functions into larger ones. - -Escape analysis reduces the pressure on the garbage collector by automatically moving allocations from the heap to the stack. - -Escape analysis also provides better cache locality. - -Without growable stacks, escape analysis might place too much pressure on the stack. - - [![Gocon 2014 (47)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg)][54] - -* Thank you to the Gocon organisers for permitting me to speak today -* twitter / web / email details -* thanks to @offbymany, @billkennedy_go, and Minux for their assistance in preparing this talk. - -### Related Posts: - -1. [Hear me speak about Go performance at OSCON][1] - -2. [Why is a Goroutine’s stack infinite ?][2] - -3. [A whirlwind tour of Go’s runtime environment variables][3] - -4. [Performance without the event loop][4] - --------------------------------------------------------------------------------- - -作者简介: - -David is a programmer and author from Sydney Australia. - -Go contributor since February 2011, committer since April 2012.
- -Contact information - -* dave@cheney.net -* twitter: @davecheney - ----------------------- - -via: https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast - -作者:[Dave Cheney ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://dave.cheney.net/ -[1]:https://dave.cheney.net/2015/05/31/hear-me-speak-about-go-performance-at-oscon -[2]:https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite -[3]:https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables -[4]:https://dave.cheney.net/2015/08/08/performance-without-the-event-loop -[5]:http://mindchunk.blogspot.com.au/2014/06/remixing-with-deck.html -[6]:http://ymotongpoo.hatenablog.com/entry/2014/06/01/124350 -[7]:http://www.goinggo.net/ -[8]:https://twitter.com/offbymany -[9]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg -[10]:https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast/gocon-2014-2 -[11]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg -[12]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg -[13]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg -[14]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg -[15]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg -[16]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg -[17]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg -[18]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg -[19]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg -[20]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg -[21]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg -[22]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg 
-[23]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg -[24]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg -[25]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg -[26]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg -[27]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg -[28]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg -[29]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg -[30]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg -[31]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg -[32]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg -[33]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg -[34]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg -[35]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg -[36]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg -[37]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg -[38]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg -[39]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg -[40]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg -[41]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg -[42]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg -[43]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg -[44]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg -[45]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg -[46]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg -[47]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg -[48]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg 
-[49]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg -[50]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg -[51]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg -[52]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg -[53]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg -[54]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg diff --git a/translated/tech/20140607 Five things that make Go fast.md b/translated/tech/20140607 Five things that make Go fast.md new file mode 100644 index 0000000000..6adee59e52 --- /dev/null +++ b/translated/tech/20140607 Five things that make Go fast.md @@ -0,0 +1,494 @@ +五种加速 Go 的特性 +============================================================ + + _Anthony Starks 使用他出色的 Deck 演示工具重构了我原来的基于 Google Slides 的幻灯片。你可以在他的博客上查看他重构后的幻灯片, [mindchunk.blogspot.com.au/2014/06/remixing-with-deck][5]._ + +* * * + +我最近受邀在 Gocon 发表演讲,这是一个每半年在日本东京举行的精彩的 Go 大会。[Gocon 2014][6] 是一个完全由社区驱动的为期一天的活动,由培训和一整个下午围绕“生产环境中的 Go”这一主题的演讲组成。 + +以下是我的讲义。原文的结构能让我缓慢而清晰地演讲,因此我对它做了一些编辑,使其更可读。 + +我要感谢 [Bill Kennedy][7] 和 Minux Ma,特别是 [Josh Bleecher Snyder][8],感谢他们在我准备这次演讲中提供的帮助。 + +* * * + +大家下午好。 + +我叫 David。 + +我很高兴今天能来到 Gocon。我想参加这个会议已经两年了,我很感谢主办方能提供给我向你们演讲的机会。 + + [![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg)][9] +我想以一个问题开始我的演讲。 + +为什么选择 Go?
+ +当大家讨论学习或在生产环境中使用 Go 的原因时,答案不一而足,但提及最多的是以下三个原因。 + + [![Gocon 2014 ](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-2.jpg)][10] +这就是排名前三的原因。 + +第一,并发。 + +Go 的 并发原语Concurrency Primitives 对于来自 Nodejs、Ruby 或 Python 等单线程脚本语言的程序员,或者来自 C++ 或 Java 等重量级线程模型语言的程序员,都很有吸引力。 + +第二,易于部署。 + +我们今天已经从经验丰富的 Gopher 们那里听到,他们非常欣赏部署 Go 应用的简单性。 + + [![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg)][11] + +第三是性能。 + +我相信人们选择 Go 的一个重要原因是它 _快_。 + + [![Gocon 2014 (4)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg)][12] + +在今天的演讲中,我想讨论五个有助于提高 Go 性能的特性。 + +我还将与大家分享 Go 如何实现这些特性的细节。 + + [![Gocon 2014 (5)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg)][13] + +我要谈的第一个特性是 Go 对值的高效处理和存储。 + + [![Gocon 2014 (6)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg)][14] + +这是 Go 中一个值的例子。编译后,`gocon` 正好消耗四个字节的内存。 + +让我们将 Go 与其他一些语言进行比较。 + + [![Gocon 2014 (7)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg)][15] + +由于 Python 表示变量的方式存在开销,使用 Python 存储相同的值会消耗六倍的内存。 + +Python 使用额外的内存来跟踪类型信息,进行 引用计数Reference Counting 等。 + +让我们看另一个例子: + + [![Gocon 2014 (8)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg)][16] + +与 Go 类似,Java 消耗 4 个字节的内存来存储 `int` 型。 + +但是,要在像 `List` 或 `Map` 这样的集合中使用此值,编译器必须将其转换为 `Integer` 对象。 + + [![Gocon 2014 (9)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg)][17] + +因此,Java 中的整数通常消耗 16 到 24 个字节的内存。 + +为什么这很重要?内存便宜且充足,为什么这个开销很重要?
+ + [![Gocon 2014 (10)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg)][18] + +这是一张显示 CPU 时钟速度与内存总线速度对比的图表。 + +请注意 CPU 时钟速度和内存总线速度之间的差距是如何持续扩大的。 + +两者之间的差异实际上就是 CPU 花费了多少时间等待内存。 + + [![Gocon 2014 (11)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg)][19] + +自 1960 年代后期以来,CPU 设计师就已经意识到了这个问题。 + +他们的解决方案是缓存,即一块更小、更快的内存区域,介于 CPU 和主存之间。 + + [![Gocon 2014 (12)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg)][20] + +这是一个 `Location` 类型,它保存物体在三维空间中的位置。它是用 Go 编写的,因此每个 `Location` 只消耗 24 个字节的存储空间。 + +我们可以使用这种类型来构造一个容纳 1000 个 `Location` 的数组类型,它只消耗 24000 字节的内存。 + +在数组内部,`Location` 结构体是顺序存储的,而不是存储 1000 个指向随机分布的 `Location` 结构体的指针。 + +这很重要,因为现在所有 1000 个 `Location` 结构体都按顺序放在缓存中,紧密排列在一起。 + + [![Gocon 2014 (13)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg)][21] + +Go 允许您创建紧凑的数据结构,避免不必要的填充字节。 + +紧凑的数据结构能更好地利用缓存。 + +更好的缓存利用率可带来更好的性能。 + + [![Gocon 2014 (14)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg)][22] + +函数调用不是无开销的。 + + [![Gocon 2014 (15)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg)][23] + +调用函数时会发生三件事。 + +创建一个新的 栈帧Stack Frame,并记录调用者的详细信息。 + +在函数调用期间可能被覆盖的任何寄存器都将保存到栈中。 + +处理器计算函数的地址并执行到该新地址的分支。 + + [![Gocon 2014 (16)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg)][24] + +由于函数调用是非常常见的操作,因此 CPU 设计师一直在努力优化此过程,但他们无法消除开销。 + +函数调用的固有开销或大或小,这取决于函数本身做了多少工作。 + +减少函数调用开销的解决方案是 内联Inlining。 + + [![Gocon 2014 (17)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg)][25] + +Go 编译器通过将函数体视为调用者的一部分来内联函数。 + +内联也有成本,它会增加二进制文件的大小。 + +只有当函数调用的开销相对于函数本身所做的工作较大时,内联才有意义,因此通常只有简单的函数才会被内联。 + +复杂的函数通常不受调用开销的支配,因此不会被内联。 + + [![Gocon 2014 (18)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg)][26] + +这个例子显示函数 `Double` 调用了 `util.Max`。 + +为了减少调用 `util.Max` 的开销,编译器可以将 `util.Max` 内联到 `Double` 中,就像这样 + + [![Gocon 2014 (19)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg)][27] + +内联后不再调用 `util.Max`,但是
`Double` 的行为没有改变。 + +内联并不是 Go 独有的。几乎每种编译型或即时编译的语言都会执行此优化。但 Go 的内联是如何实现的? + +Go 的实现非常简单。编译包时,会标记任何适合内联的小函数,然后照常编译。 + +之后,函数的源代码和编译后的版本都会被存储。 + + [![Gocon 2014 (20)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg)][28] + +此幻灯片显示了 `util.a` 的内容。源代码已经过一些转换,以便编译器能更快地处理。 + +当编译器编译 `Double` 时,它看到 `util.Max` 是可内联的,并且 `util.Max` 的源代码是可用的。 + +于是它会直接代入 `util.Max` 的源代码,而不是插入一个对其编译版本的调用。 + +拥有该函数的源代码还可以实现其他优化。 + + [![Gocon 2014 (21)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg)][29] + +在这个例子中,尽管函数 `Test` 总是返回 `false`,但 `Expensive` 在不执行 `Test` 的情况下无法知道这一点。 + +当 `Test` 被内联时,我们会得到这样的东西 + + [![Gocon 2014 (22)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg)][30] + +编译器现在知道 `Expensive` 中的代码是无法到达的。 + +这不仅节省了调用 `Test` 的成本,还省去了编译或运行这些现在已无法到达的 `Expensive` 代码。 + +Go 编译器可以跨文件甚至跨包自动内联函数,还包括从标准库调用的可内联函数的代码。 + + [![Gocon 2014 (23)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg)][31] + +强制垃圾回收Mandatory Garbage Collection 使 Go 成为一种更简单、更安全的语言。 + +这并不意味着垃圾回收会使 Go 变慢,或者说垃圾回收是程序速度的瓶颈。 + +这意味着在堆上分配的内存是有代价的。每次 GC 运行时都会花费 CPU 时间,直到内存被释放为止。 + + [![Gocon 2014 (24)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg)][32] + +然而,还有另一个分配内存的地方,那就是栈。 + +与 C 不同(C 强制您选择是通过 `malloc` 将值存储在堆上,还是通过在函数作用域内声明将其存储在栈上),Go 实现了一个名为 逃逸分析Escape Analysis 的优化。 + + [![Gocon 2014 (25)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg)][33] + +逃逸分析决定了对一个值的任何引用是否会逃逸出声明它的函数。 + +如果没有引用逃逸,则该值可以安全地存储在栈中。 + +存储在栈中的值不需要分配或释放。 + +让我们看一些例子 + + [![Gocon 2014 (26)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg)][34] + +`Sum` 返回 1 到 100 的整数之和。这是一种相当不寻常的写法,但它很好地说明了逃逸分析的工作原理。 + +因为切片 `numbers` 仅在 `Sum` 内部被引用,所以编译器会安排在栈上存储这 100 个整数,而不是在堆上。 + +没有必要回收 `numbers`,它会在 `Sum` 返回时自动释放。 + + [![Gocon 2014 (27)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg)][35] + +第二个例子也有些刻意。在 `CenterCursor` 中,我们创建一个新的 `Cursor` 对象,并在 `c` 中存储指向它的指针。 + +然后我们将 `c` 传递给 `Center()` 函数,它将 `Cursor` 移动到屏幕的中心。 + +最后我们打印出那个 `Cursor`
的 X 和 Y 坐标。 + +即使 `c` 是用 `new` 函数分配的,它也不会存储在堆上,因为没有对 `c` 的引用逃逸出 `CenterCursor` 函数。 + + [![Gocon 2014 (28)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg)][36] + +默认情况下,Go 的这些优化始终处于启用状态。可以使用 `-gcflags=-m` 开关查看编译器的逃逸分析和内联决策。 + +因为逃逸分析是在编译时而不是运行时执行的,所以无论垃圾回收的效率如何,栈分配总是比堆分配快。 + +我将在本演讲的其余部分详细讨论栈。 + + [![Gocon 2014 (30)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg)][37] + +Go 有 goroutine。这是 Go 并发的基石。 + +我想先退一步,探索 goroutine 的历史。 + +最初,计算机一次运行一个进程。在 60 年代,多进程或 分时Time Sharing 的想法变得流行起来。 + +在分时系统中,操作系统必须通过保存当前进程的现场,然后恢复另一个进程的现场,不断地在这些进程之间切换 CPU 的注意力。 + +这称为 _进程切换_。 + + [![Gocon 2014 (29)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg)][38] + +进程切换有三个主要开销。 + +首先,内核需要保存该进程的所有 CPU 寄存器的现场,然后恢复另一个进程的现场。 + +内核还需要刷新 CPU 对虚拟内存到物理内存的映射,因为这些映射仅对当前进程有效。 + +最后是操作系统 上下文切换Context Switch 的成本,以及 调度函数Scheduler Function 选择下一个占用 CPU 的进程的开销。 + + [![Gocon 2014 (31)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg)][39] + +现代处理器中有数量惊人的寄存器。我很难在一张幻灯片上把它们排下,你可以据此想象保存和恢复它们需要多少时间。 + +由于进程切换可能发生在进程执行的任何时刻,因此操作系统需要保存所有寄存器的内容,因为它不知道当前正在使用哪些寄存器。 + + [![Gocon 2014 (32)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg)][40] + +这导致了线程的诞生,线程在概念上与进程相同,但共享同一内存空间。 + +由于线程共享地址空间,因此它们比进程更轻量,创建速度更快,切换速度也更快。 + + [![Gocon 2014 (33)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg)][41] + +Goroutine 把线程的思想又向前推进了一步。 + +Goroutine 是 协作式调度Cooperatively Scheduled 的,而不是依靠内核来管理它们的分时。 + +goroutine 之间的切换仅发生在明确定义的点上,即对 Go 运行时调度器Runtime Scheduler 进行显式调用时。 + +编译器知道正在使用的寄存器,并会自动保存它们。 + + [![Gocon 2014 (34)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg)][42] + +虽然 goroutine 是协作式调度的,但运行时会为你处理这些调度。 + +Goroutine 可能将执行权让给其他 goroutine 的时刻包括: + +* 通道的发送和接收操作,如果这些操作会阻塞。 + +* go 语句,虽然不能保证新的 goroutine 会被立即调度。 + +* 文件和网络操作等阻塞式系统调用。 + +* 在被垃圾回收循环停止之后。 + + [![Gocon 2014 (35)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg)][43] + +这个例子展示了上一张幻灯片中描述的一些调度点。 + +箭头所示的线程从左侧的 `ReadFile` 函数开始。遇到
`os.Open`,它在等待文件操作完成时阻塞了线程,因此调度器将线程切换到右侧的 goroutine。 + +执行继续,直到从通道 `c` 中读取的操作发生阻塞,而此时 `os.Open` 调用已经完成,因此调度器将线程切换回左侧,继续执行 `file.Read` 函数,该函数又一次被文件 IO 阻塞。 + +调度器将线程切换回右侧进行另一个通道操作,该操作在左侧运行期间已解除阻塞,但它在通道发送时再次阻塞。 + +最后,当 `Read` 操作完成并且数据可用时,线程切换回左侧。 + + [![Gocon 2014 (36)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg)][44] + +这张幻灯片展示了底层的 `runtime.Syscall` 函数,它是 `os` 包中所有函数的基础。 + +只要你的代码调用操作系统,就会通过此函数。 + +对 `entersyscall` 的调用会通知运行时该线程即将阻塞。 + +这允许运行时启动一个新线程,在当前线程被阻塞时为其他 goroutine 提供服务。 + +这使得每个 Go 进程只需要相对较少的操作系统线程,Go 运行时负责将可运行的 goroutine 分配给空闲的操作系统线程。 + + [![Gocon 2014 (37)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg)][45] + +在上一节中,我讨论了 goroutine 如何减少管理许多(有时是数十万个)并发执行线程的开销。 + +goroutine 的故事还有另一面,那就是栈管理,它引出了我的最后一个话题。 + + [![Gocon 2014 (39)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg)][46] + +这是一张进程的内存布局图。我们感兴趣的关键是堆和栈的位置。 + +传统上,在进程的地址空间内,堆位于内存的底部,在程序(代码)的上方并向上增长。 + +栈位于虚拟地址空间的顶部,并向下增长。 + + [![Gocon 2014 (40)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg)][47] + +因为堆和栈相互覆盖的后果会是灾难性的,所以操作系统通常会在栈和堆之间放置一块不可写的内存区域,以确保如果两者真的碰撞,程序将会中止。 + +这称为保护页,它实际上限制了进程的栈大小,通常在几兆字节的量级。 + + [![Gocon 2014 (41)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg)][48] + +我们已经讨论过线程共享同一地址空间,因此每个线程必须拥有自己的栈。 + +由于很难预测特定线程的栈需求,因此会为每个线程的栈保留一大块内存,外加一个保护页。 + +我们希望保留的内存比实际需要的多,保护页永远不会被触及。 + +缺点是随着程序中线程数的增加,可用的地址空间会随之减少。 + + [![Gocon 2014 (42)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg)][49] + +我们已经看到 Go 运行时将大量的 goroutine 调度到少量线程上,但那些 goroutine 的栈需求呢?
+ +Go 编译器不使用保护页,而是在每个函数调用时插入一个检查,以确认是否有足够的栈来运行该函数。如果没有,运行时可以分配更多的栈空间。 + +由于这种检查,goroutine 的初始栈可以做得更小,这反过来允许 Go 程序员将 goroutine 视为廉价资源。 + + [![Gocon 2014 (43)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg)][50] + +这是一张显示了 Go 1.2 如何管理栈的幻灯片。 + +当 `G` 调用 `H` 时,没有足够的空间让 `H` 运行,所以运行时从堆中分配一个新的栈帧,然后在新的栈段上运行 `H`。当 `H` 返回时,栈区域返回到堆,然后返回到 `G`。 + + [![Gocon 2014 (44)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg)][51] + +这种管理栈的方法通常很好用,但对于某些类型的代码,通常是递归代码,它可能导致程序的内部循环跨越这些栈边界之一。 + +例如,在程序的内部循环中,函数 `G` 可能在循环中多次调用 `H`。 + +每次调用都会导致一次栈分裂。这被称为 热分裂Hot Split 问题。 + + [![Gocon 2014 (45)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg)][52] + +为了解决热分裂问题,Go 1.3 采用了一种新的栈管理方法。 + +如果 goroutine 的栈太小,则不再添加和删除额外的栈段,而是分配一个新的、更大的栈。 + +旧栈的内容被复制到新栈,然后 goroutine 使用新的更大的栈继续运行。 + +在第一次调用 `H` 之后,栈将足够大,对可用栈空间的检查将始终成功。 + +这解决了热分裂问题。 + + [![Gocon 2014 (46)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg)][53] + +值、内联、逃逸分析、goroutine,以及分段/复制栈。 + +这些是我今天选择谈论的五个特性,但它们绝不是使 Go 成为快速语言的唯一因素,正如人们学习 Go 的理由也不止前面提到的三个一样。 + +尽管这五个特性各自都很强大,但它们并不是孤立存在的。 + +例如,运行时将 goroutine 复用到线程上的方式,在没有可增长栈的情况下远不会如此高效。 + +内联通过将较小的函数组合成较大的函数来降低栈大小检查的成本。 + +逃逸分析通过自动将分配从堆移动到栈来减少垃圾回收器的压力。 + +逃逸分析还提供了更好的 缓存局部性Cache Locality。 + +如果没有可增长的栈,逃逸分析可能会对栈施加太大的压力。 + + [![Gocon 2014 (47)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg)][54] + +* 感谢 Gocon 主办方允许我今天发言 +* twitter / web / email details +* 感谢 @offbymany,@billkennedy_go 和 Minux 在准备这个演讲的过程中所提供的帮助。 + +### 相关文章: + +1. [听我在 OSCON 上关于 Go 性能的演讲][1] + +2. [为什么 Goroutine 的栈是无限大的?][2] + +3. [Go 的运行时环境变量的旋风之旅][3] + +4. 
[没有事件循环的性能][4] + +-------------------------------------------------------------------------------- + +作者简介: + +David 是来自澳大利亚悉尼的程序员和作者。 + +自 2011 年 2 月起成为 Go 的 contributor,自 2012 年 4 月起成为 committer。 + +联系信息 + +* dave@cheney.net +* twitter: @davecheney + +---------------------- + +via: https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast + +作者:[Dave Cheney ][a] +译者:[houbaron](https://github.com/houbaron) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://dave.cheney.net/ +[1]:https://dave.cheney.net/2015/05/31/hear-me-speak-about-go-performance-at-oscon +[2]:https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite +[3]:https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables +[4]:https://dave.cheney.net/2015/08/08/performance-without-the-event-loop +[5]:http://mindchunk.blogspot.com.au/2014/06/remixing-with-deck.html +[6]:http://ymotongpoo.hatenablog.com/entry/2014/06/01/124350 +[7]:http://www.goinggo.net/ +[8]:https://twitter.com/offbymany +[9]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg +[10]:https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast/gocon-2014-2 +[11]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg +[12]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg +[13]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg +[14]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg +[15]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg +[16]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg +[17]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg +[18]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg +[19]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg 
+[20]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg +[21]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg +[22]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg +[23]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg +[24]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg +[25]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg +[26]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg +[27]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg +[28]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg +[29]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg +[30]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg +[31]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg +[32]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg +[33]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg +[34]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg +[35]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg +[36]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg +[37]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg +[38]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg +[39]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg +[40]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg +[41]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg +[42]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg +[43]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg +[44]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg +[45]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg 
+[46]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg +[47]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg +[48]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg +[49]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg +[50]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg +[51]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg +[52]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg +[53]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg +[54]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg From 4ddd2bd166af2e0f707a6c0ca69316fd2bb39b69 Mon Sep 17 00:00:00 2001 From: thecyanbird <2534930703@qq.com> Date: Thu, 4 Oct 2018 09:14:16 +0800 Subject: [PATCH 203/736] Update 20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md --- ...d - A Large Collection Of Defunct OSs, Software And Games.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md b/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md index 93c84ae43c..74b6347228 100644 --- a/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md +++ b/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md @@ -1,3 +1,5 @@ +thecyanbird translating + WinWorld – A Large Collection Of Defunct OSs, Software And Games ====== From 96eac58dfe0896e27429238087994785fa8dc006 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Thu, 4 Oct 2018 17:54:59 +0800 Subject: [PATCH 204/736] hankchow translating --- .../tech/20180803 5 Essential Tools for Linux Development.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180803 5 Essential Tools for Linux Development.md b/sources/tech/20180803 5 Essential Tools for Linux 
Development.md index 006372ca82..7c2ab1f7d5 100644 --- a/sources/tech/20180803 5 Essential Tools for Linux Development.md +++ b/sources/tech/20180803 5 Essential Tools for Linux Development.md @@ -1,3 +1,5 @@ +HankChow translating + 5 Essential Tools for Linux Development ====== From b494dc648dda1a365ecb1445f143a620eff99704 Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Thu, 4 Oct 2018 19:26:21 +0800 Subject: [PATCH 205/736] translated --- ...0170926 Managing users on Linux systems.md | 223 ------------------ ...0170926 Managing users on Linux systems.md | 222 +++++++++++++++++ 2 files changed, 222 insertions(+), 223 deletions(-) delete mode 100644 sources/tech/20170926 Managing users on Linux systems.md create mode 100644 translated/tech/20170926 Managing users on Linux systems.md diff --git a/sources/tech/20170926 Managing users on Linux systems.md b/sources/tech/20170926 Managing users on Linux systems.md deleted file mode 100644 index cc4db1e693..0000000000 --- a/sources/tech/20170926 Managing users on Linux systems.md +++ /dev/null @@ -1,223 +0,0 @@ -translating by dianbanjiu Managing users on Linux systems -====== -Your Linux users may not be raging bulls, but keeping them happy is always a challenge as it involves managing their accounts, monitoring their access rights, tracking down the solutions to problems they run into, and keeping them informed about important changes on the systems they use. Here are some of the tasks and tools that make the job a little easier. - -### Configuring accounts - -Adding and removing accounts is the easier part of managing users, but there are still a lot of options to consider. Whether you use a desktop tool or go with command line options, the process is largely automated. You can set up a new user with a command as simple as **adduser jdoe** and a number of things will happen. John 's account will be created using the next available UID and likely populated with a number of files that help to configure his account. 
When you run the adduser command with a single argument (the new username), it will prompt for some additional information and explain what it is doing. -``` -$ sudo adduser jdoe -Adding user `jdoe' ... -Adding new group `jdoe' (1001) ... -Adding new user `jdoe' (1001) with group `jdoe' ... -Creating home directory `/home/jdoe' ... -Copying files from `/etc/skel' … -Enter new UNIX password: -Retype new UNIX password: -passwd: password updated successfully -Changing the user information for jdoe -Enter the new value, or press ENTER for the default - Full Name []: John Doe - Room Number []: - Work Phone []: - Home Phone []: - Other []: -Is the information correct? [Y/n] Y - -``` - -As you can see, adduser adds the user's information (to the /etc/passwd and /etc/shadow files), creates the new home directory and populates it with some files from /etc/skel, prompts for you to assign the initial password and identifying information, and then verifies that it's got everything right. If you answer "n" for no at the final "Is the information correct?" prompt, it will run back through all of your previous answers, allowing you to change any that you might want to change. - -Once an account is set up, you might want to verify that it looks as you'd expect. However, a better strategy is to ensure that the choices being made "automagically" match what you want to see _before_ you add your first account. The defaults are defaults for good reason, but it 's useful to know where they're defined in case you want some to be different - for example, if you don't want home directories in /home, you don't want user UIDs to start with 1000, or you don't want the files in home directories to be readable by _everyone_ on the system. - -Some of the details of how the adduser command works are configured in the /etc/adduser.conf file. This file contains a lot of settings that determine how new accounts are configured and will look something like this. 
Note that the comments and blank lines are omitted in the output below so that we can focus more easily on just the settings. -``` -$ cat /etc/adduser.conf | grep -v "^#" | grep -v "^$" -DSHELL=/bin/bash -DHOME=/home -GROUPHOMES=no -LETTERHOMES=no -SKEL=/etc/skel -FIRST_SYSTEM_UID=100 -LAST_SYSTEM_UID=999 -FIRST_SYSTEM_GID=100 -LAST_SYSTEM_GID=999 -FIRST_UID=1000 -LAST_UID=29999 -FIRST_GID=1000 -LAST_GID=29999 -USERGROUPS=yes -USERS_GID=100 -DIR_MODE=0755 -SETGID_HOME=no -QUOTAUSER="" -SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)" - -``` - -As you can see, we've got a default shell (DSHELL), the starting value for UIDs (FIRST_UID), the location for home directories (DHOME) and the source location for startup files (SKEL) that will be added to each account as it is set up - along with a number of additional settings. This file also specifies the permissions to be assigned to home directories (DIR_MODE). - -One of the more important settings is DIR_MODE, which determines the permissions that are used for each user's home directory. With the default of 0755, home directories will be set up with rwxr-xr-x permissions: users will be able to read other users' files, but not modify or remove them. If you want to be more restrictive, you can change this setting to 750 (no access by anyone outside the user's group) or even 700 (no access but the user himself). - -Any user account settings can be manually changed after the accounts are set up. For example, you can edit the /etc/passwd file or chmod a home directory, but configuring the /etc/adduser.conf file _before_ you start adding accounts on a new server will ensure some consistency and save you some time and trouble over the long run. - -Changes to the /etc/adduser.conf file will affect all accounts that are set up subsequent to those changes.
If you want to set up some specific account differently, you've also got the option of providing account configuration options as arguments with the adduser command in addition to the username. Maybe you want to assign a different shell for some user, request a specific UID, or disable logins altogether. The man page for the adduser command will display some of your choices for configuring an individual account. -``` -adduser [options] [--home DIR] [--shell SHELL] [--no-create-home] -[--uid ID] [--firstuid ID] [--lastuid ID] [--ingroup GROUP | --gid ID] -[--disabled-password] [--disabled-login] [--gecos GECOS] -[--add_extra_groups] [--encrypt-home] user - -``` - -These days probably every Linux system is, by default, going to put each user into his or her own group. As an admin, you might elect to do things differently. You might find that putting users in shared groups works better for your site, electing to use adduser's --gid option to select a specific group. Users can, of course, always be members of multiple groups, so you have some options on how to manage groups -- both primary and secondary. - -### Dealing with user passwords - -Since it's always a bad idea to know someone else's password, admins will generally use a temporary password when they set up an account and then run a command that will force the user to change his password on his first login. Here's an example: -``` -$ sudo chage -d 0 jdoe - -``` - -When the user logs in, he will see something like this: -``` -WARNING: Your password has expired. -You must change your password now and login again! -Changing password for jdoe. -(current) UNIX password: - -``` - -### Adding users to secondary groups - -To add a user to a secondary group, you might use the usermod command as shown below -- to add the user to the group and then verify that the change was made. 
-```
-$ sudo usermod -a -G sudo jdoe
-$ sudo grep sudo /etc/group
-sudo:x:27:shs,jdoe
-
-```
-
-Keep in mind that some groups -- like the sudo or wheel group -- imply certain privileges. More on this in a moment.
-
-### Removing accounts, adding groups, etc.
-
-Linux systems also provide commands to remove accounts, add new groups, remove groups, etc. The **deluser** command, for example, will remove the user login entries from the /etc/passwd and /etc/shadow files but leave her home directory intact unless you add the --remove-home or --remove-all-files option. The **addgroup** command adds a group, but will give it the next group id in the sequence (i.e., likely in the user group range) unless you use the --gid option.
-```
-$ sudo addgroup testgroup --gid=131
-Adding group `testgroup' (GID 131) ...
-Done.
-
-```
-
-### Managing privileged accounts
-
-Some Linux systems have a wheel group that gives members the ability to run commands as root. In this case, the /etc/sudoers file references this group. On Debian systems, this group is called sudo, but it works the same way and you'll see a reference like this in the /etc/sudoers file:
-```
-%sudo ALL=(ALL:ALL) ALL
-
-```
-
-This setting basically means that anyone in the wheel or sudo group can run all commands with the power of root once they preface them with the sudo command.
-
-You can also add more limited privileges to the sudoers file -- maybe to give particular users the ability to run one or two commands as root. If you do, you should also periodically review the /etc/sudoers file to gauge how much privilege users have and verify that the privileges provided are still required.
-
-In the command shown below, we're looking at the active lines in the /etc/sudoers file. The most interesting lines in this file include the path set for commands that can be run using the sudo command and the two groups that are allowed to run commands via sudo. 
As was just mentioned, individuals can be given permissions by being directly included in the sudoers file, but it is generally better practice to define privileges through group memberships.
-```
-# cat /etc/sudoers | grep -v "^#" | grep -v "^$"
-Defaults env_reset
-Defaults mail_badpass
-Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
-root ALL=(ALL:ALL) ALL
-%admin ALL=(ALL) ALL <== admin group
-%sudo ALL=(ALL:ALL) ALL <== sudo group
-
-```
-
-### Checking on logins
-
-To see when a user last logged in, you can use a command like this one:
-```
-# last jdoe
-jdoe pts/18 192.168.0.11 Thu Sep 14 08:44 - 11:48 (00:04)
-jdoe pts/18 192.168.0.11 Thu Sep 14 13:43 - 18:44 (00:00)
-jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:00)
-
-```
-
-If you want to see when each of your users last logged in, you can run the last command through a loop like this one:
-```
-$ for user in `ls /home`; do last $user | head -1; done
-
-jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:03)
-
-rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 (00:00)
-shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged in
-
-
-```
-
-This command will only show you users who have logged on since the current wtmp file became active. The blank lines indicate that some users have never logged in since that time, though it doesn't call them out by name. A better command is this one, which clearly displays the users who have not logged in at all in this time period:
-```
-$ for user in `ls /home`; do echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}'; done
-dhayes
-jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43
-peanut pts/19 192.168.0.29 Mon Sep 11 09:15 - 17:11
-rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02
-shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged
-tsmith
-
-```
-
-That command is a lot to type, but could be turned into a script to make it a lot easier to use. 
-```
-#!/bin/bash
-
-for user in `ls /home`
-do
-  echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}'
-done
-
-```
-
-Sometimes this kind of information can alert you to changes in users' roles that suggest they may no longer need the accounts in question.
-
-### Communicating with users
-
-Linux systems provide a number of ways to communicate with your users. You can add messages to the /etc/motd file that will be displayed when a user logs into a server using a terminal connection. You can also message users with commands such as write (message a single user) or wall (write to all logged-in users).
-```
-$ wall System will go down in one hour
-
-Broadcast message from shs@stinkbug (pts/17) (Thu Sep 14 14:04:16 2017):
-
-System will go down in one hour
-
-```
-
-Important messages should probably be delivered through multiple channels as it's difficult to predict what users will actually notice. Together, message-of-the-day (motd), wall, and email notifications might stand a chance of getting most of your users' attention.
-
-### Paying attention to log files
-
-Paying attention to log files can also help you understand user activity. In particular, the /var/log/auth.log file will show you user login and logout activity, creation of new groups, etc. The /var/log/messages or /var/log/syslog files will tell you more about system activity.
-
-### Tracking problems and requests
-
-Whether or not you install a ticketing application on your Linux system, it's important to track the problems that your users run into and the requests that they make. Your users won't be happy if some portion of their requests fall through the proverbial cracks. Even a paper log could be helpful or, better yet, a spreadsheet that allows you to notice what issues are still outstanding and what the root cause of the problems turned out to be. 
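The login-checking loops above enumerate accounts with `ls /home`, which silently misses users whose home directories live elsewhere. A hedged alternative sketch (the 1000-29999 UID range is an assumption taken from the FIRST_UID/LAST_UID defaults shown earlier) reads the account database instead:

```shell
# Sketch: list login accounts from the account database rather than `ls /home`;
# the 1000-29999 range assumes the FIRST_UID/LAST_UID defaults shown earlier
getent passwd | awk -F: '$3 >= 1000 && $3 <= 29999 {print $1}'
```

Each printed name can then be fed to `last`, just as in the loops above.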
Ensuring that problems and requests are addressed is important and logs can also help you remember what you had to do to address a problem that re-emerges many months or even years later.
-
-### Wrap-up
-
-Managing user accounts on a busy server depends in part on starting out with well configured defaults and in part on monitoring user activities and problems encountered. Users are likely to be happy if they feel you are responsive to their concerns and know what to expect when system upgrades are needed.
-
-
--------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3225109/linux/managing-users-on-linux-systems.html
-
-作者:[Sandra Henry-Stocker][a]
-译者:[runningwater](https://github.com/runningwater)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
diff --git a/translated/tech/20170926 Managing users on Linux systems.md b/translated/tech/20170926 Managing users on Linux systems.md
new file mode 100644
index 0000000000..719b0575b6
--- /dev/null
+++ b/translated/tech/20170926 Managing users on Linux systems.md
@@ -0,0 +1,222 @@
+# 管理 Linux 系统中的用户
+
+也许你的 Linux 用户并不是愤怒的公牛,但是当涉及管理他们的账户的时候,能让他们一直开心也是一种挑战。监控他们当前正在访问的东西,追踪他们遇到问题时的解决方案,并且保证能把他们在使用系统时出现的重要变动记录下来。这里有一些方法和工具可以使这份工作轻松一点。
+
+### 配置账户
+
+添加和移除账户是管理用户中最简单的一项,但是这里面仍然有很多需要考虑的选项。无论你是用桌面工具或是命令行选项,这都是一个非常自动化的过程。你可以使用命令添加一个新用户,像是 **adduser jdoe**,这同时会触发一系列的事情。John 的账户会使用下一个可用的 UID 来创建,他的家目录或许还会被许多用来配置账户的文件所填充。当你运行 adduser 命令加一个新的用户名的时候,它将会提示一些额外的信息,同时解释这是在干什么。
+```
+$ sudo adduser jdoe
+Adding user `jdoe' ...
+Adding new group `jdoe' (1001) ...
+Adding new user `jdoe' (1001) with group `jdoe' ...
+Creating home directory `/home/jdoe' ... 
+Copying files from `/etc/skel' …
+Enter new UNIX password:
+Retype new UNIX password:
+passwd: password updated successfully
+Changing the user information for jdoe
+Enter the new value, or press ENTER for the default
+ Full Name []: John Doe
+ Room Number []:
+ Work Phone []:
+ Home Phone []:
+ Other []:
+Is the information correct? [Y/n] Y
+
+```
+
+像你看到的那样,adduser 将添加用户的信息(到 /etc/passwd 和 /etc/shadow 文件中),创建新的家目录,并用 /etc/skel 里设置的文件填充家目录,提示你分配初始密码和身份信息,然后确认这些信息都是正确的,如果你在最后的提示 “Is the information correct” 处的答案是 “n”,它将回溯你之前所有的回答,允许修改任何你想要修改的地方。
+
+创建好一个用户后,你可能会想要确认一下它是否是你期望的样子,更好的做法是在添加第一个帐户**之前**,就确保那些“自动”做出的选择与你的期望相匹配。默认值之所以是默认值自有其道理,但知道它们定义在哪里也很有用,以防你想做出一些改动 —— 例如,你不想家目录在 /home 里,不想用户 UID 从 1000 开始,或是不想家目录下的文件被系统上的**每个人**都可读。
+
+adduser 如何工作的一些细节设置在 /etc/adduser.conf 文件里。这个文件包含的一些设置决定了一个新的账户如何配置,以及它之后的样子。注意,注释和空白行将会在输出中被忽略,因此我们可以更加集中注意在设置上面。
+```
+$ cat /etc/adduser.conf | grep -v "^#" | grep -v "^$"
+DSHELL=/bin/bash
+DHOME=/home
+GROUPHOMES=no
+LETTERHOMES=no
+SKEL=/etc/skel
+FIRST_SYSTEM_UID=100
+LAST_SYSTEM_UID=999
+FIRST_SYSTEM_GID=100
+LAST_SYSTEM_GID=999
+FIRST_UID=1000
+LAST_UID=29999
+FIRST_GID=1000
+LAST_GID=29999
+USERGROUPS=yes
+USERS_GID=100
+DIR_MODE=0755
+SETGID_HOME=no
+QUOTAUSER=""
+SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)"
+
+```
+
+可以看到,我们有了一个默认的 shell(DSHELL),UID(FIRST_UID)的起始数值,家目录(DHOME)的位置,以及启动文件(SKEL)的来源位置。这个文件也指定了分配给家目录的权限(DIR_MODE)。
+
+其中 DIR_MODE 是最重要的设置之一,它决定了每个用户家目录的权限。按照上面的 0755 设置,家目录的权限将会是 rwxr-xr-x。用户可以读其他用户的文件,但是不能修改和移除它们。如果你想要更多的限制,你可以把这个设置更改为 750(用户组外的任何人都不可访问)甚至是 700(除用户自己外的人都不可访问)。
+
+任何用户账户的设置都可以在账户创建之后手动修改。例如,你可以编辑 /etc/passwd 或者修改家目录的权限,但是在新服务器上开始添加用户之前配置好 /etc/adduser.conf,可以确保一定的一致性,从长远来看可以节省时间和避免一些麻烦。
+
+/etc/adduser.conf 的修改将会在之后创建的用户上生效。如果你想以不同的方式设置某个特定账户,除了用户名之外,你还可以选择使用 adduser 命令提供账户配置选项。或许你想为某些账户分配不同的 shell,请求特殊的 UID,或完全禁用登录。adduser 的帮助页将会为你显示一些配置个人账户的选择。
+
+```
+adduser [options] [--home DIR] [--shell SHELL] [--no-create-home]
+[--uid ID] [--firstuid ID] [--lastuid ID] [--ingroup GROUP | --gid ID] 
+[--disabled-password] [--disabled-login] [--gecos GECOS] +[--add_extra_groups] [--encrypt-home] user + +``` + +每个 Linux 系统现在都会默认把每个用户放入对应的组中。作为一个管理员,你可能会选择以不同的方式去做事。你也许会发现把用户放在一个共享组中可以让你的站点工作的更好,这时,选择使用 adduser 的 --gid 选项去选择一个特定的组。当然,用户总是许多组的成员,因此也有一些选项去管理主要和次要的组。 + +### 处理用户密码 + +一直以来,知道其他人的密码都是一个不好的念头,在设置账户时,管理员通常使用一个临时的密码,然后在用户第一次登录时会运行一条命令强制他修改密码。这里是一个例子: +``` +$ sudo chage -d 0 jdoe +``` + +当用户第一次登录的时候,会看到像这样的事情: +``` +WARNING: Your password has expired. +You must change your password now and login again! +Changing password for jdoe. +(current) UNIX password: + +``` + +### 添加用户到副组 + +添加用户到副组中,你可能会用如下所示的 usermod 命令 —— 添加用户到组中并确认已经做出变动。 +``` +$ sudo usermod -a -G sudo jdoe +$ sudo grep sudo /etc/group +sudo:x:27:shs,jdoe + +``` + +记住在一些组,像是 sudo 或者 wheel 组中,意味着包含特权,一定要特别注意这一点。 + +### 移除用户,添加组等 + +Linux 系统也提供了命令去移除账户,添加新的组,移除组等。例如,**deluser** 命令,将会从 /etc/passwd 和 /etc/shadow 中移除用户登录入口,但是会完整保留他的家目录,除非你添加了 --remove-home 或者 --remove-all-files 选项。**addgroup** 命令会添加一个组,按目前组的次序给他下一个 id(在用户组范围内),除非你使用 --gid 选项指定 id。 +``` +$ sudo addgroup testgroup --gid=131 +Adding group `testgroup' (GID 131) ... +Done. 
+ +``` + +### 管理特权账户 + +一些 Linux 系统中有一个 wheel 组,它给组中成员赋予了像 root 一样运行命令的能力。在这种情况下,/etc/sudoers 将会引用该组。在 Debian 系统中,这个组被叫做 sudo,但是以相同的方式工作,你在 /etc/sudoers 中可以看到像这样的引用: +``` +%sudo ALL=(ALL:ALL) ALL + +``` + +这个基础的设定意味着,任何在 wheel 或者 sudo 组中的成员,只要在他们运行的命令之前添加 sudo,就可以以 root 的权限去运行命令。 + +你可以向 sudoers 文件中添加更多有限的特权 —— 也许给特定用户运行一两个 root 的命令。如果这样做,您还应定期查看 /etc/sudoers 文件以评估用户拥有的权限,以及仍然需要提供的权限。 + +在下面显示的命令中,我们看到在 /etc/sudoers 中匹配到的行。在这个文件中最有趣的行是,包含能使用 sudo 运行命令的路径设置,以及两个允许通过 sudo 运行命令的组。像刚才提到的那样,单个用户可以通过包含在 sudoers 文件中来获得权限,但是更有实际意义的方法是通过组成员来定义各自的权限。 +``` +# cat /etc/sudoers | grep -v "^#" | grep -v "^$" +Defaults env_reset +Defaults mail_badpass +Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin" +root ALL=(ALL:ALL) ALL +%admin ALL=(ALL) ALL <== admin group +%sudo ALL=(ALL:ALL) ALL <== sudo group + +``` + +### 登录检查 + +你可以通过以下命令查看用户的上一次登录: +``` +# last jdoe +jdoe pts/18 192.168.0.11 Thu Sep 14 08:44 - 11:48 (00:04) +jdoe pts/18 192.168.0.11 Thu Sep 14 13:43 - 18:44 (00:00) +jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:00) + +``` + +如果你想查看每一个用户上一次的登录情况,你可以通过一个像这样的循环来运行 last 命令: +``` +$ for user in `ls /home`; do last $user | head -1; done + +jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:03) + +rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 (00:00) +shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged in + + +``` + +此命令仅显示自当前 wtmp 文件变为活跃状态以来已登录的用户。空白行表示用户自那以后从未登录过,但没有将其调出。一个更好的命令是过滤掉在这期间从未登录过的用户的显示: +``` +$ for user in `ls /home`; do echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}'; done +dhayes +jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 +peanut pts/19 192.168.0.29 Mon Sep 11 09:15 - 17:11 +rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 +shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged +tsmith + +``` + +这个命令会打印很多,但是可以通过一个脚本使它更加清晰易用。 +``` +#!/bin/bash + +for user in `ls /home` +do + echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}' +done + +``` + 
+有时,此类信息可以提醒你用户角色的变动,表明他们可能不再需要相关帐户。
+
+### 与用户沟通
+
+Linux 提供了许多和用户沟通的方法。你可以向 /etc/motd 文件中添加信息,当用户从终端登录到服务器时,将会显示这些信息。你也可以通过例如 write(通知单个用户)或者 wall(通知所有已登录的用户)命令发送通知。
+```
+$ wall System will go down in one hour
+
+Broadcast message from shs@stinkbug (pts/17) (Thu Sep 14 14:04:16 2017):
+
+System will go down in one hour
+
+```
+
+重要的通知应该通过多个渠道传递,因为很难预测用户实际会注意到什么。message-of-the-day(motd)、wall 和 email 通知加在一起,可以吸引用户大部分的注意力。
+
+### 注意日志文件
+
+多注意日志文件也可以帮你理解用户的活动。特别是 /var/log/auth.log 文件,它将会为你显示用户的登录和注销活动、组的创建等。/var/log/messages 或者 /var/log/syslog 文件将会告诉你更多有关系统活动的事情。
+
+### 追踪问题和请求
+
+无论你是否在 Linux 系统上安装了票务系统,跟踪用户遇到的问题以及他们提出的请求都非常重要。如果请求的一部分久久不见回应,用户必然不会高兴。即使是纸质日志也可能是有用的,或者更好的是,有一个电子表格,可以让你注意到哪些问题仍然悬而未决,以及问题的根本原因是什么。确保解决问题和请求非常重要,日志还可以帮助你记住必须采取的措施,以解决几个月甚至几年后重新出现的问题。
+
+### 总结
+
+在繁忙的服务器上管理用户帐户部分取决于从配置良好的默认值开始,部分取决于监控用户活动和遇到的问题。如果用户觉得你对他们的顾虑有所回应并且知道在需要系统升级时会发生什么,他们可能会很高兴。
+
+-----------
+
+via: https://www.networkworld.com/article/3225109/linux/managing-users-on-linux-systems.html
+
+作者:[Sandra Henry-Stocker][a]
+译者:[dianbanjiu](https://github.com/dianbanjiu)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
From ee1a98bf92b6dade718eab186098f75e95f0689e Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 4 Oct 2018 21:02:02 +0800
Subject: [PATCH 206/736] PRF:20180917 4 scanning tools for the Linux
 desktop.md

@way-ww
---
 ...
4 scanning tools for the Linux desktop.md | 39 ++++++++++--------- 1 file changed, 20 insertions(+), 19 deletions(-) diff --git a/translated/tech/20180917 4 scanning tools for the Linux desktop.md b/translated/tech/20180917 4 scanning tools for the Linux desktop.md index 89aaad3a89..b376fab108 100644 --- a/translated/tech/20180917 4 scanning tools for the Linux desktop.md +++ b/translated/tech/20180917 4 scanning tools for the Linux desktop.md @@ -1,54 +1,55 @@ -用于Linux桌面的4个扫描工具 +用于 Linux 桌面的 4 个扫描工具 ====== -使用其中一个开源软件驱动扫描仪来实现无纸化办公。 + +> 使用这些开源软件之一驱动你的扫描仪来实现无纸化办公。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-blue.png?itok=AsIMZ9ga) 尽管无纸化世界还没有到来,但越来越多的人通过扫描文件和照片来摆脱纸张的束缚。不过,仅仅拥有一台扫描仪还不足够。你还需要软件来驱动扫描仪。 -然而问题是许多扫描仪制造商没有将Linux版本的软件与他们的设备适配在一起。不过在大多数情况下,即使没有也没多大关系。因为在linux桌面上已经有很好的扫描软件了。它们能够与许多扫描仪配合很好的完成工作。 +然而问题是许多扫描仪制造商没有与他们的设备适配在一起的软件的 Linux 版本。不过在大多数情况下,即使没有也没多大关系。因为在 Linux 桌面上已经有很好的扫描软件了。它们能够与许多扫描仪配合很好的完成工作。 -现在就让我们看看四个简单又灵活的开源Linux扫描工具。我已经使用过了下面这些工具(甚至[早在2014年][1]写过关于其中三个工具的文章)并且觉得它们非常有用。希望你也会这样认为。 +现在就让我们看看四个简单又灵活的开源 Linux 扫描工具。我已经使用过了下面这些工具(甚至[早在 2014 年][1]写过关于其中三个工具的文章)并且觉得它们非常有用。希望你也会这样认为。 ### Simple Scan -这是我最喜欢的一个软件之一,[Simple Scan][2]小巧,迅速,高效,且易于使用。如果你以前见过它,那是因为Simple Scan是GNOME桌面上的默认扫描程序应用程序,也是许多Linux发行版的默认扫描程序。 +这是我最喜欢的一个软件之一,[Simple Scan][2] 小巧、快捷、高效且易用。如果你以前见过它,那是因为 Simple Scan 是 GNOME 桌面上的默认扫描应用程序,也是许多 Linux 发行版的默认扫描程序。 -你只需单击一下就能扫描文档或照片。扫描过某些内容后,你可以旋转或裁剪它并将其另存为图像(仅限JPEG或PNG格式)或PDF格式。也就是说Simple Scan可能会很慢,即使你用较低分辨率来扫描文档。最重要的是,Simple Scan在扫描时会使用一组全局的默认值,例如150dpi用于文本,300dpi用于照片。你需要进入Simple Scan的首选项才能更改这些设置。 +你只需单击一下就能扫描文档或照片。扫描过某些内容后,你可以旋转或裁剪它并将其另存为图像(仅限 JPEG 或 PNG 格式)或 PDF 格式。也就是说 Simple Scan 可能会很慢,即使你用较低分辨率来扫描文档。最重要的是,Simple Scan 在扫描时会使用一组全局的默认值,例如 150dpi 用于文本,300dpi 用于照片。你需要进入 Simple Scan 的首选项才能更改这些设置。 -如果你扫描的内容超过了几页,还可以在保存之前重新排序页面。如果有必要的话 - 假如你正在提交已签名的表格 - 你可以使用Simple Scan来发送电子邮件。 +如果你扫描的内容超过了几页,还可以在保存之前重新排序页面。如果有必要的话 —— 假如你正在提交已签名的表格 —— 你可以使用 Simple Scan 来发送电子邮件。 ### Skanlite 
-从很多方面来看,[Skanlite][3]是Simple Scan在KDE世界中的表兄弟。虽然Skanlite功能很少,但它可以出色的完成工作。 +从很多方面来看,[Skanlite][3] 是 Simple Scan 在 KDE 世界中的表兄弟。虽然 Skanlite 功能很少,但它可以出色的完成工作。 -你可以自己配置这个软件的选项,包括自动保存扫描文件,设置扫描质量以及确定扫描保存位置。 Skanlite可以保存为以下图像格式:JPEG,PNG,BMP,PPM,XBM和XPM。 +你可以自己配置这个软件的选项,包括自动保存扫描文件、设置扫描质量以及确定扫描保存位置。 Skanlite 可以保存为以下图像格式:JPEG、PNG、BMP、PPM、XBM 和 XPM。 -其中一个很棒的功能是Skanlite能够将你扫描的部分内容保存到单独的文件中。当你想要从照片中删除某人或某物时,这就派上用场了。 +其中一个很棒的功能是 Skanlite 能够将你扫描的部分内容保存到单独的文件中。当你想要从照片中删除某人或某物时,这就派上用场了。 ### Gscan2pdf -这是我另一个最爱的老软件,[gscan2pdf][4]可能会显得很老旧了,但它仍然包含一些比这里提到的其他软件更好的功能。即便如此,gscan2pdf仍然显得很轻便。 +这是我另一个最爱的老软件,[gscan2pdf][4] 可能会显得很老旧了,但它仍然包含一些比这里提到的其他软件更好的功能。即便如此,gscan2pdf 仍然显得很轻便。 -除了以各种图像格式(JPEG,PNG和TIFF)保存扫描外,gscan2pdf还将它们保存为PDF或[DjVu][5]文件。你可以在单击“扫描”按钮之前设置扫描的分辨率,无论是黑白,彩色还是纸张大小,每当你想要更改任何这些设置时,这都会进入gscan2pdf的首选项。你还可以旋转,裁剪和删除页面。 +除了以各种图像格式(JPEG、PNG 和 TIFF)保存扫描外,gscan2pdf 还可以将它们保存为 PDF 或 [DjVu][5] 文件。你可以在单击“扫描”按钮之前设置扫描的分辨率,无论是黑白、彩色还是纸张大小,每当你想要更改任何这些设置时,都可以进入 gscan2pdf 的首选项。你还可以旋转、裁剪和删除页面。 虽然这些都不是真正的杀手级功能,但它们会给你带来更多的灵活性。 ### GIMP -你大概会知道[GIMP][6]是一个图像编辑工具。但是你恐怕不知道可以用它来驱动你的扫描仪吧。 +你大概会知道 [GIMP][6] 是一个图像编辑工具。但是你恐怕不知道可以用它来驱动你的扫描仪吧。 -你需要安装[XSane][7]扫描软件和GIMP XSane插件。这两个应该都可以从你的Linux发行版的包管理器中获得。在软件里,选择文件>创建>扫描仪/相机。单击扫描仪,然后单击扫描按钮即可进行扫描。 +你需要安装 [XSane][7] 扫描软件和 GIMP XSane 插件。这两个应该都可以从你的 Linux 发行版的包管理器中获得。在软件里,选择“文件>创建>扫描仪/相机”。单击“扫描仪”,然后单击“扫描”按钮即可进行扫描。 -如果这不是你想要的,或者它不起作用,你可以将GIMP和一个叫作[QuiteInsane][8]的插件结合起来。使用任一插件,都能使GIMP成为一个功能强大的扫描软件,它可以让你设置许多选项,如是否扫描彩色或黑白,扫描的分辨率,以及是否压缩结果等。你还可以使用GIMP的工具来修改或应用扫描后的效果。这使得它非常适合扫描照片和艺术品。 +如果这不是你想要的,或者它不起作用,你可以将 GIMP 和一个叫作 [QuiteInsane][8] 的插件结合起来。使用任一插件,都能使 GIMP 成为一个功能强大的扫描软件,它可以让你设置许多选项,如是否扫描彩色或黑白、扫描的分辨率,以及是否压缩结果等。你还可以使用 GIMP 的工具来修改或应用扫描后的效果。这使得它非常适合扫描照片和艺术品。 ### 它们真的能够工作吗? 
-所有的这些软件在大多数时候都能够在各种硬件上运行良好。我将它们与我过去几年来拥有的多台多功能打印机一起使用 - 无论是使用USB线连接还是通过无线连接。 +所有的这些软件在大多数时候都能够在各种硬件上运行良好。我将它们与我过去几年来拥有的多台多功能打印机一起使用 —— 无论是使用 USB 线连接还是通过无线连接。 -你可能已经注意到我在前一段中写过“大多数时候运行良好”。这是因为我确实遇到过一个例外:一个便宜的canon多功能打印机。我使用的软件都没有检测到它。最后我不得不下载并安装canon的Linux扫描仪软件才使它工作。 +你可能已经注意到我在前一段中写过“大多数时候运行良好”。这是因为我确实遇到过一个例外:一个便宜的 canon 多功能打印机。我使用的软件都没有检测到它。最后我不得不下载并安装 canon 的 Linux 扫描仪软件才使它工作。 -你最喜欢的Linux开源扫描工具是什么?发表评论,分享你的选择。 +你最喜欢的 Linux 开源扫描工具是什么?发表评论,分享你的选择。 -------------------------------------------------------------------------------- @@ -57,7 +58,7 @@ via: https://opensource.com/article/18/9/linux-scanner-tools 作者:[Scott Nesbitt][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[way-ww](https://github.com/way-ww) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 44ab7c84ab6a29e9a6e12df034631afe9f81b9c7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 4 Oct 2018 21:02:35 +0800 Subject: [PATCH 207/736] PUB:20180917 4 scanning tools for the Linux desktop.md @way-ww https://linux.cn/article-10079-1.html --- .../20180917 4 scanning tools for the Linux desktop.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180917 4 scanning tools for the Linux desktop.md (100%) diff --git a/translated/tech/20180917 4 scanning tools for the Linux desktop.md b/published/20180917 4 scanning tools for the Linux desktop.md similarity index 100% rename from translated/tech/20180917 4 scanning tools for the Linux desktop.md rename to published/20180917 4 scanning tools for the Linux desktop.md From cfd67ebeff1801310026bc10e7b437b53c17eba9 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 4 Oct 2018 21:30:27 +0800 Subject: [PATCH 208/736] PRF:20180918 Top 3 Python libraries for data science.md @ucasFL --- ...Top 3 Python libraries for data science.md | 19 +++++++------------ 1 file changed, 7 insertions(+), 12 deletions(-) 
diff --git a/translated/tech/20180918 Top 3 Python libraries for data science.md b/translated/tech/20180918 Top 3 Python libraries for data science.md index 4026b751d5..c6156e575a 100644 --- a/translated/tech/20180918 Top 3 Python libraries for data science.md +++ b/translated/tech/20180918 Top 3 Python libraries for data science.md @@ -1,7 +1,7 @@ 3 个用于数据科学的顶级 Python 库 ====== ->使用这些库把 Python 变成一个科学数据分析和建模工具。 +> 使用这些库把 Python 变成一个科学数据分析和建模工具。 ![][7] @@ -49,7 +49,6 @@ matrix_two = np.arange(1,10).reshape(3,3) matrix_two ``` -Here is the output: 输出如下: ``` @@ -62,9 +61,7 @@ array([[1, 2, 3], ``` matrix_multiply = np.dot(matrix_one, matrix_two) - matrix_multiply - ``` 相乘后的输出如下: @@ -96,17 +93,15 @@ matrix_multiply ### Pandas -[Pandas][3] 是另一个可以提高你的 Python 数据科学技能的优秀库。就和 NumPy 一样,它属于 SciPy 开源软件家族,可以在 BSD 免费许可证许可下使用。 +[Pandas][3] 是另一个可以提高你的 Python 数据科学技能的优秀库。就和 NumPy 一样,它属于 SciPy 开源软件家族,可以在 BSD 自由许可证许可下使用。 -Pandas 提供了多功能并且很强大的工具用于管理数据结构和执行大量数据分析。该库能够很好的处理不完整、非结构化和无序的真实世界数据,并且提供了用于整形、聚合、分析和可视化数据集的工具 +Pandas 提供了多能而强大的工具,用于管理数据结构和执行大量数据分析。该库能够很好的处理不完整、非结构化和无序的真实世界数据,并且提供了用于整形、聚合、分析和可视化数据集的工具 Pandas 中有三种类型的数据结构: - * Series: 一维、相同数据类型的数组 - * DataFrame: 二维异型矩阵 - * Panel: 三维大小可变数组 - - + * Series:一维、相同数据类型的数组 + * DataFrame:二维异型矩阵 + * Panel:三维大小可变数组 例如,我们来看一下如何使用 Panda 库(缩写成 `pd`)来执行一些描述性统计计算。 @@ -232,7 +227,7 @@ via: https://opensource.com/article/18/9/top-3-python-libraries-data-science 作者:[Dr.Michael J.Garbade][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[ucasFL](https://github.com/ucasFL) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 831075cc6b0446f079b1204204eabf6606cf243a Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 4 Oct 2018 21:31:34 +0800 Subject: [PATCH 209/736] PUB:20180918 Top 3 Python libraries for data science.md @ucasFL https://linux.cn/article-10080-1.html --- .../20180918 Top 3 Python libraries for data science.md | 0 1 file 
changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180918 Top 3 Python libraries for data science.md (100%) diff --git a/translated/tech/20180918 Top 3 Python libraries for data science.md b/published/20180918 Top 3 Python libraries for data science.md similarity index 100% rename from translated/tech/20180918 Top 3 Python libraries for data science.md rename to published/20180918 Top 3 Python libraries for data science.md From 67073384c75f3b6abd1edf64784f387b05bc3d5f Mon Sep 17 00:00:00 2001 From: sd886393 Date: Thu, 4 Oct 2018 22:46:03 +0800 Subject: [PATCH 210/736] Update 20180928 What containers can teach us about DevOps.md --- .../tech/20180928 What containers can teach us about DevOps.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180928 What containers can teach us about DevOps.md b/sources/tech/20180928 What containers can teach us about DevOps.md index cd4a985dd5..a0f3f5ef64 100644 --- a/sources/tech/20180928 What containers can teach us about DevOps.md +++ b/sources/tech/20180928 What containers can teach us about DevOps.md @@ -12,6 +12,7 @@ **容器流** +每个容器都可以看成一个独立的封闭仓库,当你置身其中,不需要管外部的系统环境、集群环境、以及其他基础设施。而在容器内部, A container can be seen as a silo, and from inside, it is easy to forget the rest of the system: the host node, the cluster, the underlying infrastructure. Inside the container, it might appear that everything is functioning in an acceptable manner. From the outside perspective, though, the application inside the container is a part of a larger ecosystem of applications that make up a service: the web API, the web app user interface, the database, the workers, and caching services and garbage collectors. Teams put constraints on the container to limit performance impact on infrastructure, and much has been done to provide metrics for measuring container performance because overloaded or slow container workloads have downstream impact on other services or customers. 
**Real-world flow** From ca7baad0cfb54f2eec68f03548ebf2225ccfa684 Mon Sep 17 00:00:00 2001 From: thecyanbird <2534930703@qq.com> Date: Thu, 4 Oct 2018 22:46:19 +0800 Subject: [PATCH 211/736] Delete 20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md --- ...tion Of Defunct OSs, Software And Games.md | 128 ------------------ 1 file changed, 128 deletions(-) delete mode 100644 sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md diff --git a/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md b/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md deleted file mode 100644 index 74b6347228..0000000000 --- a/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md +++ /dev/null @@ -1,128 +0,0 @@ -thecyanbird translating - -WinWorld – A Large Collection Of Defunct OSs, Software And Games -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/WinWorld-720x340.jpeg) - -The other day, I was testing **Dosbox** which is used to [**run MS-DOS games and programs in Linux**][1]. While searching for some classic programs like Turbo C++, I stumbled upon a website named **WinWorld**. I went through a few links in this site and quite surprised. WinWorld has a plenty of good-old and classic OSs, software, applications, development tools, games and a lot of other miscellaneous utilities which are abandoned by the developers a long time ago. It is an online museum run by community members, volunteers and is dedicated to the preservation and sharing of vintage, abandoned, and pre-release software. - -WinWorld was started back in 2003 and its founder claims that the idea to start this site inspired by Yahoo briefcases. The primary purpose of this site is to preserve and share old software. 
Over the years, many people volunteered to improve this site in numerous ways and the collection of old software in WinWorld has grown exponentially. The entire WinWorld library is free, open and available to everyone. - -### WinWorld Hosts A Huge Collection Of Defunct OSs, Software, System Applications And Games - -Like I already said, WinWorld hosts a huge collection of abandonware which are no-longer in development. - -**Linux and Unix:** - -Here, I have given the complete list of UNIX and LINUX OSs with brief summary of the each OS and the release year of first version. - - * **A/UX** – An early port of Unix to Apple’s 68k based Macintosh platform, released in 1988. - * **AIX** – A Unix port originally developed by IBM, released in 1986. - * **AT &T System V Unix** – One of the first commercial versions of the Unix OS, released in 1983. - * **Banyan VINES** – A network operating system originally designed for Unix, released in 1984. - * **Corel Linux** – A commercial Linux distro, released in 1999. - * **DEC OSF-1** – A version of UNIX developed by Digital Equipment Corporation (DEC), released in 1991. - * **Digital UNIX** – A renamed version of **OSF-1** , released by DEC in 1995.** -** - * **FreeBSD** **1.0** – The first release of FreeBSD, released in 1993. It is based on 4.3BSD. - * **Gentus Linux** – A distribution that failed to comply with GPL. Developed by ABIT and released in 2000. - * **HP-UX** – A UNIX variant, released in 1992. - * **IRIX** – An a operating system developed by Silicon Graphics Inc (SGI ) and it is released in 1988. - * **Lindows** – Similar to Corel Linux. It is developed for commercial purpose and released in 2002. - * **Linux Kernel** – A copy of the Linux Sourcecode, version 0.01. Released in the early 90’s. - * **Mandrake Linux** – A Linux distribution based on Red Hat Linux. It was later renamed to Mandriva. Released in 1999. - * **NEWS-OS** – A variant of BSD, developed by Sony and released in 1989. 
- * **NeXTStep** – A Unix based OS from NeXT computers headed by **Steve Jobs**. It is released in 1987. - * **PC/IX** – A UNIX variant created for IBM PCs. Released in 1984. - * **Red Hat Linux 5.0** – A commercial Linux distribution by Red Hat. - * **Sun Solaris** – A Unix based OS by Sun Microsystems. Released in 1992. - * **SunOS** – A Unix-based OS derived from BSD by Sun Microsystems, released in 1982. - * **Tru64 UNIX** – A formerly known OSF/1 by DEC. - * **Ubuntu 4.10** – The well-known OS based on Debian.This was a beta pre-release, prior to the very first official Ubuntu release. - * **Ultrix** – A UNIX clone developed by DEC. - * **UnixWare** – A UNIX variant from Novell. - * **Xandros Linux** – A proprietary variant of Linux. It is based on Corel Linux. The first version is released in 2003. - * **Xenix** – A UNIX variant originally published by Microsoft released in 1984. - - - -Not just Linux/Unix, you can find other operating systems including DOS, Windows, Apple/Mac, OS 2, Novell netware and other OSs and shells. - -**DOS & CP/M:** - - * 86-DOS - * Concurrent CPM-86 & Concurrent DOS - * CP/M 86 & CP/M-80 - * DOS Plus - * DR-DOS - * GEM - * MP/M - * MS-DOS - * Multitasking MS-DOS 4.00 - * Multiuser DOS - * PC-DOS - * PC-MOS - * PTS-DOS - * Real/32 - * Tandy Deskmate - * Wendin DOS - - - -**Windows:** - - * BackOffice Server - * Windows 1.0/2.x/3.0/3.1/95/98/2000/ME/NT 3.X/NT 4.0 - * Windows Whistler - * WinFrame - - - -**Apple/Mac:** - - * Mac OS 7/8/9 - * Mac OS X - * System Software (0-6) - - - -**OS/2:** - - * Citrix Multiuser - * OS/2 1.x - * OS/2 2.0 - * OS/2 3.x - * OS/2 Warp 4 - - - -Also, WinWorld hosts a huge collection of old software, system applications, development tools and games. Go and check them out as well. - -To be honest, I don’t even know the existence of most of the stuffs listed in this site. Some of the tools listed here were released years before I was born. 
-
-Just in case, if you are ever in need of (or just want to test) a classic program -- be it a game, a piece of software, or an OS -- look no further: just head over to the WinWorld library and download the items you want to explore. Good luck!
-
-**Disclaimer:**
-
-OSTechNix is not affiliated with the WinWorld site in any way. We, at OSTechNix, cannot vouch for the authenticity and integrity of the stuff hosted on this site. Also, downloading software from third-party sites may not be safe and may be illegal in your region. Neither the author nor OSTechNix is responsible for any kind of damage. Use this service at your own risk.
-
-And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
-
-Cheers!
-
-
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/winworld-a-large-collection-of-defunct-oss-software-and-games/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/
From d4a336e6dd9b1509542cbae4d8dc40a7225a0be3 Mon Sep 17 00:00:00 2001
From: thecyanbird <2534930703@qq.com>
Date: Thu, 4 Oct 2018 22:47:36 +0800
Subject: [PATCH 212/736] Create A Large Collection Of Defunct OSs, Software
 And Games.md

---
 ...tion Of Defunct OSs, Software And Games.md | 84 +++++++++++++++++++
 1 file changed, 84 insertions(+)
 create mode 100644 translated/talk/A Large Collection Of Defunct OSs, Software And Games.md

diff --git a/translated/talk/A Large Collection Of Defunct OSs, Software And Games.md b/translated/talk/A Large Collection Of Defunct OSs, Software And Games.md
new file mode 100644
index 0000000000..a0c51974b9
--- /dev/null
+++ b/translated/talk/A Large Collection Of Defunct OSs, Software And Games.md
@@ -0,0 +1,84 @@
+WinWorld -- 
大型废弃操作系统,软件,游戏合集网站 +===== +![](https://www.ostechnix.com/wp-content/uploads/2018/09/WinWorld-720x340.jpeg) +有一天,我正在测试 **Dosbox** -- 一个[**在 Linux 平台上运行 MS-DOS 游戏与程序的软件**][1]。当我在搜索一些常用的软件,例如 Turbo C++ 时,我意外留意到了一个叫做 **WinWorld** 的网站。我查看了这个网站上的某些内容,并且着实被惊艳到了。WinWorld 收集了非常多经典的,但已经被它们的开发者所抛弃许久的操作系统,软件,应用,开发工具,游戏以及各式各样的工具。它是一个以保存和分享古老,已经被废弃的或者预发布版本程序为目的的线上博物馆,由社区成员和志愿者运营。 +WinWorld 于 2013 年开始运营。它的创始者声称是被 Yahoo birefcases 激发了灵感并以此构建了这个网站。这个网站原目标是保存并且分享古旧软件。经过许多以不同方式提供帮助的志愿者以及多年的运营,WinWorld 得以迅猛发展。作为一个非盈利网站,WinWorld 的所有内容都免费开放。 +###WinWorld 保存了大量的废弃操作系统,软件,系统应用以及游戏 +就像我刚才说的那样, WinWorld 存储了大量的被抛弃并且不再被开发的软件。 +**Linux 与 Unix:** +这里提供了完整的 UNIX 和 LINUX 操作系统列表以及它们各自的简要介绍,首次发行的年代。 +* **A/UX** - 于 1988 年推出,移植到 68k Macintosh 平台的 Unix 系统。 +* **AIX** - 于 1986 年推出,IBM 移植的 Unix 系统。 +* **AT &T System V Unix** - 于 1983 年推出,最早的商业版 Unix 之一。 +* **Banyan VINES** - 于 1984 年推出,专为 Unix 设计的网络操作系统。 +* **Corel Linux** - 于 1999 年推出,商业 Linux 发行版。 +* **DEC OSF-1** - 于 1991 年推出,由迪吉多公司(DEC)开发的 Unix 版本。 +* **Digital UNIX** - 由 DEC 于 1995 年推出,**OSF-1** 的重命名版本。 +* **FreeBSD 1.0** - 于 1993 年推出,FreeBSD 的首次发行版。这个系统是基于 4.3BSD 开发的。 +* **Gentus Linux** - 由 ABIT 于 2000 年推出,未遵守 GPL 协议的 Linux 发行版。 +* **HP-UX** - 于 1992 年推出,UNIX 的变种系统。 +* **IRIX** - 由硅谷图形公司(SGI)于1988年推出的操作系统。 +* **Lindows** - 于 2002 年推出,与 Corel Linux 类似的商业操作系统。 +* **Linux Kernel** - 0.01 版本于 90 年代早期推出,Linux 源代码的副本。 +* **Mandrake Linux** - 于 1999 年推出。基于 Red Hat Linux 的 Linux 发行版,稍后被重新命名为 Mandriva。 +* **NEWS-OS** - 由 Sony 于 1989 年推出,BSD 的变种。 +* **NeXTStep** - 由史蒂夫·乔布斯创立的 NeXT 公司于 1987 年推出,基于 Unix 的操作系统。 +* **PC/IX** - 于 1984 年推出,为 IBM 个人电脑服务的基于 Unix 的操作系统。 +* **Red Hat Linux 5.0** - 由 Red Hat 推出,商业 Linux 发行版。 +* **Sun Solaris** - 由 Sun Microsystem 于 1992 年推出,基于 Unix 的操作系统。 +* **SunOS** - 由 Sun Microsystem 于 1982 年推出,衍生自 BSD 基于 Unix 的操作系统。 +* **Tru64 UNIX** - 由 DEC 开发,旧称 OSF/1。 +* **Ubuntu 4.10** - 基于 Debian 的知名操作系统。这是早期的 beta 预发布版本,较早期 Ubuntu 正式发行版更早推出。 +* **Ultrix** - 由 DEC 开发, UNIX 克隆。 +* **UnixWare** - 由 Novell 推出, UNIX 变种。 +* **Xandros Linux** - 
首个版本于 2003 年推出。基于 Corel Linux 的专有 Linux 发行版。 +* **Xenix** - 最初由 Microsoft 于 1984 推出, UNIX 变种操作系统。 +不仅仅是 Linux/Unix,你还能找到例如 DOS,Windows,Apple/Mac,OS 2,Novell netware等其他的操作系统与 shell。 +**DOS & CP/M:** + *86-DOS + *Concurrent CPM-86 & Concurrent DOS + *CP/M 86 & CP/M-80 + *DOS Plus + *DR-DOS + *GEM + *MP/M + *MS-DOS + *Multitasking MS-DOS 4.00 + *Multiuser DOS + *PC-DOS + *PC-MOS + *PTS-DOS + *Real/32 + *Tandy Deskmate + *Wendin DOS +**Windows:** + *BackOffice Server + *Windows 1.0/2.x/3.0/3.1/95/98/2000/ME/NT 3.X/NT 4.0 + *Windows Whistler + *WinFrame +**Apple/Mac:** + *Mac OS 7/8/9 + *Mac OS X + *System Software (0-6) +**OS/2:** + *Citrix Multiuser + *OS/2 1.x + *OS/2 2.0 + *OS/2 3.x + *OS/2 Warp 4 +于此同时,WinWorld 也收集了大量的旧软件,系统应用,开发工具和游戏。您在访问网站的时候也可以同时查看它们。 +说实话,我甚至不知道这个网站列出的绝大部分东西,我甚至不知道它们存在过。其中列出的某些工具发布于我出生之前。 +如果您需要或者打算去测试一个经典的程序(例如游戏,软件,操作系统),并且在其他地方找不到它们,那么来 WinWorld 资源库看看,下载它们然后开始你的探险吧。祝您好运! +免责声明: +OSTechNix 并非隶属于 WinWorld。我们,OSTechNix,并不确保 WinWorld 站点存储数据的真实性与可靠性。而且在你所在的地区,或许从第三方站点下载软件是违法行为。本篇文章作者和 OSTechNix 都不会承担任何责任,使用此服务意味着您将自行承担风险。 +本篇文章到此为止。希望这对您有用,更多的好文章即将发布,敬请期待! +谢谢各位的阅读! 
+-------------------------------------------------------------------------------- +via: https://www.ostechnix.com/winworld-a-large-collection-of-defunct-oss-software-and-games/ +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[thecyanbird](https://github.com/thecyanbird) +校对:[校对者ID](https://github.com/校对者ID) +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/ From 9c249cb798241b78c0b2e2808670b1a640c1bdca Mon Sep 17 00:00:00 2001 From: thecyanbird <2534930703@qq.com> Date: Thu, 4 Oct 2018 22:50:28 +0800 Subject: [PATCH 213/736] Rename A Large Collection Of Defunct OSs, Software And Games.md to 20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md --- ...ld - A Large Collection Of Defunct OSs, Software And Games.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/talk/{A Large Collection Of Defunct OSs, Software And Games.md => 20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md} (100%) diff --git a/translated/talk/A Large Collection Of Defunct OSs, Software And Games.md b/translated/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md similarity index 100% rename from translated/talk/A Large Collection Of Defunct OSs, Software And Games.md rename to translated/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md From 705c6004e2434e6217613f716a05939471988e2a Mon Sep 17 00:00:00 2001 From: thecyanbird <2534930703@qq.com> Date: Thu, 4 Oct 2018 22:53:50 +0800 Subject: [PATCH 214/736] Update 20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md --- ...x Has a Code of Conduct and Not Everyone is Happy With it.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md 
b/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md index e161ec4eec..baa490ccbc 100644 --- a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md +++ b/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md @@ -1,3 +1,5 @@ + + Linux Has a Code of Conduct and Not Everyone is Happy With it ====== **Linux kernel has a new code of conduct (CoC). Linus Torvalds took a break from Linux kernel development just 30 minutes after signing this code of conduct. And since **the writer of this code of conduct has had a controversial past,** it has now become a point of heated discussion. With all the politics involved, not many people are happy with this new CoC.** From c8af2a415ce02128a4ba36d4dea4abfdcd98fc35 Mon Sep 17 00:00:00 2001 From: thecyanbird <2534930703@qq.com> Date: Thu, 4 Oct 2018 22:56:08 +0800 Subject: [PATCH 215/736] Update 20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md --- ...x Has a Code of Conduct and Not Everyone is Happy With it.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md b/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md index baa490ccbc..971a91f94f 100644 --- a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md +++ b/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md @@ -1,4 +1,4 @@ - +thecyanbird translating Linux Has a Code of Conduct and Not Everyone is Happy With it ====== From 72eff6cbc7e6ba4b6f3e0c4e16fd697366ad7d07 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 07:40:07 +0800 Subject: [PATCH 216/736] PRF:20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md @dianbanjiu --- ...ntu 18.04 and Other Linux Distributions.md | 90 +++++++++---------- 1 file 
changed, 44 insertions(+), 46 deletions(-) diff --git a/translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md b/translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md index 5d7b3f772b..72763c754b 100644 --- a/translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md +++ b/translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md @@ -1,22 +1,23 @@ -# 如何在 Ubuntu 以及其他 Linux 发行版上安装 Popcorn Time +如何在 Ubuntu 18.04 上安装 Popcorn Time +====== -**简要:这篇教程展示给你如何在 Ubuntu 和其他 Linux 发行版上安装 Popcorn Time,也会讨论一些 Popcorn Time 的便捷操作** +> 简要:这篇教程展示给你如何在 Ubuntu 和其他 Linux 发行版上安装 Popcorn Time,也会讨论一些 Popcorn Time 的便捷操作。 -[Popcorn Time][1] 是一个受开源 [Netflix][2] 启发的 [torrent][3] 流媒体应用,可以在 Linux,Mac上Windows 上运行。 +[Popcorn Time][1] 是一个受 [Netflix][2] 启发的开源的 [torrent][3] 流媒体应用,可以在 Linux、Mac、Windows 上运行。 -传统的 torrents,在你看影片之前必须等待它下载完成。 +传统的 torrent,在你看影片之前必须等待它下载完成。 -[Popcorn Time][4] 有所不同。它的使用基于 torrent,但是允许你(几乎)立即开始观看影片。他跟你在 Youtube 或者 Netflix 等流媒体网页上看影片一样,无需等待它下载完成。 +[Popcorn Time][4] 有所不同。它的使用基于 torrent,但是允许你(几乎)立即开始观看影片。它跟你在 Youtube 或者 Netflix 等流媒体网页上看影片一样,无需等待它下载完成。 ![Popcorn Time in Ubuntu Linux][5] -Popcorn Time -如果你不想在看在线电影时被突如其来的广告吓倒的话,Popcorn Time 是一个不错的选择。不过要记得,它的播放质量依赖于当前网络中可用的种子(seeds)数。 +*Popcorn Time* -Popcorn Time 还提供了一个不错的用户界面,让你能够浏览可用的电影,电视剧和其他视频内容。如果你曾经[在 Linux 上使用过 Netflix][6],你会发现两者有一些相似之处。 +如果你不想在看在线电影时被突如其来的广告吓倒的话,Popcorn Time 是一个不错的选择。不过要记得,它的播放质量依赖于当前网络中可用的种子seed数。 -有些国家严格打击盗版,所以使用 torrent 下载电影是违法行为。在类似美国,英国和西欧等一些国家,你或许曾经收到过法律声明。也就是说,是否使用取决于你。已经警告过你了。 -(如果你仍想要冒险使用 Popcorn Time,你应该使用像 [Ivacy][7] 这样的 VPN 服务,它为使用 Torrents 和保护隐私有特别的设计。即便这样,也不能完全避免被查到。) +Popcorn Time 还提供了一个不错的用户界面,让你能够浏览可用的电影、电视剧和其他视频内容。如果你曾经[在 Linux 上使用过 Netflix][6],你会发现两者有一些相似之处。 + +有些国家严格打击盗版,所以使用 torrent 下载电影是违法行为。在类似美国、英国和西欧等一些国家,你或许曾经收到过法律声明。也就是说,是否使用取决于你。已经警告过你了。 Popcorn Time 一些主要的特点: @@ -24,39 +25,42 @@ Popcorn Time 一些主要的特点: 
* 有一个时尚的用户界面让你浏览可用的电影和电视剧资源 * 调整流媒体的质量 * 标记为稍后观看 - * 下载为离线观看 + * 下载为离线观看 * 可以默认开启字幕,改变字母尺寸等 * 使用键盘快捷键浏览 ### 如何在 Ubuntu 和其它 Linux 发行版上安装 Popcorn Time -这篇教程以 Ubuntu 18.04 为例,但是你可以使用类似的结构,在例如 Linux Mint,Debian,Manjaro,Deepin等 Linux 发行版上安装。 +这篇教程以 Ubuntu 18.04 为例,但是你可以使用类似的说明,在例如 Linux Mint、Debian、Manjaro、Deepin 等 Linux 发行版上安装。 + +Popcorn Time 在 Deepin Linux 的软件中心中也可用。Manjaro 和 Arch 用户也可以轻松地使用 AUR 来安装 Popcorn Time。 接下来我们看该如何在 Linux 上安装 Popcorn Time。事实上,这个过程非常简单。只需要按照说明操作复制粘贴我提到的这些命令即可。 #### 第一步:下载 Popcorn Time -你可以从它的官网上安装 Popcorn Time。它主页上的下载链接是。 -[Get Popcorn Time](https://popcorntime.sh/) +你可以从它的官网上安装 Popcorn Time。下载链接在它的主页上。 + +- [下载 Popcorn Time](https://popcorntime.sh/) #### 第二步:安装 Popcorn Time -下载完成之后,就该使用它了。下载下来的是一个 tar 文件,在这些文件里面包含有一个可执行文件。你可以把 tar 文件提取在任何位置,[Linux 常把附加软件安装在][8] /[opt 目录。][8] +下载完成之后,就该使用它了。下载下来的是一个 tar 文件,在这些文件里面包含有一个可执行文件。你可以把 tar 文件提取在任何位置,[Linux 常把附加软件安装在][8] [/opt 目录][8]。 -在 /opt 下创建一个新的目录: +在 `/opt` 下创建一个新的目录: ``` sudo mkdir /opt/popcorntime ``` -现在进入你下载文件的文件夹中,比如我把 Popcorn Time 下载到了主目录的 Downloads目录下。 +现在进入你下载文件的文件夹中,比如我把 Popcorn Time 下载到了主目录的 Downloads 目录下。 ``` cd ~/Downloads ``` -提取下载好的 Popcorn Time 文件到新创建的 /opt/popcorntime 目录下 +提取下载好的 Popcorn Time 文件到新创建的 `/opt/popcorntime` 目录下: ``` sudo tar Jxf Popcorn-Time-* -C /opt/popcorntime @@ -64,7 +68,7 @@ sudo tar Jxf Popcorn-Time-* -C /opt/popcorntime #### 第三步:让所有用户可以使用 Popcorn Time -如果你想要系统中所有的用户无需经过 sudo 就可以运行 Popcorn Time。你需要在 /usr/bin 目录下创建一个[符号链接(软链接)][9]指向这个可执行文件。 +如果你想要系统中所有的用户无需经过 `sudo` 就可以运行 Popcorn Time。你需要在 `/usr/bin` 目录下创建一个[符号链接(软链接)][9]指向这个可执行文件。 ``` ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time @@ -76,13 +80,13 @@ ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time 为此,你需要创建一个桌面入口。 -打开一个终端窗口,在 /usr/share/applications 目录下创建一个名为 popcorntime.desktop 的文件。 +打开一个终端窗口,在 `/usr/share/applications` 目录下创建一个名为 `popcorntime.desktop` 的文件。 你可以使用任何[基于命令行的文本编辑器][10]。Ubuntu 默认安装了 [Nano][11],所以你可以直接使用这个。 ``` sudo nano /usr/share/applications/popcorntime.desktop -``` +``` 
在里面插入以下内容: @@ -95,11 +99,11 @@ Name = Popcorn-Time Exec = /usr/bin/Popcorn-Time Icon = /opt/popcorntime/popcorn.png Categories = Application; -``` +``` -如果你使用的是 Nano 编辑器,使用 Ctrl+X 保存输入的内容,当询问是否保存时,输入 Y,然后按回车保存并退出。 +如果你使用的是 Nano 编辑器,使用 `Ctrl+X` 保存输入的内容,当询问是否保存时,输入 `Y`,然后按回车保存并退出。 -就快要完成了。最后一件事就是为 Popcorn Time 设置一个正确的图标。你可以下载一个 Popcorn Time 图标到 /opt/popcorntime 目录下,并命名为 popcorn.png。 +就快要完成了。最后一件事就是为 Popcorn Time 设置一个正确的图标。你可以下载一个 Popcorn Time 图标到 `/opt/popcorntime` 目录下,并命名为 `popcorn.png`。 你可以使用以下命令: @@ -109,13 +113,15 @@ sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia 这样就 OK 了。现在你可以搜索 Popcorn Time 然后点击启动它了。 -![Popcorn Time installed on Ubuntu][12] -在菜单里搜索 Popcorn Time +![Popcorn Time installed on Ubuntu][12] + +*在菜单里搜索 Popcorn Time* 第一次启动时,你必须接受这些条款和条件。 ![Popcorn Time in Ubuntu][13] -接受这些服务条款 + +*接受这些服务条款* 一旦你完成这些,你就可以享受你的电影和电视节目了。 @@ -123,22 +129,17 @@ sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia 好了,这就是所有你在 Ubuntu 或者其他 Linux 发行版上安装 Popcorn Time 所需要的了。你可以直接开始看你最喜欢的影视节目了。 -当然,如果你有兴趣的话,我建议你阅读以下关于 Popcorn Time 的小贴士,可以学到更多。 - -[![][15]][16] -![][17] - ### 高效使用 Popcorn Time 的七个小贴士 现在你已经安装好了 Popcorn Time 了,我接下来将要告诉你一些有用的 Popcorn Time 技巧。我保证它会增强你使用 Popcorn Time 的体验。 -#### 1\. 使用高级设置 +#### 1、 使用高级设置 始终启用高级设置。它给了你更多的选项去调整 Popcorn Time 点击右上角的齿轮标记。查看其中的高级设置。 ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tricks.jpeg) -#### 2\. 在 VLC 或者其他播放器里观看影片 +#### 2、 在 VLC 或者其他播放器里观看影片 你知道你可以选择自己喜欢的播放器而不是 Popcorn Time 默认的播放器观看一个视频吗?当然,这个播放器必须已经安装在你的系统上了。 @@ -148,29 +149,29 @@ sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks_1.png) -#### 3\. 
将影片标记为稍后观看 +#### 3、 将影片标记为稍后观看 只是浏览电影和电视节目,但是却没有时间和精力去看?这不是问题。你可以添加这些影片到书签里面,稍后可以在 Faveriate 标签里面访问这些影片。这可以让你创建一个你想要稍后观看的列表。 ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks2.png) -#### 4\. 检查 torrent 的信息和种子信息 +#### 4、 检查 torrent 的信息和种子信息 像我之前提到的,你在 Popcorn Time 的观看体验依赖于 torrent 的速度。好消息是 Popcorn Time 显示了 torrent 的信息,因此你可以知道流媒体的速度。 -你可以在文件上看到一个绿色 / 黄色 / 红色的点。绿色意味着有足够的种子,文件很容易播放。黄色意味着有中等数量的种子,应该可以播放。红色意味着只有非常少可用的种子,播放的速度会很慢甚至无法观看。 +你可以在文件上看到一个绿色/黄色/红色的点。绿色意味着有足够的种子,文件很容易播放。黄色意味着有中等数量的种子,应该可以播放。红色意味着只有非常少可用的种子,播放的速度会很慢甚至无法观看。 ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks3.jpg) -#### 5\. 添加自定义字幕 +#### 5、 添加自定义字幕 如果你需要字幕而且它没有你想要的语言,你可以从外部网站下载自定义字幕。得到 .src 文件,然后就可以在 Popcorn Time 中使用它: ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocporn_Time_Tricks5.png) -这是[用 VLC 自动下载字幕][19] +你可以[用 VLC 自动下载字幕][19]。 -#### 6\. 保存文件离线观看 +#### 6、 保存文件离线观看 用 Popcorn Time 播放内容时,它会下载并暂时存储这些内容。当你关闭 APP 时,缓存会被清理干净。你可以更改这个操作,使得下载的文件可以保存下来供你未来使用。 @@ -178,7 +179,7 @@ sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia ![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tips.jpg) -#### 7\. 
拖放外部 torrent 文件立即播放 +#### 7、 拖放外部 torrent 文件立即播放 我猜你不知道这个操作。如果你没有在 Popcorn Time 发现某些影片,从你最喜欢的 torrent 网站下载 torrent 文件,打开 Popcorn Time,然后拖放这个 torrent 文件到 Popcorn Time 里面。它将会立即播放文件,当然这个取决于种子。这次你不需要在观看前下载整个文件了。 @@ -188,10 +189,7 @@ sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia 在 Popcorn Time 里面有很多的功能,但是我决定就此打住,剩下的就由你自己来探索吧。我希望你能发现更多 Popcorn Time 有用的功能和技巧。 -我再提醒一遍,使用 Torrents 在很多国家是违法的。如果你还是这样做了,请做好防护措施,并使用 VPN 服务。如果你想要我的建议,你可以去看一下(让 [ProtonMail][21] 成名的)[瑞士的隐私公司 ProtonVPN][20]。新加坡的 [Ivacy][7] 也是一个不错的选择。如果你觉得这些都太贵了,你可以看一下[在 FOSS SHOP 上廉价的 VPN][22] - -注意:这篇文章里包含了会员链接,请阅读我们的[会员隐私][23]。 - +我再提醒一遍,使用 Torrents 在很多国家是违法的。 ----------------------------------- @@ -200,7 +198,7 @@ via: https://itsfoss.com/popcorn-time-ubuntu-linux/ 作者:[Abhishek Prakash][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f414f68dd6696cde10caffbb8d4ed3888166b99b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 07:40:26 +0800 Subject: [PATCH 217/736] PUB:20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md @dianbanjiu https://linux.cn/article-10081-1.html --- ... 
Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md (100%) diff --git a/translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md b/published/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md similarity index 100% rename from translated/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md rename to published/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md From 989d865cc3788bc4a24c210df8a19ae7bdc72305 Mon Sep 17 00:00:00 2001 From: pityonline Date: Tue, 2 Oct 2018 23:24:38 +0800 Subject: [PATCH 218/736] =?UTF-8?q?PRF:=20#10302=20=E5=88=9D=E6=AD=A5?= =?UTF-8?q?=E6=A0=A1=E5=AF=B9=E5=B9=B6=E8=B0=83=E6=95=B4=E6=A0=BC=E5=BC=8F?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e SSH Key-based Authentication In Linux.md | 57 ++++++++++--------- 1 file changed, 29 insertions(+), 28 deletions(-) diff --git a/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md b/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md index 5c69d6a92b..dc34038a6e 100644 --- a/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md +++ b/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md @@ -1,33 +1,35 @@ -如何在 Linux 中配置基于密钥认证的 SSH +如何在 Linux 中配置基于密钥认证的 SSH ====== ![](https://www.ostechnix.com/wp-content/uploads/2017/01/Configure-SSH-Key-based-Authentication-In-Linux-720x340.png) -### 什么是基于 SSH密钥的认证? +### 什么是基于 SSH 密钥的认证? 
-众所周知,**Secure Shell**,又称 **SSH**,是允许你通过无安全网络(例如 Internet)和远程系统之间安全访问/通信的加密网络协议。无论何时使用 SSH 在无安全网络上发送数据,它都会在源系统上自动地被加密,并且在目的系统上解密。SSH 提供了四种加密方式,**基于密码认证**,**基于密钥认证**,**基于主机认证**和**键盘认证**。最常用的认证方式是基于密码认证和基于密钥认证。 +众所周知,**Secure Shell**,又称 **SSH**,是允许你通过无安全网络(例如 Internet)和远程系统之间安全访问/通信的加密网络协议。无论何时使用 SSH 在无安全网络上发送数据,它都会在源系统上自动地被加密,并且在目的系统上解密。SSH 提供了四种加密方式,**基于密码认证**,**基于密钥认证**,**基于主机认证**和**键盘认证**。最常用的认证方式是基于密码认证和基于密钥认证。 -在基于密码认证中,你需要的仅仅是远程系统上用户的密码。如果你知道远程用户的密码,你可以使用**“ssh[[email protected]][1]”**访问各自的系统。另一方面,在基于密钥认证中,为了通过 SSH 通信,你需要生成 SSH 密钥对,并且为远程系统上传 SSH 公钥。每个 SSH 密钥对由私钥与公钥组成。私钥应该保存在客户系统上,公钥应该上传给远程系统。你不应该将私钥透露给任何人。希望你已经对 SSH 和它的认证方式有了基本的概念。 +在基于密码认证中,你需要的仅仅是远程系统上用户的密码。如果你知道远程用户的密码,你可以使用 `ssh user@remote-system-name` 访问各自的系统。另一方面,在基于密钥认证中,为了通过 SSH 通信,你需要生成 SSH 密钥对,并且为远程系统上传 SSH 公钥。每个 SSH 密钥对由私钥与公钥组成。私钥应该保存在客户系统上,公钥应该上传给远程系统。你不应该将私钥透露给任何人。希望你已经对 SSH 和它的认证方式有了基本的概念。 -这篇教程,我们将讨论如何在 linux 上配置基于密钥认证的 SSH。 +这篇教程,我们将讨论如何在 Linux 上配置基于密钥认证的 SSH。 -### 在 Linux 上配置基于密钥认证的SSH +### 在 Linux 上配置基于密钥认证的 SSH 为本篇教程起见,我将使用 Arch Linux 为本地系统,Ubuntu 18.04 LTS 为远程系统。 本地系统详情: - * **OS** : Arch Linux Desktop - * **IP address** : 192.168.225.37 /24 + +* OS: Arch Linux Desktop +* IP address: 192.168.225.37/24 远程系统详情: - * **OS** : Ubuntu 18.04 LTS Server - * **IP address** : 192.168.225.22/24 + +* OS: Ubuntu 18.04 LTS Server +* IP address: 192.168.225.22/24 ### 本地系统配置 -就像我之前所说,在基于密钥认证的方法中,想要通过 SSH 访问远程系统,就应该将公钥上传给它。公钥通常会被保存在远程系统的一个文件**~/.ssh/authorized_keys** 中。 +就像我之前所说,在基于密钥认证的方法中,想要通过 SSH 访问远程系统,就应该将公钥上传给它。公钥通常会被保存在远程系统的一个文件 `~/.ssh/authorized_keys` 中。 -**注意事项:**不要使用**root** 用户生成密钥对,这样只有 root 用户才可以使用。使用普通用户创建密钥对。 +**注意事项**:不要使用 **root** 用户生成密钥对,这样只有 root 用户才可以使用。使用普通用户创建密钥对。 现在,让我们在本地系统上创建一个 SSH 密钥对。只需要在客户端系统上运行下面的命令。 @@ -37,7 +39,7 @@ $ ssh-keygen 上面的命令将会创建一个 2048 位的 RSA 密钥对。输入两次密码。更重要的是,记住你的密码。后面将会用到它。 -**样例输出** +样例输出: ``` Generating public/private rsa key pair. 
@@ -62,16 +64,16 @@ The key's randomart image is: +----[SHA256]-----+ ``` -如果你已经创建了密钥对,你将看到以下信息。输入 ‘y’ 就会覆盖已存在的密钥。 +如果你已经创建了密钥对,你将看到以下信息。输入 y 就会覆盖已存在的密钥。 ``` /home/username/.ssh/id_rsa already exists. Overwrite (y/n)? ``` -请注意**密码是可选的**。如果你输入了密码,那么每次通过 SSH 访问远程系统时都要求输入密码,除非你使用了 SSH 代理保存了密码。如果你不想要密码(虽然不安全),简单地输入两次 ENTER。不过,我们建议你使用密码。从安全的角度来看,使用无密码的 ssh 密钥对大体上不是一个很好的主意。 这种方式应该限定在特殊的情况下使用,例如,没有用户介入的服务访问远程系统。(例如,用 rsync 远程备份...) +请注意**密码是可选的**。如果你输入了密码,那么每次通过 SSH 访问远程系统时都要求输入密码,除非你使用了 SSH 代理保存了密码。如果你不想要密码(虽然不安全),简单地输入两次 ENTER。不过,我们建议你使用密码。从安全的角度来看,使用无密码的 ssh 密钥对大体上不是一个很好的主意。这种方式应该限定在特殊的情况下使用,例如,没有用户介入的服务访问远程系统。(例如,用 rsync 远程备份……) -如果你已经在个人文件 **~/.ssh/id_rsa** 中有了无密码的密钥对,但想要更新为带密码的密钥。使用下面的命令: +如果你已经在个人文件 `~/.ssh/id_rsa` 中有了无密码的密钥对,但想要更新为带密码的密钥。使用下面的命令: ``` $ ssh-keygen -p -f ~/.ssh/id_rsa @@ -91,9 +93,9 @@ Your identification has been saved with the new passphrase. $ ssh-copy-id sk@192.168.225.22 ``` -在这,我把本地(Arch Linux)系统上的公钥拷贝到了远程系统(Ubuntu 18.04 LTS)上。从技术上讲,上面的命令会把本地系统 **~/.ssh/id_rsa.pub key** 文件中的内容拷贝到远程系统**~/.ssh/authorized_keys** 中。明白了吗?非常棒。 +在这,我把本地(Arch Linux)系统上的公钥拷贝到了远程系统(Ubuntu 18.04 LTS)上。从技术上讲,上面的命令会把本地系统 `~/.ssh/id_rsa.pub` 文件中的内容拷贝到远程系统 `~/.ssh/authorized_keys` 中。明白了吗?非常棒。 -输入 **yes** 来继续连接你的远程 SSH 服务端。接着,输入远程系统 root 用户的密码。 +输入 yes 来继续连接你的远程 SSH 服务端。接着,输入远程系统 root 用户的密码。 ``` /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed @@ -106,7 +108,7 @@ Now try logging into the machine, with: "ssh '[email protected]'" and check to make sure that only the key(s) you wanted were added. 
``` -如果你已经拷贝了密钥,但想要替换为新的密码,使用 **-f** 选项覆盖已有的密钥。 +如果你已经拷贝了密钥,但想要替换为新的密码,使用 `-f` 选项覆盖已有的密钥。 ``` $ ssh-copy-id -f sk@192.168.225.22 @@ -118,13 +120,13 @@ $ ssh-copy-id -f sk@192.168.225.22 你需要在 root 或者 sudo 用户下执行下面的命令。 -为了禁用基于密码的认证,你需要在远程系统的控制台上编辑 **/etc/ssh/sshd_config** 配置文件: +为了禁用基于密码的认证,你需要在远程系统的控制台上编辑 `/etc/ssh/sshd_config` 配置文件: ``` $ sudo vi /etc/ssh/sshd_config ``` -找到下面这一行,去掉注释然后将值设为 **no** +找到下面这一行,去掉注释然后将值设为 `no`: ``` PasswordAuthentication no @@ -156,7 +158,7 @@ Last login: Mon Jul 9 09:59:51 2018 from 192.168.225.37 现在,你就能 SSH 你的远程系统了。如你所见,我们已经使用之前 **ssh-keygen** 创建的密码登录进了远程系统的账户,而不是使用账户实际的密码。 -如果你试图从其他客户端系统 ssh (远程系统),你将会得到这条错误信息。比如,我试图通过命令从 CentOS SSH 访问 Ubuntu 系统: +如果你试图从其它客户端系统 ssh(远程系统),你将会得到这条错误信息。比如,我试图通过命令从 CentOS SSH 访问 Ubuntu 系统: **样例输出:** @@ -168,7 +170,7 @@ Warning: Permanently added '192.168.225.22' (ECDSA) to the list of known hosts. Permission denied (publickey). ``` -如你所见,除了 CentOS (译注:根据上文,这里应该是 Arch) 系统外,我不能通过其他任何系统 SSH 访问我的远程系统 Ubuntu 18.04。 +如你所见,除了 CentOS(译注:根据上文,这里应该是 Arch)系统外,我不能通过其它任何系统 SSH 访问我的远程系统 Ubuntu 18.04。 ### 为 SSH 服务端添加更多客户端系统的密钥 @@ -180,7 +182,7 @@ Permission denied (publickey). 
$ ssh-keygen ``` -输入两次密码。现在, ssh 密钥对已经生成了。你需要手动把公钥(不是私钥)拷贝到远程服务端上。 +输入两次密码。现在,ssh 密钥对已经生成了。你需要手动把公钥(不是私钥)拷贝到远程服务端上。 使用命令查看公钥: @@ -194,7 +196,7 @@ $ cat ~/.ssh/id_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt3a9tIeK5rPx9p74/KjEVXa6/OODyRp0QLS/sLp8W6iTxFL+UgALZlupVNgFjvRR5luJ9dLHWwc+d4umavAWz708e6Na9ftEPQtC28rTFsHwmyLKvLkzcGkC5+A0NdbiDZLaK3K3wgq1jzYYKT5k+IaNS6vtrx5LDObcPNPEBDt4vTixQ7GZHrDUUk5586IKeFfwMCWguHveTN7ykmo2EyL2rV7TmYq+eY2ZqqcsoK0fzXMK7iifGXVmuqTkAmZLGZK8a3bPb6VZd7KFum3Ezbu4BXZGp7FVhnOMgau2kYeOH/ItKPzpCAn+dg3NAAziCCxnII9b4nSSGz3mMY4Y7 ostechnix@centosserver ``` -拷贝所有内容(通过 USB 驱动器或者其它任何介质),然后去你的远程服务端的控制台。像下面那样,在 home 下创建文件夹叫做 **ssh**。你需要以 root 身份执行命令。 +拷贝所有内容(通过 USB 驱动器或者其它任何介质),然后去你的远程服务端的控制台。像下面那样,在 `$HOME` 下创建文件夹叫做 `.ssh`。你需要以 root 身份执行命令。 ``` $ mkdir -p ~/.ssh @@ -227,9 +229,8 @@ via: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/ 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[LuuMing](https://github.com/LuuMing) -校对:[校对者ID](https://github.com/校对者ID) +校对:[pityonline](https://github.com/pityonline) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/cdn-cgi/l/email-protection +[a]: https://www.ostechnix.com/author/sk/ From cc71cdf04c0d45d4153dc98b9b971622ee46ee8a Mon Sep 17 00:00:00 2001 From: pityonline Date: Fri, 5 Oct 2018 08:46:53 +0800 Subject: [PATCH 219/736] =?UTF-8?q?PRF:=20#10302=20=E5=AE=8C=E6=88=90?= =?UTF-8?q?=E6=A0=A1=E5=AF=B9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e SSH Key-based Authentication In Linux.md | 49 ++++++++++--------- 1 file changed, 26 insertions(+), 23 deletions(-) diff --git a/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md b/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md index dc34038a6e..77a4e03b35 100644 --- a/translated/tech/20180709 How To 
Configure SSH Key-based Authentication In Linux.md +++ b/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md @@ -13,7 +13,7 @@ ### 在 Linux 上配置基于密钥认证的 SSH -为本篇教程起见,我将使用 Arch Linux 为本地系统,Ubuntu 18.04 LTS 为远程系统。 +为方便演示,我将使用 Arch Linux 为本地系统,Ubuntu 18.04 LTS 为远程系统。 本地系统详情: @@ -27,7 +27,7 @@ ### 本地系统配置 -就像我之前所说,在基于密钥认证的方法中,想要通过 SSH 访问远程系统,就应该将公钥上传给它。公钥通常会被保存在远程系统的一个文件 `~/.ssh/authorized_keys` 中。 +就像我之前所说,在基于密钥认证的方法中,想要通过 SSH 访问远程系统,需要将公钥上传到远程系统。公钥通常会被保存在远程系统的一个 `~/.ssh/authorized_keys` 文件中。 **注意事项**:不要使用 **root** 用户生成密钥对,这样只有 root 用户才可以使用。使用普通用户创建密钥对。 @@ -37,9 +37,9 @@ $ ssh-keygen ``` -上面的命令将会创建一个 2048 位的 RSA 密钥对。输入两次密码。更重要的是,记住你的密码。后面将会用到它。 +上面的命令将会创建一个 2048 位的 RSA 密钥对。你需要输入两次密码。更重要的是,记住你的密码。后面将会用到它。 -样例输出: +**样例输出**: ``` Generating public/private rsa key pair. @@ -71,15 +71,15 @@ The key's randomart image is: Overwrite (y/n)? ``` -请注意**密码是可选的**。如果你输入了密码,那么每次通过 SSH 访问远程系统时都要求输入密码,除非你使用了 SSH 代理保存了密码。如果你不想要密码(虽然不安全),简单地输入两次 ENTER。不过,我们建议你使用密码。从安全的角度来看,使用无密码的 ssh 密钥对大体上不是一个很好的主意。这种方式应该限定在特殊的情况下使用,例如,没有用户介入的服务访问远程系统。(例如,用 rsync 远程备份……) +请注意**密码是可选的**。如果你输入了密码,那么每次通过 SSH 访问远程系统时都要求输入密码,除非你使用了 SSH 代理保存了密码。如果你不想要密码(虽然不安全),简单地敲两次回车。不过,我建议你使用密码。从安全的角度来看,使用无密码的 ssh 密钥对不是什么好主意。这种方式应该限定在特殊的情况下使用,例如,没有用户介入的服务访问远程系统。(例如,用 rsync 远程备份……) -如果你已经在个人文件 `~/.ssh/id_rsa` 中有了无密码的密钥对,但想要更新为带密码的密钥。使用下面的命令: +如果你已经在个人文件 `~/.ssh/id_rsa` 中有了无密码的密钥,但想要更新为带密码的密钥。使用下面的命令: ``` $ ssh-keygen -p -f ~/.ssh/id_rsa ``` -样例输出: +**样例输出**: ``` Enter new passphrase (empty for no passphrase): @@ -93,18 +93,18 @@ Your identification has been saved with the new passphrase. 
$ ssh-copy-id sk@192.168.225.22 ``` -在这,我把本地(Arch Linux)系统上的公钥拷贝到了远程系统(Ubuntu 18.04 LTS)上。从技术上讲,上面的命令会把本地系统 `~/.ssh/id_rsa.pub` 文件中的内容拷贝到远程系统 `~/.ssh/authorized_keys` 中。明白了吗?非常棒。 +在这里,我把本地(Arch Linux)系统上的公钥拷贝到了远程系统(Ubuntu 18.04 LTS)上。从技术上讲,上面的命令会把本地系统 `~/.ssh/id_rsa.pub` 文件中的内容拷贝到远程系统 `~/.ssh/authorized_keys` 中。明白了吗?非常棒。 -输入 yes 来继续连接你的远程 SSH 服务端。接着,输入远程系统 root 用户的密码。 +输入 yes 来继续连接你的远程 SSH 服务端。接着,输入远程系统 sk 用户的密码。 ``` /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys -[email protected]2.168.225.22's password: +sk@192.168.225.22's password: Number of key(s) added: 1 -Now try logging into the machine, with: "ssh '[email protected]'" +Now try logging into the machine, with: "ssh 'sk@192.168.225.22'" and check to make sure that only the key(s) you wanted were added. ``` @@ -114,13 +114,13 @@ and check to make sure that only the key(s) you wanted were added. 
$ ssh-copy-id -f sk@192.168.225.22 ``` -我们现在已经成功地将本地系统的 SSH 公钥添加进了远程系统。现在,让我们在远程系统上完全禁用掉基于密码认证的方式。因为,我们已经配置了密钥认证,因此我们不再需要密码认证了。 +我们现在已经成功地将本地系统的 SSH 公钥添加进了远程系统。现在,让我们在远程系统上完全禁用掉基于密码认证的方式。因为我们已经配置了密钥认证,因此不再需要密码认证了。 ### 在远程系统上禁用基于密码认证的 SSH 你需要在 root 或者 sudo 用户下执行下面的命令。 -为了禁用基于密码的认证,你需要在远程系统的控制台上编辑 `/etc/ssh/sshd_config` 配置文件: +禁用基于密码的认证,你需要在远程系统的终端里编辑 `/etc/ssh/sshd_config` 配置文件: ``` $ sudo vi /etc/ssh/sshd_config @@ -148,19 +148,19 @@ $ ssh sk@192.168.225.22 输入密码。 -**样例输出:** +**样例输出**: ``` Enter passphrase for key '/home/sk/.ssh/id_rsa': Last login: Mon Jul 9 09:59:51 2018 from 192.168.225.37 -[email protected]:~$ +sk@ubuntuserver:~$ ``` -现在,你就能 SSH 你的远程系统了。如你所见,我们已经使用之前 **ssh-keygen** 创建的密码登录进了远程系统的账户,而不是使用账户实际的密码。 +现在,你就能 SSH 你的远程系统了。如你所见,我们已经使用之前 `ssh-keygen` 创建的密码登录进了远程系统的账户,而不是使用当前账户实际的密码。 如果你试图从其它客户端系统 ssh(远程系统),你将会得到这条错误信息。比如,我试图通过命令从 CentOS SSH 访问 Ubuntu 系统: -**样例输出:** +**样例输出**: ``` The authenticity of host '192.168.225.22 (192.168.225.22)' can't be established. @@ -184,19 +184,19 @@ $ ssh-keygen 输入两次密码。现在,ssh 密钥对已经生成了。你需要手动把公钥(不是私钥)拷贝到远程服务端上。 -使用命令查看公钥: +使用以下命令查看公钥: ``` $ cat ~/.ssh/id_rsa.pub ``` -应该会输出如下信息: +应该会输出类似下面的信息: ``` ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt3a9tIeK5rPx9p74/KjEVXa6/OODyRp0QLS/sLp8W6iTxFL+UgALZlupVNgFjvRR5luJ9dLHWwc+d4umavAWz708e6Na9ftEPQtC28rTFsHwmyLKvLkzcGkC5+A0NdbiDZLaK3K3wgq1jzYYKT5k+IaNS6vtrx5LDObcPNPEBDt4vTixQ7GZHrDUUk5586IKeFfwMCWguHveTN7ykmo2EyL2rV7TmYq+eY2ZqqcsoK0fzXMK7iifGXVmuqTkAmZLGZK8a3bPb6VZd7KFum3Ezbu4BXZGp7FVhnOMgau2kYeOH/ItKPzpCAn+dg3NAAziCCxnII9b4nSSGz3mMY4Y7 ostechnix@centosserver ``` -拷贝所有内容(通过 USB 驱动器或者其它任何介质),然后去你的远程服务端的控制台。像下面那样,在 `$HOME` 下创建文件夹叫做 `.ssh`。你需要以 root 身份执行命令。 +拷贝所有内容(通过 USB 驱动器或者其它任何介质),然后去你的远程服务端的终端,像下面那样,在 `$HOME` 下创建文件夹叫做 `.ssh`。你需要以 root 身份执行命令(注:不一定需要 root)。 ``` $ mkdir -p ~/.ssh @@ -210,15 +210,16 @@ echo {Your_public_key_contents_here} >> ~/.ssh/authorized_keys 在远程系统上重启 ssh 服务。现在,你可以在新的客户端上 SSH 远程服务端了。 -如果觉得手动添加 ssh 公钥有些困难,在远程系统上暂时性启用密码认证,使用 “ssh-copy-id“ 命令从本地系统上拷贝密钥,最后关闭密码认证。 +如果觉得手动添加 
ssh 公钥有些困难,在远程系统上暂时性启用密码认证,使用 `ssh-copy-id` 命令从本地系统上拷贝密钥,最后禁用密码认证。 **推荐阅读:** -(译者注:在原文中此处有超链接) +* [SSLH – Share A Same Port For HTTPS And SSH][1] +* [ScanSSH – Fast SSH Server And Open Proxy Scanner][2] 好了,到此为止。基于密钥认证的 SSH 提供了一层防止暴力破解的额外保护。如你所见,配置密钥认证一点也不困难。这是一个非常好的方法让你的 Linux 服务端安全可靠。 -不久我就会带来另一篇有用的文章。到那时,继续关注 OSTechNix。 +不久我会带来另一篇有用的文章。请继续关注 OSTechNix。 干杯! @@ -234,3 +235,5 @@ via: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/sslh-share-port-https-ssh/ +[2]: https://www.ostechnix.com/scanssh-fast-ssh-server-open-proxy-scanner/ From 315910a15fe61752852e84f4fb19cb25efce6521 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Fri, 5 Oct 2018 15:04:15 +0800 Subject: [PATCH 220/736] translated --- ...5 Essential Tools for Linux Development.md | 150 ------------------ ...5 Essential Tools for Linux Development.md | 131 +++++++++++++++ 2 files changed, 131 insertions(+), 150 deletions(-) delete mode 100644 sources/tech/20180803 5 Essential Tools for Linux Development.md create mode 100644 translated/tech/20180803 5 Essential Tools for Linux Development.md diff --git a/sources/tech/20180803 5 Essential Tools for Linux Development.md b/sources/tech/20180803 5 Essential Tools for Linux Development.md deleted file mode 100644 index 7c2ab1f7d5..0000000000 --- a/sources/tech/20180803 5 Essential Tools for Linux Development.md +++ /dev/null @@ -1,150 +0,0 @@ -HankChow translating - -5 Essential Tools for Linux Development -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev-tools.png?itok=kkDNylRg) - -Linux has become a mainstay for many sectors of work, play, and personal life. We depend upon it. With Linux, technology is expanding and evolving faster than anyone could have imagined. 
That means Linux development is also happening at an exponential rate. Because of this, more and more developers will be hopping on board the open source and Linux dev train in the immediate, near, and far-off future. For that, people will need tools. Fortunately, there are a ton of dev tools available for Linux; so many, in fact, that it can be a bit intimidating to figure out precisely what you need (especially if you’re coming from another platform). - -To make that easier, I thought I’d help narrow down the selection a bit for you. But instead of saying you should use Tool X and Tool Y, I’m going to narrow it down to five categories and then offer up an example for each. Just remember, for most categories, there are several available options. And, with that said, let’s get started. - -### Containers - -Let’s face it, in this day and age you need to be working with containers. Not only are they incredibly easy to deploy, they make for great development environments. If you regularly develop for a specific platform, why not do so by creating a container image that includes all of the tools you need to make the process quick and easy. With that image available, you can then develop and roll out numerous instances of whatever software or service you need. - -Using containers for development couldn’t be easier than it is with [Docker][1]. The advantages of using containers (and Docker) are: - - * Consistent development environment. - - * You can trust it will “just work” upon deployment. - - * Makes it easy to build across platforms. - - * Docker images available for all types of development environments and languages. - - * Deploying single containers or container clusters is simple. - - - - -Thanks to [Docker Hub][2], you’ll find images for nearly any platform, development environment, server, service… just about anything you need. 
Using images from Docker Hub means you can skip over the creation of the development environment and go straight to work on developing your app, server, API, or service. - -Docker is easily installable of most every Linux platform. For example: To install Docker on Ubuntu, you only have to open a terminal window and issue the command: -``` -sudo apt-get install docker.io - -``` - -With Docker installed, you’re ready to start pulling down specific images, developing, and deploying (Figure 1). - -![Docker images][4] - -Figure 1: Docker images ready to deploy. - -[Used with permission][5] - -### Version control system - -If you’re working on a large project or with a team on a project, you’re going to need a version control system. Why? Because you need to keep track of your code, where your code is, and have an easy means of making commits and merging code from others. Without such a tool, your projects would be nearly impossible to manage. For Linux users, you cannot beat the ease of use and widespread deployment of [Git][6] and [GitHub][7]. If you’re new to their worlds, Git is the version control system that you install on your local machine and GitHub is the remote repository you use to upload (and then manage) your projects. Git can be installed on most Linux distributions. For example, on a Debian-based system, the install is as simple as: -``` -sudo apt-get install git - -``` - -Once installed, you are ready to start your journey with version control (Figure 2). - -![Git installed][9] - -Figure 2: Git is installed and available for many important tasks. - -[Used with permission][5] - -Github requires you to create an account. You can use it for free for non-commercial projects, or you can pay for commercial project housing (for more information check out the price matrix [here][10]). - -### Text editor - -Let’s face it, developing on Linux would be a bit of a challenge without a text editor. Of course what a text editor is varies, depending upon who you ask. 
One person might say vim, emacs, or nano, whereas another might go full-on GUI with their editor. But since we’re talking development, we need a tool that can meet the needs of the modern day developer. And before I mention a couple of text editors, I will say this: Yes, I know that vim is a serious workhorse for serious developers and, if you know it well vim will meet and exceed all of your needs. However, getting up to speed enough that it won’t be in your way, can be a bit of a hurdle for some developers (especially those new to Linux). Considering my goal is to always help win over new users (and not just preach to an already devout choir), I’m taking the GUI route here. - -As far as text editors are concerned, you cannot go wrong with the likes of [Bluefish][11]. Bluefish can be found in most standard repositories and features project support, multi-threaded support for remote files, search and replace, open files recursively, snippets sidebar, integrates with make, lint, weblint, xmllint, unlimited undo/redo, in-line spell checker, auto-recovery, full screen editing, syntax highlighting (Figure 3), support for numerous languages, and much more. - -![Bluefish][13] - -Figure 3: Bluefish running on Ubuntu Linux 18.04. - -[Used with permission][5] - -### IDE - -Integrated Development Environment (IDE) is a piece of software that includes a comprehensive set of tools that enable a one-stop-shop environment for developing. IDEs not only enable you to code your software, but document and build them as well. There are a number of IDEs for Linux, but one in particular is not only included in the standard repositories it is also very user-friendly and powerful. That tool in question is [Geany][14]. 
Geany features syntax highlighting, code folding, symbol name auto-completion, construct completion/snippets, auto-closing of XML and HTML tags, call tips, many supported filetypes, symbol lists, code navigation, build system to compile and execute your code, simple project management, and a built-in plugin system. - -Geany can be easily installed on your system. For example, on a Debian-based distribution, issue the command: -``` -sudo apt-get install geany - -``` - -Once installed, you’re ready to start using this very powerful tool that includes a user-friendly interface (Figure 4) that has next to no learning curve. - -![Geany][16] - -Figure 4: Geany is ready to serve as your IDE. - -[Used with permission][5] - -### diff tool - -There will be times when you have to compare two files to find where they differ. This could be two different copies of what was the same file (only one compiles and the other doesn’t). When that happens, you don’t want to have to do that manually. Instead, you want to employ the power of tool like [Meld][17]. Meld is a visual diff and merge tool targeted at developers. With Meld you can make short shrift out of discovering the differences between two files. Although you can use a command line diff tool, when efficiency is the name of the game, you can’t beat Meld. - -Meld allows you to open a comparison between to files and it will highlight the differences between each. Meld also allows you to merge comparisons either from the right or the left (as the files are opened side by side - Figure 5). - -![Comparing two files][19] - -Figure 5: Comparing two files with a simple difference. - -[Used with permission][5] - -Meld can be installed from most standard repositories. On a Debian-based system, the installation command is: -``` -sudo apt-get install meld - -``` - -### Working with efficiency - -These five tools not only enable you to get your work done, they help to make it quite a bit more efficient. 
Although there are a ton of developer tools available for Linux, you’re going to want to make sure you have one for each of the above categories (maybe even starting with the suggestions I’ve made). - -Learn more about Linux through the free ["Introduction to Linux" ][20]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-development - -作者:[Jack Wallen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[1]:https://www.docker.com/ -[2]:https://hub.docker.com/ -[3]:/files/images/5devtools1jpg -[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_1.jpg?itok=V1Bsbkg9 (Docker images) -[5]:/licenses/category/used-permission -[6]:https://git-scm.com/ -[7]:https://github.com/ -[8]:/files/images/5devtools2jpg -[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_2.jpg?itok=YJjhe4O6 (Git installed) -[10]:https://github.com/pricing -[11]:http://bluefish.openoffice.nl/index.html -[12]:/files/images/5devtools3jpg -[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_3.jpg?itok=66A7Svme (Bluefish) -[14]:https://www.geany.org/ -[15]:/files/images/5devtools4jpg -[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_4.jpg?itok=jRcA-0ue (Geany) -[17]:http://meldmerge.org/ -[18]:/files/images/5devtools5jpg -[19]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_5.jpg?itok=eLkfM9oZ (Comparing two files) -[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180803 5 Essential Tools for Linux 
Development.md b/translated/tech/20180803 5 Essential Tools for Linux Development.md new file mode 100644 index 0000000000..dcb3b3b63e --- /dev/null +++ b/translated/tech/20180803 5 Essential Tools for Linux Development.md @@ -0,0 +1,131 @@ +Linux 开发的五大必备工具 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev-tools.png?itok=kkDNylRg) + +Linux 已经成为工作、娱乐和个人生活等多个领域的支柱,人们已经越来越离不开它。在 Linux 的帮助下,技术的发展速度超出了人们的想象,Linux 开发的速度也以指数规模增长。因此,越来越多的开发者也不断地加入开源和学习 Linux 开发地潮流当中。在这个过程之中,合适的工具是必不可少的,可喜的是,随着 Linux 的发展,大量适用于 Linux 的开发工具也不断成熟。甚至可以说,这样的工具已经多得有点惊人。 + +为了选择更合适自己的开发工具,缩小选择范围是很必要的。但是这篇文章并不会要求你必须使用某个工具,而只是缩小到五个工具类别,然后对每个类别提供一个例子。然而,对于大多数类别,都会有不止一种选择。下面我们来看一下。 + +### 容器 + +放眼于现实,现在已经是容器的时代了。容器既容易进行部署,又可以方便地构建开发环境。如果你针对的是特定的平台的开发,将开发流程所需要的各种工具都创建到容器映像中是一种很好的方法,只要使用这一个容器映像,就能够快速启动大量运行所需服务的实例。 + +一个使用容器的最佳范例是使用 [Docker][1],使用容器(或 Docker)有这些好处: + + * 开发环境保持一致 + + * 部署后即可运行 + + * 易于跨平台部署 + + * Docker 映像适用于多种开发环境和语言 + + * 部署单个容器或容器集群都并不繁琐 + + + +通过 [Docker Hub][2],几乎可以找到适用于任何平台、任何开发环境、任何服务器,任何服务的映像,几乎可以满足任何一种需求。使用 Docker Hub 中的映像,就相当于免除了搭建开发环境的步骤,可以直接开始开发应用程序、服务器、API 或服务。 + +Docker 在所有 Linux 平台上都很容易安装,例如可以通过终端输入以下命令在 Ubuntu 上安装 Docker: +``` +sudo apt-get install docker.io + +``` + +Docker 安装完毕后,就可以从 Docker 仓库中拉取映像,然后开始开发和部署了(如下图)。 + +![Docker images][4] + + + +### 版本控制工具 + +如果你正在开发一个巨大的项目,又或者参与团队开发,版本控制工具是必不可少的,它可以用于记录代码变更、提交代码以及合并代码。如果没有这样的工具,项目几乎无法妥善管理。在 Linux 系统上,[Git][6] 和 [GitHub][7] 的易用性和流行程度是其它版本控制工具无法比拟的。如果你对 Git 和 GitHub 还不太熟悉,可以简单理解为 Git 是在本地计算机上安装的版本控制系统,而 GitHub 则是用于上传和管理项目的远程存储库。 Git 可以安装在大多数的 Linux 发行版上。例如在基于 Debian 的系统上,只需要通过以下这一条简单的命令就可以安装: +``` +sudo apt-get install git + +``` + +安装完毕后,就可以使用 Git 来实施版本控制了(如下图)。 + +![Git installed][9] + + + +Github 会要求用户创建一个帐户。用户可以免费使用 GitHub 来管理非商用项目,当然也可以使用 GitHub 的付费模式(更多相关信息,可以参阅[价格矩阵][10])。 + +### 文本编辑器 + +如果没有文本编辑器,在 Linux 上开发将会变得异常艰难。当然,文本编辑器之间孰优孰劣,具体还是要取决于开发者的需求。对于文本编辑器,有人可能会使用 vim、emacs 或 nano,也有人会使用带有 GUI 的编辑器。但由于重点在于开发,我们需要的是一种能够满足开发人员需求的工具。不过我首先要说,vim 对于开发人员来说确实是一个利器,但前提是要对 vim 
非常熟悉,在这种前提下,vim 能够满足你的所有需求,甚至还能给你更好的体验。然而,对于一些开发者(尤其是刚开始接触 Linux 的新手)来说,这不仅难以帮助他们快速达成需求,甚至还会是一个需要逾越的障碍。考虑到这篇文章的目标是帮助 Linux 的新手(而不仅仅是为各种编辑器的死忠粉宣传他们拥护的编辑器),我更倾向于使用 GUI 编辑器。 + +就文本编辑器而论,选择 [Bluefish][11] 一般不会有错。 Bluefish 可以从大部分软件库中安装,它支持项目管理、远程文件多线程操作、搜索和替换、递归打开文件、侧边栏、集成 make/lint/weblint/xmllint、无限制撤销/重做、在线拼写检查、自动恢复、全屏编辑、语法高亮(如下图)、多种语言等等。 + +![Bluefish][13] + + + +### IDE + +集成开发环境(Integrated Development Environment, IDE)是包含一整套全面的工具、可以实现一站式功能的开发环境。 开发者除了可以使用 IDE 编写代码,还可以编写文档和构建软件。在 Linux 上也有很多适用的 IDE,其中 [Geany][14] 就包含在标准软件库中,它对用户非常友好,功能也相当强大。 Geany 具有语法高亮、代码折叠、自动完成,构建代码片段、自动关闭 XML 和 HTML 标签、调用提示、支持多种文件类型、符号列表、代码导航、构建编译,简单的项目管理和内置的插件系统等强大功能。 + +Geany 也能在系统上轻松安装,例如执行以下命令在基于 Debian 的 Linux 发行版上安装 Geany: +``` +sudo apt-get install geany + +``` + +安装完毕后,就可以快速上手这个易用且强大的 IDE 了(如下图)。 + +![Geany][16] + + + +### 文本比较工具 + +有时候会需要比较两个文件的内容来找到它们之间的不同之处,它们可能是同一文件的两个不同副本(有一个经过编译,而另一个没有)。这种情况下,你肯定不想要凭借肉眼来找出差异,而是想要使用像 [Meld][17] 这样的工具。 Meld 是针对开发者的文本比较和合并工具,可以使用 Meld 来发现两个文件之间的差异。虽然你可以使用命令行中的文本比较工具,但就效率而论,Meld 无疑更为优秀。 + +Meld 可以打开两个文件进行比较,并突出显示文件之间的差异之处。 Meld 还允许用户从两个文件的其中一方合并差异(下图显示了 Meld 同时打开两个文件)。 + +![Comparing two files][19] + + + +Meld 也可以通过标准软件如安装,在基于 Debian 的系统上,执行以下命令就可以安装: +``` +sudo apt-get install meld + +``` + +### 高效地工作 + +以上提到的五个工具除了帮助你完成工作,而且有助于提高效率。尽管适用于 Linux 开发者的工具有很多,但对于以上几个类别,你最好分别使用一个对应的工具。 + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-development + +作者:[Jack Wallen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/jlwallen +[1]:https://www.docker.com/ +[2]:https://hub.docker.com/ +[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_1.jpg?itok=V1Bsbkg9 "Docker images" +[6]:https://git-scm.com/ 
+[7]:https://github.com/ +[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_2.jpg?itok=YJjhe4O6 "Git installed" +[10]:https://github.com/pricing +[11]:http://bluefish.openoffice.nl/index.html +[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_3.jpg?itok=66A7Svme "Bluefish" +[14]:https://www.geany.org/ +[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_4.jpg?itok=jRcA-0ue "Geany" +[17]:http://meldmerge.org/ +[19]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_5.jpg?itok=eLkfM9oZ "Comparing two files" +[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux + From c4301766e81593987574dc6e5bb2066c73e80ba7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 18:14:50 +0800 Subject: [PATCH 221/736] PRF:20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @thecyanbird 恭喜您,完成了第一篇翻译贡献! 
--- ...tion Of Defunct OSs, Software And Games.md | 133 +++++++++++------- 1 file changed, 82 insertions(+), 51 deletions(-) diff --git a/translated/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md b/translated/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md index a0c51974b9..02bf0bdf9e 100644 --- a/translated/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md +++ b/translated/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md @@ -1,23 +1,31 @@ -WinWorld -- 大型废弃操作系统,软件,游戏合集网站 +WinWorld:大型的废弃操作系统、软件、游戏的博物馆 ===== + ![](https://www.ostechnix.com/wp-content/uploads/2018/09/WinWorld-720x340.jpeg) -有一天,我正在测试 **Dosbox** -- 一个[**在 Linux 平台上运行 MS-DOS 游戏与程序的软件**][1]。当我在搜索一些常用的软件,例如 Turbo C++ 时,我意外留意到了一个叫做 **WinWorld** 的网站。我查看了这个网站上的某些内容,并且着实被惊艳到了。WinWorld 收集了非常多经典的,但已经被它们的开发者所抛弃许久的操作系统,软件,应用,开发工具,游戏以及各式各样的工具。它是一个以保存和分享古老,已经被废弃的或者预发布版本程序为目的的线上博物馆,由社区成员和志愿者运营。 -WinWorld 于 2013 年开始运营。它的创始者声称是被 Yahoo birefcases 激发了灵感并以此构建了这个网站。这个网站原目标是保存并且分享古旧软件。经过许多以不同方式提供帮助的志愿者以及多年的运营,WinWorld 得以迅猛发展。作为一个非盈利网站,WinWorld 的所有内容都免费开放。 -###WinWorld 保存了大量的废弃操作系统,软件,系统应用以及游戏 + +有一天,我正在测试 Dosbox -- 这是一个[在 Linux 平台上运行 MS-DOS 游戏与程序的软件][1]。当我在搜索一些常用的软件,例如 Turbo C++ 时,我意外留意到了一个叫做 [WinWorld][2] 的网站。我查看了这个网站上的某些内容,并且着实被惊艳到了。WinWorld 收集了非常多经典的,但已经被它们的开发者所抛弃许久的操作系统、软件、应用、开发工具、游戏以及各式各样的工具。它是一个以保存和分享古老的、已经被废弃的或者预发布版本程序为目的的线上博物馆,由社区成员和志愿者运营。 + +WinWorld 于 2013 年开始运营。它的创始者声称是被 Yahoo birefcases 激发了灵感并以此构建了这个网站。这个网站原目标是保存并且分享老旧软件。多年来,许多志愿者以不同方式提供了帮助,WinWorld 收集的老旧软件增长迅速。整个 WinWorld 仓库都是自由开源的,所有人都可以使用。 + +### WinWorld 保存了大量的废弃操作系统、软件、系统应用以及游戏 + 就像我刚才说的那样, WinWorld 存储了大量的被抛弃并且不再被开发的软件。 -**Linux 与 Unix:** -这里提供了完整的 UNIX 和 LINUX 操作系统列表以及它们各自的简要介绍,首次发行的年代。 -* **A/UX** - 于 1988 年推出,移植到 68k Macintosh 平台的 Unix 系统。 + +**Linux 与 Unix:** + +这里我给出了完整的 UNIX 和 LINUX 操作系统的列表,以及它们各自的简要介绍、首次发行的年代。 + +* **A/UX** - 于 1988 年推出,移植到苹果的 68k Macintosh 平台的 Unix 系统。 * **AIX** - 于 1986 年推出,IBM 移植的 Unix 系统。 * **AT 
&T System V Unix** - 于 1983 年推出,最早的商业版 Unix 之一。 * **Banyan VINES** - 于 1984 年推出,专为 Unix 设计的网络操作系统。 * **Corel Linux** - 于 1999 年推出,商业 Linux 发行版。 -* **DEC OSF-1** - 于 1991 年推出,由迪吉多公司(DEC)开发的 Unix 版本。 +* **DEC OSF-1** - 于 1991 年推出,由 DEC 公司开发的 Unix 版本。 * **Digital UNIX** - 由 DEC 于 1995 年推出,**OSF-1** 的重命名版本。 -* **FreeBSD 1.0** - 于 1993 年推出,FreeBSD 的首次发行版。这个系统是基于 4.3BSD 开发的。 +* **FreeBSD 1.0** - 于 1993 年推出,FreeBSD 的首个发行版。这个系统是基于 4.3BSD 开发的。 * **Gentus Linux** - 由 ABIT 于 2000 年推出,未遵守 GPL 协议的 Linux 发行版。 * **HP-UX** - 于 1992 年推出,UNIX 的变种系统。 -* **IRIX** - 由硅谷图形公司(SGI)于1988年推出的操作系统。 +* **IRIX** - 由硅谷图形公司(SGI)于 1988 年推出的操作系统。 * **Lindows** - 于 2002 年推出,与 Corel Linux 类似的商业操作系统。 * **Linux Kernel** - 0.01 版本于 90 年代早期推出,Linux 源代码的副本。 * **Mandrake Linux** - 于 1999 年推出。基于 Red Hat Linux 的 Linux 发行版,稍后被重新命名为 Mandriva。 @@ -28,57 +36,80 @@ WinWorld 于 2013 年开始运营。它的创始者声称是被 Yahoo birefcases * **Sun Solaris** - 由 Sun Microsystem 于 1992 年推出,基于 Unix 的操作系统。 * **SunOS** - 由 Sun Microsystem 于 1982 年推出,衍生自 BSD 基于 Unix 的操作系统。 * **Tru64 UNIX** - 由 DEC 开发,旧称 OSF/1。 -* **Ubuntu 4.10** - 基于 Debian 的知名操作系统。这是早期的 beta 预发布版本,较早期 Ubuntu 正式发行版更早推出。 +* **Ubuntu 4.10** - 基于 Debian 的知名操作系统。这是早期的 beta 预发布版本,比第一个 Ubuntu 正式发行版更早推出。 * **Ultrix** - 由 DEC 开发, UNIX 克隆。 * **UnixWare** - 由 Novell 推出, UNIX 变种。 * **Xandros Linux** - 首个版本于 2003 年推出。基于 Corel Linux 的专有 Linux 发行版。 -* **Xenix** - 最初由 Microsoft 于 1984 推出, UNIX 变种操作系统。 -不仅仅是 Linux/Unix,你还能找到例如 DOS,Windows,Apple/Mac,OS 2,Novell netware等其他的操作系统与 shell。 -**DOS & CP/M:** - *86-DOS - *Concurrent CPM-86 & Concurrent DOS - *CP/M 86 & CP/M-80 - *DOS Plus - *DR-DOS - *GEM - *MP/M - *MS-DOS - *Multitasking MS-DOS 4.00 - *Multiuser DOS - *PC-DOS - *PC-MOS - *PTS-DOS - *Real/32 - *Tandy Deskmate - *Wendin DOS -**Windows:** - *BackOffice Server - *Windows 1.0/2.x/3.0/3.1/95/98/2000/ME/NT 3.X/NT 4.0 - *Windows Whistler - *WinFrame -**Apple/Mac:** - *Mac OS 7/8/9 - *Mac OS X - *System Software (0-6) -**OS/2:** - *Citrix Multiuser - *OS/2 1.x - *OS/2 2.0 - *OS/2 3.x - *OS/2 
Warp 4 -于此同时,WinWorld 也收集了大量的旧软件,系统应用,开发工具和游戏。您在访问网站的时候也可以同时查看它们。 -说实话,我甚至不知道这个网站列出的绝大部分东西,我甚至不知道它们存在过。其中列出的某些工具发布于我出生之前。 -如果您需要或者打算去测试一个经典的程序(例如游戏,软件,操作系统),并且在其他地方找不到它们,那么来 WinWorld 资源库看看,下载它们然后开始你的探险吧。祝您好运! -免责声明: -OSTechNix 并非隶属于 WinWorld。我们,OSTechNix,并不确保 WinWorld 站点存储数据的真实性与可靠性。而且在你所在的地区,或许从第三方站点下载软件是违法行为。本篇文章作者和 OSTechNix 都不会承担任何责任,使用此服务意味着您将自行承担风险。 +* **Xenix** - 最初由微软于 1984 推出,UNIX 变种操作系统。 + +不仅仅是 Linux/Unix,你还能找到例如 DOS、Windows、Apple/Mac、OS 2、Novell netware 等其他的操作系统与 shell。 + +**DOS & CP/M:** + +* 86-DOS +* Concurrent CPM-86 & Concurrent DOS +* CP/M 86 & CP/M-80 +* DOS Plus +* DR-DOS +* GEM +* MP/M +* MS-DOS +* 多任务的 MS-DOS 4.00 +* 多用户 DOS +* PC-DOS +* PC-MOS +* PTS-DOS +* Real/32 +* Tandy Deskmate +* Wendin DOS + +**Windows:** + +* BackOffice Server +* Windows 1.0/2.x/3.0/3.1/95/98/2000/ME/NT 3.X/NT 4.0 +* Windows Whistler +* WinFrame + +**Apple/Mac:** + +* Mac OS 7/8/9 +* Mac OS X +* System Software (0-6) + +**OS/2:** + +* Citrix Multiuser +* OS/2 1.x +* OS/2 2.0 +* OS/2 3.x +* OS/2 Warp 4 + +于此同时,WinWorld 也收集了大量的旧软件、系统应用、开发工具和游戏。你也可以一起看看它们。 + +说实话,这个网站列出的绝大部分东西,我甚至都不知道它们存在过。其中列出的某些工具发布于我出生之前。 + +如果您需要或者打算去测试一个经典的程序(例如游戏、软件、操作系统),并且在其他地方找不到它们,那么来 WinWorld 资源库看看,下载它们然后开始你的探险吧。祝您好运! + +![WinWorld – A Collection Of Defunct OSs, Software, Applications And Games](https://www.ostechnix.com/wp-content/uploads/2018/09/winworld.png) + +**免责声明:** + +OSTechNix 并非隶属于 WinWorld。我们 OSTechNix 并不确保 WinWorld 站点存储数据的真实性与可靠性。而且在你所在的地区,或许从第三方站点下载软件是违法行为。本篇文章作者和 OSTechNix 都不会承担任何责任,使用此服务意味着您将自行承担风险。(LCTT 译注:本站和译者亦同样申明。) + 本篇文章到此为止。希望这对您有用,更多的好文章即将发布,敬请期待! + 谢谢各位的阅读! 
+ -------------------------------------------------------------------------------- via: https://www.ostechnix.com/winworld-a-large-collection-of-defunct-oss-software-and-games/ + 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[thecyanbird](https://github.com/thecyanbird) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) + 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + [a]: https://www.ostechnix.com/author/sk/ [1]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/ +[2]: https://winworldpc.com/library/ From 5388ff4fe6c4003bdbca0217b5cd13fce6e4cf38 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 18:16:11 +0800 Subject: [PATCH 222/736] PUB:20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @thecyanbird 本文首发地址:https://linux.cn/article-10082-1.html 您的 LCTT 专页地址: https://linux.cn/lctt/thecyanbird 请在 LCTT 平台注册领取您的 LCCN : https://lctt.linux.cn/ --- ...rld - A Large Collection Of Defunct OSs, Software And Games.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/talk => published}/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md (100%) diff --git a/translated/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md b/published/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md similarity index 100% rename from translated/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md rename to published/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md From c2f0789430208cc238326643af6197e4041deb28 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 18:40:01 +0800 Subject: [PATCH 223/736] PRF:20180913 ScreenCloud- The Screenshot-- App.md @geekpi --- ...80913 ScreenCloud- The 
Screenshot-- App.md | 27 ++++++++++--------- 1 file changed, 15 insertions(+), 12 deletions(-) diff --git a/translated/tech/20180913 ScreenCloud- The Screenshot-- App.md b/translated/tech/20180913 ScreenCloud- The Screenshot-- App.md index a7002183c3..54a36dd377 100644 --- a/translated/tech/20180913 ScreenCloud- The Screenshot-- App.md +++ b/translated/tech/20180913 ScreenCloud- The Screenshot-- App.md @@ -1,43 +1,46 @@ -ScreenCloud:一个截屏程序 +ScreenCloud:一个增强的截屏程序 ====== -[ScreenCloud][1]是一个很棒的小程序,你甚至不知道你需要它。桌面 Linux 的默认屏幕截图流程很好(Prt Scr 按钮),我们甚至有一些[强大的截图工具][2],如 [Shutter][3]。但是,ScreenCloud 有一个非常简单但非常方便的功能,让我爱上了它。在我们深入它之前,让我们先看一个背景故事。 -我截取了很多截图。远远超过平均水平。收据、注册详细信息、开发工作、文章中程序的截图等等。我接下来要做的就是打开浏览器,浏览我最喜欢的云存储并将重要的内容转储到那里,以便我可以在手机上以及 PC 上的多个操作系统上访问它们。这也让我可以轻松与我的团队分享我正在使用的程序的截图。 +[ScreenCloud][1]是一个很棒的小程序,你甚至不知道你需要它。桌面 Linux 的默认屏幕截图流程很好(`PrtScr` 按钮),我们甚至有一些[强大的截图工具][2],如 [Shutter][3]。但是,ScreenCloud 有一个非常简单但非常方便的功能,让我爱上了它。在我们深入它之前,让我们先看一个背景故事。 + +我截取了很多截图,远超常人。收据、注册详细信息、开发工作、文章中程序的截图等等。我接下来要做的就是打开浏览器,浏览我最喜欢的云存储并将重要的内容转储到那里,以便我可以在手机上以及 PC 上的多个操作系统上访问它们。这也让我可以轻松与我的团队分享我正在使用的程序的截图。 我对这个标准的截图流程没有抱怨,打开浏览器并登录我的云,然后手动上传屏幕截图,直到我遇到 ScreenCloud。 ### ScreenCloud -ScreenCloud 是跨平台的程序,它提供简单的屏幕截图和灵活的[云备份选项][4]管理。这包括使用你自己的[ FTP 服务器][5]。 +ScreenCloud 是跨平台的程序,它提供轻松的屏幕截图功能和灵活的[云备份选项][4]管理。这包括使用你自己的 [FTP 服务器][5]。 ![][6] -ScreenCloud 很精简,投入了大量的注意力给小的东西。它为你提供了非常容易记住的热键来捕获全屏、活动窗口或捕获用鼠标选择的区域。 +ScreenCloud 很顺滑,在细节上投入了大量的精力。它为你提供了非常容易记住的热键来捕获全屏、活动窗口或鼠标选择区域。 -![][7]ScreenCloud 的默认键盘快捷键 +![][7] + +*ScreenCloud 的默认键盘快捷键* 截取屏幕截图后,你可以设置 ScreenCloud 如何处理图像或直接将其上传到你选择的云服务。它甚至支持 SFTP。截图上传后(通常在几秒钟内),图像链接就会被自动复制到剪贴板,这让你可以轻松共享。 ![][8] -你还可以使用 ScreenCloud 进行一些基本编辑。为此,你需要将 “Save to” 设置为 “Ask me”。此设置在下拉框中有并且通常是默认设置。当使用它时,当你截取屏幕截图时,你会看到编辑文件的选项。在这里,你可以在屏幕截图中添加箭头、文本和数字。 +你还可以使用 ScreenCloud 进行一些基本编辑。为此,你需要将 “Save to” 设置为 “Ask me”。此设置在应用图标菜单中有并且通常是默认设置。当使用它时,当你截取屏幕截图时,你会看到编辑文件的选项。在这里,你可以在屏幕截图中添加箭头、文本和数字。 -![Editing screenshots with ScreenCloud][9]Editing screenshots with ScreenCloud +![Editing screenshots with 
ScreenCloud][9] + +*用 ScreenCloud 编辑截屏* ### 在 Linux 上安装 ScreenCloud -ScreenCloud 可在[ Snap 商店][10]中找到。因此,你可以通过访问[ Snap 商店][12]或运行以下命令,轻松地将其安装在 Ubuntu 和其他[启用 Snap ][11]的发行版上。 +ScreenCloud 可在 [Snap 商店][10]中找到。因此,你可以通过访问 [Snap 商店][12]或运行以下命令,轻松地将其安装在 Ubuntu 和其他[启用 Snap][11] 的发行版上。 ``` sudo snap install screencloud - ``` 对于无法通过 Snap 安装程序的 Linux 发行版,你可以[在这里][1]下载 AppImage。进入下载文件夹,右键单击并在那里打开终端。然后运行以下命令。 ``` sudo chmod +x ScreenCloud-v1.4.0-x86_64.AppImage - ``` 然后,你可以通过双击下载的文件来启动程序。 @@ -57,7 +60,7 @@ via: https://itsfoss.com/screencloud-app/ 作者:[Aquil Roshan][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0c380988c192d1c528b3b5cd06493525c7c24ba8 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 18:40:47 +0800 Subject: [PATCH 224/736] PUB:20180913 ScreenCloud- The Screenshot-- App.md @geekpi https://linux.cn/article-10083-1.html --- .../20180913 ScreenCloud- The Screenshot-- App.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180913 ScreenCloud- The Screenshot-- App.md (100%) diff --git a/translated/tech/20180913 ScreenCloud- The Screenshot-- App.md b/published/20180913 ScreenCloud- The Screenshot-- App.md similarity index 100% rename from translated/tech/20180913 ScreenCloud- The Screenshot-- App.md rename to published/20180913 ScreenCloud- The Screenshot-- App.md From eb05f1d1a95145b247789e39ad4e19bdf471c668 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 18:53:14 +0800 Subject: [PATCH 225/736] PRF:20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md @HankChow --- ...k Bandwidth In Linux Using Wondershaper.md | 41 ++++--------------- 1 file changed, 8 insertions(+), 33 deletions(-) diff --git a/translated/tech/20180906 How To Limit Network Bandwidth In Linux Using 
Wondershaper.md b/translated/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md index 746e664228..046777e1be 100644 --- a/translated/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md +++ b/translated/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md @@ -11,101 +11,84 @@ ### 在 Linux 中使用 Wondershaper 限制网络带宽 -`wondershaper` 是用于显示系统网卡网络带宽的简单脚本。它使用了 `iproute` 和 `tc` 命令,但大大简化了操作过程。 +`wondershaper` 是用于显示系统网卡网络带宽的简单脚本。它使用了 iproute 的 `tc` 命令,但大大简化了操作过程。 -**安装 Wondershaper** +#### 安装 Wondershaper 使用 `git clone` 克隆 Wondershaper 的版本库就可以安装最新版本: ``` $ git clone https://github.com/magnific0/wondershaper.git - ``` 按照以下命令进入 `wondershaper` 目录并安装: ``` $ cd wondershaper - $ sudo make install - ``` 然后执行以下命令,可以让 `wondershaper` 在每次系统启动时都自动开始服务: ``` $ sudo systemctl enable wondershaper.service - $ sudo systemctl start wondershaper.service - ``` 如果你不强求安装最新版本,也可以使用软件包管理器(官方和非官方均可)来进行安装。 -`wondershaper` 在 [Arch 用户软件仓库][1](Arch User Repository, AUR)中可用,所以可以使用类似 [`yay`][2] 这些 AUR 辅助软件在基于 Arch 的系统中安装 `wondershaper` 。 +`wondershaper` 在 [Arch 用户软件仓库][1](Arch User Repository,AUR)中可用,所以可以使用类似 [yay][2] 这些 AUR 辅助软件在基于 Arch 的系统中安装 `wondershaper` 。 ``` $ yay -S wondershaper-git - ``` -对于Debian、Ubuntu 和 Linux Mint 可以使用以下命令安装: +对于 Debian、Ubuntu 和 Linux Mint 可以使用以下命令安装: ``` $ sudo apt-get install wondershaper - ``` 对于 Fedora 可以使用以下命令安装: ``` $ sudo dnf install wondershaper - ``` 对于 RHEL、CentOS,只需要启用 EPEL 仓库,就可以使用以下命令安装: ``` $ sudo yum install epel-release - $ sudo yum install wondershaper - ``` 在每次系统启动时都自动启动 `wondershaper` 服务。 ``` $ sudo systemctl enable wondershaper.service - $ sudo systemctl start wondershaper.service - ``` -**用法** +#### 用法 首先需要找到网络接口的名称,通过以下几个命令都可以查询到网卡的详细信息: ``` $ ip addr - $ route - $ ifconfig - ``` 在确定网卡名称以后,就可以按照以下的命令限制网络带宽: ``` $ sudo wondershaper -a -d -u - ``` 例如,如果网卡名称是 `enp0s8`,并且需要把上行、下行速率分别限制为 1024 Kbps 和 512 Kbps,就可以执行以下命令: ``` $ sudo wondershaper -a enp0s8 -d 1024 -u 512 - ``` 其中参数的含义是: @@ -114,20 
+97,16 @@ $ sudo wondershaper -a enp0s8 -d 1024 -u 512 * `-d`:下行带宽 * `-u`:上行带宽 - - 如果要对网卡解除网络带宽的限制,只需要执行: ``` $ sudo wondershaper -c -a enp0s8 - ``` 或者: ``` $ sudo wondershaper -c enp0s8 - ``` 如果系统中有多个网卡,为确保稳妥,需要按照上面的方法手动设置每个网卡的上行、下行速率。 @@ -149,13 +128,14 @@ DSPEED="2048" # Upload rate in Kbps # USPEED="512" - ``` Wondershaper 使用前: + ![](https://www.ostechnix.com/wp-content/uploads/2018/09/wondershaper-1.png) Wondershaper 使用后: + ![](https://www.ostechnix.com/wp-content/uploads/2018/09/wondershaper-2.png) 可以看到,使用 Wondershaper 限制网络带宽之后,下行速率与限制之前相比已经大幅下降。 @@ -164,21 +144,16 @@ Wondershaper 使用后: ``` $ wondershaper -h - ``` 也可以查看 Wondershaper 的用户手册: ``` $ man wondershaper - ``` -As far as tested, Wondershaper worked just fine as described above. Give it a try and let us know what do you think about this utility. 根据测试,Wondershaper 按照上面的方式可以有很好的效果。你可以试用一下,然后发表你的看法。 - - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/ @@ -186,7 +161,7 @@ via: https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-won 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 280755d66e8d5cfe3845c20f5898a7b52a35da3a Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 18:53:36 +0800 Subject: [PATCH 226/736] PUB:20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md @HankChow https://linux.cn/article-10084-1.html --- ... 
How To Limit Network Bandwidth In Linux Using Wondershaper.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md (100%) diff --git a/translated/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md b/published/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md similarity index 100% rename from translated/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md rename to published/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md From ab22ea0328c1485fe1cbb5767625a4d021185f6d Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Fri, 5 Oct 2018 20:12:34 +0800 Subject: [PATCH 227/736] hankchow translating --- ...0180724 75 Most Used Essential Linux Applications of 2018.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md b/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md index 919182ba1f..2b52356068 100644 --- a/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md +++ b/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md @@ -1,3 +1,5 @@ +HankChow translating + 75 Most Used Essential Linux Applications of 2018 ====== From efafc43cc669bd41c24817b20205503a61724438 Mon Sep 17 00:00:00 2001 From: LuMing <784315443@qq.com> Date: Fri, 5 Oct 2018 20:46:43 +0800 Subject: [PATCH 228/736] LuuMing Translating --- ... 
to improve collaboration between developers and designers.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md b/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md index 293841714d..637a54ee91 100644 --- a/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md +++ b/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md @@ -1,3 +1,4 @@ +LuuMing translating 9 ways to improve collaboration between developers and designers ====== From bc3422152f6f71c29712a7ee23e20f8cd103aeb8 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 21:41:30 +0800 Subject: [PATCH 229/736] PRF:20180920 8 Python packages that will simplify your life with Django.md @belitex --- ...hat will simplify your life with Django.md | 49 +++++++++---------- 1 file changed, 24 insertions(+), 25 deletions(-) diff --git a/translated/tech/20180920 8 Python packages that will simplify your life with Django.md b/translated/tech/20180920 8 Python packages that will simplify your life with Django.md index f242007433..8f914f87e0 100644 --- a/translated/tech/20180920 8 Python packages that will simplify your life with Django.md +++ b/translated/tech/20180920 8 Python packages that will simplify your life with Django.md @@ -1,7 +1,7 @@ 简化 Django 开发的八个 Python 包 ====== -这个月的 Python 专栏将介绍一些 Django 包,它们有益于你的工作,以及你的个人或业余项目。 +> 这个月的 Python 专栏将介绍一些 Django 包,它们有益于你的工作,以及你的个人或业余项目。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/water-stone-balance-eight-8.png?itok=1aht_V5V) @@ -11,32 +11,31 @@ Django 开发者们,在这个月的 Python 专栏中,我们会介绍一些 ### 有用又省时的工具集合:django-extensions -[Django-extensions][4] 这个 Django 包非常受欢迎,全是有用的工具,比如下面这些管理命令: +[django-extensions][4] 这个 Django 包非常受欢迎,全是有用的工具,比如下面这些管理命令: - * **shell_plus** 打开 Django 的管理 shell,这个 shell 已经自动导入了所有的数据库模型。在测试复杂的数据关系时,就不需要再从几个不同的应用里做 
import 的操作了。 - * **clean_pyc** 删除项目目录下所有位置的 .pyc 文件 - * **create_template_tags** 在指定的应用下,创建模板标签的目录结构。 - * **describe_form** 输出模型的表单定义,可以粘贴到 forms.py 文件中。(需要注意的是,这种方法创建的是普通 Django 表单,而不是模型表单。) - * **notes** 输出你项目里所有带 TODO,FIXME 等标记的注释。 + * `shell_plus` 打开 Django 的管理 shell,这个 shell 已经自动导入了所有的数据库模型。在测试复杂的数据关系时,就不需要再从几个不同的应用里做导入操作了。 + * `clean_pyc` 删除项目目录下所有位置的 .pyc 文件 + * `create_template_tags` 在指定的应用下,创建模板标签的目录结构。 + * `describe_form` 输出模型的表单定义,可以粘贴到 `forms.py` 文件中。(需要注意的是,这种方法创建的是普通 Django 表单,而不是模型表单。) + * `notes` 输出你项目里所有带 TODO、FIXME 等标记的注释。 Django-extensions 还包括几个有用的抽象基类,在定义模型时,它们能满足常见的模式。当你需要以下模型时,可以继承这些基类: + * `TimeStampedModel`:这个模型的基类包含了 `created` 字段和 `modified` 字段,还有一个 `save()` 方法,在适当的场景下,该方法自动更新 `created` 和 `modified` 字段的值。 + * `ActivatorModel`:如果你的模型需要像 `status`、`activate_date` 和 `deactivate_date` 这样的字段,可以使用这个基类。它还自带了一个启用 `.active()` 和 `.inactive()` 查询集的 manager。 + * `TitleDescriptionModel` 和 `TitleSlugDescriptionModel`:这两个模型包括了 `title` 和 `description` 字段,其中 `description` 字段还包括 `slug`,它根据 `title` 字段自动产生。 - * **TimeStampedModel** : 这个模型的基类包含了 **created** 字段和 **modified** 字段,还有一个 **save()** 方法,在适当的场景下,该方法自动更新 created 和 modified 字段的值。 - * **ActivatorModel** : 如果你的模型需要像 **status**,**activate_date** 和 **deactivate_date** 这样的字段,可以使用这个基类。它还自带了一个启用 **.active()** 和 **.inactive()** 查询集的 manager。 - * **TitleDescriptionModel** 和 **TitleSlugDescriptionModel** : 这两个模型包括了 **title** 和 **description** 字段,其中 description 字段还包括 **slug**,它根据 **title** 字段自动产生。 - -Django-extensions 还有其他更多的功能,也许对你的项目有帮助,所以,去浏览一下它的[文档][5]吧! +django-extensions 还有其他更多的功能,也许对你的项目有帮助,所以,去浏览一下它的[文档][5]吧! 
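django-extensions 的 `TimeStampedModel` 之所以省事,核心就在于它的 `save()` 方法会自动维护 `created` 和 `modified` 两个字段。下面用一段纯 Python 代码示意这一思路(注意:这只是演示用的草图,省略了 Django 的 `models.Model` 与数据库层,`Article` 这个类名也只是假设的例子,并非 django-extensions 的真实实现):

```python
from datetime import datetime, timezone

class TimeStampedModelSketch:
    """纯 Python 示意:模仿 TimeStampedModel 在 save() 中自动维护时间戳的行为。"""

    def __init__(self):
        self.created = None
        self.modified = None

    def save(self):
        now = datetime.now(timezone.utc)
        if self.created is None:
            # 首次保存时写入 created,之后不再改动
            self.created = now
        # 每次保存都刷新 modified
        self.modified = now

class Article(TimeStampedModelSketch):
    def __init__(self, title):
        super().__init__()
        self.title = title

a = Article("Hello Django")
a.save()
first_created = a.created
a.save()
print(a.created == first_created)   # True:created 保持首次保存的值
print(a.modified >= a.created)      # True:modified 随每次 save() 刷新
```

在真实项目中,通常只需让模型继承 django-extensions 提供的 `TimeStampedModel`,这两个字段和相应的保存逻辑就都有了。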
### 12 因子应用的配置:django-environ -在 Django 项目的配置方面,[Django-environ][6] 提供了符合 [12 因子应用][7] 方法论的管理方法。它是其他一些库的集合,包括 [envparse][8] 和 [honcho][9] 等。安装了 django-environ 之后,在项目的根目录创建一个 .env 文件,用这个文件去定义那些随环境不同而不同的变量,或者需要保密的变量。(比如 API keys,是否启用 debug,数据库的 URLs 等) +在 Django 项目的配置方面,[django-environ][6] 提供了符合 [12 因子应用][7] 方法论的管理方法。它是另外一些库的集合,包括 [envparse][8] 和 [honcho][9] 等。安装了 django-environ 之后,在项目的根目录创建一个 `.env` 文件,用这个文件去定义那些随环境不同而不同的变量,或者需要保密的变量。(比如 API 密钥,是否启用调试,数据库的 URL 等) -然后,在项目的 settings.py 中引入 **environ**,并参考[官方文档的例子][10]设置好 **environ.PATH()** 和 **environ.Env()**。就可以通过 **env('VARIABLE_NAME')** 来获取 .env 文件中定义的变量值了。 +然后,在项目的 `settings.py` 中引入 `environ`,并参考[官方文档的例子][10]设置好 `environ.PATH()` 和 `environ.Env()`。就可以通过 `env('VARIABLE_NAME')` 来获取 `.env` 文件中定义的变量值了。 ### 创建出色的管理命令:django-click -[Django-click][11] 是基于 [Click][12] 的, ( 我们[之前推荐过][13]… [两次][14] Click),它对编写 Django 管理命令很有帮助。这个库没有很多文档,但是代码仓库中有个存放[测试命令][15]的目录,非常有参考价值。 Django-click 基本的 Hello World 命令是这样写的: +[django-click][11] 是基于 [Click][12] 的,(我们[之前推荐过][13]… [两次][14] Click),它对编写 Django 管理命令很有帮助。这个库没有很多文档,但是代码仓库中有个存放[测试命令][15]的目录,非常有参考价值。 django-click 基本的 Hello World 命令是这样写的: ``` # app_name.management.commands.hello.py @@ -57,31 +56,31 @@ Hello, Lacey ### 处理有限状态机:django-fsm -[Django-fsm][16] 给 Django 的模型添加了有限状态机的支持。如果你管理一个新闻网站,想用类似于“写作中”,“编辑中”,“已发布”来流转文章的状态,django-fsm 能帮你定义这些状态,还能管理状态变化的规则与限制。 +[django-fsm][16] 给 Django 的模型添加了有限状态机的支持。如果你管理一个新闻网站,想用类似于“写作中”、“编辑中”、“已发布”来流转文章的状态,django-fsm 能帮你定义这些状态,还能管理状态变化的规则与限制。 -Django-fsm 为模型提供了 FSMField 字段,用来定义模型实例的状态。用 django-fsm 的 **@transition** 修饰符,可以定义状态变化的方法,并处理状态变化的任何副作用。 +Django-fsm 为模型提供了 FSMField 字段,用来定义模型实例的状态。用 django-fsm 的 `@transition` 修饰符,可以定义状态变化的方法,并处理状态变化的任何副作用。 -虽然 django-fsm 文档很轻量,不过 [Django 中的工作流(状态)][17] 这篇 GitHubGist 对有限状态机和 django-fsm 做了非常好的介绍。 +虽然 django-fsm 文档很轻量,不过 [Django 中的工作流(状态)][17] 这篇 GitHub Gist 对有限状态机和 django-fsm 做了非常好的介绍。 ### 联系人表单:#django-contact-form -联系人表单可以说是网站的标配。但是不要自己去写全部的样板代码,用 [django-contact-form][18] 
在几分钟内就可以搞定。它带有一个可选的能过滤垃圾邮件的表单类(也有不过滤的普通表单类)和一个 **ContactFormView** 基类,基类的方法可以覆盖或自定义修改。而且它还能引导你完成模板的创建,好让表单正常工作。 +联系人表单可以说是网站的标配。但是不要自己去写全部的样板代码,用 [django-contact-form][18] 在几分钟内就可以搞定。它带有一个可选的能过滤垃圾邮件的表单类(也有不过滤的普通表单类)和一个 `ContactFormView` 基类,基类的方法可以覆盖或自定义修改。而且它还能引导你完成模板的创建,好让表单正常工作。 ### 用户注册和认证:django-allauth -[Django-allauth][19] 是一个 Django 应用,它为用户注册,登录注销,密码重置,还有第三方用户认证(比如 GitHub 或 Twitter)提供了视图,表单和 URLs,支持邮件地址作为用户名的认证方式,而且有大量的文档记录。第一次用的时候,它的配置可能会让人有点晕头转向;请仔细阅读[安装说明][20],在[自定义你的配置][21]时要专注,确保启用某个功能的所有配置都用对了。 +[django-allauth][19] 是一个 Django 应用,它为用户注册、登录/注销、密码重置,还有第三方用户认证(比如 GitHub 或 Twitter)提供了视图、表单和 URL,支持邮件地址作为用户名的认证方式,而且有大量的文档记录。第一次用的时候,它的配置可能会让人有点晕头转向;请仔细阅读[安装说明][20],在[自定义你的配置][21]时要专注,确保启用某个功能的所有配置都用对了。 ### 处理 Django REST 框架的用户认证:django-rest-auth -如果 Django 开发中涉及到对外提供 API,你很可能用到了 [Django REST Framework][22] (DRF)。如果你在用 DRF,那么你应该试试 django-rest-auth,它提供了用户注册,登录/注销,密码重置和社交媒体认证的 endpoints (是通过添加 django-allauth 的支持来实现的,这两个包协作得很好)。 +如果 Django 开发中涉及到对外提供 API,你很可能用到了 [Django REST Framework][22](DRF)。如果你在用 DRF,那么你应该试试 django-rest-auth,它提供了用户注册、登录/注销,密码重置和社交媒体认证的端点(是通过添加 django-allauth 的支持来实现的,这两个包协作得很好)。 ### Django REST 框架的 API 可视化:django-rest-swagger -[Django REST Swagger][24] 提供了一个功能丰富的用户界面,用来和 Django REST 框架的 API 交互。你只需要安装 Django REST Swagger,把它添加到 Django 项目的 installed apps 中,然后在 urls.py 中添加 Swagger 的视图和 URL 模式就可以了,剩下的事情交给 API 的 docstring 处理。 +[Django REST Swagger][24] 提供了一个功能丰富的用户界面,用来和 Django REST 框架的 API 交互。你只需要安装 Django REST Swagger,把它添加到 Django 项目的已安装应用中,然后在 `urls.py` 中添加 Swagger 的视图和 URL 模式就可以了,剩下的事情交给 API 的 docstring 处理。 ![](https://opensource.com/sites/default/files/uploads/swagger-ui.png) -API 的用户界面按照 app 的维度展示了所有 endpoints 和可用方法,并列出了这些 endpoints 的可用操作,而且它提供了和 API 交互的功能(比如添加/删除/获取记录)。django-rest-swagger 从 API 视图中的 docstrings 生成每个 endpoint 的文档,通过这种方法,为你的项目创建了一份 API 文档,这对你,对前端开发人员和用户都很有用。 +API 的用户界面按照 app 的维度展示了所有端点和可用方法,并列出了这些端点的可用操作,而且它提供了和 API 交互的功能(比如添加/删除/获取记录)。django-rest-swagger 从 API 视图中的 docstrings 生成每个端点的文档,通过这种方法,为你的项目创建了一份 API 
文档,这对你,对前端开发人员和用户都很有用。 -------------------------------------------------------------------------------- @@ -90,7 +89,7 @@ via: https://opensource.com/article/18/9/django-packages 作者:[Jeff Triplett][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[belitex](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -118,4 +117,4 @@ via: https://opensource.com/article/18/9/django-packages [21]: https://django-allauth.readthedocs.io/en/latest/configuration.html [22]: http://www.django-rest-framework.org/ [23]: https://django-rest-auth.readthedocs.io/ -[24]: https://django-rest-swagger.readthedocs.io/en/latest/ \ No newline at end of file +[24]: https://django-rest-swagger.readthedocs.io/en/latest/ From fdb1eef5f559d62deebebfd6343578896805239d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 21:43:36 +0800 Subject: [PATCH 230/736] PUB:20180920 8 Python packages that will simplify your life with Django.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @belitex 恭喜你,完成了第一篇翻译贡献。本文首发地址: https://linux.cn/article-10085-1.html 您的 LCTT 专页地址: https://linux.cn/lctt/belitex 请注册 LCTT 平台领取 LCCN https://lctt.linux.cn/ --- ... 
8 Python packages that will simplify your life with Django.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180920 8 Python packages that will simplify your life with Django.md (100%) diff --git a/translated/tech/20180920 8 Python packages that will simplify your life with Django.md b/published/20180920 8 Python packages that will simplify your life with Django.md similarity index 100% rename from translated/tech/20180920 8 Python packages that will simplify your life with Django.md rename to published/20180920 8 Python packages that will simplify your life with Django.md From ada3603a9f3f64f7bce9bd45861279d32c4a3b1c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 21:53:52 +0800 Subject: [PATCH 231/736] PRF:20180709 How To Configure SSH Key-based Authentication In Linux.md @LuuMing --- ... Configure SSH Key-based Authentication In Linux.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md b/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md index 77a4e03b35..8fb89b943d 100644 --- a/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md +++ b/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md @@ -64,14 +64,14 @@ The key's randomart image is: +----[SHA256]-----+ ``` -如果你已经创建了密钥对,你将看到以下信息。输入 y 就会覆盖已存在的密钥。 +如果你已经创建了密钥对,你将看到以下信息。输入 `y` 就会覆盖已存在的密钥。 ``` /home/username/.ssh/id_rsa already exists. Overwrite (y/n)? 
``` -请注意**密码是可选的**。如果你输入了密码,那么每次通过 SSH 访问远程系统时都要求输入密码,除非你使用了 SSH 代理保存了密码。如果你不想要密码(虽然不安全),简单地敲两次回车。不过,我建议你使用密码。从安全的角度来看,使用无密码的 ssh 密钥对不是什么好主意。这种方式应该限定在特殊的情况下使用,例如,没有用户介入的服务访问远程系统。(例如,用 rsync 远程备份……) +请注意**密码是可选的**。如果你输入了密码,那么每次通过 SSH 访问远程系统时都要求输入密码,除非你使用了 SSH 代理保存了密码。如果你不想要密码(虽然不安全),简单地敲两次回车。不过,我建议你使用密码。从安全的角度来看,使用无密码的 ssh 密钥对不是什么好主意。这种方式应该限定在特殊的情况下使用,例如,没有用户介入的服务访问远程系统。(例如,用 `rsync` 远程备份……) 如果你已经在个人文件 `~/.ssh/id_rsa` 中有了无密码的密钥,但想要更新为带密码的密钥。使用下面的命令: @@ -95,7 +95,7 @@ $ ssh-copy-id sk@192.168.225.22 在这里,我把本地(Arch Linux)系统上的公钥拷贝到了远程系统(Ubuntu 18.04 LTS)上。从技术上讲,上面的命令会把本地系统 `~/.ssh/id_rsa.pub` 文件中的内容拷贝到远程系统 `~/.ssh/authorized_keys` 中。明白了吗?非常棒。 -输入 yes 来继续连接你的远程 SSH 服务端。接着,输入远程系统 sk 用户的密码。 +输入 `yes` 来继续连接你的远程 SSH 服务端。接着,输入远程系统用户 `sk` 的密码。 ``` /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed @@ -118,7 +118,7 @@ $ ssh-copy-id -f sk@192.168.225.22 ### 在远程系统上禁用基于密码认证的 SSH -你需要在 root 或者 sudo 用户下执行下面的命令。 +你需要在 root 用户或者 `sudo` 执行下面的命令。 禁用基于密码的认证,你需要在远程系统的终端里编辑 `/etc/ssh/sshd_config` 配置文件: @@ -170,7 +170,7 @@ Warning: Permanently added '192.168.225.22' (ECDSA) to the list of known hosts. Permission denied (publickey). 
``` -如你所见,除了 CentOS(译注:根据上文,这里应该是 Arch)系统外,我不能通过其它任何系统 SSH 访问我的远程系统 Ubuntu 18.04。 +如你所见,除了 CentOS(LCTT 译注:根据上文,这里应该是 Arch)系统外,我不能通过其它任何系统 SSH 访问我的远程系统 Ubuntu 18.04。 ### 为 SSH 服务端添加更多客户端系统的密钥 From f6f6a0373dec7a75de89d2ea40bacbb6d1ab52dd Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 5 Oct 2018 21:54:19 +0800 Subject: [PATCH 232/736] PUB:20180709 How To Configure SSH Key-based Authentication In Linux.md @LuuMing @pityonline https://linux.cn/article-10086-1.html --- ...0709 How To Configure SSH Key-based Authentication In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180709 How To Configure SSH Key-based Authentication In Linux.md (100%) diff --git a/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md b/published/20180709 How To Configure SSH Key-based Authentication In Linux.md similarity index 100% rename from translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md rename to published/20180709 How To Configure SSH Key-based Authentication In Linux.md From fc5a2a481b06bea900e1997a9dcc6e72a9d083e5 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 6 Oct 2018 19:24:15 +0800 Subject: [PATCH 233/736] PRF:20180815 How to Create M3U Playlists in Linux [Quick Tip].md @lujun9972 --- ...eate M3U Playlists in Linux [Quick Tip].md | 51 ++++++++++--------- 1 file changed, 26 insertions(+), 25 deletions(-) diff --git a/translated/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md b/translated/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md index 21f5bc61df..1ce5ebde67 100644 --- a/translated/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md +++ b/translated/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md @@ -1,36 +1,38 @@ -Linux下如何创建 M3U 播放列表 [小建议] +Linux 下如何创建 M3U 播放列表 ====== -**简介:关于如何在Linux终端中根据乱序文件创建M3U播放列表实现循序播放的小建议。** + +> 简介:关于如何在Linux终端中根据乱序文件创建M3U播放列表实现循序播放的小建议。 ![Create M3U 
playlists in Linux Terminal][1] -我是外国电视连续剧的粉丝,这些连续剧不太容易从DVD或像[Netflix] [2]这样的流媒体上获得。还在,您可以在YouTube上找到一些内容并[从YouTube下载][3]。 +我是外国电视连续剧的粉丝,这些连续剧不太容易从 DVD 或像 [Netflix][2] 这样的流媒体上获得。好在,您可以在 YouTube 上找到一些内容并[从 YouTube 下载][3]。 -现在出现了一个问题. 你的文件可能不是按顺序存储的. 在GNU / Linux中,文件不是按数字顺序自然排序的,因此我必须创建.m3u播放列表,以便[MPV视频播放器][4]可以按顺序播放视频而不是乱顺进行播放。 +现在出现了一个问题。你的文件可能不是按顺序存储的。在 GNU/Linux中,文件不是按数字顺序自然排序的,因此我必须创建 .m3u 播放列表,以便 [MPV 视频播放器][4]可以按顺序播放视频而不是乱顺进行播放。 -同样的,有时候表示第几集的数字实在文件名中间或结尾的,像这样 ‘My Web Series S01E01.mkv’. 这里的剧集信息位于文件名的中间,'S01E01'告诉我们人类,哪个是第一集,下一集是哪一个文件。 +同样的,有时候表示第几集的数字是在文件名中间或结尾的,像这样 “My Web Series S01E01.mkv”。这里的剧集信息位于文件名的中间,“S01E01”告诉我们人类这是第一集,后面还有其它剧集。 -因此我要做的事情就是在视频墓中创建一个 m3u 播放列表并告诉MPV播放这个 .m3u 播放列表,MPV自然会按顺序播放这些视频. +因此我要做的事情就是在视频墓中创建一个 .m3u 播放列表,并告诉 MPV 播放这个 .m3u 播放列表,MPV 自然会按顺序播放这些视频. -### 什么是M3U 文件? +### 什么是 M3U 文件? -[M3U][5] 基本上就是个按特定顺序包含文件名的文本文件. 当类似MPV或VLC这样的播放器打开M3U文件时, 它会尝试按给定的顺序播放指定文件. +[M3U][5] 基本上就是个按特定顺序包含文件名的文本文件。当类似 MPV 或 VLC 这样的播放器打开 M3U 文件时,它会尝试按给定的顺序播放指定文件。 -### 创建M3U来按顺序播放音频/视频文件 +### 创建 M3U 来按顺序播放音频/视频文件 + +就我而言, 我使用了下面命令: -就我而言, 我使用了下面命令: ``` $/home/shirish/Videos/web-series-video/$ ls -1v |grep .mkv > /tmp/1.m3u && mv /tmp/1.m3u . - ``` -然我们拆分一下看看每个部分表示什么意思 – +然我们拆分一下看看每个部分表示什么意思: -**ls -1v** = 这就是用普通的 ls 来列出目录中的内容. 其中 `-1` 表示每行显示一个文件. 而 `-v` 表示根据文本中的数字(版本)进行自然排序 +`ls -1v` = 这就是用普通的 `ls` 来列出目录中的内容. 其中 `-1` 表示每行显示一个文件。而 `-v` 表示根据文本中的数字(版本)进行自然排序。 -**| grep .mkv** = 基本上就是告诉 `ls` 寻找那些以 `.mkv` 结尾的文件. 它也可以是 `.mp4` 或其他任何你想要的媒体文件格式. +`| grep .mkv` = 基本上就是告诉 `ls` 寻找那些以 `.mkv` 结尾的文件。它也可以是 `.mp4` 或其他任何你想要的媒体文件格式。 通过在控制台上运行命令来进行试运行通常是个好主意: + ``` ls -1v |grep .mkv My Web Series S01E01 [Episode 1 Name] Multi 480p WEBRip x264 - xRG.mkv @@ -41,28 +43,27 @@ My Web Series S01E05 [Episode 5 Name] Multi 480p WEBRip x264 - xRG.mkv My Web Series S01E06 [Episode 6 Name] Multi 480p WEBRip x264 - xRG.mkv My Web Series S01E07 [Episode 7 Name] Multi 480p WEBRip x264 - xRG.mkv My Web Series S01E08 [Episode 8 Name] Multi 480p WEBRip x264 - xRG.mkv - ``` -结果显示我要做的事情是正确的. 
现在下一步就是让输出以 `.m3u` 播放列表的格式输出. +结果显示我要做的是正确的。现在下一步就是让输出以 `.m3u` 播放列表的格式输出。 + ``` ls -1v |grep .mkv > /tmp/web_playlist.m3u && mv /tmp/web_playlist.m3u . - ``` -这就在当前目录中创建了 `.m3u` 文件. 这个`.m3u`播放列表只不过是一个.txt文件,其内容与上面相同,扩展名为.m3u。 你也可以手动编辑它,并按照想要的顺序添加确切的文件名。 +这就在当前目录中创建了 .m3u 文件。这个 .m3u 播放列表只不过是一个 .txt 文件,其内容与上面相同,扩展名为 .m3u 而已。 你也可以手动编辑它,并按照想要的顺序添加确切的文件名。 + +之后你只需要这样做: -之后你只需要这样做: ``` mpv web_playlist.m3u - ``` -一般来说,关于MPV和播放列表的好处在于你不需要一次性全部看完。 您可以一次看任意长时间,然后在下一次查看其余部分。 +一般来说,MPV 和播放列表的好处在于你不需要一次性全部看完。 您可以一次看任意长时间,然后在下一次查看其余部分。 -我希望写一些有关MPV的文章,以及如何制作在媒体文件中嵌入字幕的mkv文件,但这是将来的事情了。 +我希望写一些有关 MPV 的文章,以及如何制作在媒体文件中嵌入字幕的 mkv 文件,但这是将来的事情了。 -注意: 这是开源软件,不鼓励盗版 +注意: 这是开源软件,不鼓励盗版。 -------------------------------------------------------------------------------- @@ -70,8 +71,8 @@ via: https://itsfoss.com/create-m3u-playlist-linux/ 作者:[Shirsh][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8104d7fe74d19722aaec90dd5262bbd90bb22dd7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 6 Oct 2018 19:24:36 +0800 Subject: [PATCH 234/736] PUB:20180815 How to Create M3U Playlists in Linux [Quick Tip].md @lujun9972 https://linux.cn/article-10087-1.html --- .../20180815 How to Create M3U Playlists in Linux [Quick Tip].md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180815 How to Create M3U Playlists in Linux [Quick Tip].md (100%) diff --git a/translated/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md b/published/20180815 How to Create M3U Playlists in Linux [Quick Tip].md similarity index 100% rename from translated/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md rename to published/20180815 How to Create M3U Playlists in Linux [Quick Tip].md From 
9e29fb35e4109bacd1d30a13400a3e670b43a333 Mon Sep 17 00:00:00 2001 From: bookug Date: Sat, 6 Oct 2018 20:19:40 +0800 Subject: [PATCH 235/736] =?UTF-8?q?=E3=80=90=E7=BF=BB=E8=AF=91=E4=B8=AD?= =?UTF-8?q?=E3=80=91Moving=20to=20Linux=20from=20dated=20Windows=20machine?= =?UTF-8?q?s?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../talk/20180123 Moving to Linux from dated Windows machines.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20180123 Moving to Linux from dated Windows machines.md b/sources/talk/20180123 Moving to Linux from dated Windows machines.md index 6acd6e53f2..3a9b426a0c 100644 --- a/sources/talk/20180123 Moving to Linux from dated Windows machines.md +++ b/sources/talk/20180123 Moving to Linux from dated Windows machines.md @@ -1,3 +1,4 @@ +【bookug翻译中】 Moving to Linux from dated Windows machines ====== From 2887694de63bdf8ac5e90798f79506e1fcce1c1b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 6 Oct 2018 21:02:02 +0800 Subject: [PATCH 236/736] PRF: 20180824 5 cool music player apps.md @geekpi --- .../tech/20180824 5 cool music player apps.md | 43 ++++++++++--------- 1 file changed, 22 insertions(+), 21 deletions(-) diff --git a/translated/tech/20180824 5 cool music player apps.md b/translated/tech/20180824 5 cool music player apps.md index fb301ed4dd..76223f18ec 100644 --- a/translated/tech/20180824 5 cool music player apps.md +++ b/translated/tech/20180824 5 cool music player apps.md @@ -2,20 +2,21 @@ ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/08/5-cool-music-apps-816x345.jpg) -你喜欢音乐吗?那么 Fedora 中可能有你正在寻找的东西。本文介绍在 Fedora 上运行的不同音乐播放器。无论你有大量的音乐库,还是小型音乐库,或者根本没有音乐库,你都会被覆盖到。这里有四个图形程序和一个基于终端的音乐播放器,可以让你挑选。 + +你喜欢音乐吗?那么 Fedora 中可能有你正在寻找的东西。本文介绍在 Fedora 上运行的各种音乐播放器。无论你有庞大的音乐库,还是小一些的,抑或根本没有,你都可以用到音乐播放器。这里有四个图形程序和一个基于终端的音乐播放器,可以让你挑选。 ### Quod Libet -Quod Libet 是你的大型音频库的管理员。如果你有一个大量的音频库,你不想只听,但也要管理,Quod Libet 可能是一个很好的选择。 +Quod Libet 
是一个完备的大型音频库管理器。如果你有一个庞大的音频库,你不想只是听,也想要管理,Quod Libet 可能是一个很好的选择。 ![][1] -Quod Libet 可以从磁盘上的多个位置导入音乐,并允许你编辑音频文件的标签 - 因此一切都在你的控制之下。额外地,它还有各种插件可用,从简单的均衡器到 [last.fm][2] 同步。你也可以直接从 [Soundcloud][3] 搜索和播放音乐。 +Quod Libet 可以从磁盘上的多个位置导入音乐,并允许你编辑音频文件的标签 —— 因此一切都在你的控制之下。此外,它还有各种插件可用,从简单的均衡器到 [last.fm][2] 同步。你也可以直接从 [Soundcloud][3] 搜索和播放音乐。 + +Quod Libet 在 HiDPI 屏幕上工作得很好,它有 Fedora 的 RPM 包,如果你运行 [Silverblue][5],它在 [Flathub][4] 中也有。使用 Gnome Software 或命令行安装它: -Quod Libet 在 HiDPI 屏幕上工作得很好,它有 Fedora 的 RPM 包,如果你运行[Silverblue][5],它在 [Flathub][4] 中也有。使用 Gnome Software 或命令行安装它: ``` $ sudo dnf install quodlibet - ``` ### Audacious @@ -24,14 +25,14 @@ $ sudo dnf install quodlibet ![][6] -Audacious 可能不会立即管理你的所有音乐,但你如果想将音乐组织为文件,它能做得很好。你还可以导出和导入播放列表,而无需重新组织音乐文件本身。 +Audacious 可能不直接管理你的所有音乐,但你如果想将音乐按文件组织起来,它能做得很好。你还可以导出和导入播放列表,而无需重新组织音乐文件本身。 -额外地,你可以让它看起来像 Winamp。要让它与上面的截图相同,请进入 “Settings/Appearance,”,选择顶部的 “Winamp Classic Interface”,然后选择右下方的 “Refugee” 皮肤。而鲍勃是你的叔叔!这就完成了。 +此外,你可以让它看起来像 Winamp。要让它与上面的截图相同,请进入 “Settings/Appearance”,选择顶部的 “Winamp Classic Interface”,然后选择右下方的 “Refugee” 皮肤。就这么简单。 Audacious 在 Fedora 中作为 RPM 提供,可以使用 Gnome Software 或在终端运行以下命令安装: + ``` $ sudo dnf install audacious - ``` ### Lollypop @@ -40,25 +41,25 @@ Lollypop 是一个音乐播放器,它与 GNOME 集成良好。如果你喜欢 ![][7] -除了与 GNOME Shell 的良好视觉集成之外,它还可以很好地用于 HiDPI 屏幕,并支持黑暗主题。 +除了与 GNOME Shell 的良好视觉集成之外,它还可以很好地用于 HiDPI 屏幕,并支持暗色主题。 额外地,Lollypop 有一个集成的封面下载器和一个所谓的派对模式(右上角的音符按钮),它可以自动选择和播放音乐。它还集成了 [last.fm][2] 或 [libre.fm][8] 等在线服务。 它有 Fedora 的 RPM 也有用于 [Silverblue][5] 工作站的 [Flathub][4],使用 Gnome Software 或终端进行安装: + ``` $ sudo dnf install lollypop - ``` ### Gradio -如果你没有任何音乐但仍喜欢听怎么办?或者你只是喜欢收音机?Gradio 就是为你准备的。 +如果你没有任何音乐但仍想听怎么办?或者你只是喜欢收音机?Gradio 就是为你准备的。 ![][9] Gradio 是一个简单的收音机,它允许你搜索和播放网络电台。你可以按国家、语言或直接搜索找到它们。额外地,它可视化地集成到了 GNOME Shell 中,可以与 HiDPI 屏幕配合使用,并且可以选择黑暗主题。 -可以在 [Flathub][4] 中找到 Gradio,它同时可以运行在 Fedora Workstation 和 [Silverblue][5] 中。使用 Gnome Software 安装它 +可以在 [Flathub][4] 中找到 Gradio,它同时可以运行在 Fedora Workstation 和 [Silverblue][5] 中。使用 Gnome 
Software 安装它。 ### sox @@ -67,19 +68,19 @@ Gradio 是一个简单的收音机,它允许你搜索和播放网络电台。 ![][10] sox 是一个非常简单的基于终端的音乐播放器。你需要做的就是运行如下命令: + ``` $ play file.mp3 - ``` 接着 sox 就会为你播放。除了单独的音频文件外,sox 还支持 m3u 格式的播放列表。 -额外地,因为 sox 是基于终端的程序,你可以在 ssh 中运行它。你有一个带扬声器的家用服务器吗?或者你想从另一台电脑上播放音乐吗?尝试将它与 [tmux][11] 一起使用,这样即使会话关闭也可以继续听。 +此外,因为 sox 是基于终端的程序,你可以通过 ssh 运行它。你有一个带扬声器的家用服务器吗?或者你想从另一台电脑上播放音乐吗?尝试将它与 [tmux][11] 一起使用,这样即使会话关闭也可以继续听。 sox 在 Fedora 中以 RPM 提供。运行下面的命令安装: + ``` $ sudo dnf install sox - ``` @@ -90,19 +91,19 @@ via: https://fedoramagazine.org/5-cool-music-player-apps/ 作者:[Adam Šamalík][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://fedoramagazine.org/author/asamalik/ -[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-300x217.png +[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-768x555.png [2]:https://last.fm [3]:https://soundcloud.com/ [4]:https://flathub.org/home [5]:https://teamsilverblue.org/ -[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-300x136.png -[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-300x172.png +[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-768x348.png +[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-768x439.png [8]:https://libre.fm -[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio.png -[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-300x179.png +[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio-768x499.png +[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-768x457.png [11]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/ From a43cb5246888093791166c510e7fec3e06db4b1c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 6 Oct 2018 21:02:32 +0800 Subject: [PATCH 237/736] 
PUB: 20180824 5 cool music player apps.md @geekpi https://linux.cn/article-10088-1.html --- .../tech => published}/20180824 5 cool music player apps.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180824 5 cool music player apps.md (100%) diff --git a/translated/tech/20180824 5 cool music player apps.md b/published/20180824 5 cool music player apps.md similarity index 100% rename from translated/tech/20180824 5 cool music player apps.md rename to published/20180824 5 cool music player apps.md From 8c19714c05287759ea71cfd14111e09cb190ab11 Mon Sep 17 00:00:00 2001 From: bookug Date: Sat, 6 Oct 2018 22:31:10 +0800 Subject: [PATCH 238/736] =?UTF-8?q?=E3=80=90=E5=AE=8C=E6=88=90=E7=BF=BB?= =?UTF-8?q?=E8=AF=91=E3=80=91Moving=20to=20Linux=20from=20dated=20Windows?= =?UTF-8?q?=20machines?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ng to Linux from dated Windows machines.md | 51 ------------------- 1 file changed, 51 deletions(-) delete mode 100644 sources/talk/20180123 Moving to Linux from dated Windows machines.md diff --git a/sources/talk/20180123 Moving to Linux from dated Windows machines.md b/sources/talk/20180123 Moving to Linux from dated Windows machines.md deleted file mode 100644 index 3a9b426a0c..0000000000 --- a/sources/talk/20180123 Moving to Linux from dated Windows machines.md +++ /dev/null @@ -1,51 +0,0 @@ -【bookug翻译中】 -Moving to Linux from dated Windows machines -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/1980s-computer-yearbook.png?itok=eGOYEKK-) - -Every day, while working in the marketing department at ONLYOFFICE, I see Linux users discussing our office productivity software on the internet. Our products are popular among Linux users, which made me curious about using Linux as an everyday work tool. 
My old Windows XP-powered computer was an obstacle to performance, so I started reading about Linux systems (particularly Ubuntu) and decided to try it out as an experiment. Two of my colleagues joined me. - -### Why Linux? - -We needed to make a change, first, because our old systems were not enough in terms of performance: we experienced regular crashes, an overload every time more than two apps were active, a 50% chance of freezing when a machine was shut down, and so forth. This was rather distracting to our work, which meant we were considerably less efficient than we could be. - -Upgrading to newer versions of Windows was an option, too, but that is an additional expense, plus our software competes against Microsoft's office suite. So that was an ideological question, too. - -Second, as I mentioned earlier, ONLYOFFICE products are rather popular within the Linux community. By reading about Linux users' experience with our software, we became interested in joining them. - -A week after we asked to change to Linux, we got our shiny new computer cases with [Kubuntu][1] inside. We chose version 16.04, which features KDE Plasma 5.5 and many KDE apps including Dolphin, as well as LibreOffice 5.1 and Firefox 45. - -### What we like about Linux - -Linux's biggest advantage, I believe, is its speed; for instance, it takes just seconds from pushing the machine's On button to starting your work. Everything seemed amazingly rapid from the very beginning: the overall responsiveness, the graphics, and even system updates. - -One other thing that surprised me compared to Windows is that Linux allows you to configure nearly everything, including the entire look of your desktop. In Settings, I found how to change the color and shape of bars, buttons, and fonts; relocate any desktop element; and build a composition of widgets, even including comics and Color Picker. 
I believe I've barely scratched the surface of the available options and have yet to explore most of the customization opportunities that this system is well known for. - -Linux distributions are generally a very safe environment. People rarely use antivirus apps in Linux, simply because there are so few viruses written for it. You save system speed, time, and, sure enough, money. - -In general, Linux has refreshed our everyday work lives, surprising us with a number of new options and opportunities. Even in the short time we've been using it, we'd characterize it as: - - * Fast and smooth to operate - * Highly customizable - * Relatively newcomer-friendly - * Challenging with basic components, however very rewarding in return - * Safe and secure - * An exciting experience for everyone who seeks to refresh their workplace - - - -Have you switched from Windows or MacOS to Kubuntu or another Linux variant? Or are you considering making the change? Please share your reasons for wanting to adopt Linux, as well as your impressions of going open source, in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/move-to-linux-old-windows - -作者:[Michael Korotaev][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/michaelk -[1]:https://kubuntu.org/ From 1ff539003292f3c089d1052a2a05ed2c312c6e49 Mon Sep 17 00:00:00 2001 From: bookug Date: Sat, 6 Oct 2018 22:35:38 +0800 Subject: [PATCH 239/736] =?UTF-8?q?=E3=80=90=E5=AE=8C=E6=88=90=E7=BF=BB?= =?UTF-8?q?=E8=AF=91=E3=80=91Moving=20to=20Linux=20from=20dated=20Windows?= =?UTF-8?q?=20machines?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ng to Linux from dated Windows machines.md | 63 +++++++++++++++++++ 1 file changed, 63 insertions(+) create mode 100644 translated/talk/20180123 Moving to Linux from dated Windows machines.md diff --git a/translated/talk/20180123 Moving to Linux from dated Windows machines.md b/translated/talk/20180123 Moving to Linux from dated Windows machines.md new file mode 100644 index 0000000000..b90a166a4d --- /dev/null +++ b/translated/talk/20180123 Moving to Linux from dated Windows machines.md @@ -0,0 +1,63 @@ +从过时的 Windows 机器迁移到 Linux +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/1980s-computer-yearbook.png?itok=eGOYEKK-) + +每天当我在 ONLYOFFICE 的市场部门工作的时候,我都能看到 Linux 用户在网上讨论我们的办公效率软件。 +我们的产品在 Linux 用户中很受欢迎,这使得我对使用 Linux 作为日常工具的体验非常好奇。 +我的老旧的 Windows XP 机器在性能上非常差,因此我决定了解 Linux 系统(特别是 Ubuntu )并且决定去尝试使用它。 +我的两个同事加入了我的计划。 + +### 为何选择 Linux ? 
+ +我们必须做出改变,首先,我们的老系统在性能方面不够用:我们经历过频繁的崩溃,每当超过两个应用在运行机器就会负载过度,关闭机器时有一半的几率冻结等等。 +这很容易让我们从工作中分心,意味着我们没有我们应有的工作效率了。 + +升级到 Windows 更新的版本也是一种选择,但这样可能会带来额外的开销,而且我们的软件本身也是要与 Microsoft 的办公软件竞争。 +因此我们在这方面也存在意识形态的问题。 + +其次,就像我之前提过的, ONLYOFFICE 产品在 Linux 社区内非常受欢迎。 +通过阅读 Linux 用户在使用我们的软件时的体验,我们也对加入他们很感兴趣。 + +在我们要求转换到 Linux 系统一周后,我们拿到了崭新的装好了 [Kubuntu][1] 的机器。 +我们选择了 16.04 版本,因为这个版本支持 KDE Plasma 5.5 和包括 Dolphin 在内的很多 KDE 应用,同时也包括 LibreOffice 5.1 和 Firefox 45 。 + +### Linux 让人喜欢的地方 + +我相信 Linux 最大的优势是它的运行速度,比如,从按下机器的电源按钮到开始工作只需要几秒钟时间。 +从一开始,一切看起来都超乎寻常地快:总体的响应速度,图形界面,甚至包括系统更新的速度。 + +另一个使我惊奇的事情是跟 Windows 相比, Linux 几乎能让你配置任何东西,包括整个桌面的外观。 +在设置里面,我发现了如何修改各种栏目、按钮和字体的颜色和形状,也可以重新布置任意桌面组件的位置,组合桌面的小工具(甚至包括漫画和颜色选择器) +我相信我还仅仅只是了解了基本的选项,之后还需要探索这个系统更多著名的定制化选项。 + +Linux 发行版通常是一个非常安全的环境。 +人们很少在 Linux 系统中使用防病毒的软件,因为很少有人会写病毒程序来攻击 Linux 系统。 +因此你可以拥有很好的系统速度,并且节省了时间和金钱。 + +总之, Linux 已经改变了我们的日常生活,用一系列的新选项和功能大大震惊了我们。 +仅仅通过短时间的使用,我们已经可以给它总结出以下特性: + + * 操作很快很顺畅 + * 高度可定制 + * 对新手很友好 + * 了解基本组件很有挑战性,但回报丰厚 + * 安全可靠 + * 对所有想改变工作场所的人来说都是一次绝佳的体验 + +你已经从 Windows 或 MacOS 系统换到 Kubuntu 或其他 Linux 变种了么? +或者你是否正在考虑做出改变? +请分享你想要采用 Linux 系统的原因,连同你对开源的印象一起写在评论中。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/move-to-linux-old-windows + +作者:[Michael Korotaev][a] +译者:[bookug](https://github.com/bookug) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/michaelk +[1]:https://kubuntu.org/ From a941f8a3c440a04dfcc3a856274a9ef9dc73781c Mon Sep 17 00:00:00 2001 From: lctt9972 Date: Sun, 7 Oct 2018 02:34:29 +0000 Subject: [PATCH 240/736] Revert "translating by ljgibbslf" This reverts commit ecedd6ff13d73cb24bb562daf10f3042c943595c. 
--- ...se a here documents to write data to a file in bash script.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md b/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md index 9c2a636b09..12d15af78f 100644 --- a/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md +++ b/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md @@ -1,4 +1,3 @@ -translating by ljgibbslf How to use a here documents to write data to a file in bash script ====== From 3eaf2f2fa747ef86b9f9ffbd26505b3c5cdb3057 Mon Sep 17 00:00:00 2001 From: lctt9972 Date: Sun, 7 Oct 2018 02:34:44 +0000 Subject: [PATCH 241/736] Revert "Update 20171130 Excellent Business Software Alternatives For Linux.md" This reverts commit b623203bf829b63371438140c16be8b0a9ea2fe1. --- ...0171130 Excellent Business Software Alternatives For Linux.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md b/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md index 195b51423a..3469c62569 100644 --- a/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md +++ b/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md @@ -1,4 +1,3 @@ -Yoliver istranslating. Excellent Business Software Alternatives For Linux ------- From 9ff36c7a360b1673ca1ca5fc18a7d9b9663253d5 Mon Sep 17 00:00:00 2001 From: lctt9972 Date: Sun, 7 Oct 2018 02:34:59 +0000 Subject: [PATCH 242/736] Revert "Update 20180522 How to Enable Click to Minimize On Ubuntu.md" This reverts commit 9dfa29dcc4035dc929f93ed8a79fc02606ec3a9f. 
--- .../tech/20180522 How to Enable Click to Minimize On Ubuntu.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md b/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md index 761138908d..50d68ad445 100644 --- a/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md +++ b/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md @@ -1,5 +1,3 @@ -translated by cyleft - How to Enable Click to Minimize On Ubuntu ============================================================ From 7dc00f8206de3967b845813a49ede97e9c0ead05 Mon Sep 17 00:00:00 2001 From: lctt9972 Date: Sun, 7 Oct 2018 02:35:10 +0000 Subject: [PATCH 243/736] Revert "add translating tag" This reverts commit e235e143387c92b9c5c2cc861c7803dcc43e51f9. --- .../20180727 How to analyze your system with perf and Python.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/sources/tech/20180727 How to analyze your system with perf and Python.md b/sources/tech/20180727 How to analyze your system with perf and Python.md index ccc66b04a7..c1be98cc0e 100644 --- a/sources/tech/20180727 How to analyze your system with perf and Python.md +++ b/sources/tech/20180727 How to analyze your system with perf and Python.md @@ -1,5 +1,3 @@ -pinewall translating - How to analyze your system with perf and Python ====== From 9910e70abd9637ea4b6aadeba66e14591f95ff4c Mon Sep 17 00:00:00 2001 From: lctt9972 Date: Sun, 7 Oct 2018 02:35:22 +0000 Subject: [PATCH 244/736] Revert "Update 29180329 Python ChatOps libraries- Opsdroid and Errbot.md" This reverts commit 95d2cfc61108e999145e518c512c09b269a2bf7f. 
--- .../29180329 Python ChatOps libraries- Opsdroid and Errbot.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md b/sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md index d7ef058106..5f409956f7 100644 --- a/sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md +++ b/sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md @@ -1,5 +1,3 @@ -Translating by shipsw - Python ChatOps libraries: Opsdroid and Errbot ====== From 886d6765c29de77dfa490a0ec276e8ecc97300cc Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 7 Oct 2018 11:51:02 +0800 Subject: [PATCH 245/736] Rename 29180329 Python ChatOps libraries- Opsdroid and Errbot.md to 20180329 Python ChatOps libraries- Opsdroid and Errbot.md --- ... => 20180329 Python ChatOps libraries- Opsdroid and Errbot.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/tech/{29180329 Python ChatOps libraries- Opsdroid and Errbot.md => 20180329 Python ChatOps libraries- Opsdroid and Errbot.md} (100%) diff --git a/sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md b/sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md similarity index 100% rename from sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md rename to sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md From bbe95b228ff58dcbccdd9aca55a809a329acf0a2 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Sun, 7 Oct 2018 13:21:31 +0800 Subject: [PATCH 246/736] translated --- ...ed Essential Linux Applications of 2018.md | 990 ------------------ ...ed Essential Linux Applications of 2018.md | 985 +++++++++++++++++ 2 files changed, 985 insertions(+), 990 deletions(-) delete mode 100644 sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md create mode 100644 translated/tech/20180724 75 Most Used Essential Linux Applications of 
2018.md diff --git a/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md b/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md deleted file mode 100644 index 2b52356068..0000000000 --- a/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md +++ /dev/null @@ -1,990 +0,0 @@ -HankChow translating - -75 Most Used Essential Linux Applications of 2018 -====== - -**2018** has been an awesome year for a lot of applications, especially those that are both free and open source. And while various Linux distributions come with a number of default apps, users are free to take them out and use any of the free or paid alternatives of their choice. - -Today, we bring you a [list of Linux applications][3] that have been able to make it to users’ Linux installations almost all the time despite the butt-load of other alternatives. - -Simply put, any app on this list is among the most used in its category, and if you haven’t already tried it out you are probably missing out. Enjoy! - -### Backup Tools - -#### Rsync - -[Rsync][4] is a free, open source, bandwidth-friendly utility for performing swift incremental file transfers. -``` -$ rsync [OPTION...] SRC... [DEST] - -``` - -For more examples and usage, read our article “[10 Practical Examples of Rsync Command][5]”. - -#### Timeshift - -[Timeshift][6] provides users with the ability to protect their system by taking incremental snapshots which can be reverted to at a later date – similar to the function of Time Machine in Mac OS and System Restore in Windows.
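Timeshift is primarily driven from its GUI, but it also ships a command-line interface. A minimal sketch follows — the snapshot comment is just an example, and the script assumes `timeshift` is installed with sudo rights available, skipping gracefully otherwise:

```shell
# Sketch: taking and listing Timeshift snapshots from the terminal.
# Assumes the timeshift package is installed; skips gracefully if not.
if command -v timeshift >/dev/null 2>&1; then
    sudo timeshift --create --comments "before dist-upgrade"  # take a snapshot now
    sudo timeshift --list                                     # list existing snapshots
    STATUS="snapshot requested"
else
    STATUS="timeshift not installed; skipping"
fi
echo "$STATUS"
```

A snapshot taken this way can later be rolled back with `sudo timeshift --restore`.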
- -![](https://www.fossmint.com/wp-content/uploads/2018/07/Timeshift-Create-Linux-Mint-Snapshot.png) - -### BitTorrent Client - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Torrent-Clients.png) - -#### Deluge - -[Deluge][7] is a beautiful cross-platform BitTorrent client that aims to perfect the **μTorrent** experience and make it available to users for free. - -Install **Deluge** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:deluge-team/ppa -$ sudo apt-get update -$ sudo apt-get install deluge - -``` - -#### qBittorrent - -[qBittorrent][8] is an open source BitTorrent protocol client that aims to provide a free alternative to torrent apps like μTorrent. - -Install **qBittorrent** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable -$ sudo apt-get update -$ sudo apt-get install qbittorrent - -``` - -#### Transmission - -[Transmission][9] is also a BitTorrent client with awesome functionalities and a major focus on speed and ease of use. It comes preinstalled with many Linux distros. - -Install **Transmission** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:transmissionbt/ppa -$ sudo apt-get update -$ sudo apt-get install transmission-gtk transmission-cli transmission-common transmission-daemon - -``` - -### Cloud Storage - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Cloud-Storage.png) - -#### Dropbox - -The [Dropbox][10] team rebranded their cloud service earlier this year to provide even better performance and app integration for their clients. It starts with 2GB of storage for free. - -Install **Dropbox** on **Ubuntu** and **Debian** , using following commands.
-``` -$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | tar xzf - [On 32-Bit] -$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf - [On 64-Bit] -$ ~/.dropbox-dist/dropboxd - -``` - -#### Google Drive - -[Google Drive][11] is Google’s cloud service solution and my guess is that it needs no introduction. Just like with **Dropbox** , you can sync files across all your connected devices. It starts with 15GB of storage for free and this includes Gmail, Google photos, Maps, etc. - -Check out: [5 Google Drive Clients for Linux][12] - -#### Mega - -[Mega][13] stands out from the rest because apart from being extremely security-conscious, it gives free users 50GB to do as they wish! Its end-to-end encryption ensures that they can’t access your data, and if you forget your recovery key, you too wouldn’t be able to. - -[**Download MEGA Cloud Storage for Ubuntu][14] - -### Commandline Editors - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Commandline-Editors.png) - -#### Vim - -[Vim][15] is an open source clone of vi text editor developed to be customizable and able to work with any type of text. - -Install **Vim** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:jonathonf/vim -$ sudo apt update -$ sudo apt install vim - -``` - -#### Emacs - -[Emacs][16] refers to a set of highly configurable text editors. The most popular variant, GNU Emacs, is written in Lisp and C to be self-documenting, extensible, and customizable. - -Install **Emacs** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:kelleyk/emacs -$ sudo apt update -$ sudo apt install emacs25 - -``` - -#### Nano - -[Nano][17] is a feature-rich CLI text editor for power users and it has the ability to work with different terminals, among other functionalities. - -Install **Nano** on **Ubuntu** and **Debian** , using following commands. 
-``` -$ sudo add-apt-repository ppa:n-muench/programs-ppa -$ sudo apt-get update -$ sudo apt-get install nano - -``` - -### Download Manager - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Download-Managers.png) - -#### Aria2 - -[Aria2][18] is an open source lightweight multi-source and multi-protocol command line-based downloader with support for Metalinks, torrents, HTTP/HTTPS, SFTP, etc. - -Install **Aria2** on **Ubuntu** and **Debian** , using following command. -``` -$ sudo apt-get install aria2 - -``` - -#### uGet - -[uGet][19] has earned its title as the **#1** open source download manager for Linux distros and it features the ability to handle any downloading task you can throw at it including using multiple connections, using queues, categories, etc. - -Install **uGet** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:plushuang-tw/uget-stable -$ sudo apt update -$ sudo apt install uget - -``` - -#### XDM - -[XDM][20], **Xtreme Download Manager** is an open source downloader written in Java. Like any good download manager, it can work with queues, torrents, browsers, and it also includes a video grabber and a smart scheduler. - -Install **XDM** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:noobslab/apps -$ sudo apt-get update -$ sudo apt-get install xdman - -``` - -### Email Clients - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Email-Clients.png) - -#### Thunderbird - -[Thunderbird][21] is among the most popular email applications. It is free, open source, customizable, feature-rich, and above all, easy to install. - -Install **Thunderbird** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:ubuntu-mozilla-security/ppa -$ sudo apt-get update -$ sudo apt-get install thunderbird - -``` - -#### Geary - -[Geary][22] is an open source email client based on WebKitGTK+. 
It is free, open-source, feature-rich, and adopted by the GNOME project. - -Install **Geary** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:geary-team/releases -$ sudo apt-get update -$ sudo apt-get install geary - -``` - -#### Evolution - -[Evolution][23] is a free and open source email client for managing emails, meeting schedules, reminders, and contacts. - -Install **Evolution** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:gnome3-team/gnome3-staging -$ sudo apt-get update -$ sudo apt-get install evolution - -``` - -### Finance Software - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Accounting-Software.png) - -#### GnuCash - -[GnuCash][24] is a free, cross-platform, and open source software for financial accounting tasks for personal and small to mid-size businesses. - -Install **GnuCash** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu $(lsb_release -sc)-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list' -$ sudo apt-get update -$ sudo apt-get install gnucash - -``` - -#### KMyMoney - -[KMyMoney][25] is a finance manager software that provides all important features found in the commercially-available, personal finance managers. - -Install **KMyMoney** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:claydoh/kmymoney2-kde4 -$ sudo apt-get update -$ sudo apt-get install kmymoney - -``` - -### IDE Editors - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-IDE-Editors.png) - -#### Eclipse IDE - -[Eclipse][26] is the most widely used Java IDE containing a base workspace and an impossible-to-overemphasize configurable plug-in system for personalizing its coding environment. 
- -For installation, read our article “[How to Install Eclipse Oxygen IDE in Debian and Ubuntu][27]” - -#### Netbeans IDE - -A fan-favourite, [Netbeans][28] enables users to easily build applications for mobile, desktop, and web platforms using Java, PHP, HTML5, JavaScript, and C/C++, among other languages. - -For installation, read our article “[How to Install Netbeans Oxygen IDE in Debian and Ubuntu][29]” - -#### Brackets - -[Brackets][30] is an advanced text editor developed by Adobe to feature visual tools, preprocessor support, and a design-focused user flow for web development. In the hands of an expert, it can serve as an IDE in its own right. - -Install **Brackets** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:webupd8team/brackets -$ sudo apt-get update -$ sudo apt-get install brackets - -``` - -#### Atom IDE - -[Atom IDE][31] is a more robust version of Atom text editor achieved by adding a number of extensions and libraries to boost its performance and functionalities. It is, in a sense, Atom on steroids. - -Install **Atom** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install snapd -$ sudo snap install atom --classic - -``` - -#### Light Table - -[Light Table][32] is a self-proclaimed next-generation IDE developed to offer awesome features like data value flow stats and coding collaboration. - -Install **Light Table** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:dr-akulavich/lighttable -$ sudo apt-get update -$ sudo apt-get install lighttable-installer - -``` - -#### Visual Studio Code - -[Visual Studio Code][33] is a source code editor created by Microsoft to offer users the best-advanced features in a text editor including syntax highlighting, code completion, debugging, performance statistics and graphs, etc. 
- -[**Download Visual Studio Code for Ubuntu][34] - -### Instant Messaging - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-IM-Clients.png) - -#### Pidgin - -[Pidgin][35] is an open source instant messaging app that supports virtually all chatting platforms and can have its abilities extended using extensions. - -Install **Pidgin** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:jonathonf/backports -$ sudo apt-get update -$ sudo apt-get install pidgin - -``` - -#### Skype - -[Skype][36] needs no introduction and its awesomeness is available for any interested Linux user. - -Install **Skype** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt install snapd -$ sudo snap install skype --classic - -``` - -#### Empathy - -[Empathy][37] is a messaging app with support for voice, video chat, text, and file transfers over several protocols. It also allows you to add other service accounts to it and interface with all of them through it. - -Install **Empathy** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install empathy - -``` - -### Linux Antivirus - -#### ClamAV/ClamTk - -[ClamAV][38] is an open source and cross-platform command line antivirus app for detecting Trojans, viruses, and other malicious code. [ClamTk][39] is its GUI front-end. - -Install **ClamAV/ClamTk** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install clamav -$ sudo apt-get install clamtk - -``` - -### Linux Desktop Environments - -#### Cinnamon - -[Cinnamon][40] is a free and open-source derivative of **GNOME3** and it follows the traditional desktop metaphor conventions. - -Install **Cinnamon** desktop on **Ubuntu** and **Debian** , using following commands.
-``` -$ sudo add-apt-repository ppa:embrosyn/cinnamon -$ sudo apt update -$ sudo apt install cinnamon-desktop-environment lightdm - -``` - -#### Mate - -The [Mate][41] Desktop Environment is a derivative and continuation of **GNOME2** developed to offer an attractive UI on Linux using traditional metaphors. - -Install **Mate** desktop on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt install tasksel -$ sudo apt update -$ sudo tasksel install ubuntu-mate-desktop - -``` - -#### GNOME - -[GNOME][42] is a Desktop Environment comprised of several free and open-source applications and can run on any Linux distro and on most BSD derivatives. - -Install **Gnome** desktop on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt install tasksel -$ sudo apt update -$ sudo tasksel install ubuntu-desktop - -``` - -#### KDE - -[KDE][43] is developed by the KDE community to provide users with a graphical solution to interfacing with their system and performing several computing tasks. - -Install **KDE** desktop on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt install tasksel -$ sudo apt update -$ sudo tasksel install kubuntu-desktop - -``` - -### Linux Maintenance Tools - -#### GNOME Tweak Tool - -The [GNOME Tweak Tool][44] is the most popular tool for customizing and tweaking GNOME3 and GNOME Shell settings. - -Install **GNOME Tweak Tool** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt install gnome-tweak-tool - -``` - -#### Stacer - -[Stacer][45] is a free, open-source app for monitoring and optimizing Linux systems. - -Install **Stacer** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:oguzhaninan/stacer -$ sudo apt-get update -$ sudo apt-get install stacer - -``` - -#### BleachBit - -[BleachBit][46] is a free disk space cleaner that also works as a privacy manager and system optimizer. 
- -[**Download BleachBit for Ubuntu][47] - -### Linux Terminals - -#### GNOME Terminal - -[GNOME Terminal][48] is GNOME’s default terminal emulator. - -Install **Gnome Terminal** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install gnome-terminal - -``` - -#### Konsole - -[Konsole][49] is a terminal emulator for KDE. - -Install **Konsole** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install konsole - -``` - -#### Terminator - -[Terminator][50] is a feature-rich GNOME Terminal-based terminal app built with a focus on arranging terminals, among other functions. - -Install **Terminator** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install terminator - -``` - -#### Guake - -[Guake][51] is a lightweight drop-down terminal for the GNOME Desktop Environment. - -Install **Guake** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install guake - -``` - -### Multimedia Editors - -#### Ardour - -[Ardour][52] is a beautiful Digital Audio Workstation (DAW) for recording, editing, and mixing audio professionally. - -Install **Ardour** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:dobey/audiotools -$ sudo apt-get update -$ sudo apt-get install ardour - -``` - -#### Audacity - -[Audacity][53] is an easy-to-use cross-platform and open source multi-track audio editor and recorder; arguably the most famous of them all. - -Install **Audacity** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity -$ sudo apt-get update -$ sudo apt-get install audacity - -``` - -#### GIMP - -[GIMP][54] is the most popular open source Photoshop alternative and it is for a reason. It features various customization options, 3rd-party plugins, and a helpful user community. - -Install **Gimp** on **Ubuntu** and **Debian** , using following commands. 
-``` -$ sudo add-apt-repository ppa:otto-kesselgulasch/gimp -$ sudo apt update -$ sudo apt install gimp - -``` - -#### Krita - -[Krita][55] is an open source painting app that can also serve as an image manipulating tool and it features a beautiful UI with reliable performance. - -Install **Krita** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:kritalime/ppa -$ sudo apt update -$ sudo apt install krita - -``` - -#### Lightworks - -[Lightworks][56] is a powerful, flexible, and beautiful tool for editing videos professionally. It comes feature-packed with hundreds of amazing effects and presets that allow it to handle any editing task that you throw at it and it has 25 years of experience to back up its claims. - -[**Download Lightworks for Ubuntu][57] - -#### OpenShot - -[OpenShot][58] is an award-winning free and open source video editor known for its excellent performance and powerful capabilities. - -Install **OpenShot** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:openshot.developers/ppa -$ sudo apt update -$ sudo apt install openshot-qt - -``` - -#### Pitivi - -[Pitivi][59] is a beautiful video editor that features a clean code base and an awesome community, is easy to use, and allows for hassle-free collaboration. - -Install **Pitivi** on **Ubuntu** and **Debian** , using following commands. -``` -$ flatpak install --user https://flathub.org/repo/appstream/org.pitivi.Pitivi.flatpakref -$ flatpak install --user http://flatpak.pitivi.org/pitivi.flatpakref -$ flatpak run org.pitivi.Pitivi//stable - -``` - -### Music Players - -#### Rhythmbox - -[Rhythmbox][60] can handle any music task you throw at it and has so far proved to be such a reliable music player that it ships with Ubuntu. - -Install **Rhythmbox** on **Ubuntu** and **Debian** , using following commands.
-``` -$ sudo add-apt-repository ppa:fossfreedom/rhythmbox -$ sudo apt-get update -$ sudo apt-get install rhythmbox - -``` - -#### Lollypop - -[Lollypop][61] is a beautiful, relatively new, open source music player featuring a number of advanced options like online radio, scrubbing support and party mode. Yet, it manages to keep everything simple and easy to manage. - -Install **Lollypop** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:gnumdk/lollypop -$ sudo apt-get update -$ sudo apt-get install lollypop - -``` - -#### Amarok - -[Amarok][62] is a robust music player with an intuitive UI and tons of advanced features bundled into a single unit. It also allows users to discover new music based on their genre preferences. - -Install **Amarok** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get update -$ sudo apt-get install amarok - -``` - -#### Clementine - -[Clementine][63] is an Amarok-inspired music player that also features a straightforward UI, advanced control features, and the ability to let users search for and discover new music. - -Install **Clementine** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:me-davidsansome/clementine -$ sudo apt-get update -$ sudo apt-get install clementine - -``` - -#### Cmus - -[Cmus][64] is arguably the most efficient CLI music player. It is fast and reliable, and its functionality can be extended using extensions. - -Install **Cmus** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:jmuc/cmus -$ sudo apt-get update -$ sudo apt-get install cmus - -``` - -### Office Suites - -#### Calligra Suite - -The [Calligra Suite][65] provides users with a set of 8 applications which cover working with office, management, and graphics tasks. - -Install **Calligra Suite** on **Ubuntu** and **Debian** , using following commands.
-``` -$ sudo apt-get install calligra - -``` - -#### LibreOffice - -[LibreOffice][66] is the most actively developed office suite in the open source community. It is known for its reliability, and its functionality can be extended using extensions. - -Install **LibreOffice** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:libreoffice/ppa -$ sudo apt update -$ sudo apt install libreoffice - -``` - -#### WPS Office - -[WPS Office][67] is a beautiful office suite alternative with a more modern UI. - -[**Download WPS Office for Ubuntu][68] - -### Screenshot Tools - -#### Shutter - -[Shutter][69] allows users to take screenshots of their desktop and then edit them using filters and other effects coupled with the option to upload and share them online. - -Install **Shutter** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository -y ppa:shutter/ppa -$ sudo apt update -$ sudo apt install shutter - -``` - -#### Kazam - -[Kazam][70] screencaster captures screen content to output a video and audio file supported by any video player with VP8/WebM and PulseAudio support. - -Install **Kazam** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:kazam-team/unstable-series -$ sudo apt update -$ sudo apt install kazam python3-cairo python3-xlib - -``` - -#### Gnome Screenshot - -[Gnome Screenshot][71] was once bundled with Gnome utilities but is now a standalone app. It can be used to take screenshots in a format that is easily shareable. - -Install **Gnome Screenshot** on **Ubuntu** and **Debian** , using following commands.
-``` -$ sudo apt-get update -$ sudo apt-get install gnome-screenshot - -``` - -### Screen Recorders - -#### SimpleScreenRecorder - -[SimpleScreenRecorder][72] was created to be better than the screen-recording apps available at the time of its creation and has now turned into one of the most efficient and easy-to-use screen recorders for Linux distros. - -Install **SimpleScreenRecorder** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:maarten-baert/simplescreenrecorder -$ sudo apt-get update -$ sudo apt-get install simplescreenrecorder - -``` - -#### recordMyDesktop - -[recordMyDesktop][73] is an open source session recorder that is also capable of recording desktop session audio. - -Install **recordMyDesktop** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get update -$ sudo apt-get install gtk-recordmydesktop - -``` - -### Text Editors - -#### Atom - -[Atom][74] is a modern and customizable text editor created and maintained by GitHub. It is ready for use right out of the box and can have its functionality enhanced and its UI customized using extensions and themes. - -Install **Atom** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install snapd -$ sudo snap install atom --classic - -``` - -#### Sublime Text - -[Sublime Text][75] is easily among the most awesome text editors to date. It is customizable, lightweight (even when bulldozed with a lot of data files and extensions), flexible, and remains free to use forever. - -Install **Sublime Text** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install snapd -$ sudo snap install sublime-text - -``` - -#### Geany - -[Geany][76] is a memory-friendly text editor with basic IDE features, designed for short load times and with functions extensible using libraries. - -Install **Geany** on **Ubuntu** and **Debian** , using following commands.
-``` -$ sudo apt-get update -$ sudo apt-get install geany - -``` - -#### Gedit - -[Gedit][77] is famous for its simplicity and it comes preinstalled with many Linux distros because of its function as an excellent general purpose text editor. - -Install **Gedit** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get update -$ sudo apt-get install gedit - -``` - -### To-Do List Apps - -#### Evernote - -[Evernote][78] is a cloud-based note-taking productivity app designed to work perfectly with different types of notes including to-do lists and reminders. - -There is no official Evernote app for Linux, so check out these third-party [6 Evernote Alternative Clients for Linux][79]. - -#### Everdo - -[Everdo][78] is a beautiful, security-conscious, low-friction Getting-Things-Done productivity app for handling to-dos and other note types. If Evernote is not to your taste, Everdo is a perfect alternative. - -[**Download Everdo for Ubuntu][80] - -#### Taskwarrior - -[Taskwarrior][81] is an open source and cross-platform command line app for managing tasks. It is famous for its speed and distraction-free environment. - -Install **Taskwarrior** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get update -$ sudo apt-get install taskwarrior - -``` - -### Video Players - -#### Banshee - -[Banshee][82] is an open source multi-format-supporting media player that was first developed in 2005 and has only been getting better since. - -Install **Banshee** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:banshee-team/ppa -$ sudo apt-get update -$ sudo apt-get install banshee - -``` - -#### VLC - -[VLC][83] is my favourite video player and it’s so awesome that it can play almost any audio and video format you throw at it. You can also use it to play internet radio, record desktop sessions, and stream movies online.
- -Install **VLC** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:videolan/stable-daily -$ sudo apt-get update -$ sudo apt-get install vlc - -``` - -#### Kodi - -[Kodi][84] is among the world’s most famous media players and it comes as a full-fledged media centre app for playing all things media whether locally or remotely. - -Install **Kodi** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install software-properties-common -$ sudo add-apt-repository ppa:team-xbmc/ppa -$ sudo apt-get update -$ sudo apt-get install kodi - -``` - -#### SMPlayer - -[SMPlayer][85] is a GUI for the award-winning **MPlayer** and it is capable of handling all popular media formats, coupled with the ability to stream from YouTube, cast to Chromecast, and download subtitles. - -Install **SMPlayer** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:rvm/smplayer -$ sudo apt-get update -$ sudo apt-get install smplayer - -``` - -### Virtualization Tools - -#### VirtualBox - -[VirtualBox][86] is an open source app created for general-purpose OS virtualization and it can be run on servers, desktops, and embedded systems. - -Install **VirtualBox** on **Ubuntu** and **Debian** , using following commands. -``` -$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add - -$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add - -$ sudo apt-get update -$ sudo apt-get install virtualbox-5.2 -$ virtualbox - -``` - -#### VMWare - -[VMware][87] is a digital workspace that provides platform virtualization and cloud computing services to customers and is reportedly the first to successfully virtualize x86 architecture systems. One of its products, VMware Workstation, allows users to run multiple OSes as virtual machines. - -For installation, read our article “[How to Install VMware Workstation Pro on Ubuntu][88]“.
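Besides the GUI, VirtualBox can also be scripted through its `VBoxManage` CLI, which is handy on headless servers. A rough sketch — the VM name `testvm` and the memory/CPU sizes are made-up examples, and the commands only run when VirtualBox is actually installed:

```shell
# Sketch: defining a VM headlessly with VBoxManage (VirtualBox's CLI).
# Guarded so it is a no-op on machines without VirtualBox installed.
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage createvm --name "testvm" --ostype Ubuntu_64 --register  # define and register the VM
    VBoxManage modifyvm "testvm" --memory 2048 --cpus 2                # give it 2 GB RAM, 2 vCPUs
    VBoxManage list vms                                                # confirm it exists
    RESULT="vm defined"
else
    RESULT="VBoxManage not found; skipping"
fi
echo "$RESULT"
```

From there, `VBoxManage startvm "testvm" --type headless` would boot the machine without a display attached.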
- -### Web Browsers - -#### Chrome - -[Google Chrome][89] is undoubtedly the most popular browser. Known for its speed, simplicity, security, and beauty following Google’s Material Design trend, Chrome is a browser that web developers cannot do without. It is also free to use and based on the open source Chromium project. - -Install **Google Chrome** on **Ubuntu** and **Debian** , using following commands. -``` -$ wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add - -$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' -$ sudo apt-get update -$ sudo apt-get install google-chrome-stable - -``` - -#### Firefox - -[Firefox Quantum][90] is a beautiful, speedy, task-ready, and customizable browser capable of any browsing task that you throw at it. It is also free, open source, and packed with developer-friendly tools that are easy for even beginners to get up and running with. - -Install **Firefox Quantum** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:mozillateam/firefox-next -$ sudo apt update && sudo apt upgrade -$ sudo apt install firefox - -``` - -#### Vivaldi - -[Vivaldi][91] is a free Chromium-based browser project that aims to perfect Chrome’s features with a couple more feature additions of its own. It is known for its colourful panels, memory-friendly performance, and flexibility. - -[**Download Vivaldi for Ubuntu][91] - -That concludes our list for today. Did I skip a famous title? Tell me about it in the comments section below. - -Don’t forget to share this post and to subscribe to our newsletter to get the latest publications from FossMint. - - -------------------------------------------------------------------------------- - -via: https://www.fossmint.com/most-used-linux-applications/ - -作者:[Martins D.
Okoi][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.fossmint.com/author/dillivine/ -[1]:https://plus.google.com/share?url=https://www.fossmint.com/most-used-linux-applications/ (Share on Google+) -[2]:https://www.linkedin.com/shareArticle?mini=true&url=https://www.fossmint.com/most-used-linux-applications/ (Share on LinkedIn) -[3]:https://www.fossmint.com/awesome-linux-software/ -[4]:https://rsync.samba.org/ -[5]:https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/ -[6]:https://github.com/teejee2008/timeshift -[7]:https://deluge-torrent.org/ -[8]:https://www.qbittorrent.org/ -[9]:https://transmissionbt.com/ -[10]:https://www.dropbox.com/ -[11]:https://www.google.com/drive/ -[12]:https://www.fossmint.com/best-google-drive-clients-for-linux/ -[13]:https://mega.nz/ -[14]:https://mega.nz/sync!linux -[15]:https://www.vim.org/ -[16]:https://www.gnu.org/s/emacs/ -[17]:https://www.nano-editor.org/ -[18]:https://aria2.github.io/ -[19]:http://ugetdm.com/ -[20]:http://xdman.sourceforge.net/ -[21]:https://www.thunderbird.net/ -[22]:https://github.com/GNOME/geary -[23]:https://github.com/GNOME/evolution -[24]:https://www.gnucash.org/ -[25]:https://kmymoney.org/ -[26]:https://www.eclipse.org/ide/ -[27]:https://www.tecmint.com/install-eclipse-oxygen-ide-in-ubuntu-debian/ -[28]:https://netbeans.org/ -[29]:https://www.tecmint.com/install-netbeans-ide-in-ubuntu-debian-linux-mint/ -[30]:http://brackets.io/ -[31]:https://ide.atom.io/ -[32]:http://lighttable.com/ -[33]:https://code.visualstudio.com/ -[34]:https://code.visualstudio.com/download -[35]:https://www.pidgin.im/ -[36]:https://www.skype.com/ -[37]:https://wiki.gnome.org/Apps/Empathy -[38]:https://www.clamav.net/ -[39]:https://dave-theunsub.github.io/clamtk/ -[40]:https://github.com/linuxmint/cinnamon-desktop 
-[41]:https://mate-desktop.org/ -[42]:https://www.gnome.org/ -[43]:https://www.kde.org/plasma-desktop -[44]:https://github.com/nzjrs/gnome-tweak-tool -[45]:https://github.com/oguzhaninan/Stacer -[46]:https://www.bleachbit.org/ -[47]:https://www.bleachbit.org/download -[48]:https://github.com/GNOME/gnome-terminal -[49]:https://konsole.kde.org/ -[50]:https://gnometerminator.blogspot.com/p/introduction.html -[51]:http://guake-project.org/ -[52]:https://ardour.org/ -[53]:https://www.audacityteam.org/ -[54]:https://www.gimp.org/ -[55]:https://krita.org/en/ -[56]:https://www.lwks.com/ -[57]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206 -[58]:https://www.openshot.org/ -[59]:http://www.pitivi.org/ -[60]:https://wiki.gnome.org/Apps/Rhythmbox -[61]:https://gnumdk.github.io/lollypop-web/ -[62]:https://amarok.kde.org/en -[63]:https://www.clementine-player.org/ -[64]:https://cmus.github.io/ -[65]:https://www.calligra.org/tour/calligra-suite/ -[66]:https://www.libreoffice.org/ -[67]:https://www.wps.com/ -[68]:http://wps-community.org/downloads -[69]:http://shutter-project.org/ -[70]:https://launchpad.net/kazam -[71]:https://gitlab.gnome.org/GNOME/gnome-screenshot -[72]:http://www.maartenbaert.be/simplescreenrecorder/ -[73]:http://recordmydesktop.sourceforge.net/about.php -[74]:https://atom.io/ -[75]:https://www.sublimetext.com/ -[76]:https://www.geany.org/ -[77]:https://wiki.gnome.org/Apps/Gedit -[78]:https://everdo.net/ -[79]:https://www.fossmint.com/evernote-alternatives-for-linux/ -[80]:https://everdo.net/linux/ -[81]:https://taskwarrior.org/ -[82]:http://banshee.fm/ -[83]:https://www.videolan.org/ -[84]:https://kodi.tv/ -[85]:https://www.smplayer.info/ -[86]:https://www.virtualbox.org/wiki/VirtualBox -[87]:https://www.vmware.com/ -[88]:https://www.tecmint.com/install-vmware-workstation-in-linux/ -[89]:https://www.google.com/chrome/ -[90]:https://www.mozilla.org/en-US/firefox/ -[91]:https://vivaldi.com/ diff --git a/translated/tech/20180724 75 Most 
Used Essential Linux Applications of 2018.md b/translated/tech/20180724 75 Most Used Essential Linux Applications of 2018.md new file mode 100644 index 0000000000..96ca929009 --- /dev/null +++ b/translated/tech/20180724 75 Most Used Essential Linux Applications of 2018.md @@ -0,0 +1,985 @@ +2018 年 75 个最常用的 Linux 应用程序 +====== + +对于许多应用程序来说,2018年是非常好的一年,尤其是免费开源的应用程序。尽管各种 Linux 发行版都自带了很多默认的应用程序,但用户也可以自由地选择使用它们或者其它任何免费或付费替代方案。 + +下面汇总了[一系列的 Linux 应用程序][3],这些应用程序都能够在 Linux 系统上安装,尽管还有很多其它选择。以下汇总中的任何应用程序都属于其类别中最常用的应用程序,如果你还没有用过,欢迎试用一下! + +### 备份工具 + +#### Rsync + +[Rsync][4] 是一个开源的、带宽友好的工具,它用于执行快速的增量文件传输,而且它也是一个免费工具。 +``` +$ rsync [OPTION...] SRC... [DEST] + +``` + +想要了解更多示例和用法,可以参考《[10 个使用 Rsync 命令的实际例子][5]》。 + +#### Timeshift + +[Timeshift][6] 能够通过增量快照来保护用户的系统数据,而且可以按照日期恢复指定的快照,类似于 Mac OS 中的 Time Machine 功能和 Windows 中的系统还原功能。 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Timeshift-Create-Linux-Mint-Snapshot.png) + +### BT(BitTorrent) 客户端 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Torrent-Clients.png) + +#### Deluge + +[Deluge][7] 是一个漂亮的跨平台 BT 客户端,旨在优化 μTorrent 体验,并向用户免费提供服务。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Deluge`。 +``` +$ sudo add-apt-repository ppa:deluge-team/ppa +$ sudo apt-get update +$ sudo apt-get install deluge + +``` + +#### qBittorent + +[qBittorent][8] 是一个开源的 BT 客户端,旨在提供类似 μTorrent 的免费替代方案。 + +使用以下命令在 Ubuntu 和 Debian 安装 `qBittorent`。 +``` +$ sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable +$ sudo apt-get update +$ sudo apt-get install qbittorrent + +``` + +#### Transmission + +[Transmission][9] 是一个强大的 BT 客户端,它主要关注速度和易用性,一般在很多 Linux 发行版上都有预装。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Transmission`。 +``` +$ sudo add-apt-repository ppa:transmissionbt/ppa +$ sudo apt-get update +$ sudo apt-get install transmission-gtk transmission-cli transmission-common transmission-daemon + +``` + +### 云存储 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Cloud-Storage.png) + +#### Dropbox + +[Dropbox][10] 
团队在今年早些时候给他们的云服务换了一个名字,也为客户提供了更好的性能和集成了更多应用程序。Dropbox 会向用户免费提供 2 GB 存储空间。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Dropbox`。 +``` +$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | tar xzf - [On 32-Bit] +$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf - [On 64-Bit] +$ ~/.dropbox-dist/dropboxd + +``` + +#### Google Drive + +[Google Drive][11] 是 Google 提供的云服务解决方案,这已经是一个广为人知的服务了。与 Dropbox 一样,可以通过它在所有联网的设备上同步文件。它免费提供了 15 GB 存储空间,包括Gmail、Google 图片、Google 地图等服务。 + +参考阅读:[5 个适用于 Linux 的 Google Drive 客户端][12] + +#### Mega + +[Mega][13] 也是一个出色的云存储解决方案,它的亮点除了高度的安全性之外,还有为用户免费提供高达 50 GB 的免费存储空间。它使用端到端加密,以确保用户的数据安全,所以如果忘记了恢复密钥,用户自己也无法访问到存储的数据。 + +参考阅读:[在 Ubuntu 下载 Mega 云存储客户端][14] + +### 命令行编辑器 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Commandline-Editors.png) + +#### Vim + +[Vim][15] 是 vi 文本编辑器的开源克隆版本,它的主要目的是可以高度定制化并能够处理任何类型的文本。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Vim`。 +``` +$ sudo add-apt-repository ppa:jonathonf/vim +$ sudo apt update +$ sudo apt install vim + +``` + +#### Emacs + +[Emacs][16] 是一个高度可配置的文本编辑器,最流行的一个分支 GNU Emacs 是用 Lisp 和 C 编写的,它的最大特点是可以自文档化、可扩展和可自定义。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Emacs`。 +``` +$ sudo add-apt-repository ppa:kelleyk/emacs +$ sudo apt update +$ sudo apt install emacs25 + +``` + +#### Nano + +[Nano][17] 是一款功能丰富的命令行文本编辑器,比较适合高级用户。它可以通过多个终端进行不同功能的操作。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Nano`。 +``` +$ sudo add-apt-repository ppa:n-muench/programs-ppa +$ sudo apt-get update +$ sudo apt-get install nano + +``` + +### 下载器 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Download-Managers.png) + +#### Aria2 + +[Aria2][18] 是一个开源的、轻量级的、多软件源和多协议的命令行下载器,它支持 Metalinks、torrents、HTTP/HTTPS、SFTP 等多种协议。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Aria2`。 +``` +$ sudo apt-get install aria2 + +``` + +#### uGet + +[uGet][19] 已经成为 Linux 各种发行版中排名第一的开源下载器,它可以处理任何下载任务,包括多连接、队列、类目等。 + +使用以下命令在 Ubuntu 和 Debian 安装 `uGet`。 +``` +$ sudo add-apt-repository ppa:plushuang-tw/uget-stable +$ sudo apt update +$ sudo apt 
install uget + +``` + +#### XDM + +[XDM][20](Xtreme Download Manager)是一个使用 Java 编写的开源下载软件。和其它下载器一样,它可以结合队列、种子、浏览器使用,而且还带有视频采集器和智能调度器。 + +使用以下命令在 Ubuntu 和 Debian 安装 `XDM`。 +``` +$ sudo add-apt-repository ppa:noobslab/apps +$ sudo apt-get update +$ sudo apt-get install xdman + +``` + +### 电子邮件客户端 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Email-Clients.png) + +#### Thunderbird + +[Thunderbird][21] 是最受欢迎的电子邮件客户端之一。它的优点包括免费、开源、可定制、功能丰富,而且最重要的是安装过程也很简便。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Thunderbird`。 +``` +$ sudo add-apt-repository ppa:ubuntu-mozilla-security/ppa +$ sudo apt-get update +$ sudo apt-get install thunderbird + +``` + +#### Geary + +[Geary][22] 是一个基于 WebKitGTK+ 的开源电子邮件客户端。它是一个免费开源的功能丰富的软件,并被 GNOME 项目收录。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Geary`。 +``` +$ sudo add-apt-repository ppa:geary-team/releases +$ sudo apt-get update +$ sudo apt-get install geary + +``` + +#### Evolution + +[Evolution][23] 是一个免费开源的电子邮件客户端,可以用于电子邮件、会议日程、备忘录和联系人的管理。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Evolution`。 +``` +$ sudo add-apt-repository ppa:gnome3-team/gnome3-staging +$ sudo apt-get update +$ sudo apt-get install evolution + +``` + +### 财务软件 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Accounting-Software.png) + +#### GnuCash + +[GnuCash][24] 是一款免费的跨平台开源软件,它适用于个人和中小型企业的财务任务。 + +使用以下命令在 Ubuntu 和 Debian 安装 `GnuCash`。 +``` +$ sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu $(lsb_release -sc)-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list' +$ sudo apt-get update +$ sudo apt-get install gnucash + +``` + +#### KMyMoney + +[KMyMoney][25] 是一个财务管理软件,它可以提供商用或个人理财所需的大部分主要功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `KmyMoney`。 +``` +$ sudo add-apt-repository ppa:claydoh/kmymoney2-kde4 +$ sudo apt-get update +$ sudo apt-get install kmymoney + +``` + +### IDE 和编辑器 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-IDE-Editors.png) + +#### Eclipse IDE + +[Eclipse][26] 是最广为使用的 Java IDE,它包括一个基本工作空间和一个用于自定义编程环境的强大的的插件配置系统。 + +关于 Eclipse IDE 的安装,可以参考 
[如何在 Debian 和 Ubuntu 上安装 Eclipse IDE][27] 这一篇文章。 + +#### Netbeans IDE + +[Netbeans][28] 是一个相当受用户欢迎的 IDE,它支持使用 Java、PHP、HTML 5、JavaScript、C/C++ 或其他语言编写移动应用,桌面软件和 web 应用。 + +关于 Netbeans IDE 的安装,可以参考 [如何在 Debian 和 Ubuntu 上安装 Netbeans IDE][29] 这一篇文章。 + +#### Brackets + +[Brackets][30] 是由 Adobe 开发的高级文本编辑器,它带有可视化工具,支持预处理程序,以及用于 web 开发的以设计为中心的用户流程。对于熟悉它的用户,它可以发挥 IDE 的作用。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Brackets`。 +``` +$ sudo add-apt-repository ppa:webupd8team/brackets +$ sudo apt-get update +$ sudo apt-get install brackets + +``` + +#### Atom IDE + +[Atom IDE][31] 是一个加强版的 Atom 编辑器,它添加了大量扩展和库以提高性能和增加功能。总之,它是各方面都变得更强大了的 Atom 。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Atom`。 +``` +$ sudo apt-get install snapd +$ sudo snap install atom --classic + +``` + +#### Light Table + +[Light Table][32] 号称下一代的 IDE,它提供了数据流量统计和协作编程等的强大功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Light Table`。 +``` +$ sudo add-apt-repository ppa:dr-akulavich/lighttable +$ sudo apt-get update +$ sudo apt-get install lighttable-installer + +``` + +#### Visual Studio Code + +[Visual Studio Code][33] 是由微软开发的代码编辑器,它包含了文本编辑器所需要的最先进的功能,包括语法高亮、自动完成、代码调试、性能统计和图表显示等功能。 + +参考阅读:[在Ubuntu 下载 Visual Studio Code][34] + +### 即时通信工具 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-IM-Clients.png) + +#### Pidgin + +[Pidgin][35] 是一个开源的即时通信工具,它几乎支持所有聊天平台,还支持额外扩展功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Pidgin`。 +``` +$ sudo add-apt-repository ppa:jonathonf/backports +$ sudo apt-get update +$ sudo apt-get install pidgin + +``` + +#### Skype + +[Skype][36] 也是一个广为人知的软件了,任何感兴趣的用户都可以在 Linux 上使用。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Skype`。 +``` +$ sudo apt install snapd +$ sudo snap install skype --classic + +``` + +#### Empathy + +[Empathy][37] 是一个支持多协议语音、视频聊天、文本和文件传输的即时通信工具。它还允许用户添加多个服务的帐户,并用其与所有服务的帐户进行交互。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Empathy`。 +``` +$ sudo apt-get install empathy + +``` + +### Linux 防病毒工具 + +#### ClamAV/ClamTk + +[ClamAV][38] 是一个开源的跨平台命令行防病毒工具,用于检测木马、病毒和其他恶意代码。而 [ClamTk][39] 则是它的前端 GUI。 + +使用以下命令在 Ubuntu 和 Debian 安装 `ClamAV` 
和 `ClamTk`。 +``` +$ sudo apt-get install clamav +$ sudo apt-get install clamtk + +``` + +### Linux 桌面环境 + +#### Cinnamon + +[Cinnamon][40] 是 GNOME 3 的免费开源衍生产品,它遵循传统的 桌面比拟desktop metaphor 约定。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Cinnamon`。 +``` +$ sudo add-apt-repository ppa:embrosyn/cinnamon +$ sudo apt update +$ sudo apt install cinnamon-desktop-environment lightdm + +``` + +#### Mate + +[Mate][41] 桌面环境是 GNOME 2 的衍生和延续,目的是在 Linux 上通过使用传统的桌面比拟提供有一个吸引力的 UI。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Mate`。 +``` +$ sudo apt install tasksel +$ sudo apt update +$ sudo tasksel install ubuntu-mate-desktop + +``` + +#### GNOME + +[GNOME][42] 是由一些免费和开源应用程序组成的桌面环境,它可以运行在任何 Linux 发行版和大多数 BSD 衍生版本上。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Gnome`。 +``` +$ sudo apt install tasksel +$ sudo apt update +$ sudo tasksel install ubuntu-desktop + +``` + +#### KDE + +[KDE][43] 由 KDE 社区开发,它为用户提供图形解决方案以控制操作系统并执行不同的计算任务。 + +使用以下命令在 Ubuntu 和 Debian 安装 `KDE`。 +``` +$ sudo apt install tasksel +$ sudo apt update +$ sudo tasksel install kubuntu-desktop + +``` + +### Linux 维护工具 + +#### GNOME Tweak Tool + +[GNOME Tweak Tool][44] 是用于自定义和调整 GNOME 3 和 GNOME Shell 设置的流行工具。 + +使用以下命令在 Ubuntu 和 Debian 安装 `GNOME Tweak Tool`。 +``` +$ sudo apt install gnome-tweak-tool + +``` + +#### Stacer + +[Stacer][45] 是一款用于监控和优化 Linux 系统的免费开源应用程序。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Stacer`。 +``` +$ sudo add-apt-repository ppa:oguzhaninan/stacer +$ sudo apt-get update +$ sudo apt-get install stacer + +``` + +#### BleachBit + +[BleachBit][46] 是一个免费的磁盘空间清理器,它也可用作隐私管理器和系统优化器。 + +参考阅读:[在 Ubuntu 下载 BleachBit][47] + +### Linux 终端工具 + +#### GNOME 终端 + +[GNOME 终端][48] 是 GNOME 的默认终端模拟器。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Gnome Terminal`。 +``` +$ sudo apt-get install gnome-terminal + +``` + +#### Konsole + +[Konsole][49] 是 KDE 的一个终端模拟器。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Konsole`。 +``` +$ sudo apt-get install konsole + +``` + +#### Terminator + +[Terminator][50] 是一个功能丰富的终端程序,它基于 GNOME 终端,并且专注于整理终端功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Terminator`。 +``` +$ sudo apt-get 
install terminator + +``` + +#### Guake + +[Guake][51] 是 GNOME 桌面环境下一个轻量级的可下拉式终端。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Guake`。 +``` +$ sudo apt-get install guake + +``` + +### 多媒体编辑工具 + +#### Ardour + +[Ardour][52] 是一款漂亮的的数字音频工作站Digital Audio Workstation,可以完成专业的录制、编辑和混音工作。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Ardour`。 +``` +$ sudo add-apt-repository ppa:dobey/audiotools +$ sudo apt-get update +$ sudo apt-get install ardour + +``` + +#### Audacity + +[Audacity][53] 是最著名的音频编辑软件之一,它是一款跨平台的开源多轨音频编辑器。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Audacity`。 +``` +$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity +$ sudo apt-get update +$ sudo apt-get install audacity + +``` + +#### GIMP + +[GIMP][54] 是 Photoshop 的开源替代品中最受欢迎的。这是因为它有多种可自定义的选项、第三方插件以及活跃的用户社区。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Gimp`。 +``` +$ sudo add-apt-repository ppa:otto-kesselgulasch/gimp +$ sudo apt update +$ sudo apt install gimp + +``` + +#### Krita + +[Krita][55] 是一款开源的绘画程序,它具有美观的 UI 和可靠的性能,也可以用作图像处理工具。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Krita`。 +``` +$ sudo add-apt-repository ppa:kritalime/ppa +$ sudo apt update +$ sudo apt install krita + +``` + +#### Lightworks + +[Lightworks][56] 是一款功能强大、灵活美观的专业视频编辑工具。它拥有上百种配套的视觉效果功能,可以处理任何编辑任务,毕竟这个软件已经有长达 25 年的视频处理经验。 + +参考阅读:[在 Ubuntu 下载 Lightworks][57] + +#### OpenShot + +[OpenShot][58] 是一款屡获殊荣的免费开源视频编辑器,这主要得益于其出色的性能和强大的功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Openshot`。 +``` +$ sudo add-apt-repository ppa:openshot.developers/ppa +$ sudo apt update +$ sudo apt install openshot-qt + +``` + +#### PiTiV + +[Pitivi][59] 也是一个美观的视频编辑器,它有优美的代码库、优质的社区,还支持优秀的协作编辑功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `PiTiV`。 +``` +$ flatpak install --user https://flathub.org/repo/appstream/org.pitivi.Pitivi.flatpakref +$ flatpak install --user http://flatpak.pitivi.org/pitivi.flatpakref +$ flatpak run org.pitivi.Pitivi//stable + +``` + +### 音乐播放器 + +#### Rhythmbox + +[Rhythmbox][60] 支持海量种类的音乐,目前被认为是最可靠的音乐播放器,并由 Ubuntu 自带。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Rhythmbox`。 +``` +$ sudo add-apt-repository ppa:fossfreedom/rhythmbox 
+$ sudo apt-get update +$ sudo apt-get install rhythmbox + +``` + +#### Lollypop + +[Lollypop][61] 是一款较为年轻的开源音乐播放器,它有很多高级选项,包括网络电台,滑动播放和派对模式。尽管功能繁多,它仍然尽量做到简单易管理。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Lollypop`。 +``` +$ sudo add-apt-repository ppa:gnumdk/lollypop +$ sudo apt-get update +$ sudo apt-get install lollypop + +``` + +#### Amarok + +[Amarok][62] 是一款功能强大的音乐播放器,它有一个直观的 UI 和大量的高级功能,而且允许用户根据自己的偏好去发现新音乐。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Amarok`。 +``` +$ sudo apt-get update +$ sudo apt-get install amarok + +``` + +#### Clementine + +[Clementine][63] 是一款 Amarok 风格的音乐播放器,因此和 Amarok 相似,也有直观的用户界面、先进的控制模块,以及让用户搜索和发现新音乐的功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Clementine`。 +``` +$ sudo add-apt-repository ppa:me-davidsansome/clementine +$ sudo apt-get update +$ sudo apt-get install clementine + +``` + +#### Cmus + +[Cmus][64] 可以说是最高效的的命令行界面音乐播放器了,它具有快速可靠的特点,也支持使用扩展。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Cmus`。 +``` +$ sudo add-apt-repository ppa:jmuc/cmus +$ sudo apt-get update +$ sudo apt-get install cmus + +``` + +### 办公软件 + +#### Calligra 套件 + +Calligra 套件为用户提供了一套总共 8 个应用程序,涵盖办公、管理、图表等各个范畴。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Calligra` 套件。 +``` +$ sudo apt-get install calligra + +``` + +#### LibreOffice + +[LibreOffice][66] 是开源社区中开发过程最活跃的办公套件,它以可靠性著称,也可以通过扩展来添加功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `LibreOffice`。 +``` +$ sudo add-apt-repository ppa:libreoffice/ppa +$ sudo apt update +$ sudo apt install libreoffice + +``` + +#### WPS Office + +[WPS Office][67] 是一款漂亮的办公套件,它有一个很具现代感的 UI。 + +参考阅读:[在 Ubuntu 安装 WPS Office][68] + +### 屏幕截图工具 + +#### Shutter + +[Shutter][69] 允许用户截取桌面的屏幕截图,然后使用一些效果进行编辑,还支持上传和在线共享。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Shutter`。 +``` +$ sudo add-apt-repository -y ppa:shutter/ppa +$ sudo apt update +$ sudo apt install shutter + +``` + +#### Kazam + +[Kazam][70] 可以用于捕获屏幕截图,它的输出对于任何支持 VP8/WebM 和 PulseAudio 视频播放器都可用。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Kazam`。 +``` +$ sudo add-apt-repository ppa:kazam-team/unstable-series +$ sudo apt update +$ sudo apt install kazam python3-cairo 
python3-xlib + +``` + +#### Gnome Screenshot + +[Gnome Screenshot][71] 过去曾经和 Gnome 一起捆绑,但现在已经独立出来。它以易于共享的格式进行截屏。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Gnome Screenshot`。 +``` +$ sudo apt-get update +$ sudo apt-get install gnome-screenshot + +``` + +### 录屏工具 + +#### SimpleScreenRecorder + +[SimpleScreenRecorder][72] 面世时已经是录屏工具中的佼佼者,现在已成为 Linux 各个发行版中最有效、最易用的录屏工具之一。 + +使用以下命令在 Ubuntu 和 Debian 安装 `SimpleScreenRecorder`。 +``` +$ sudo add-apt-repository ppa:maarten-baert/simplescreenrecorder +$ sudo apt-get update +$ sudo apt-get install simplescreenrecorder + +``` + +#### recordMyDesktop + +[recordMyDesktop][73] 是一个开源的会话记录器,它也能记录桌面会话的音频。 + +使用以下命令在 Ubuntu 和 Debian 安装 `recordMyDesktop`。 +``` +$ sudo apt-get update +$ sudo apt-get install gtk-recordmydesktop + +``` + +### Text Editors + +#### Atom + +[Atom][74] 是由 GitHub 开发和维护的可定制文本编辑器。它是开箱即用的,但也可以使用扩展和主题自定义 UI 来增强其功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Atom`。 +``` +$ sudo apt-get install snapd +$ sudo snap install atom --classic + +``` + +#### Sublime Text + +[Sublime Text][75] 已经成为目前最棒的文本编辑器。它可定制、轻量灵活(即使打开了大量数据文件和加入了大量扩展),最重要的是可以永久免费使用。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Sublime Text`。 +``` +$ sudo apt-get install snapd +$ sudo snap install sublime-text + +``` + +#### Geany + +[Geany][76] 是一个内存友好的文本编辑器,它具有基本的IDE功能,可以显示加载时间、扩展库函数等。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Geany`。 +``` +$ sudo apt-get update +$ sudo apt-get install geany + +``` + +#### Gedit + +[Gedit][77] 以其简单著称,在很多 Linux 发行版都有预装,它具有文本编辑器都具有的优秀的功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Gedit`。 +``` +$ sudo apt-get update +$ sudo apt-get install gedit + +``` + +### 备忘录软件 + +#### Evernote + +[Evernote][78] 是一款云上的笔记程序,它带有待办列表和提醒功能,能够与不同类型的笔记完美配合。 + +Evernote 在 Linux 上没有官方提供的软件,但可以参考 [Linux 上的 6 个 Evernote 替代客户端][79] 这篇文章使用其它第三方工具。 + +#### Everdo + +[Everdo][78] 是一款美观,安全,易兼容的备忘软件,可以用于处理待办事项和其它笔记。如果你认为 Evernote 有所不足,相信 Everdo 会是一个好的替代。 + +参考阅读:[在 Ubuntu 下载 Everdo][80] + +#### Taskwarrior + +[Taskwarrior][81] 是一个用于管理个人任务的开源跨平台命令行应用,它的速度和无干扰的环境是它的两大特点。 + +使用以下命令在 Ubuntu 和 Debian 安装 
`Taskwarrior`。
+```
+$ sudo apt-get update
+$ sudo apt-get install taskwarrior
+
+```
+
+### 视频播放器
+
+#### Banshee
+
+[Banshee][82] 是一个开源的支持多格式的媒体播放器,于 2005 年开始开发并逐渐成长。
+
+使用以下命令在 Ubuntu 和 Debian 安装 `Banshee`。
+```
+$ sudo add-apt-repository ppa:banshee-team/ppa
+$ sudo apt-get update
+$ sudo apt-get install banshee
+
+```
+
+#### VLC
+
+[VLC][83] 是我最喜欢的视频播放器,它几乎可以播放任何格式的音频和视频,还可以播放网络电台、录制桌面会话以及在线播放电影。
+
+使用以下命令在 Ubuntu 和 Debian 安装 `VLC`。
+```
+$ sudo add-apt-repository ppa:videolan/stable-daily
+$ sudo apt-get update
+$ sudo apt-get install vlc
+
+```
+
+#### Kodi
+
+[Kodi][84] 是世界上最知名的媒体播放器之一,它有一个成熟的媒体中心,可以播放本地和远程的多媒体文件。
+
+使用以下命令在 Ubuntu 和 Debian 安装 `Kodi`。
+```
+$ sudo apt-get install software-properties-common
+$ sudo add-apt-repository ppa:team-xbmc/ppa
+$ sudo apt-get update
+$ sudo apt-get install kodi
+
+```
+
+#### SMPlayer
+
+[SMPlayer][85] 是 MPlayer 的 GUI 版本,能够处理所有流行的媒体格式,还具有下载字幕、播放 YouTube 视频以及支持 Chromecast 的功能。
+
+使用以下命令在 Ubuntu 和 Debian 安装 `SMPlayer`。
+```
+$ sudo add-apt-repository ppa:rvm/smplayer
+$ sudo apt-get update
+$ sudo apt-get install smplayer
+
+```
+
+### 虚拟化工具
+
+#### VirtualBox
+
+[VirtualBox][86] 是一个用于操作系统虚拟化的开源应用程序,在服务器、台式机和嵌入式系统上都可以运行。
+
+使用以下命令在 Ubuntu 和 Debian 安装 `VirtualBox`。
+```
+$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
+$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
+$ sudo apt-get update
+$ sudo apt-get install virtualbox-5.2
+$ virtualbox
+
+```
+
+#### VMware
+
+[VMware][87] 是一个为客户提供平台虚拟化和云计算服务的数字工作区,也是第一个成功将 x86 架构系统虚拟化的产品。VMware Workstation 系列产品允许用户在一台物理机上同时运行多个操作系统。
+
+参阅 [在 Ubuntu 上安装 VMware Workstation Pro][88] 可以了解 VMware 的安装。
+
+### 浏览器
+
+#### Chrome
+
+[Google Chrome][89] 无疑是最受欢迎的浏览器。Chrome 以其速度、简洁、安全、美观而受人喜爱,它遵循了 Google 的界面设计风格,是 web 开发人员不可缺少的浏览器,同时它也可以免费使用。
+
+使用以下命令在 Ubuntu 和 Debian 安装 `Google Chrome`。
+```
+$ wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
+$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
+$ sudo apt-get update
+$ sudo apt-get install google-chrome-stable
+
+```
+
+#### Firefox
+
+[Firefox Quantum][90] 是一款漂亮、快速、完善并且可自定义的浏览器。它是免费开源的,包含开发人员所需要的工具,对于初学者也没有任何使用门槛。
+
+使用以下命令在 Ubuntu 和 Debian 安装 `Firefox Quantum`。
+```
+$ sudo add-apt-repository ppa:mozillateam/firefox-next
+$ sudo apt update && sudo apt upgrade
+$ sudo apt install firefox
+
+```
+
+#### Vivaldi
+
+[Vivaldi][91] 是一个基于 Chromium 的免费项目,旨在通过更多的功能改进来完善 Chrome 的使用体验。色彩丰富的界面、良好的性能和强大的灵活性是它的几大特点。
+
+参考阅读:[在 Ubuntu 下载 Vivaldi][91]
+
+以上就是我的推荐,你还有更好的软件向大家分享吗?欢迎评论。
+
+--------------------------------------------------------------------------------
+
+via: https://www.fossmint.com/most-used-linux-applications/
+
+作者:[Martins D. Okoi][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[HankChow](https://github.com/HankChow)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.fossmint.com/author/dillivine/
+[1]:https://plus.google.com/share?url=https://www.fossmint.com/most-used-linux-applications/ "Share on Google+"
+[2]:https://www.linkedin.com/shareArticle?mini=true&url=https://www.fossmint.com/most-used-linux-applications/ "Share on LinkedIn"
+[3]:https://www.fossmint.com/awesome-linux-software/
+[4]:https://rsync.samba.org/
+[5]:https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
+[6]:https://github.com/teejee2008/timeshift
+[7]:https://deluge-torrent.org/
+[8]:https://www.qbittorrent.org/
+[9]:https://transmissionbt.com/
+[10]:https://www.dropbox.com/
+[11]:https://www.google.com/drive/
+[12]:https://www.fossmint.com/best-google-drive-clients-for-linux/
+[13]:https://mega.nz/
+[14]:https://mega.nz/sync!linux
+[15]:https://www.vim.org/
+[16]:https://www.gnu.org/s/emacs/
+[17]:https://www.nano-editor.org/ +[18]:https://aria2.github.io/ +[19]:http://ugetdm.com/ +[20]:http://xdman.sourceforge.net/ +[21]:https://www.thunderbird.net/ +[22]:https://github.com/GNOME/geary +[23]:https://github.com/GNOME/evolution +[24]:https://www.gnucash.org/ +[25]:https://kmymoney.org/ +[26]:https://www.eclipse.org/ide/ +[27]:https://www.tecmint.com/install-eclipse-oxygen-ide-in-ubuntu-debian/ +[28]:https://netbeans.org/ +[29]:https://www.tecmint.com/install-netbeans-ide-in-ubuntu-debian-linux-mint/ +[30]:http://brackets.io/ +[31]:https://ide.atom.io/ +[32]:http://lighttable.com/ +[33]:https://code.visualstudio.com/ +[34]:https://code.visualstudio.com/download +[35]:https://www.pidgin.im/ +[36]:https://www.skype.com/ +[37]:https://wiki.gnome.org/Apps/Empathy +[38]:https://www.clamav.net/ +[39]:https://dave-theunsub.github.io/clamtk/ +[40]:https://github.com/linuxmint/cinnamon-desktop +[41]:https://mate-desktop.org/ +[42]:https://www.gnome.org/ +[43]:https://www.kde.org/plasma-desktop +[44]:https://github.com/nzjrs/gnome-tweak-tool +[45]:https://github.com/oguzhaninan/Stacer +[46]:https://www.bleachbit.org/ +[47]:https://www.bleachbit.org/download +[48]:https://github.com/GNOME/gnome-terminal +[49]:https://konsole.kde.org/ +[50]:https://gnometerminator.blogspot.com/p/introduction.html +[51]:http://guake-project.org/ +[52]:https://ardour.org/ +[53]:https://www.audacityteam.org/ +[54]:https://www.gimp.org/ +[55]:https://krita.org/en/ +[56]:https://www.lwks.com/ +[57]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206 +[58]:https://www.openshot.org/ +[59]:http://www.pitivi.org/ +[60]:https://wiki.gnome.org/Apps/Rhythmbox +[61]:https://gnumdk.github.io/lollypop-web/ +[62]:https://amarok.kde.org/en +[63]:https://www.clementine-player.org/ +[64]:https://cmus.github.io/ +[65]:https://www.calligra.org/tour/calligra-suite/ +[66]:https://www.libreoffice.org/ +[67]:https://www.wps.com/ +[68]:http://wps-community.org/downloads 
+[69]:http://shutter-project.org/ +[70]:https://launchpad.net/kazam +[71]:https://gitlab.gnome.org/GNOME/gnome-screenshot +[72]:http://www.maartenbaert.be/simplescreenrecorder/ +[73]:http://recordmydesktop.sourceforge.net/about.php +[74]:https://atom.io/ +[75]:https://www.sublimetext.com/ +[76]:https://www.geany.org/ +[77]:https://wiki.gnome.org/Apps/Gedit +[78]:https://everdo.net/ +[79]:https://www.fossmint.com/evernote-alternatives-for-linux/ +[80]:https://everdo.net/linux/ +[81]:https://taskwarrior.org/ +[82]:http://banshee.fm/ +[83]:https://www.videolan.org/ +[84]:https://kodi.tv/ +[85]:https://www.smplayer.info/ +[86]:https://www.virtualbox.org/wiki/VirtualBox +[87]:https://www.vmware.com/ +[88]:https://www.tecmint.com/install-vmware-workstation-in-linux/ +[89]:https://www.google.com/chrome/ +[90]:https://www.mozilla.org/en-US/firefox/ +[91]:https://vivaldi.com/ + From 64590224e4820f7cd08ea711bd4d280bf5f855a9 Mon Sep 17 00:00:00 2001 From: brifuture Date: Sun, 7 Oct 2018 14:42:03 +0800 Subject: [PATCH 247/736] Translation completed --- ...stack JavaScript web app in three weeks.md | 202 ------------------ ...stack JavaScript web app in three weeks.md | 195 +++++++++++++++++ 2 files changed, 195 insertions(+), 202 deletions(-) delete mode 100644 sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md create mode 100644 translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md diff --git a/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md b/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md deleted file mode 100644 index 566d29d768..0000000000 --- a/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md +++ /dev/null @@ -1,202 +0,0 @@ -BriFuture is translating this article - -The user’s home dashboard in our app, AlignHow we built our first full-stack JavaScript web 
app in three weeks -============================================================ - -![](https://cdn-images-1.medium.com/max/2000/1*PgKBpQHRUgqpXcxtyehPZg.png) - -### A simple step-by-step guide to go from idea to deployed app - -My three months of coding bootcamp at the Grace Hopper Program have come to a close, and the title of this article is actually not quite true — I’ve now built  _three_  full-stack apps: [an e-commerce store from scratch][3], a [personal hackathon project][4] of my choice, and finally, a three-week capstone project. That capstone project was by far the most intensive— a three week journey with two teammates — and it is my proudest achievement from bootcamp. It is the first robust, complex app I have ever fully built and designed. - -As most developers know, even when you “know how to code”, it can be really overwhelming to embark on the creation of your first full-stack app. The JavaScript ecosystem is incredibly vast: with package managers, modules, build tools, transpilers, databases, libraries, and decisions to be made about all of them, it’s no wonder that so many budding coders never build anything beyond Codecademy tutorials. That’s why I want to walk you through a step-by-step guide of the decisions and steps my team took to create our live app, Align. - -* * * - -First, some context. Align is a web app that uses an intuitive timeline interface to help users set long-term goals and manage them over time.Our stack includes Firebase for back-end services and React on the front end. My teammates and I explain more in this short video: - -[video](https://youtu.be/YacM6uYP2Jo) - -Demoing Align @ Demo Day Live // July 10, 2017 - -So how did we go from Day 1, when we were assigned our teams, to the final live app? Here’s a rundown of the steps we took: - -* * * - -### Step 1: Ideate - -The first step was to figure out what exactly we wanted to build. In my past life as a consultant at IBM, I led ideation workshops with corporate leaders. 
Pulling from that, I suggested to my group the classic post-it brainstorming strategy, in which we all scribble out as many ideas as we can — even ‘stupid ones’ — so that people’s brains keep moving and no one avoids voicing ideas out of fear. - -![](https://cdn-images-1.medium.com/max/800/1*-M4xa9_HJylManvLoraqaQ.jpeg) - -After generating a few dozen app ideas, we sorted them into categories to gain a better understanding of what themes we were collectively excited about. In our group, we saw a clear trend towards ideas surrounding self-improvement, goal-setting, nostalgia, and personal development. From that, we eventually honed in on a specific idea: a personal dashboard for setting and managing long-term goals, with elements of memory-keeping and data visualization over time. - -From there, we created a set of user stories — descriptions of features we wanted to have, from an end-user perspective — to elucidate what exactly we wanted our app to do. - -### Step 2: Wireframe UX/UI - -Next, on a white board, we drew out the basic views we envisioned in our app. We incorporated our set of user stories to understand how these views would work in a skeletal app framework. - - -![](https://cdn-images-1.medium.com/max/400/1*r5FBoa8JsYOoJihDgrpzhg.jpeg) - - - -![](https://cdn-images-1.medium.com/max/400/1*0O8ZWiyUgWm0b8wEiHhuPw.jpeg) - - - -![](https://cdn-images-1.medium.com/max/400/1*y9Q5v-sF0PWmkhthcW338g.jpeg) - -These sketches ensured we were all on the same page, and provided a visual blueprint going forward of what exactly we were all working towards. - -### Step 3: Choose a data structure and type of database - -It was now time to design our data structure. Based on our wireframes and user stories, we created a list in a Google doc of the models we would need and what attributes each should include. We knew we needed a ‘goal’ model, a ‘user’ model, a ‘milestone’ model, and a ‘checkin’ model, as well as eventually a ‘resource’ model, and an ‘upload’ model. 
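To make the nesting concrete, here is a minimal sketch of what such models could look like as plain JavaScript objects mirroring a Firebase-style JSON tree. The field names and ids here are illustrative assumptions, not taken from the actual Align codebase:

```javascript
// Hypothetical sketch of the models described above. Firebase stores
// everything as one JSON tree, so each "model" is just a nested object
// keyed by an id; Milestones and Checkins live under their parent Goal.
const goals = {
  "goal-1": {
    owner: "user-1",                 // reference to a user id
    name: "Learn to play guitar",
    color: "#4287f5",
    milestones: {
      "milestone-1": {
        name: "Play a full song",
        date: "2017-09-01",
        checkins: {
          "checkin-1": { note: "Practiced chords", date: "2017-07-15" },
        },
      },
    },
  },
};

// A tiny helper that walks the nested tree and counts every check-in
// belonging to a goal, across all of its milestones.
function countCheckins(goal) {
  return Object.values(goal.milestones || {}).reduce(
    (total, milestone) => total + Object.keys(milestone.checkins || {}).length,
    0
  );
}

console.log(countCheckins(goals["goal-1"])); // prints 1
```

Because the tree is nested rather than joined, reading a Goal automatically brings its Milestones and Checkins along with it, which is exactly the trade-off discussed below.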
-
-![](https://cdn-images-1.medium.com/max/800/1*oA3mzyixVzsvnN_egw1xwg.png)
-Our initial sketch of our data models
-
-After informally sketching the models out, we needed to choose a _type_ of database: ‘relational’ vs. ‘non-relational’ (a.k.a. ‘SQL’ vs. ‘NoSQL’). Whereas SQL databases are table-based and need predefined schema, NoSQL databases are document-based and have dynamic schema for unstructured data.
-
-For our use case, it didn’t matter much whether we used a SQL or a NoSQL database, so we ultimately chose Google’s cloud NoSQL database Firebase for other reasons:
-
-1. It could hold user image uploads in its cloud storage
-
-2. It included WebSocket integration for real-time updating
-
-3. It could handle our user authentication and offer easy OAuth integration
-
-Once we chose a database, it was time to understand the relations between our data models. Since Firebase is NoSQL, we couldn’t create join tables or set up formal relations like _“Checkins belongTo Goals”_. Instead, we needed to figure out what the JSON tree would look like, and how the objects would be nested (or not). Ultimately, we structured our model like this:
-
-![](https://cdn-images-1.medium.com/max/800/1*py0hQy-XHZWmwff3PM6F1g.png)
-Our final Firebase data scheme for the Goal object. Note that Milestones & Checkins are nested under Goals.
-
- _(Note: Firebase prefers shallow, normalized data structures for efficiency, but for our use case, it made most sense to nest it, since we would never be pulling a Goal from the database without its child Milestones and Checkins.)_
-
-### Step 4: Set up GitHub and an agile workflow
-
-We knew from the start that staying organized and practicing agile development would serve us well. We set up a GitHub repo, on which we prevented merging to master to force ourselves to review each other’s code.
- - -![](https://cdn-images-1.medium.com/max/800/1*5kDNcvJpr2GyZ0YqLauCoQ.png) - -We also created an agile board on [Waffle.io][5], which is free and has easy integration with Github. On the Waffle board, we listed our user stories as well as bugs we knew we needed to fix. Later, when we started coding, we would each create git branches for the user story we were currently working on, moving it from swim lane to swim lane as we made progress. - - -![](https://cdn-images-1.medium.com/max/800/1*gnWqGwQsdGtpt3WOwe0s_A.gif) - -We also began holding “stand-up” meetings each morning to discuss the previous day’s progress and any blockers each of us were encountering. This meeting often decided the day’s flow — who would be pair programming, and who would work on an issue solo. - -I highly recommend some sort of structured workflow like this, as it allowed us to clearly define our priorities and make efficient progress without any interpersonal conflict. - -### Step 5: Choose & download a boilerplate - -Because the JavaScript ecosystem is so complicated, we opted not to build our app from absolute ground zero. It felt unnecessary to spend valuable time wiring up our Webpack build scripts and loaders, and our symlink that pointed to our project directory. My team chose the [Firebones][6] skeleton because it fit our use case, but there are many open-source skeleton options available to choose from. - -### Step 6: Write back-end API routes (or Firebase listeners) - -If we weren’t using a cloud-based database, this would have been the time to start writing our back-end Express routes to make requests to our database. But since we were using Firebase, which is already in the cloud and has a different way of communicating with code, we just worked to set up our first successful database listener. - -To ensure our listener was working, we coded out a basic user form for creating a Goal, and saw that, indeed, when we filled out the form, our database was live-updating. 
We were connected!
-
-### Step 7: Build a “Proof Of Concept”
-
-Our next step was to create a “proof of concept” for our app, or a prototype of the most difficult fundamental features to implement, demonstrating that our app  _could _ eventually exist. For us, this meant finding a front-end library to satisfactorily render timelines, and connecting it to Firebase successfully to display some seed data in our database.
-
-
-![](https://cdn-images-1.medium.com/max/800/1*d5Wu3fOlX8Xdqix1RPZWSA.png)
-Basic Victory.JS timelines
-
-We found Victory.JS, a React library built on D3, and spent a day reading the documentation and putting together a very basic example of a  _VictoryLine_  component and a  _VictoryScatter_  component to visually display data from the database. Indeed, it worked! We were ready to build.
-
-### Step 8: Code out the features
-
-Finally, it was time to build out all the exciting functionality of our app. This is a giant step that will obviously vary widely depending on the app you’re personally building. We looked at our wireframes and started coding out the individual user stories in our Waffle. This often included touching both front-end and back-end code (for example, creating a front-end form and also connecting it to the database). 
Our features ranged from major to minor, and included things like:
-
-* ability to create new goals, milestones, and checkins
-
-* ability to delete goals, milestones, and checkins
-
-* ability to change a timeline’s name, color, and details
-
-* ability to zoom in on timelines
-
-* ability to add links to resources
-
-* ability to upload media
-
-* ability to bubble up resources and media from milestones and checkins to their associated goals
-
-* rich text editor integration
-
-* user signup / authentication / OAuth
-
-* popover to view timeline options
-
-* loading screens
-
-For obvious reasons, this step took up the bulk of our time — this phase is where most of the meaty code happened, and each time we finished a feature, there were always more to build out!
-
-### Step 9: Choose and code the design scheme
-
-Once we had an MVP of the functionality we desired in our app, it was time to clean it up and make it pretty. My team used Material-UI for components like form fields, menus, and login tabs, which ensured everything looked sleek, polished, and coherent without much in-depth design knowledge.
-
-
-![](https://cdn-images-1.medium.com/max/800/1*PCRFAbsPBNPYhz6cBgWRCw.gif)
-This was one of my favorite features to code out. Its beauty is so satisfying!
-
-We spent a while choosing a color scheme and editing the CSS, which provided us a nice break from in-the-trenches coding. We also designed a logo and uploaded a favicon.
-
-### Step 10: Find and squash bugs
-
-While we should have been using test-driven development from the beginning, time constraints left us with precious little time for anything but features. This meant that we spent the final two days simulating every user flow we could think of and hunting our app for bugs. 
- - -![](https://cdn-images-1.medium.com/max/800/1*X8JUwTeCAkIcvhKofcbIDA.png) - -This process was not the most systematic, but we found plenty of bugs to keep us busy, including a bug in which the loading screen would last indefinitely in certain situations, and one in which the resource component had stopped working entirely. Fixing bugs can be annoying, but when it finally works, it’s extremely satisfying. - -### Step 11: Deploy the live app - -The final step was to deploy our app so it would be available live! Because we were using Firebase to store our data, we deployed to Firebase Hosting, which was intuitive and simple. If your back end uses a different database, you can use Heroku or DigitalOcean. Generally, deployment directions are readily available on the hosting site. - -We also bought a cheap domain name on Namecheap.com to make our app more polished and easy to find. - -![](https://cdn-images-1.medium.com/max/800/1*gAuM_vWpv_U53xcV3tQINg.png) - -* * * - -And that was it — we were suddenly the co-creators of a real live full-stack app that someone could use! If we had a longer runway, Step 12 would have been to run A/B testing on users, so we could better understand how actual users interact with our app and what they’d like to see in a V2. - -For now, however, we’re happy with the final product, and with the immeasurable knowledge and understanding we gained throughout this process. Check out Align [here][7]! - - -![](https://cdn-images-1.medium.com/max/800/1*KbqmSW-PMjgfWYWS_vGIqg.jpeg) -Team Align: Sara Kladky (left), Melanie Mohn (center), and myself. 
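
One last technical aside: the nested Goal scheme described back in Step 3 can be sketched as plain JavaScript objects. The field names below are made up for illustration (the article never shows the exact schema), but the shape demonstrates why a single read of a Goal also returns its child Milestones and Checkins.

```javascript
// Hypothetical sketch of the nested Goal scheme from Step 3.
// Field names are invented; Firebase keys nested records by
// generated push IDs, mimicked here as m1/c1.
const goals = {
  goal1: {
    name: 'Learn to play guitar',
    milestones: {
      m1: {
        name: 'Learn basic chords',
        checkins: {
          c1: { note: 'Practiced G and C', date: '2017-07-14' },
        },
      },
    },
  },
};

// Because Milestones and Checkins live under the Goal, one read
// of goals.goal1 brings back the whole subtree, with no joins.
const goal = goals.goal1;
const firstCheckin = goal.milestones.m1.checkins.c1;
console.log(firstCheckin.note); // Practiced G and C
```

In a normalized (shallow) layout, the same lookup would take three separate reads plus IDs linking the records, which is exactly the trade-off the Step 3 note describes.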
- --------------------------------------------------------------------------------- - -via: https://medium.com/ladies-storm-hackathons/how-we-built-our-first-full-stack-javascript-web-app-in-three-weeks-8a4668dbd67c?imm_mid=0f581a&cmp=em-web-na-na-newsltr_20170816 - -作者:[Sophia Ciocca ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://medium.com/@sophiaciocca?source=post_header_lockup -[1]:https://medium.com/@sophiaciocca?source=post_header_lockup -[2]:https://medium.com/@sophiaciocca?source=post_header_lockup -[3]:https://github.com/limitless-leggings/limitless-leggings -[4]:https://www.youtube.com/watch?v=qyLoInHNjoc -[5]:http://www.waffle.io/ -[6]:https://github.com/FullstackAcademy/firebones -[7]:https://align.fun/ -[8]:https://github.com/align-capstone/align -[9]:https://github.com/sophiaciocca -[10]:https://github.com/Kladky -[11]:https://github.com/melaniemohn diff --git a/translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md b/translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md new file mode 100644 index 0000000000..90448211c3 --- /dev/null +++ b/translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md @@ -0,0 +1,195 @@ +三周内构建 JavaScript 全栈 web 应用 +============================================================ + +![](https://cdn-images-1.medium.com/max/2000/1*PgKBpQHRUgqpXcxtyehPZg.png) +应用 Align 中,用户主页的控制面板 + +### 从构思到部署应用程序的简单分步指南 + +我在 Grace Hopper Program 为期三个月的编码训练营即将结束,实际上这篇文章的标题有些纰漏 —— 现在我已经构建了 _三个_ 全栈应用:[从零开始的电子商店(an e-commerce store from scratch)][3]、我个人的 [私人黑客马拉松项目(personal hackathon project)][4],还有这个“三周的结业项目”。这个项目是迄今为止强度最大的 —— 我和另外两名队友共同花费三周的时光 —— 而它也是我在训练营中最引以为豪的成就。这是我目前所构建和涉及的第一款稳定且复杂的应用。 + +如大多数开发者所知,即使你“知道怎么编写代码”,但真正要制作第一款全栈的应用却是非常困难的。JavaScript 
生态系统出奇的大:有包管理器,模块,构建工具,转译器,数据库,库文件,还要对上述所有东西进行选择,难怪如此多的编程新手除了 Codecademy 的教程外,做不了任何东西。这就是为什么我想让你体验这个决策的分步教程,跟着我们队伍的脚步,构建可用的应用。
+
+* * *
+
+首先,简单地说两句。Align 是一个 web 应用,它使用直观的时间线界面帮助用户管理时间、设定长期目标。我们的技术栈有:用于后端服务的 Firebase 和用于前端的 React。我和我的队友在这个短视频中解释得更详细:
+
+[video](https://youtu.be/YacM6uYP2Jo)
+
+展示 Align @ Demo Day Live // 2017 年 7 月 10 日
+
+从第 1 天(我们组建团队的那天)开始,直到最终应用的完成,我们是如何做的?这里是我们采取的步骤纲要:
+
+* * *
+
+### 第 1 步:构思
+
+第一步是弄清楚我们到底要构建什么东西。过去我在 IBM 中当咨询师的时候,我和合作组长一同带领着构思工作组。从那之后,我一直建议小组使用经典的头脑风暴策略,在会议中我们能够提出尽可能多的想法 —— 即使是 “愚蠢的想法” —— 这样每个人的大脑都在思考,没有人因顾虑而不敢发表意见。
+
+![](https://cdn-images-1.medium.com/max/800/1*-M4xa9_HJylManvLoraqaQ.jpeg)
+
+在产生了好几个关于应用的想法时,我们把这些想法分类记录下来,以便更好地理解我们大家都感兴趣的主题。在我们这个小组中,我们看到了想法的清晰趋势:自我改进、设定目标、怀旧,还有个人发展。我们最后从中决定了具体的想法:做一个用于设置和管理长期目标的控制面板,有保存记忆的元素,可以根据时间将数据可视化。
+
+从此,我们创作出了一系列用户故事(从一个终端用户的视角,对我们想要拥有的功能进行描述),阐明我们到底想要应用实现什么功能。
+
+### 第 2 步:UX/UI 示意图
+
+接下来,在一块白板上,我们画出了想象中应用的基本视图。结合用户故事,以便理解在应用基本框架中这些视图将会如何工作。
+
+![](https://cdn-images-1.medium.com/max/400/1*r5FBoa8JsYOoJihDgrpzhg.jpeg)
+
+
+
+![](https://cdn-images-1.medium.com/max/400/1*0O8ZWiyUgWm0b8wEiHhuPw.jpeg)
+
+
+
+![](https://cdn-images-1.medium.com/max/400/1*y9Q5v-sF0PWmkhthcW338g.jpeg)
+
+这些骨架确保我们意见统一,提供了可预见的蓝图,让我们向着计划的方向努力。
+
+### 第 3 步:选好数据结构和数据库类型
+
+到了设计数据结构的时候。基于我们的示意图和用户故事,我们在 Google doc 中制作了一个清单,它包含我们将会需要的模型和每个模型应该包含的属性。我们知道需要“目标(goal)”模型、“用户(user)”模型、“里程碑(milestone)”模型、“记录(checkin)”模型,还有最后的“资源(resource)”模型和“上传(upload)”模型。
+
+![](https://cdn-images-1.medium.com/max/800/1*oA3mzyixVzsvnN_egw1xwg.png)
+最初的数据模型结构
+
+在正式确定好这些模型后,我们需要选择某种 _类型_ 的数据库:“关系型的”还是“非关系型的”(也就是“SQL”还是“NoSQL”)。基于表的 SQL 数据库需要预定义的模式,而基于文档的 NoSQL 数据库则可以用动态模式来描述非结构化数据。
+
+对于我们这个情况,用 SQL 型还是 No-SQL 型的数据库没多大影响,由于下列原因,我们最终选择了 Google 的 NoSQL 云数据库 Firebase:
+
+1. 它能够把用户上传的图片保存在云端并存储起来
+
+2. 它包含 WebSocket 功能,能够实时更新
+
+3. 
它能够处理用户验证,并且提供简单的 OAuth 功能。
+
+我们确定了数据库后,就要理解数据模型之间的关系了。由于 Firebase 是 NoSQL 类型,我们无法创建联合表或者设置像 _“记录(Checkins)属于目标(Goals)”_ 这样的从属关系。因此我们需要弄清楚 JSON 树是什么样的,对象是怎样嵌套的(或者不是嵌套的关系)。最终,我们构建了像这样的模型:
+
+
+![](https://cdn-images-1.medium.com/max/800/1*py0hQy-XHZWmwff3PM6F1g.png)
+我们最终为目标(Goal)对象确定的 Firebase 数据格式。注意里程碑(Milestones)和记录(Checkins)对象嵌套在 Goals 中。
+
+_(注意:出于性能考虑,Firebase 更倾向于扁平、规范化的数据结构,但对于我们这种情况,需要在数据中进行嵌套,因为我们不会从数据库中获取目标(Goal)却不获取相应的子对象里程碑(Milestones)和记录(Checkins)。)_
+
+### 第 4 步:设置好 Github 和敏捷开发工作流
+
+我们知道,从一开始就保持井然有序、执行敏捷开发对我们有极大好处。我们设置好 Github 上的仓库,并禁止直接将代码合并到主(master)分支,这迫使我们互相审阅代码。
+
+
+![](https://cdn-images-1.medium.com/max/800/1*5kDNcvJpr2GyZ0YqLauCoQ.png)
+
+我们还在 [Waffle.io][5] 网站上创建了敏捷开发的面板,它是免费的,很容易集成到 Github。我们在 Waffle 面板上罗列出所有用户故事以及需要我们去修复的 bug。之后当我们开始编码时,我们每个人会为自己正在研究的每一个用户故事创建一个 git 分支,在完成工作后合并这一条条的分支。
+
+
+![](https://cdn-images-1.medium.com/max/800/1*gnWqGwQsdGtpt3WOwe0s_A.gif)
+
+我们还开始保持晨会的习惯,讨论前一天的工作和每一个人遇到的阻碍。会议常常决定了当天的流程 —— 哪些人要结对编程,哪些人要独自处理问题。
+
+我认为这种类型的工作流程非常好,因为它让我们能够清楚地确定自己的优先事项,不受人际矛盾干扰地高效推进工作。
+
+### 第 5 步:选择、下载样板文件
+
+由于 JavaScript 的生态系统过于复杂,我们不打算从最底层开始构建应用。把宝贵的时间花在连通 Webpack 构建脚本和加载器,把符号链接指向项目工程这些事情上感觉很没必要。我的团队选择了 [Firebones][6] 框架,因为它恰好适用于我们这个情况,当然还有很多可供选择的开源框架。
+
+### 第 6 步:编写后端 API 路由(或者 Firebase 监听器)
+
+如果我们没有用基于云的数据库,这时就应该开始编写向数据库发起查询的后端 Express 路由了。但是由于我们用的是 Firebase,它本身就是云端的,和代码交互的方式也有所不同,因此我们只需要设置好一个可用的数据库监听器。
+
+为了确保监听器在工作,我们用代码做出了用于创建目标(Goal)的基本用户表格,实际上当我们填完表格时,就看到数据库实时更新了。数据库就成功连接了!
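
第 6 步里“数据库监听器”的思路,可以用一段极简的内存版代码来示意。注意:下面的 createStore、onValue 等命名都是为演示而虚构的,并不是 Firebase 的真实 API,仅用来说明“写入数据后监听器被自动触发”这一机制:

```javascript
// 一个极简的内存版“数据库 + 监听器”示意(虚构 API,并非 Firebase)。
function createStore() {
  const data = {};
  const listeners = [];
  return {
    // 注册一个监听器,数据变化时会被调用
    onValue(fn) { listeners.push(fn); },
    // 写入数据,并通知所有监听器
    set(key, value) {
      data[key] = value;
      listeners.forEach(fn => fn(data));
    },
  };
}

const store = createStore();
let latest = null;
store.onValue(snapshot => { latest = snapshot; }); // 相当于“实时更新”的回调

// 模拟用户通过表格创建一个目标(Goal)
store.set('goal1', { name: '学会弹吉他' });
console.log(latest.goal1.name); // 学会弹吉他
```

真实的 Firebase 监听器也是同样的模式:先注册回调,之后每次写入都会自动收到最新数据,只不过数据保存在云端。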
+
+### 第 7 步:构建 “概念证明”
+
+接下来是为应用创建 “概念证明”,也可以说是实现起来最复杂的基本功能的原型,证明我们的应用 _可以_ 实现。对我们而言,这意味着要找个前端库来实现时间线的渲染,成功连接到 Firebase,显示数据库中的一些种子数据。
+
+
+![](https://cdn-images-1.medium.com/max/800/1*d5Wu3fOlX8Xdqix1RPZWSA.png)
+Victory.JS 绘制的简单时间线
+
+我们找到了基于 D3 构建的 React 库 Victory.JS,花了一天时间阅读文档,用 _VictoryLine_ 和 _VictoryScatter_ 组件实现了非常基础的示例,能够可视化地显示数据库中的数据。确实,它成功运行了!我们可以开始构建了。
+
+### 第 8 步:用代码实现功能
+
+最后,是时候构建出应用中那些令人期待的功能了。这一重要步骤显然会因你构建的应用不同而有很大差异。我们对照着示意图,开始编写 Waffle 面板上的一个个用户故事。这常常需要同时接触前端和后端代码(比如,创建一个前端表格同时要连接到数据库)。我们实现了包含以下这些大大小小的功能:
+
+* 能够创建新目标(goals)、里程碑(milestones)和记录(checkins)
+
+* 能够删除目标、里程碑和记录
+
+* 能够更改时间线的名称、颜色和详细内容
+
+* 能够缩放时间线
+
+* 能够为资源添加链接
+
+* 能够上传媒体文件
+
+* 能够把里程碑和记录中的资源和媒体汇集显示到相应的目标中
+
+* 集成富文本编辑器
+
+* 用户注册、验证、OAuth 验证
+
+* 弹出查看时间线选项
+
+* 加载画面
+
+出于显而易见的原因,这一步花了我们很多时间 —— 这一阶段产生了大部分实质性的代码,而且每当我们实现了一个功能,就会有更多的功能要构建!
+
+### 第 9 步:选择并实现设计方案
+
+当我们做出了具备所需功能的最小可用产品(MVP)后,就可以开始清理,对它进行美化了。像表单、菜单和登录栏这些组件,我的团队用的是 Material-UI,不需要很多深层次的设计知识,它也能确保每个组件看上去都光滑、精致而协调。
+
+![](https://cdn-images-1.medium.com/max/800/1*PCRFAbsPBNPYhz6cBgWRCw.gif)
+这是我最喜欢编写的功能之一。它美得令人心旷神怡。
+
+我们花了一些时间来选择配色方案和编写 CSS,这让我们从埋头编码中得到了一次美妙的休息。期间我们还设计了 logo,并上传了网站图标(favicon)。
+
+### 第 10 步:找出并消灭 bug
+
+我们本应从一开始就采用测试驱动开发的模式,但时间有限,我们那点时间只够用来实现功能。这意味着最后的两天时间我们花在了模拟我们能够想到的每一种用户流,并从应用中找出 bug 上。
+
+![](https://cdn-images-1.medium.com/max/800/1*X8JUwTeCAkIcvhKofcbIDA.png)
+
+这一步是最不具系统性的,但是我们发现了一堆够我们忙乎的 bug,其中一个是在某些情况下加载动画不会结束的 bug,还有一个是资源组件会完全停止运行的 bug。修复 bug 是件令人恼火的事情,但当软件终于可以运行时,又特别令人满足。
+
+### 第 11 步:应用上线
+
+最后一步是部署上线应用,这样才可以让用户使用它!由于我们使用 Firebase 存储数据,因此我们使用了 Firebase Hosting,它很直观也很简单。如果你的后端用的是其它数据库,你可以使用 Heroku 或者 DigitalOcean。一般来讲,托管网站上都有现成的部署说明。
+
+我们还在 Namecheap.com 上购买了一个便宜的域名,这让我们的应用更加完善,很容易被找到。
+
+![](https://cdn-images-1.medium.com/max/800/1*gAuM_vWpv_U53xcV3tQINg.png)
+
+* * *
+
+好了,这就是全部的过程 —— 转眼之间,我们就成了一款真实可用的全栈应用的共同创造者!如果要继续讲,那么第 12 步将会是对用户进行 A/B 测试,这样我们才能更好地理解:实际用户与这款应用交互的方式和他们想在 V2 版本中看到的新功能。
+
+但是,现在我们感到非常开心,不仅是因为成品,还因为我们从这个过程中获得了难以估量的知识和理解。点击 [这里][7] 查看 Align 应用!
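
再补充一点第 7 步的细节:要把“记录(checkin)”交给 VictoryLine 这样的组件绘制,通常需要先把它们整理成 {x, y} 坐标点。下面用几行普通 JavaScript 做个示意(其中 date、progress 等字段名是为演示虚构的,文中并未给出真实的数据格式):

```javascript
// 把一组“记录(checkin)”转换成时间线图表需要的坐标点。
// 字段名为演示虚构,并非 Align 的真实数据格式。
const checkins = [
  { date: '2017-07-01', progress: 20 },
  { date: '2017-07-08', progress: 45 },
  { date: '2017-07-15', progress: 80 },
];

// x 轴是时间,y 轴是进度,正好对应时间线的可视化
const points = checkins.map(c => ({ x: new Date(c.date), y: c.progress }));

console.log(points.length); // 3
console.log(points[2].y);   // 80
```

整理成这种形状之后,数据就可以直接传给图表组件去渲染了。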
+ +![](https://cdn-images-1.medium.com/max/800/1*KbqmSW-PMjgfWYWS_vGIqg.jpeg) +Align 团队:Sara Kladky (左), Melanie Mohn (中), 还有我自己. + +-------------------------------------------------------------------------------- + +via: https://medium.com/ladies-storm-hackathons/how-we-built-our-first-full-stack-javascript-web-app-in-three-weeks-8a4668dbd67c?imm_mid=0f581a&cmp=em-web-na-na-newsltr_20170816 + +作者:[Sophia Ciocca ][a] +译者:[BriFuture](https://github.com/BriFuture) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://medium.com/@sophiaciocca?source=post_header_lockup +[1]:https://medium.com/@sophiaciocca?source=post_header_lockup +[2]:https://medium.com/@sophiaciocca?source=post_header_lockup +[3]:https://github.com/limitless-leggings/limitless-leggings +[4]:https://www.youtube.com/watch?v=qyLoInHNjoc +[5]:http://www.waffle.io/ +[6]:https://github.com/FullstackAcademy/firebones +[7]:https://align.fun/ +[8]:https://github.com/align-capstone/align +[9]:https://github.com/sophiaciocca +[10]:https://github.com/Kladky +[11]:https://github.com/melaniemohn From 7c6c5353ba62944a8a7b8dadbff9bef1642323e0 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 7 Oct 2018 20:45:15 +0800 Subject: [PATCH 248/736] PRF:20180827 4 tips for better tmux sessions.md @lujun9972 --- ...0180827 4 tips for better tmux sessions.md | 42 +++++++++---------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/translated/tech/20180827 4 tips for better tmux sessions.md b/translated/tech/20180827 4 tips for better tmux sessions.md index 5e507985fb..979568a171 100644 --- a/translated/tech/20180827 4 tips for better tmux sessions.md +++ b/translated/tech/20180827 4 tips for better tmux sessions.md @@ -3,17 +3,17 @@ ![](https://fedoramagazine.org/wp-content/uploads/2018/08/tmux-4-tips-816x345.jpg) -tmux 是一个终端多路复用工具,它可以让你系统上的终端支持多面板。你可以安排好面板配置,在每个面板用运行不同进程,这通常可以更好的地用你的屏幕。我们在 [这篇早期的文章 ][1] 
中向读者介绍过这一强力工具。如果你已经开始使用 tmux 了,那么这里有一些技巧可以帮你更好地使用它。
+tmux 是一个终端多路复用工具,它可以让你系统上的终端支持多面板。你可以排列面板位置,在每个面板运行不同进程,这通常可以更好地利用你的屏幕。我们在 [这篇早期的文章][1] 中向读者介绍过这一强力工具。如果你已经开始使用 tmux 了,那么这里有一些技巧可以帮你更好地使用它。
 
-本文假设你当前的前缀键是 `Ctrl+b`。如果你已重新映射该前缀,只需在相应位置替换为你定义的前缀即可。。
+本文假设你当前的前缀键是 `Ctrl+b`。如果你已重新映射该前缀,只需在相应位置替换为你定义的前缀即可。
 
### 设置终端为自动使用 tmux
 
-使用 tmux 的一个最大好处就是可以随意的从会话中断开和重连。这使得远程登陆会话更加强力。你有没有遇到过丢失了与远程系统的连接,然后好希望能够恢复在远程系统上做过的那些工作的情况?tmux 能够解决这一问题。
+使用 tmux 的一个最大好处就是可以随意地从会话中断开和重连。这使得远程登录会话功能更加强大。你有没有遇到过丢失了与远程系统的连接,然后好希望能够恢复在远程系统上做过的那些工作的情况?tmux 能够解决这一问题。
 
-然而,有时在远程系统上工作时,你可能会忘记开启一个会话。避免出现这一情况的一个方法就是每次通过交互式 shell 登陆系统时都让 tmux 启动或附加上一个会话。
+然而,有时在远程系统上工作时,你可能会忘记开启会话。避免出现这一情况的一个方法就是每次通过交互式 shell 登录系统时都让 tmux 启动或附加上一个会话。
 
-在你远程系统上的 ~/.bash_profile 文件中加入下面内容:
+在你远程系统上的 `~/.bash_profile` 文件中加入下面内容:
 
```
if [ -z "$TMUX" ]; then
@@ -21,50 +21,50 @@ if [ -z "$TMUX" ]; then
fi
```
 
-然后注销远程系统,并使用 SSH 重新登录。你会发现你处在一个名为 default 的 tmux 会话中了。如果退出该会话,则下次登录时还会重新生成此会话。但更重要的是,若您正常地从会话中分离,那么下次登录时你会发现之前工作并没有丢失 - 这在连接中断时非常有用。
+然后注销远程系统,并使用 SSH 重新登录。你会发现你处在一个名为 `default` 的 tmux 会话中了。如果退出该会话,则下次登录时还会重新生成此会话。但更重要的是,若您正常地从会话中分离,那么下次登录时你会发现之前工作并没有丢失 - 这在连接中断时非常有用。
 
-你当然也可以将这段配置加入本地系统中。需要注意的是,大多数 GUI 界面的终端并不会自动使用这个 default 会话,因此它们并不是登陆 shell。虽然你可以修改这一行为,但它可能会导致终端嵌套执行附加到 tmux 会话这一动作从而导致会话不太可用,因此当进行此操作时请一定小心。
+你当然也可以将这段配置加入本地系统中。需要注意的是,大多数 GUI 界面的终端并不会自动使用这个 `default` 会话,因此它们并不是登录 shell。虽然你可以修改这一行为,但它可能会导致终端嵌套执行附加到 tmux 会话这一动作,从而导致会话不太可用,因此当进行此操作时请一定小心。
 
-### 使用 zoom 使注意力专注于单个进程
+### 使用缩放功能使注意力专注于单个进程
 
-然而 tmux 的目的就是在单个 session 中提供多窗口,多面板和多进程的能力,但有时候你需要专注。如果你正在与一个进程进行交互并且需要更多空间,或需要专注于某个任务,则可以使用 zoom 命令。该命令会将当前面板扩展,占据整个当前窗口的空间。
+虽然 tmux 的目的就是在单个会话中提供多窗口、多面板和多进程的能力,但有时候你需要专注。如果你正在与一个进程进行交互并且需要更多空间,或需要专注于某个任务,则可以使用缩放命令。该命令会将当前面板扩展,占据整个当前窗口的空间。
 
-Zoom 在其他情况下也很有用。比如,想象你在图形桌面上运行一个终端窗口。面板会使得从 tmux 会话中拷贝和粘帖多行内容变得相对困难。但若你对面板进行用 zoom 进行了缩放,就可以很容易地对多行数据进行拷贝/粘帖。
+缩放在其他情况下也很有用。比如,想象你在图形桌面上运行一个终端窗口。面板会使得从 tmux 会话中拷贝和粘贴多行内容变得相对困难。但若你缩放了面板,就可以很容易地对多行数据进行拷贝/粘贴。
 
-要对当前面板进行缩放,按下 `Ctrl+b,z`。需要回复的话,按下相同按键组合来回复面板。
+要对当前面板进行缩放,按下 `Ctrl+b, 
z`。需要恢复的话,按下相同按键组合来恢复面板。 ### 绑定一些有用的命令 -tmux 默认有大量的命令可用。但将一些更常用的操作绑定到容易记忆的快捷键会很有有。下面一些例子可以让会话变得更好用,你可以添加到 ~/.tmux.conf 文件中: +tmux 默认有大量的命令可用。但将一些更常用的操作绑定到容易记忆的快捷键会很有用。下面一些例子可以让会话变得更好用,你可以添加到 `~/.tmux.conf` 文件中: ``` bind r source-file ~/.tmux.conf \; display "Reloaded config" ``` -该命令重新读取你配置文件中的命令和键绑定。添加该条绑定后,退出所有的 tmux 会话然后重启一个会话。现在你做了任何更改后,只需要简单的按下 `Ctrl+b,r` 就能将修改的内容应用到现有的会话中了。 +该命令重新读取你配置文件中的命令和键绑定。添加该条绑定后,退出任意一个 tmux 会话然后重启一个会话。现在你做了任何更改后,只需要简单的按下 `Ctrl+b, r` 就能将修改的内容应用到现有的会话中了。 ``` bind V split-window -h bind H split-window ``` -这些命令可以很方便地对窗口进行横向切分(按下 Shift+V) 和纵向切分 (Shift+H)。 +这些命令可以很方便地对窗口进行横向切分(按下 `Shift+V`)和纵向切分(`Shift+H`)。 -若你想查看所有绑定的快捷键,按下 `Ctrl+B,?` 可以看到一个列表。你首先看到的应该是复制模式下的快捷键绑定,表示的是当你在 tmux 中进行复制粘帖时对应的快捷键。你添加的那两个键绑定会在前缀模式 (prefix mode) 中看到。请随意把玩吧! +若你想查看所有绑定的快捷键,按下 `Ctrl+B, ?` 可以看到一个列表。你首先看到的应该是复制模式下的快捷键绑定,表示的是当你在 tmux 中进行复制粘帖时对应的快捷键。你添加的那两个键绑定会在前缀模式prefix mode中看到。请随意把玩吧! -### Use powerline for great justice +### 使用 powerline 更清晰 -[如前文所示 ][2],powerline 工具是对 shell 的绝佳补充。而且它也兼容在 tmux 中使用。由于 tmux 接管了整个终端空间,powerline 窗口能理工的可不仅仅是更好的 shell 提示那么简单。 +[如前文所示][2],powerline 工具是对 shell 的绝佳补充。而且它也兼容在 tmux 中使用。由于 tmux 接管了整个终端空间,powerline 窗口能提供的可不仅仅是更好的 shell 提示那么简单。 - [![Screenshot of tmux powerline in git folder](https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53-1024x690.png)][3] +[![Screenshot of tmux powerline in git folder](https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53-1024x690.png)][3] -如果你还没有这么做,按照 [本文 ][4] 中的指示来安装该工具。然后[使用 sudo][5] 来安装附件: +如果你还没有这么做,按照 [这篇文章][4] 中的指示来安装该工具。然后[使用 sudo][5] 来安装附件: ``` sudo dnf install tmux-powerline ``` -然后重启会话,就会在底部看到一个漂亮的新状态栏。根据终端的宽度,默认的状态栏会显示你当前会话 ID,打开的窗口,系统信息,日期和时间,以及主机名。若你进入了使用 git 进行版本控制的项目目录中还能看到分支名和用色彩标注的版本库状态。 +接着重启会话,就会在底部看到一个漂亮的新状态栏。根据终端的宽度,默认的状态栏会显示你当前会话 ID、打开的窗口、系统信息、日期和时间,以及主机名。若你进入了使用 git 进行版本控制的项目目录中还能看到分支名和用色彩标注的版本库状态。 当然,这个状态栏具有很好的可配置性。享受你新增强的 tmux 会话吧,玩的开心点。 @@ -76,7 +76,7 @@ via: 
https://fedoramagazine.org/4-tips-better-tmux-sessions/ 作者:[Paul W. Frields][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9345060ad302ba0d4deaa92c4957956e95806018 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 7 Oct 2018 20:46:00 +0800 Subject: [PATCH 249/736] PUB:20180827 4 tips for better tmux sessions.md @lujun9972 https://linux.cn/article-10089-1.html --- .../20180827 4 tips for better tmux sessions.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180827 4 tips for better tmux sessions.md (100%) diff --git a/translated/tech/20180827 4 tips for better tmux sessions.md b/published/20180827 4 tips for better tmux sessions.md similarity index 100% rename from translated/tech/20180827 4 tips for better tmux sessions.md rename to published/20180827 4 tips for better tmux sessions.md From f70973a3a3dee0ca28278904d7a325b02aafa294 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 7 Oct 2018 21:21:16 +0800 Subject: [PATCH 250/736] PRF:20180201 Conditional Rendering in React using Ternaries and.md @GraveAccent --- ... 
Rendering in React using Ternaries and.md | 64 +++++++++---------- 1 file changed, 31 insertions(+), 33 deletions(-) diff --git a/translated/tech/20180201 Conditional Rendering in React using Ternaries and.md b/translated/tech/20180201 Conditional Rendering in React using Ternaries and.md index aa7ba0017e..92b2ae79ff 100644 --- a/translated/tech/20180201 Conditional Rendering in React using Ternaries and.md +++ b/translated/tech/20180201 Conditional Rendering in React using Ternaries and.md @@ -1,16 +1,15 @@ 在 React 条件渲染中使用三元表达式和 “&&” -============================================================ +======= ![](https://cdn-images-1.medium.com/max/2000/1*eASRJrCIVgsy5VbNMAzD9w.jpeg) -Photo by [Brendan Church][1] on [Unsplash][2] -React 组件可以通过多种方式决定渲染内容。你可以使用传统的 if 语句或 switch 语句。在本文中,我们将探讨一些替代方案。但要注意,如果你不小心,有些方案会带来自己的陷阱。 +React 组件可以通过多种方式决定渲染内容。你可以使用传统的 `if` 语句或 `switch` 语句。在本文中,我们将探讨一些替代方案。但要注意,如果你不小心,有些方案会带来自己的陷阱。 ### 三元表达式 vs if/else -假设我们有一个组件被传进来一个 `name` prop。 如果这个字符串非空,我们会显示一个问候语。否则,我们会告诉用户他们需要登录。 +假设我们有一个组件被传进来一个 `name` 属性。 如果这个字符串非空,我们会显示一个问候语。否则,我们会告诉用户他们需要登录。 -这是一个只实现了如上功能的无状态函数式组件。 +这是一个只实现了如上功能的无状态函数式组件(SFC)。 ``` const MyComponent = ({ name }) => { @@ -29,7 +28,7 @@ const MyComponent = ({ name }) => { }; ``` -这个很简单但是我们可以做得更好。这是使用三元运算符编写的相同组件。 +这个很简单但是我们可以做得更好。这是使用三元运算符conditional ternary operator编写的相同组件。 ``` const MyComponent = ({ name }) => ( @@ -41,86 +40,85 @@ const MyComponent = ({ name }) => ( 请注意这段代码与上面的例子相比是多么简洁。 -有几点需要注意。因为我们使用了箭头函数的单语句形式,所以隐含了return语句。另外,使用三元运算符允许我们省略掉重复的 `
` 标记。🎉 +有几点需要注意。因为我们使用了箭头函数的单语句形式,所以隐含了`return` 语句。另外,使用三元运算符允许我们省略掉重复的 `
` 标记。 ### 三元表达式 vs && -正如您所看到的,三元表达式用于表达 if/else 条件式非常好。但是对于简单的 if 条件式怎么样呢? +正如您所看到的,三元表达式用于表达 `if`/`else` 条件式非常好。但是对于简单的 `if` 条件式怎么样呢? -让我们看另一个例子。如果 isPro(一个布尔值)为真,我们将显示一个奖杯表情符号。我们也要渲染星星的数量(如果不是0)。我们可以这样写。 +让我们看另一个例子。如果 `isPro`(一个布尔值)为真,我们将显示一个奖杯表情符号。我们也要渲染星星的数量(如果不是 0)。我们可以这样写。 ``` const MyComponent = ({ name, isPro, stars}) => (
Hello {name} - {isPro ? '🏆' : null} + {isPro ? '♨' : null}
{stars ? (
- Stars:{'⭐️'.repeat(stars)} + Stars:{'☆'.repeat(stars)}
) : null}
); ``` -请注意 “else” 条件返回 null 。 这是因为三元表达式要有"否则"条件。 +请注意 `else` 条件返回 `null` 。 这是因为三元表达式要有“否则”条件。 -对于简单的 “if” 条件式,我们可以使用更合适的东西:&& 运算符。这是使用 “&&” 编写的相同代码。 +对于简单的 `if` 条件式,我们可以使用更合适的东西:`&&` 运算符。这是使用 `&&` 编写的相同代码。 ``` const MyComponent = ({ name, isPro, stars}) => (
Hello {name} - {isPro && '🏆'} + {isPro && '♨'}
{stars && (
- Stars:{'⭐️'.repeat(stars)} + Stars:{'☆'.repeat(stars)}
)}
); ``` -没有太多区别,但是注意我们消除了每个三元表达式最后面的 `: null` (else 条件式)。一切都应该像以前一样渲染。 +没有太多区别,但是注意我们消除了每个三元表达式最后面的 `: null` (`else` 条件式)。一切都应该像以前一样渲染。 +嘿!约翰得到了什么?当什么都不应该渲染时,只有一个 `0`。这就是我上面提到的陷阱。这里有解释为什么: -嘿!约翰得到了什么?当什么都不应该渲染时,只有一个0。这就是我上面提到的陷阱。这里有解释为什么。 - -[根据 MDN][3],一个逻辑运算符“和”(也就是`&&`): +[根据 MDN][3],一个逻辑运算符“和”(也就是 `&&`): > `expr1 && expr2` -> 如果 `expr1` 可以被转换成 `false` ,返回 `expr1`;否则返回 `expr2`。 如此,当与布尔值一起使用时,如果两个操作数都是 true,`&&` 返回 `true` ;否则,返回 `false`。 +> 如果 `expr1` 可以被转换成 `false` ,返回 `expr1`;否则返回 `expr2`。 如此,当与布尔值一起使用时,如果两个操作数都是 `true`,`&&` 返回 `true` ;否则,返回 `false`。 好的,在你开始拔头发之前,让我为你解释它。 -在我们这个例子里, `expr1` 是变量 `stars`,它的值是 `0`,因为0是 falsey 的值, `0` 会被返回和渲染。看,这还不算太坏。 +在我们这个例子里, `expr1` 是变量 `stars`,它的值是 `0`,因为 0 是假值,`0` 会被返回和渲染。看,这还不算太坏。 我会简单地这么写。 -> 如果 `expr1` 是 falsey,返回 `expr1` ,否则返回 `expr2` +> 如果 `expr1` 是假值,返回 `expr1` ,否则返回 `expr2`。 -所以,当对非布尔值使用 “&&” 时,我们必须让 falsy 的值返回 React 无法渲染的东西,比如说,`false` 这个值。 +所以,当对非布尔值使用 `&&` 时,我们必须让这个假值返回 React 无法渲染的东西,比如说,`false` 这个值。 我们可以通过几种方式实现这一目标。让我们试试吧。 ``` {!!stars && (
- {'⭐️'.repeat(stars)} + {'☆'.repeat(stars)}
)} ``` -注意 `stars` 前的双感叹操作符( `!!`)(呃,其实没有双感叹操作符。我们只是用了感叹操作符两次)。 +注意 `stars` 前的双感叹操作符(`!!`)(呃,其实没有双感叹操作符。我们只是用了感叹操作符两次)。 -第一个感叹操作符会强迫 `stars` 的值变成布尔值并且进行一次“非”操作。如果 `stars` 是 `0` ,那么 `!stars` 会 是 `true`。 +第一个感叹操作符会强迫 `stars` 的值变成布尔值并且进行一次“非”操作。如果 `stars` 是 `0` ,那么 `!stars` 会是 `true`。 -然后我们执行第二个`非`操作,所以如果 `stars` 是0,`!!stars` 会是 `false`。正好是我们想要的。 +然后我们执行第二个`非`操作,所以如果 `stars` 是 `0`,`!!stars` 会是 `false`。正好是我们想要的。 如果你不喜欢 `!!`,那么你也可以强制转换出一个布尔数比如这样(这种方式我觉得有点冗长)。 @@ -136,11 +134,11 @@ const MyComponent = ({ name, isPro, stars}) => ( #### 关于字符串 -空字符串与数字有一样的毛病。但是因为渲染后的空字符串是不可见的,所以这不是那种你很可能会去处理的难题,甚至可能不会注意到它。然而,如果你是完美主义者并且不希望DOM上有空字符串,你应采取我们上面对数字采取的预防措施。 +空字符串与数字有一样的毛病。但是因为渲染后的空字符串是不可见的,所以这不是那种你很可能会去处理的难题,甚至可能不会注意到它。然而,如果你是完美主义者并且不希望 DOM 上有空字符串,你应采取我们上面对数字采取的预防措施。 ### 其它解决方案 -一种可能的将来可扩展到其他变量的解决方案,是创建一个单独的 `shouldRenderStars` 变量。然后你用“&&”处理布尔值。 +一种可能的将来可扩展到其他变量的解决方案,是创建一个单独的 `shouldRenderStars` 变量。然后你用 `&&` 处理布尔值。 ``` const shouldRenderStars = stars > 0; @@ -151,7 +149,7 @@ return (
{shouldRenderStars && (
- {'⭐️'.repeat(stars)} + {'☆'.repeat(stars)}
)}
@@ -170,7 +168,7 @@ return (
{shouldRenderStars && (
- {'⭐️'.repeat(stars)} + {'☆'.repeat(stars)}
)}
@@ -181,7 +179,7 @@ return ( 我认为你应该充分利用这种语言。对于 JavaScript,这意味着为 `if/else` 条件式使用三元表达式,以及为 `if` 条件式使用 `&&` 操作符。 -我们可以回到每处都使用三元运算符的舒适区,但你现在消化了这些知识和力量,可以继续前进 && 取得成功了。 +我们可以回到每处都使用三元运算符的舒适区,但你现在消化了这些知识和力量,可以继续前进 `&&` 取得成功了。 -------------------------------------------------------------------------------- @@ -195,7 +193,7 @@ via: https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternar 作者:[Donavon West][a] 译者:[GraveAccent](https://github.com/GraveAccent) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 46ea7b7dc27d85bcc01ce2c2992d02de34280433 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 7 Oct 2018 21:22:20 +0800 Subject: [PATCH 251/736] PUB: 20180201 Conditional Rendering in React using Ternaries and.md @GraveAccent https://linux.cn/article-10090-1.html --- ...20180201 Conditional Rendering in React using Ternaries and.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180201 Conditional Rendering in React using Ternaries and.md (100%) diff --git a/translated/tech/20180201 Conditional Rendering in React using Ternaries and.md b/published/20180201 Conditional Rendering in React using Ternaries and.md similarity index 100% rename from translated/tech/20180201 Conditional Rendering in React using Ternaries and.md rename to published/20180201 Conditional Rendering in React using Ternaries and.md From 26118991d966da00d99d747224ca925c624f95ac Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 7 Oct 2018 22:46:52 +0800 Subject: [PATCH 252/736] PRF:20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md @MjSeven --- ...eshark on Debian and Ubuntu 16.04_17.10.md | 35 ++++++++----------- 1 file changed, 14 insertions(+), 21 deletions(-) diff --git a/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md 
b/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md index 0bcbe0d3e5..c482cd05e5 100644 --- a/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md +++ b/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md @@ -1,27 +1,20 @@ -在 Debian 9 / Ubuntu 16.04 / 17.10 中如何安装并使用 Wireshark +如何安装并使用 Wireshark ====== -作者 [Pradeep Kumar][1],首发于 2017 年 11 月 29 日,更新于 2017 年 11 月 29 日 - [![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2] -Wireshark 是免费的,开源的,跨平台的基于 GUI 的网络数据包分析器,可用于 Linux, Windows, MacOS, Solaris 等。它可以实时捕获网络数据包,并以人性化的格式呈现。Wireshark 允许我们监控网络数据包上升到微观层面。Wireshark 还有一个名为 `tshark` 的命令行实用程序,它与 Wireshark 执行相同的功能,但它是通过终端而不是 GUI。 +Wireshark 是自由开源的、跨平台的基于 GUI 的网络数据包分析器,可用于 Linux、Windows、MacOS、Solaris 等。它可以实时捕获网络数据包,并以人性化的格式呈现。Wireshark 允许我们监控网络数据包直到其微观层面。Wireshark 还有一个名为 `tshark` 的命令行实用程序,它与 Wireshark 执行相同的功能,但它是通过终端而不是 GUI。 -Wireshark 可用于网络故障排除,分析,软件和通信协议开发以及用于教育目的。Wireshark 使用 `pcap` 库来捕获网络数据包。 +Wireshark 可用于网络故障排除、分析、软件和通信协议开发以及用于教育目的。Wireshark 使用 `pcap` 库来捕获网络数据包。 Wireshark 具有许多功能: * 支持数百项协议检查 - * 能够实时捕获数据包并保存,以便以后进行离线分析 - * 许多用于分析数据的过滤器 - -* 捕获的数据可以被压缩和解压缩(to 校正:on the fly 什么意思?) 
- -* 支持各种文件格式的数据分析,输出也可以保存为 XML, CSV 和纯文本格式 - -* 数据可以从以太网,wifi,蓝牙,USB,帧中继,令牌环等多个接口中捕获 +* 捕获的数据可以即时压缩和解压缩 +* 支持各种文件格式的数据分析,输出也可以保存为 XML、CSV 和纯文本格式 +* 数据可以从以太网、wifi、蓝牙、USB、帧中继、令牌环等多个接口中捕获 在本文中,我们将讨论如何在 Ubuntu/Debian 上安装 Wireshark,并将学习如何使用 Wireshark 捕获网络数据包。 @@ -102,7 +95,7 @@ linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig ``` -在安装后,它将创建一个单独的 Wireshark 组,我们现在将我们的用户添加到组中,以便它可以与 Wireshark 一起使用,否则在启动 wireshark 时可能会出现 `permission denied(权限被拒绝)`错误。 +在安装后,它将创建一个单独的 Wireshark 组,我们现在将我们的用户添加到组中,以便它可以与 Wireshark 一起使用,否则在启动 wireshark 时可能会出现 “permission denied(权限被拒绝)”错误。 要将用户添加到 wireshark 组,执行以下命令: @@ -120,7 +113,7 @@ linuxtechi@nixhome:~$ wireshark [![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4] -点击 Wireshark 图标 +点击 Wireshark 图标。 [![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5] @@ -128,7 +121,7 @@ linuxtechi@nixhome:~$ wireshark [![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6] -点击 Wireshark 图标 +点击 Wireshark 图标。 [![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7] @@ -138,7 +131,7 @@ linuxtechi@nixhome:~$ wireshark [![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8] -所有这些都是我们可以捕获网络数据包的接口。根据你系统上的界面,此屏幕可能与你的不同。 +所有这些都是我们可以捕获网络数据包的接口。根据你系统上的接口,此屏幕可能与你的不同。 我们选择 `enp0s3` 来捕获该接口的网络流量。选择接口后,在我们网络上所有设备的网络数据包开始填充(参考下面的屏幕截图): @@ -146,11 +139,11 @@ linuxtechi@nixhome:~$ wireshark 第一次看到这个屏幕,我们可能会被这个屏幕上显示的数据所淹没,并且可能已经想过如何整理这些数据,但不用担心,Wireshark 的最佳功能之一就是它的过滤器。 -我们可以根据 IP 地址,端口号,也可以使用来源和目标过滤器,数据包大小等对数据进行排序和过滤,也可以将两个或多个过滤器组合在一起以创建更全面的搜索。我们也可以在 `Apply a Display Filter(应用显示过滤器)`选项卡中编写过滤规则,也可以选择已创建的规则。要选择之前构建的过滤器,请单击 `Apply a Display Filter(应用显示过滤器)`选项卡旁边的 `flag` 图标。 +我们可以根据 
IP 地址、端口号,也可以使用来源和目标过滤器、数据包大小等对数据进行排序和过滤,也可以将两个或多个过滤器组合在一起以创建更全面的搜索。我们也可以在 “Apply a Display Filter(应用显示过滤器)”选项卡中编写过滤规则,也可以选择已创建的规则。要选择之前构建的过滤器,请单击 “Apply a Display Filter(应用显示过滤器)”选项卡旁边的旗帜图标。 [![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10] -我们还可以根据颜色编码过滤数据,默认情况下,浅紫色是 TCP 流量,浅蓝色是 UDP 流量,黑色标识有错误的数据包,看看这些编码是什么意思,点击 `View -> Coloring Rules`,我们也可以改变这些编码。 +我们还可以根据颜色编码过滤数据,默认情况下,浅紫色是 TCP 流量,浅蓝色是 UDP 流量,黑色标识有错误的数据包,看看这些编码是什么意思,点击 “View -> Coloring Rules”,我们也可以改变这些编码。 [![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11] @@ -161,11 +154,11 @@ Wireshark 是一个非常强大的工具,需要一些时间来习惯并对其 -------------------------------------------------------------------------------- -via: https://www.linuxtechi.com +via: https://www.linuxtechi.com/install-use-wireshark-debian-9-ubuntu/ 作者:[Pradeep Kumar][a] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 6588202308cc1aa3fd76af0fb3b84bdefad2163b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 7 Oct 2018 22:47:14 +0800 Subject: [PATCH 253/736] PUB:20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md @MjSeven https://linux.cn/article-10091-1.html --- ... 
Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md (100%) diff --git a/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/published/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md similarity index 100% rename from translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md rename to published/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md From 4e1bd9418cd8f59fc22758cbb67e6466cc8aec29 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 7 Oct 2018 23:23:00 +0800 Subject: [PATCH 254/736] PRF:20180117 How to get into DevOps.md @belitex @pityonline --- .../talk/20180117 How to get into DevOps.md | 45 ++++++++++--------- 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/translated/talk/20180117 How to get into DevOps.md b/translated/talk/20180117 How to get into DevOps.md index 6efd6976d5..f55824538f 100644 --- a/translated/talk/20180117 How to get into DevOps.md +++ b/translated/talk/20180117 How to get into DevOps.md @@ -1,5 +1,6 @@ DevOps 实践指南 ====== +> 这些技巧或许对那些想要践行 DevOps 的系统运维和开发者能有所帮助。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E) @@ -11,19 +12,19 @@ DevOps 实践指南 了解历史是搞清楚未来的关键,DevOps 也不例外。想搞清楚 DevOps 运动的普及和流行,去了解一下上世纪 90 年代后期和 21 世纪前十年 IT 的情况会有帮助。这是我的经验。 -我的第一份工作是在一家大型跨国金融服务公司做 Windows 系统管理员。当时给计算资源扩容需要给 Dell 打电话(或者像我们公司那样打给 CDW),并下一个价值数十万美元的订单,包含服务器、网络设备、电缆和软件,所有这些都要运到生产或线下的数据中心去。虽然 VMware 仍在尝试说服企业使用虚拟机运行他们的“性能敏感”型程序是更划算的,但是包括我们在内的很多公司都还忠于使用他们的物理机运行应用。 +我的第一份工作是在一家大型跨国金融服务公司做 Windows 系统管理员。当时给计算资源扩容需要给 Dell 打电话(或者像我们公司那样打给 CDW),并下一个价值数十万美元的订单,包含服务器、网络设备、电缆和软件,所有这些都要运到生产或线下的数据中心去。虽然 VMware 
仍在尝试说服企业使用虚拟机运行他们的“性能敏感”型程序是更划算的,但是包括我们在内的很多公司都还是愿意使用他们的物理机运行应用。 在我们技术部门,有一个专门做数据中心工程和运营的团队,他们的工作包括价格谈判,让荒唐的月租能够降一点点,还包括保证我们的系统能够正常冷却(如果设备太多,这个事情的难度会呈指数增长)。如果这个团队足够幸运足够有钱,境外数据中心的工作人员对我们所有的服务器型号又都有足够的了解,就能避免在盘后交易中不小心搞错东西。那时候亚马逊 AWS 和 Rackspace 逐渐开始加速扩张,但还远远没到临界规模。 -当时我们还有专门的团队来保证硬件上运行着的操作系统和软件能够按照预期工作。这些工程师负责设计可靠的架构以方便给系统打补丁,监控和报警,还要定义基础镜像gold image的内容。这些大都是通过很多手工实验完成的,很多手工实验是为了编写一个运行说明书runbook来描述要做的事情,并确保按照它执行后的结果确实在预期内。在我们这么大的组织里,这样做很重要,因为一线和二线的技术支持都是境外的,而他们的培训内容只覆盖到了这些运行说明而已。 +当时我们还有专门的团队来保证硬件上运行着的操作系统和软件能够按照预期工作。这些工程师负责设计可靠的架构以方便给系统打补丁、监控和报警,还要定义基础镜像gold image的内容。这些大都是通过很多手工实验完成的,很多手工实验是为了编写一个运行说明书runbook来描述要做的事情,并确保按照它执行后的结果确实在预期内。在我们这么大的组织里,这样做很重要,因为一线和二线的技术支持都是境外的,而他们的培训内容只覆盖到了这些运行说明而已。 -(这是我职业生涯前三年的世界。我那时候的梦想是成为制定金本位制的人!) +(这是我职业生涯前三年的世界。我那时候的梦想是成为制定最高标准的人!) 软件发布则完全是另外一头怪兽。无可否认,我在这方面并没有积累太多经验。但是,从我收集的故事(和最近的经历)来看,当时大部分软件开发的日常大概是这样: * 开发人员按照技术和功能需求来编写代码,这些需求来自于业务分析人员的会议,但是会议并没有邀请开发人员参加。 * 开发人员可以选择为他们的代码编写单元测试,以确保在代码里没有任何明显的疯狂行为,比如除以 0 但不抛出异常。 -* 然后开发者会把他们的代码标记为“Ready for QA”(准备好了接受测试),质量保障的成员会把这个版本的代码发布到他们自己的环境中,这个环境和生产环境可能相似,也可能不,甚至和开发环境相比也不一定相似。 +* 然后开发者会把他们的代码标记为 “Ready for QA”(准备好了接受测试),质量保障的成员会把这个版本的代码发布到他们自己的环境中,这个环境和生产环境可能相似,也可能不,甚至和开发环境相比也不一定相似。 * 故障会在几天或者几个星期内反馈到开发人员那里,这个时长取决于其它业务活动和优先事项。 虽然系统管理员和开发人员经常有不一致的意见,但是对“变更管理”却一致痛恨。变更管理由高度规范的(就我当时的雇主而言)和非常必要的规则和程序组成,用来管理一家公司应该什么时候做技术变更,以及如何做。很多公司都按照 [ITIL][4] 来操作,简单的说,ITIL 问了很多和事情发生的原因、时间、地点和方式相关的问题,而且提供了一个过程,对产生最终答案的决定做审计跟踪。 @@ -54,20 +55,20 @@ DevOps 不是一个团队,CI/CD 也不是 JIRA 系统的一个用户组。DevO 我经常被问到这个问题,它的答案和同属于开放式的其它大部分问题一样:视情况而定。 -现在“DevOps 工程师”在不同的公司有不同的含义。在软件开发人员比较多但是很少有人懂基础设施的小公司,他们很可能是在找有更多系统管理经验的人。而其他公司,通常是大公司或老公司,已经有一个稳固的系统管理团队了,他们在向类似于谷歌 [SRE][7] 的方向做优化,也就是“设计操作功能的软件工程师”。但是,这并不是金科玉律,就像其它技术类工作一样,这个决定很大程度上取决于他的招聘经理。 +现在“DevOps 工程师”在不同的公司有不同的含义。在软件开发人员比较多但是很少有人懂基础设施的小公司,他们很可能是在找有更多系统管理经验的人。而其他公司,通常是大公司或老公司,已经有一个稳固的系统管理团队了,他们在向类似于谷歌 [SRE][7] 的方向做优化,也就是“设计运维功能的软件工程师”。但是,这并不是金科玉律,就像其它技术类工作一样,这个决定很大程度上取决于他的招聘经理。 也就是说,我们一般是在找对深入学习以下内容感兴趣的工程师: -* 如何管理和设计安全、可扩展的云平台(通常是在 AWS 上,不过微软的 Azure,Google Cloud Platform,还有 
DigitalOcean 和 Heroku 这样的 PaaS 提供商,也都很流行)。 -* 如何用流行的 [CI/CD][8] 工具,比如 Jenkins,GoCD,还有基于云的 Travis CI 或者 CircleCI,来构造一条优化的发布部署流水线和发布部署策略。 -* 如何在你的系统中使用基于时间序列的工具,比如 Kibana,Grafana,Splunk,Loggly 或者 Logstash 来监控,记录,并在变化的时候报警。 -* 如何使用配置管理工具,例如 Chef,Puppet 或者 Ansible 做到“基础设施即代码”,以及如何使用像 Terraform 或 CloudFormation 的工具发布这些基础设施。 +* 如何管理和设计安全、可扩展的云平台(通常是在 AWS 上,不过微软的 Azure、Google Cloud Platform,还有 DigitalOcean 和 Heroku 这样的 PaaS 提供商,也都很流行)。 +* 如何用流行的 [CI/CD][8] 工具,比如 Jenkins、GoCD,还有基于云的 Travis CI 或者 CircleCI,来构造一条优化的发布部署流水线和发布部署策略。 +* 如何在你的系统中使用基于时间序列的工具,比如 Kibana、Grafana、Splunk、Loggly 或者 Logstash 来监控、记录,并在变化的时候报警。 +* 如何使用配置管理工具,例如 Chef、Puppet 或者 Ansible 做到“基础设施即代码”,以及如何使用像 Terraform 或 CloudFormation 的工具发布这些基础设施。 容器也变得越来越受欢迎。尽管有人对大规模使用 Docker 的现状[表示不满][9],但容器正迅速地成为一种很好的方式来实现在更少的操作系统上运行超高密度的服务和应用,同时提高它们的可靠性。(像 Kubernetes 或者 Mesos 这样的容器编排工具,能在宿主机故障的时候,几秒钟之内重新启动新的容器。)考虑到这些,掌握 Docker 或者 rkt 以及容器编排平台的知识会对你大有帮助。 -如果你是希望做 DevOps 实践的系统管理员,你还需要知道如何写代码。Python 和 Ruby 是 DevOps 领域的流行语言,因为它们是可移植的(也就是说可以在任何操作系统上运行),快速的,而且易读易学。它们还支撑着这个行业最流行的配置管理工具(Ansible 是使用 Python 写的,Chef 和 Puppet 是使用 Ruby 写的)以及云平台的 API 客户端(亚马逊 AWS,微软 Azure,Google Cloud Platform 的客户端通常会提供 Python 和 Ruby 语言的版本)。 +如果你是希望做 DevOps 实践的系统管理员,你还需要知道如何写代码。Python 和 Ruby 是 DevOps 领域的流行语言,因为它们是可移植的(也就是说可以在任何操作系统上运行)、快速的,而且易读易学。它们还支撑着这个行业最流行的配置管理工具(Ansible 是使用 Python 写的,Chef 和 Puppet 是使用 Ruby 写的)以及云平台的 API 客户端(亚马逊 AWS、微软 Azure、Google Cloud Platform 的客户端通常会提供 Python 和 Ruby 语言的版本)。 -如果你是开发人员,也希望做 DevOps 的实践,我强烈建议你去学习 Unix,Windows 操作系统以及网络基础知识。虽然云计算把很多系统管理的难题抽象化了,但是对应用的性能做 debug 的时候,如果你知道操作系统如何工作的就会有很大的帮助。下文包含了一些这个主题的图书。 +如果你是开发人员,也希望做 DevOps 的实践,我强烈建议你去学习 Unix、Windows 操作系统以及网络基础知识。虽然云计算把很多系统管理的难题抽象化了,但是对应用的性能做调试的时候,如果你知道操作系统如何工作的就会有很大的帮助。下文包含了一些这个主题的图书。 如果你觉得这些东西听起来内容太多,没关系,大家都是这么想的。幸运的是,有很多小项目可以让你开始探索。其中一个项目是 Gary Stafford 的[选举服务](https://github.com/garystafford/voter-service),一个基于 Java 的简单投票平台。我们要求面试候选人通过一个流水线将该服务从 GitHub 部署到生产环境基础设施上。你可以把这个服务与 Rob Mile 写的了不起的 DevOps [入门教程](https://github.com/maxamg/cd-office-hours)结合起来学习。 @@ -79,22 
+80,22 @@ DevOps 不是一个团队,CI/CD 也不是 JIRA 系统的一个用户组。DevO #### 理论书籍 -* Gene Kim 写的 [The Phoenix Project(凤凰项目)][10]。这是一本很不错的书,内容涵盖了我上文解释过的历史(写的更生动形象),描述了一个运行在敏捷和 DevOps 之上的公司向精益前进的过程。 -* Terrance Ryan 写的 [Driving Technical Change(布道之道)][11]。非常好的一小本书,讲了大多数技术型组织内的常见性格特点以及如何和他们打交道。这本书对我的帮助比我想象的更多。 -* Tom DeMarco 和 Tim Lister 合著的 [Peopleware(人件)][12]。管理工程师团队的经典图书,有一点过时,但仍然很有价值。 -* Tom Limoncelli 写的 [Time Management for System Administrators(时间管理:给系统管理员)][13]。这本书主要面向系统管理员,它对很多大型组织内的系统管理员生活做了深入的展示。如果你想了解更多系统管理员和开发人员之间的冲突,这本书可能解释了更多。 -* Eric Ries 写的 [The Lean Startup(精益创业)][14]。描述了 Eric 自己的 3D 虚拟形象公司,IMVU,发现了如何精益工作,快速失败和更快盈利。 -* Jez Humble 和他的朋友写的 [Lean Enterprise(精益企业)][15]。这本书是对精益创业做的改编,以更适应企业,两本书都很棒,都很好地解释了 DevOps 背后的商业动机。 -* Kief Morris 写的 [Infrastructure As Code(基础设施即代码)][16]。关于“基础设施即代码”的非常好的入门读物!很好的解释了为什么所有公司都有必要采纳这种做法。 -* Betsy Beyer、Chris Jones、Jennifer Petoff 和 Niall Richard Murphy 合著的 [Site Reliability Engineering(站点可靠性工程师)][17]。一本解释谷歌 SRE 实践的书,也因为是“DevOps 诞生之前的 DevOps”被人熟知。在如何处理运行时间、时延和保持工程师快乐方面提供了有意思的看法。 +* Gene Kim 写的 《[凤凰项目][10]The Phoenix Project》。这是一本很不错的书,内容涵盖了我上文解释过的历史(写的更生动形象),描述了一个运行在敏捷和 DevOps 之上的公司向精益前进的过程。 +* Terrance Ryan 写的 《[布道之道][11]Driving Technical Change》。非常好的一小本书,讲了大多数技术型组织内的常见性格特点以及如何和他们打交道。这本书对我的帮助比我想象的更多。 +* Tom DeMarco 和 Tim Lister 合著的 《[人件][12]Peopleware》。管理工程师团队的经典图书,有一点过时,但仍然很有价值。 +* Tom Limoncelli 写的 《[时间管理:给系统管理员][13]Time Management for System Administrators》。这本书主要面向系统管理员,它对很多大型组织内的系统管理员生活做了深入的展示。如果你想了解更多系统管理员和开发人员之间的冲突,这本书可能解释了更多。 +* Eric Ries 写的 《[精益创业][14]The Lean Startup》。描述了 Eric 自己的 3D 虚拟形象公司,IMVU,发现了如何精益工作,快速失败和更快盈利。 +* Jez Humble 和他的朋友写的 《[精益企业][15]Lean Enterprise》。这本书是对精益创业做的改编,以更适应企业,两本书都很棒,都很好地解释了 DevOps 背后的商业动机。 +* Kief Morris 写的 《[基础设施即代码][16]Infrastructure As Code》。关于“基础设施即代码”的非常好的入门读物!很好的解释了为什么所有公司都有必要采纳这种做法。 +* Betsy Beyer、Chris Jones、Jennifer Petoff 和 Niall Richard Murphy 合著的 《[站点可靠性工程师][17]Site Reliability Engineering》。一本解释谷歌 SRE 实践的书,也因为是“DevOps 诞生之前的 DevOps”被人熟知。在如何处理运行时间、时延和保持工程师快乐方面提供了有意思的看法。 #### 技术书籍 
如果你想找的是让你直接跟代码打交道的书,看这里就对了。 -* W. Richard Stevens 的 [TCP/IP Illustrated(TCP/IP 详解)][18]。这是一套经典的(也可以说是最全面的)讲解网络协议基础的巨著,重点介绍了 TCP/IP 协议族。如果你听说过 1,2,3,4 层网络,而且对深入学习它们感兴趣,那么你需要这本书。 -* Evi Nemeth、Trent Hein 和 Ben Whaley 合著的 [UNIX and Linux System Administration Handbook(UNIX/Linux 系统管理员手册)][19]。一本很好的入门书,介绍 Linux/Unix 如何工作以及如何使用。 -* Don Jones 和 Jeffrey Hicks 合著的 [Learn Windows Powershell In A Month of Lunches(Windows PowerShell 实战指南)][20]。如果你在 Windows 系统下做自动化任务,你需要学习怎么使用 Powershell。这本书能够帮助你。Don Jones 是这方面著名的 MVP。 +* W. Richard Stevens 的 《[TCP/IP 详解][18]TCP/IP Illustrated》。这是一套经典的(也可以说是最全面的)讲解网络协议基础的巨著,重点介绍了 TCP/IP 协议族。如果你听说过 1、2、3、4 层网络,而且对深入学习它们感兴趣,那么你需要这本书。 +* Evi Nemeth、Trent Hein 和 Ben Whaley 合著的 《[UNIX/Linux 系统管理员手册][19]UNIX and Linux System Administration Handbook》。一本很好的入门书,介绍 Linux/Unix 如何工作以及如何使用。 +* Don Jones 和 Jeffrey Hicks 合著的 《[Windows PowerShell 实战指南][20]Learn Windows Powershell In A Month of Lunches》。如果你在 Windows 系统下做自动化任务,你需要学习怎么使用 Powershell。这本书能够帮助你。Don Jones 是这方面著名的 MVP。 * 几乎所有 [James Turnbull][21] 写的东西,针对流行的 DevOps 工具,他发表了很好的技术入门读物。 不管是在那些把所有应用都直接部署在物理机上的公司,(现在很多公司仍然有充分的理由这样做)还是在那些把所有应用都做成 serverless 的先驱公司,DevOps 都很可能会持续下去。这部分工作很有趣,产出也很有影响力,而且最重要的是,它搭起桥梁衔接了技术和业务之间的缺口。DevOps 是一个值得期待的美好事物。 From 1d1c5b35416be6f9eae1cbbb08264240a4f7fedb Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 7 Oct 2018 23:23:20 +0800 Subject: [PATCH 255/736] PUB:20180117 How to get into DevOps.md @belitex https://linux.cn/article-10092-1.html --- {translated/talk => published}/20180117 How to get into DevOps.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/talk => published}/20180117 How to get into DevOps.md (100%) diff --git a/translated/talk/20180117 How to get into DevOps.md b/published/20180117 How to get into DevOps.md similarity index 100% rename from translated/talk/20180117 How to get into DevOps.md rename to published/20180117 How to get into DevOps.md From e6d9cd8b4ddae8cb96be3021b17c591e32a840df Mon Sep 17 00:00:00 2001 From: dianbanjiu 
Date: Mon, 8 Oct 2018 00:51:40 +0800
Subject: [PATCH 256/736] dianbanjiu translating

---
 sources/tech/20180105 The Best Linux Distributions for 2018.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sources/tech/20180105 The Best Linux Distributions for 2018.md b/sources/tech/20180105 The Best Linux Distributions for 2018.md
index 3be92638c5..cc60350641 100644
--- a/sources/tech/20180105 The Best Linux Distributions for 2018.md
+++ b/sources/tech/20180105 The Best Linux Distributions for 2018.md
@@ -1,4 +1,4 @@
-The Best Linux Distributions for 2018
+[translating by dianbanjiu] The Best Linux Distributions for 2018
 ============================================================
 
 ![Linux distros 2018](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-distros-2018.jpg?itok=Z8sdx4Zu "Linux distros 2018")

From abada09660586aaebca335e1eaf902f14f830582 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 8 Oct 2018 09:08:32 +0800
Subject: [PATCH 257/736] translated

---
 ...s And Latest Headlines From Commandline.md | 138 ------------------
 1 file changed, 138 deletions(-)
 delete mode 100644 sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md

diff --git a/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md
deleted file mode 100644
index b7082ea141..0000000000
--- a/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md
+++ /dev/null
@@ -1,138 +0,0 @@
-translating----geekpi
-
-Clinews – Read News And Latest Headlines From Commandline
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-720x340.jpeg)
-
-A while ago, we wrote about a CLI news client named [**InstantNews**][1] that helps you to read news and the latest headlines from the command line instantly.
Today, I stumbled upon a similar utility named **Clinews**, which serves the same purpose – reading news and the latest headlines from popular websites and blogs in the Terminal. You don’t need to install GUI applications or mobile apps. You can read what’s happening in the world right from your Terminal. It is a free, open source utility written using **NodeJS**.
-
-### Installing Clinews
-
-Since Clinews is written using NodeJS, you can install it using the NPM package manager. If you haven’t installed NodeJS, install it as described in the following link.
-
-Once Node is installed, run the following command to install Clinews:
-
-```
-$ npm i -g clinews
-```
-
-You can also install Clinews using **Yarn**:
-
-```
-$ yarn global add clinews
-```
-
-Yarn itself can be installed using npm:
-
-```
-$ npm i -g yarn
-```
-
-### Configure News API
-
-Clinews retrieves all news headlines from [**News API**][2]. News API is a simple and easy-to-use API that returns JSON metadata for the headlines currently published on a range of news sources and blogs. It currently provides live headlines from 70 popular sources, including Ars Technica, BBC, Bloomberg, CNN, Daily Mail, Engadget, ESPN, Financial Times, Google News, Hacker News, IGN, Mashable, National Geographic, Reddit r/all, Reuters, Spiegel Online, TechCrunch, The Guardian, The Hindu, The Huffington Post, The New York Times, The Next Web, The Wall Street Journal, USA Today, and [**more**][3].
-
-First, you need an API key from News API. Go to the [**https://newsapi.org/register**][4] URL and register a free account to get the API key.
-
-Once you have the API key from the News API site, edit your **.bashrc** file:
-
-```
-$ vi ~/.bashrc
-```
-
-Add the News API key at the end, like below:
-
-```
-export IN_API_KEY="Paste-API-key-here"
-```
-
-Please note that you need to paste the key inside the double quotes. Save and close the file.
-
-Run the following command to apply the changes:
-
-```
-$ source ~/.bashrc
-```
-
-Done.
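A quick way to confirm the key is actually exported in the current shell is to test the variable. This is a minimal sketch; the key value below is a placeholder, not a real News API key:

```shell
# Placeholder value for illustration only - use your real News API key here
export IN_API_KEY="demo-key-0000"

# Clinews reads the key from the environment, so make sure it is non-empty
if [ -n "$IN_API_KEY" ]; then
  echo "IN_API_KEY is set"
else
  echo "IN_API_KEY is empty - re-run: source ~/.bashrc" >&2
fi
```

If the check reports an empty key, re-open the Terminal or source your **.bashrc** again before running Clinews.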
Now let us go ahead and fetch the latest headlines from news sources.
-
-### Read News And Latest Headlines From Commandline
-
-To read news and the latest headlines from a specific news source, for example **The Hindu**, run:
-
-```
-$ news fetch the-hindu
-```
-
-Here, **“the-hindu”** is the news source id (fetch id).
-
-The above command will fetch the latest 10 headlines from The Hindu news portal and display them in the Terminal. Also, it displays a brief description of the news, the published date and time, and the actual link to the source.
-
-**Sample output:**
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-1.png)
-
-To read a news story in your browser, hold the Ctrl key and click on the URL. It will open in your default web browser.
-
-To view all the sources you can get news from, run:
-
-```
-$ news sources
-```
-
-**Sample output:**
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-2.png)
-
-As you see in the above screenshot, Clinews lists all news sources, including the name of the news source, the fetch id, a description of the site, the website URL, and the country where it is located. As of writing this guide, Clinews supports 70+ news sources.
-
-Clinews can also search for news stories across all sources matching a search criteria/term. Say, for example, to list all news stories with titles containing the word **“Tamilnadu”**, use the following command:
-
-```
-$ news search "Tamilnadu"
-```
-
-This command will scrape all news sources for stories that match the term **Tamilnadu**.
-
-Clinews has some extra flags that help you to:
-
- * limit the amount of news stories you want to see,
- * sort news stories (top, latest, popular),
- * display news stories category wise (e.g., business, entertainment, gaming, general, music, politics, science-and-nature, sport, technology)
-
-For more details, see the help section:
-
-```
-$ clinews -h
-```
-
-And, that’s all for now. Hope this was useful. More good stuff to come.
Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-commandline/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://www.ostechnix.com/get-news-instantly-commandline-linux/
-[2]: https://newsapi.org/
-[3]: https://newsapi.org/sources
-[4]: https://newsapi.org/register

From 9f7c145be773058ba8a2addbb5f36945ad7f0f85 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 8 Oct 2018 09:12:34 +0800
Subject: [PATCH 258/736] =?UTF-8?q?=E8=B6=85=E6=9C=9F=E5=9B=9E=E6=94=B6?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@sober-wang
---
 ...ources For Linux - -BSD - Unix Documentation On the Web.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md b/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md
index 81e3acacdb..6a4d1f4828 100644
--- a/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md
+++ b/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md
@@ -1,7 +1,3 @@
-# sober-wang 翻译中
-
-
-
 30 Best Sources For Linux / *BSD / Unix Documentation On the Web
 ======

From df817067ff1d9f0fbf6f58bdc8a69fe38c288395 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 8 Oct 2018 09:14:32 +0800
Subject: [PATCH 259/736] translating

---
 ...an 9 Server in Rescue (Single User mode) - Emergency Mode.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
b/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
index ff33e7c175..7a3702a124 100644
--- a/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
+++ b/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 How to Boot Ubuntu 18.04 / Debian 9 Server in Rescue (Single User mode) / Emergency Mode
 ======
 Booting a Linux Server into single user mode or **rescue mode** is one of the important troubleshooting steps that a Linux admin usually follows while recovering the server from critical conditions. In Ubuntu 18.04 and Debian 9, single user mode is known as rescue mode.

From 8e889b95cf33b92204c3c9ac458e7ea9471f5de1 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 8 Oct 2018 09:15:27 +0800
Subject: [PATCH 260/736] add translated article

---
 ...s And Latest Headlines From Commandline.md | 137 ++++++++++++++++++
 1 file changed, 137 insertions(+)
 create mode 100644 translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md

diff --git a/translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md
new file mode 100644
index 0000000000..892d6ca1c4
--- /dev/null
+++ b/translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md
@@ -0,0 +1,137 @@
+Clinews - 从命令行阅读新闻和最新头条
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-720x340.jpeg)
+
+不久前,我们介绍过一个名为 [**InstantNews**][1] 的命令行新闻客户端,它可以帮助你立即在命令行阅读新闻和最新头条。今天,我偶然发现了一个名为 **Clinews** 的类似工具,其功能与之相同:在终端阅读来自热门网站和博客的新闻和最新头条。你无需安装 GUI 应用或移动应用,可以直接从终端阅读世界上正在发生的事情。它是使用 **NodeJS** 编写的免费开源程序。
+
+### 安装 Clinews
+
+由于 Clinews 是使用 NodeJS 编写的,因此你可以使用 NPM 包管理器安装它。如果尚未安装 NodeJS,请按照以下链接中的说明进行安装。
+
+安装好 Node 之后,运行以下命令安装 Clinews:
+
+```
+$
npm i -g clinews
+```
+
+你也可以使用 **Yarn** 安装 Clinews:
+
+```
+$ yarn global add clinews
+```
+
+Yarn 本身可以使用 npm 安装:
+
+```
+$ npm i -g yarn
+```
+
+### 配置 News API
+
+Clinews 从 [**News API**][2] 中检索所有新闻标题。News API 是一个简单易用的 API,它返回当前发布在一系列新闻源和博客上的头条的 JSON 元数据。它目前提供来自 70 个热门源的实时头条,包括 Ars Technica、BBC、Bloomberg、CNN、每日邮报、Engadget、ESPN、金融时报、谷歌新闻、Hacker News、IGN、Mashable、国家地理、Reddit r/all、路透社、Spiegel Online、TechCrunch、The Guardian、The Hindu、赫芬顿邮报、纽约时报、The Next Web、华尔街日报、今日美国[**等等**][3]。
+
+首先,你需要 News API 的 API 密钥。进入 [**https://newsapi.org/register**][4] 并注册一个免费帐户来获取 API 密钥。
+
+从 News API 获得 API 密钥后,编辑 **.bashrc** 文件:
+
+```
+$ vi ~/.bashrc
+```
+
+在文件最后添加 News API 密钥,如下所示:
+
+```
+export IN_API_KEY="Paste-API-key-here"
+```
+
+请注意,你需要将密钥粘贴在双引号内。保存并关闭文件。
+
+运行以下命令以使更改生效。
+
+```
+$ source ~/.bashrc
+```
+
+完成。现在继续从新闻源获取最新的头条新闻。
+
+### 在命令行阅读新闻和最新头条
+
+要阅读特定新闻源(例如 **The Hindu**)的新闻和最新头条,请运行:
+
+```
+$ news fetch the-hindu
+```
+
+这里,**“the-hindu”** 是新闻源的源 id(获取 id)。
+
+上述命令将从 The Hindu 新闻站点获取最新的 10 个头条,并将其显示在终端中。此外,它还会显示新闻的简要描述、发布的日期和时间,以及指向新闻源的实际链接。
+
+**示例输出:**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-1.png)
+
+要在浏览器中阅读新闻,请按住 Ctrl 键并单击 URL,它将在你的默认 Web 浏览器中打开。
+
+要查看所有的新闻源,请运行:
+
+```
+$ news sources
+```
+
+**示例输出:**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-2.png)
+
+正如你在上面的截图中看到的,Clinews 列出了所有新闻源,包括新闻源的名称、获取 id、网站描述、网站 URL 以及它所在的国家/地区。在撰写本指南时,Clinews 支持 70 多个新闻源。
+
+Clinews 还可以在所有新闻源中搜索符合搜索条件/关键词的新闻报道。例如,要列出标题中包含单词 **“Tamilnadu”** 的所有新闻报道,请使用以下命令:
+
+```
+$ news search "Tamilnadu"
+```
+
+此命令将会筛选所有新闻源中含有 **Tamilnadu** 的报道。
+
+Clinews 还有一些额外的选项,可以帮助你:
+
+ * 限制你想看的新闻报道的数量
+ * 对新闻报道排序(热门、最新)
+ * 按类别显示新闻报道(例如:商业、娱乐、游戏、综合、音乐、政治、科学和自然、体育、技术)
+
+更多详细信息,请参阅帮助部分:
+
+```
+$ clinews -h
+```
+
+就是这些了。希望这篇指南对你有用。还有更多好东西,敬请关注!
+
+干杯!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-commandline/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/get-news-instantly-commandline-linux/ +[2]: https://newsapi.org/ +[3]: https://newsapi.org/sources +[4]: https://newsapi.org/register From c4f673280334a1671777ec711c1078905937b312 Mon Sep 17 00:00:00 2001 From: GraveAccent <39041505+GraveAccent@users.noreply.github.com> Date: Mon, 8 Oct 2018 09:27:32 +0800 Subject: [PATCH 261/736] Update 20180201 Rock Solid React.js Foundations A Beginners Guide.md --- ...80201 Rock Solid React.js Foundations A Beginners Guide.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md b/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md index b3252dfb75..98a1a8f392 100644 --- a/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md +++ b/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md @@ -1,4 +1,4 @@ -Rock Solid React.js Foundations: A Beginner’s Guide +GraveAccent翻译中 Rock Solid React.js Foundations: A Beginner’s Guide ============================================================ ** 此处有Canvas,请手动处理 ** @@ -289,4 +289,4 @@ via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners [9]:https://codepen.io/raynesax/pen/QaROqK [10]:https://twitter.com/rajat1saxena [11]:mailto:rajat@raynstudios.com -[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw \ No newline at end of file +[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw From 
d9e6b52ade824cb4f470949620f17be1706e52d2 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Mon, 8 Oct 2018 09:56:21 +0800 Subject: [PATCH 262/736] hankchow translating --- ...Replace one Linux Distro With Another in Dual Boot -Guide.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md b/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md index ab9fa8acc3..0e473dbc59 100644 --- a/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md +++ b/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md @@ -1,3 +1,5 @@ +HankChow translating + How to Replace one Linux Distro With Another in Dual Boot [Guide] ====== **If you have a Linux distribution installed, you can replace it with another distribution in the dual boot. You can also keep your personal documents while switching the distribution.** From e7c51f9d3c45bfad6b680c57011cea9f00da99e4 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 8 Oct 2018 11:08:42 +0800 Subject: [PATCH 263/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Open=20Source=20L?= =?UTF-8?q?ogging=20Tools=20for=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...005 Open Source Logging Tools for Linux.md | 188 ++++++++++++++++++ 1 file changed, 188 insertions(+) create mode 100644 sources/tech/20181005 Open Source Logging Tools for Linux.md diff --git a/sources/tech/20181005 Open Source Logging Tools for Linux.md b/sources/tech/20181005 Open Source Logging Tools for Linux.md new file mode 100644 index 0000000000..723488008a --- /dev/null +++ b/sources/tech/20181005 Open Source Logging Tools for Linux.md @@ -0,0 +1,188 @@ +Open Source Logging Tools for Linux +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs-main.jpg?itok=voNrSz4H) + +If you’re a Linux systems 
administrator, log files are among the first tools you will turn to for troubleshooting. These files hold crucial information that can go a long way toward helping you solve problems affecting your desktops and servers. For many sysadmins (especially those of an old-school sort), nothing beats the command line for checking log files. But for those who’d rather have a more efficient (and possibly modern) approach to troubleshooting, there are plenty of options.
+
+In this article, I’ll highlight a few such tools available for the Linux platform. I won’t be getting into logging tools that might be specific to a certain service (such as Kubernetes or Apache), and instead will focus on tools that work to mine the depths of all that magical information written into /var/log.
+
+Speaking of which…
+
+### What is /var/log?
+
+If you’re new to Linux, you might not know what the /var/log directory contains. However, the name is very telling. Within this directory is housed all of the log files from the system and any major service (such as Apache, MySQL, MariaDB, etc.) installed on the operating system. Open a terminal window and issue the command cd /var/log. Follow that with the command ls and you’ll see all of the various systems that have log files you can view (Figure 1).
+
+![/var/log/][2]
+
+Figure 1: Our ls command reveals the logs available in /var/log/.
+
+[Used with permission][3]
+
+Say, for instance, you want to view the syslog log file. Issue the command less syslog and you can scroll through all of the gory details of that particular log. But what if the standard terminal isn’t for you? What options do you have? Plenty. Let’s take a look at a few such options.
+
+### Logs
+
+If you use the GNOME desktop (or another desktop, as Logs can be installed on more than just GNOME), you have at your fingertips a log viewer that mainly just adds the slightest bit of GUI goodness over the log files to create something as simple as it is effective.
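Installation is typically a single package-manager command. The check below assumes the Debian/Ubuntu package and binary name, gnome-logs; other distributions may name the package differently:

```shell
# Check whether the GNOME Logs binary is already available on this system.
# "gnome-logs" is the name used on Debian-based distributions; it may differ elsewhere.
if command -v gnome-logs >/dev/null 2>&1; then
  status="installed"
else
  status="missing (install it with, e.g., sudo apt-get install gnome-logs)"
fi
echo "GNOME Logs: $status"
```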
Once installed (from the standard repositories), open Logs from the desktop menu, and you’ll be treated to an interface (Figure 2) that allows you to select from various types of logs (Important, All, System, Security, and Hardware), as well as select a boot period (from the top center drop-down), and even search through all of the available logs.
+
+![Logs tool][5]
+
+Figure 2: The GNOME Logs tool is one of the easiest GUI log viewers you’ll find for Linux.
+
+[Used with permission][3]
+
+Logs is a great tool, especially if you’re not looking for too many bells and whistles getting in the way of viewing the crucial log entries you need to troubleshoot your systems.
+
+### KSystemLog
+
+KSystemLog is to KDE what Logs is to GNOME, but with a few more features to add into the mix. Although both make it incredibly simple to view your system log files, only KSystemLog includes colorized log lines, tabbed viewing, the ability to copy log lines to the desktop clipboard, a built-in capability for sending log messages directly to the system, detailed information for each log line, and more. KSystemLog views all the same logs found in GNOME Logs, only with a different layout.
+
+From the main window (Figure 3), you can view any of the different logs (System Log, Authentication Log, X.org Log, Journald Log), search the logs, filter by Date, Host, Process, Message, and select log priorities.
+
+![KSystemLog][7]
+
+Figure 3: The KSystemLog main window.
+
+[Used with permission][3]
+
+If you click on the Window menu, you can open a new tab, where you can select a different log/filter combination to view. From that same menu, you can even duplicate the current tab. If you want to manually add a log entry, do the following:
+
+ 1. Open KSystemLog.
+
+ 2. Click File > Add Log Entry.
+
+ 3. Create your log entry (Figure 4).
+
+ 4. Click OK.
+
+
+![log entry][9]
+
+Figure 4: Creating a manual log entry with KSystemLog.
+
+[Used with permission][3]
+
+KSystemLog makes viewing logs in KDE an incredibly easy task.
+
+### Logwatch
+
+Logwatch isn’t a fancy GUI tool. Instead, logwatch allows you to set up a logging system that will email you important alerts. You can have those alerts emailed via an SMTP server or you can simply view them on the local machine. Logwatch can be found in the standard repositories for almost every distribution, so installation can be done with a single command, like so:
+
+```
+sudo apt-get install logwatch
+```
+
+Or:
+
+```
+sudo dnf install logwatch
+```
+
+During the installation, you will be required to select the delivery method for alerts (Figure 5). If you opt for local mail delivery only, you’ll need to install the mailutils app (so you can view mail locally, via the mail command).
+
+![Logwatch][11]
+
+Figure 5: Configuring the Logwatch alert sending method.
+
+[Used with permission][3]
+
+All Logwatch configurations are handled in a single file. To edit that file, issue the command sudo nano /usr/share/logwatch/default.conf/logwatch.conf. You’ll want to edit the MailTo = option. If you’re viewing this locally, set that to the Linux username you want the logs sent to (such as MailTo = jack). If you are sending these logs to an external email address, you’ll also need to change the MailFrom = option to a legitimate email address. From within that same configuration file, you can also set the detail level and the range of logs to send. Save and close that file.
+
+Once configured, you can send your first mail with a command like:
+
+```
+logwatch --detail Med --mailto ADDRESS --service all --range today
+```
+
+Where ADDRESS is either the local user or an email address.
+
+For more information on using Logwatch, issue the command man logwatch. Read through the manual page to see the different options that can be used with the tool.
+
+### Rsyslog
+
+Rsyslog is a convenient way to send remote client logs to a centralized server.
Say you have one Linux server you want to use to collect the logs from other Linux servers in your data center. With Rsyslog, this is easily done. Rsyslog has to be installed on all clients and the centralized server (by issuing a command like sudo apt-get install rsyslog). Once installed, create the /etc/rsyslog.d/server.conf file on the centralized server, with the contents: + +``` +# Provide UDP syslog reception +$ModLoad imudp +$UDPServerRun 514 + +# Provide TCP syslog reception +$ModLoad imtcp +$InputTCPServerRun 514 + +# Use custom filenaming scheme +$template FILENAME,"/var/log/remote/%HOSTNAME%.log" +*.* ?FILENAME + +$PreserveFQDN on + +``` + +Save and close that file. Now, on every client machine, create the file /etc/rsyslog.d/client.conf with the contents: + +``` +$PreserveFQDN on +$ActionQueueType LinkedList +$ActionQueueFileName srvrfwd +$ActionResumeRetryCount -1 +$ActionQueueSaveOnShutdown on +*.* @@SERVER_IP:514 + +``` + +Where SERVER_IP is the IP address of your centralized server. Save and close that file. Restart rsyslog on all machines with the command: + +``` +sudo systemctl restart rsyslog + +``` + +You can now view the centralized log files with the command (run on the centralized server): + +``` +tail -f /var/log/remote/*.log + +``` + +The tail command allows you to view those files as they are written to, in real time. You should see log entries appear that include the client hostname (Figure 6). + +![Rsyslog][13] + +Figure 6: Rsyslog showing entries for a connected client. + +[Used with permission][3] + +Rsyslog is a great tool for creating a single point of entry for viewing the logs of all of your Linux servers. + +### More where that came from + +This article only scratched the surface of the logging tools to be found on the Linux platform. And each of the above tools is capable of more than what is outlined here. However, this overview should give you a place to start your long day's journey into the Linux log file. 
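One closing note on the Rsyslog example: the server-side directives above use the legacy configuration syntax. Newer rsyslog releases also accept an equivalent RainerScript form; the sketch below assumes a reasonably current rsyslog and should be checked against your version’s documentation:

```
module(load="imudp")
input(type="imudp" port="514")

module(load="imtcp")
input(type="imtcp" port="514")

template(name="RemoteFile" type="string" string="/var/log/remote/%HOSTNAME%.log")
*.* action(type="omfile" dynaFile="RemoteFile")
```

Either form can live in /etc/rsyslog.d/server.conf; just don’t configure duplicate listeners for the same ports.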
+ +Learn more about Linux through the free ["Introduction to Linux" ][14]course from The Linux Foundation and edX. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/10/open-source-logging-tools-linux + +作者:[JACK WALLEN][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[1]: /files/images/logs1jpg +[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_1.jpg?itok=8yO2q1rW (/var/log/) +[3]: /licenses/category/used-permission +[4]: /files/images/logs2jpg +[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_2.jpg?itok=kF6V46ZB (Logs tool) +[6]: /files/images/logs3jpg +[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_3.jpg?itok=PhrIzI1N (KSystemLog) +[8]: /files/images/logs4jpg +[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_4.jpg?itok=OxsGJ-TJ (log entry) +[10]: /files/images/logs5jpg +[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_5.jpg?itok=GeAR551e (Logwatch) +[12]: /files/images/logs6jpg +[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_6.jpg?itok=ira8UZOr (Rsyslog) +[14]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From bcd0ca1b3908133e26800283fe4cc80644fe402a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 8 Oct 2018 11:14:32 +0800 Subject: [PATCH 264/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20use=20?= =?UTF-8?q?Kolibri=20to=20access=20educational=20material=20offline?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
to access educational material offline.md | 107 ++++++++++++++++++ 1 file changed, 107 insertions(+) create mode 100644 sources/tech/20181005 How to use Kolibri to access educational material offline.md diff --git a/sources/tech/20181005 How to use Kolibri to access educational material offline.md b/sources/tech/20181005 How to use Kolibri to access educational material offline.md new file mode 100644 index 0000000000..f856a497cd --- /dev/null +++ b/sources/tech/20181005 How to use Kolibri to access educational material offline.md @@ -0,0 +1,107 @@ +How to use Kolibri to access educational material offline +====== +Kolibri makes digital educational materials available to students without internet access. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OSDC_BYU_520x292_FINAL.png?itok=NVY7vR8o) + +While the internet has thoroughly transformed the availability of educational content for much of the world, many people still live in places where online access is poor or even nonexistent. [Kolibri][1] is a great solution for these communities. It's an app that creates an offline server to deliver high-quality educational resources to learners. You can set up Kolibri on a wide range of [hardware][2], including low-cost Windows, MacOS, and Linux (including Raspberry Pi) computers. A version for Android tablets is in the works. + +Because it's open source, free to use, works without broadband access (after initial setup), and includes a wide range of educational content, it gives students in rural schools, refugee camps, orphanages, informal schools, prisons, and other places without reliable internet service access to many of the same resources used by students all over the world. + +In addition to being simple to install, it's easy to customize Kolibri for various educational missions and needs, including literacy building, general reference materials, and life skills training. 
Kolibri includes content from sources including [OpenStax][3], [CK-12][4], [Khan Academy][5], and [EngageNY][6]; once these packages are "seeded" by connecting the Kolibri serving device to a robust internet connection, they are immediately available for offline access on client devices through a compatible browser.
+
+### Installation and setup
+
+I installed Kolibri on an Intel i3-based laptop running Fedora 28. I chose the **pip install** method, which is very easy. Here's how to do it.
+
+Open a terminal and enter:
+
+```
+$ sudo pip install kolibri
+
+```
+
+Start Kolibri by entering **kolibri start** in the terminal.
+
+Find your Kolibri installation's URL in the terminal.
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_url.png)
+
+Open your browser and point it to that URL, being sure to append port **8080**.
+
+Select the default language—options include English, Spanish, French, Arabic, Portuguese, Hindi, Farsi, Burmese, and Bengali. (I chose English.)
+
+Name your facility, i.e., your classroom, library, or home. (I named mine Test.)
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_name.png)
+
+Tell Kolibri what type of facility you're setting up—self-managed, admin-managed, or informal. (I chose self-managed.)
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_facility-type.png)
+
+Create an admin account.
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_admin.png)
+
+### Add content
+
+You can add Kolibri-curated content channels while you are connected to broadband service. Explore and add content from the menu at the top-left of the browser.
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_menu.png)
+
+Choose Device and Import.
+ +![](https://opensource.com/sites/default/files/uploads/kolibri_import.png) + +Selecting English as the default language provides access to 29 content channels including Touchable Earth, Global Digital Library, Khan Academy, OpenStax, CK-12, EngageNY, Blockly games, and more. + +Select a channel you're interested in. You have the option to download the entire channel (which might take a long time) or to select the specific content you want to download. + +![](https://opensource.com/sites/default/files/uploads/kolibri_select-content.png) + +To access your content, return to the top-left menu and select Learn. + +![](https://opensource.com/sites/default/files/uploads/kolibri_content.png) + +### Add users + +User accounts can be set up as learners, coaches, or admins. Users can access the Kolibri server from most web browsers on any Linux, MacOS, Windows, Android, or iOS device on the same network, even if the network isn't connected to the internet. Admins can set up classes on the device, assign coaches and learners to classes, and see every user's interaction and how much time they spend with the content. + +If your Kolibri server is set up as self-managed, users can create their own accounts by entering the Kolibri URL in their browser and following the prompts. For information on setting up users on an admin-managed server, check out Kolibri's [documentation][7]. + +![](https://opensource.com/sites/default/files/uploads/kolibri_user-account.png) + +After logging in, the user can access content right away to begin learning. + +### Learn more + +Kolibri is a very powerful learning resource, especially for people who don't have a robust connection to the internet. Its [documentation][8] is very complete, and a [demo][9] site maintained by the project allows you to try it out. + +Kolibri is open source under the [MIT License][10]. 
The project, which is managed by the nonprofit organization Learning Equality, is looking for developers. If you would like to get involved, be sure to check them out on [GitHub][11]. To learn more, follow Learning Equality and Kolibri on their [blog][12], [Twitter][13], and [Facebook][14] pages.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/getting-started-kolibri
+
+作者:[Don Watkins][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[1]: https://learningequality.org/kolibri/
+[2]: https://drive.google.com/file/d/0B9ZzDms8cSNgVWRKdUlPc2lkTkk/view
+[3]: https://openstax.org/
+[4]: https://www.ck12.org/
+[5]: https://www.khanacademy.org/
+[6]: https://www.engageny.org/
+[7]: https://kolibri.readthedocs.io/en/latest/manage.html#create-a-new-user-account
+[8]: https://learningequality.org/documentation/
+[9]: http://kolibridemo.learningequality.org/learn/#/topics
+[10]: https://github.com/learningequality/kolibri/blob/develop/LICENSE
+[11]: https://github.com/learningequality/
+[12]: https://blog.learningequality.org/
+[13]: https://twitter.com/LearnEQ/
+[14]: https://www.facebook.com/learningequality
From f2ad6e269fa08393b7b93217e27a45ed84358eee Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 8 Oct 2018 11:18:51 +0800
Subject: [PATCH 265/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Dbxfs=20=E2=80=93?=
 =?UTF-8?q?=20Mount=20Dropbox=20Folder=20Locally=20As=20Virtual=20File=20S?=
 =?UTF-8?q?ystem=20In=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Locally As Virtual File System In Linux.md | 133 ++++++++++++++++++
 1 file changed, 133 insertions(+)
 create mode 100644 sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As
Virtual File System In Linux.md
diff --git a/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md b/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md
new file mode 100644
index 0000000000..691600a4cc
--- /dev/null
+++ b/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md
@@ -0,0 +1,133 @@
+Dbxfs – Mount Dropbox Folder Locally As Virtual File System In Linux
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/dbxfs-720x340.png)
+
+A while ago, we summarized all the possible ways to **[mount Google Drive locally][1]** as a virtual file system and access the files stored in Google Drive from your Linux operating system. Today, we are going to learn to mount your Dropbox folder in your local file system using the **dbxfs** utility. dbxfs is used to mount your Dropbox folder locally as a virtual filesystem in Unix-like operating systems. While it is easy to [**install the Dropbox client**][2] in Linux, this approach slightly differs from the official method. It is a command-line Dropbox client and requires no disk space for access. The dbxfs application is free, open source and written for Python 3.5+.
+
+### Installing dbxfs
+
+dbxfs officially supports Linux and Mac OS. However, it should work on any POSIX system that provides a **FUSE-compatible library** or has the ability to mount **SMB** shares. Since it is written for Python 3.5, it can be installed using the **pip3** package manager. Refer to the following guide if you haven’t installed PIP yet.
+
+And install the FUSE library as well.
+
+On Debian-based systems, run the following command to install FUSE:
+
+```
+$ sudo apt install libfuse2
+
+```
+
+On Fedora:
+
+```
+$ sudo dnf install fuse
+
+```
+
+Once you have installed all the required dependencies, run the following command to install the dbxfs utility:
+
+```
+$ pip3 install dbxfs
+
+```
+
+### Mount Dropbox folder locally
+
+Create a mount point to mount your Dropbox folder in your local file system.
+
+```
+$ mkdir ~/mydropbox
+
+```
+
+Then, mount the Dropbox folder locally using the dbxfs utility as shown below:
+
+```
+$ dbxfs ~/mydropbox
+
+```
+
+You will be asked to generate an access token:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-1.png)
+
+To generate an access token, just navigate to the URL given in the above output in your web browser and click **Allow** to authenticate Dropbox access. You need to log in to your Dropbox account to complete the authorization process.
+
+A new authorization code will be generated on the next screen. Copy the code, head back to your Terminal, and paste it at the dbxfs prompt to finish the process.
+
+You will then be asked to save the credentials for future access. Type **Y** to save them or **N** to decline. You then need to enter a passphrase twice for the new access token.
+
+Finally, type **Y** to accept **“/home/username/mydropbox”** as the default mount point. If you want to set a different path, type **N** and enter the location of your choice.
+
+[![Generate access token 2][3]][4]
+
+All done! From now on, you can see your Dropbox folder locally mounted in your filesystem.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dropbox-in-file-manager.png)
+
+### Change Access Token Storage Path
+
+By default, the dbxfs application will store your Dropbox access token in the system keyring or an encrypted file. However, you might want to store it in a **gpg** encrypted file or something else.
If so, get an access token by creating a personal app on the [Dropbox developers app console][5].
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/access-token.png)
+
+Once the app is created, click the **Generate** button on the next screen. This access token can be used to access your Dropbox account via the API. Don’t share your access token with anyone.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-a-new-app.png)
+
+Once you have created an access token, encrypt it using any encryption tool of your choice, such as [**Cryptomator**][6], [**Cryptkeeper**][7], [**CryptGo**][8], [**Cryptr**][9], [**Tomb**][10], [**Toplip**][11] or [**GnuPG**][12], and store it in your preferred location.
+
+Next, edit the dbxfs configuration file and add the following line to it:
+
+```
+"access_token_command": ["gpg", "--decrypt", "/path/to/access/token/file.gpg"]
+
+```
+
+You can find the dbxfs configuration file by running the following command:
+
+```
+$ dbxfs --print-default-config-file
+
+```
+
+For more details, refer to the dbxfs help section:
+
+```
+$ dbxfs -h
+
+```
+
+As you can see, mounting your Dropbox folder locally in your file system using the dbxfs utility is no big deal. As far as I tested, dbxfs works as expected. Give it a try if you’re interested, and let us know about your experience in the comment section below.
+
+And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
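As a footnote to the gpg-based token storage described above, here is one hypothetical way to prepare such an encrypted token file with GnuPG's symmetric mode. The token value, passphrase, and file paths below are placeholders, not values from the article; adjust them to your setup.

```shell
# Save the token (placeholder value) and encrypt it with a passphrase:
echo "YOUR_DROPBOX_ACCESS_TOKEN" > ~/dropbox-token.txt
gpg --batch --yes --pinentry-mode loopback \
    --passphrase "choose-a-strong-passphrase" \
    --symmetric --output ~/dropbox-token.gpg ~/dropbox-token.txt
rm ~/dropbox-token.txt   # remove the plaintext copy

# Check that it decrypts back to the token:
gpg --batch --quiet --pinentry-mode loopback \
    --passphrase "choose-a-strong-passphrase" \
    --decrypt ~/dropbox-token.gpg
```

The resulting encrypted file is the kind of file that the `access_token_command` setting shown earlier can decrypt on demand.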
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/dbxfs-mount-dropbox-folder-locally-as-virtual-file-system-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ +[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ +[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-2.png +[5]: https://dropbox.com/developers/apps +[6]: https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/ +[7]: https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/ +[8]: https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/ +[9]: https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/ +[10]: https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/ +[11]: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/ +[12]: https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/ From 99ddaac250f755cd05183f7b6c8f403f99390d89 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 8 Oct 2018 11:34:03 +0800 Subject: [PATCH 266/736] =?UTF-8?q?20181008-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...p to JavaScript From Scratch in 350 LOC.md | 635 ++++++++++++++++++ 1 file changed, 635 insertions(+) create mode 100644 sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md diff --git 
a/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md b/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md
new file mode 100644
index 0000000000..6d3040626b
--- /dev/null
+++ b/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md
@@ -0,0 +1,635 @@
+# Compiling Lisp to JavaScript From Scratch in 350 LOC
+
+In this article we will look at a from-scratch implementation of a compiler from a simple LISP-like calculator language to JavaScript. The complete source code can be found [here][7].
+
+We will:
+
+1. Define our language and write a simple program in it
+
+2. Implement a simple parser combinator library
+
+3. Implement a parser for our language
+
+4. Implement a pretty printer for our language
+
+5. Define a subset of JavaScript for our usage
+
+6. Implement a code translator to the JavaScript subset we defined
+
+7. Glue it all together
+
+Let's start!
+
+### 1\. Defining the language
+
+The main attraction of lisps is that their syntax already represents a tree, which is why they are so easy to parse. We'll see that soon. But first let's define our language. Here's a BNF description of our language's syntax:
+
+```
+program ::= expr
+expr ::= <integer> | <name> | ([<expr>])
+```
+
+Basically, our language lets us define one expression at the top level which it will evaluate. An expression is composed of either an integer, for example `5`, a variable, for example `x`, or a list of expressions, for example `(add x 1)`.
+
+An integer evaluates to itself, a variable evaluates to whatever it's bound to in the current environment, and a list evaluates to a function call where the first element is the function and the rest are the arguments to the function.
+
+We have some built-in special forms in our language so we can do more interesting stuff:
+
+* let expression: lets us introduce new variables in the environment of the body of the let.
The syntax is:
+
+```
+let ::= (let ([<letargs>]) <body>)
+letargs ::= (<name> <expr>)
+body ::= <expr>
+```
+
+* lambda expression: evaluates to an anonymous function definition. The syntax is:
+
+```
+lambda ::= (lambda ([<name>]) <body>)
+```
+
+We also have a few built-in functions: `add`, `mul`, `sub`, `div` and `print`.
+
+Let's see a quick example of a program written in our language:
+
+```
+(let
+  ((compose
+     (lambda (f g)
+       (lambda (x) (f (g x)))))
+   (square
+     (lambda (x) (mul x x)))
+   (add1
+     (lambda (x) (add x 1))))
+  (print ((compose square add1) 5)))
+```
+
+This program defines 3 functions: `compose`, `square` and `add1`, and then prints the result of the computation: `((compose square add1) 5)`
+
+I hope this is enough information about the language. Let's start implementing it!
+
+We can define the language in Haskell like this:
+
+```
+type Name = String
+
+data Expr
+  = ATOM Atom
+  | LIST [Expr]
+  deriving (Eq, Read, Show)
+
+data Atom
+  = Int Int
+  | Symbol Name
+  deriving (Eq, Read, Show)
+```
+
+We can parse programs in the language we defined to an `Expr`. Also, we are giving the new data types `Eq`, `Read` and `Show` instances to aid in testing and debugging. You'll be able to use those in the REPL, for example, to verify all this actually works.
+
+The reason we did not define `lambda`, `let` and the other built-in functions as part of the syntax is because we can get away with it in this case. These functions are just a more specific case of a `LIST`. So I decided to leave this to a later phase.
+
+Usually, you would like to define these special cases in the abstract syntax - to improve error messages, to enable static analysis and optimizations and such, but we won't do that here so this is enough for us.
+
+Another thing you would usually like to do is add some annotation to the syntax. For example, the location: which file did this `Expr` come from, and which row and column in the file.
You can use this in later stages to print the location of errors, even if they are not in the parser stage.
+
+* _Exercise 1_ : Add a `Program` data type to include multiple `Expr` sequentially
+
+* _Exercise 2_ : Add location annotation to the syntax tree.
+
+### 2\. Implement a simple parser combinator library
+
+The first thing we are going to do is define an Embedded Domain Specific Language (or EDSL) which we will use to define our language's parser. This is often referred to as a parser combinator library. We are doing this strictly for learning purposes; Haskell has great parsing libraries and you should definitely use them when building real software, or even when just experimenting. One such library is [megaparsec][8].
+
+First let's talk about the idea behind our parser library implementation. In its essence, our parser is a function that takes some input, might consume some or all of the input, and returns the value it managed to parse and the rest of the input it didn't parse yet, or throws an error if it failed. Let's write that down.
+
+```
+newtype Parser a
+  = Parser (ParseString -> Either ParseError (a, ParseString))
+
+data ParseString
+  = ParseString Name (Int, Int) String
+
+data ParseError
+  = ParseError ParseString Error
+
+type Error = String
+
+```
+
+Here we defined three main new types.
+
+First, `Parser a`, is the parsing function we described before.
+
+Second, `ParseString` is our input or state we carry along. It has three significant parts:
+
+* `Name`: This is the name of the source
+
+* `(Int, Int)`: This is the current location in the source
+
+* `String`: This is the remaining string left to parse
+
+Third, `ParseError` contains the current state of the parser and an error message.
+
+Now we want our parser to be flexible, so we will define a few instances for common type classes for it. These instances will allow us to combine small parsers to make bigger parsers (hence the name 'parser combinators').
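Before the Haskell type-class instances, it may help to see the core idea in the compiler's target language. The sketch below is a hypothetical JavaScript illustration, not part of the article's code (the helper names `char`, `pmap` and `alt` are invented here): a parser is just a function from the remaining input to either a failure or a parsed value plus the rest of the input.

```javascript
// A parser is a function from the remaining input to either
// a success ({ ok: true, value, rest }) or a failure ({ ok: false, error }).
const char = (c) => (input) =>
  input[0] === c
    ? { ok: true, value: c, rest: input.slice(1) }
    : { ok: false, error: "expected '" + c + "'" };

// The Functor idea: apply a function to a parser's result to build a new parser.
const pmap = (f, p) => (input) => {
  const r = p(input);
  return r.ok ? { ok: true, value: f(r.value), rest: r.rest } : r;
};

// The Alternative idea: try the first parser; if it fails, try the second.
const alt = (p1, p2) => (input) => {
  const r = p1(input);
  return r.ok ? r : p2(input);
};

console.log(pmap((c) => c.toUpperCase(), alt(char("x"), char("a")))("abc"));
// → { ok: true, value: 'A', rest: 'bc' }
```

The Haskell `Parser` newtype encodes exactly this shape; `pmap` corresponds to `fmap` and `alt` to `<|>` below.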
+
+The first one is a `Functor` instance. We want a `Functor` instance because we want to be able to define a parser using another parser simply by applying a function on the parsed value. We will see an example of this when we define the parser for our language.
+
+```
+instance Functor Parser where
+  fmap f (Parser parser) =
+    Parser (\str -> first f <$> parser str)
+```
+
+The second instance is an `Applicative` instance. One common use case for this instance is to lift a pure function on multiple parsers.
+
+```
+instance Applicative Parser where
+  pure x = Parser (\str -> Right (x, str))
+  (Parser p1) <*> (Parser p2) =
+    Parser $
+      \str -> do
+        (f, rest) <- p1 str
+        (x, rest') <- p2 rest
+        pure (f x, rest')
+
+```
+
+(Note:  _We will also implement a Monad instance so we can use do notation here._ )
+
+The third instance is an `Alternative` instance. We want to be able to supply an alternative parser in case one fails.
+
+```
+instance Alternative Parser where
+  empty = Parser (`throwErr` "Failed consuming input")
+  (Parser p1) <|> (Parser p2) =
+    Parser $
+      \pstr -> case p1 pstr of
+        Right result -> Right result
+        Left _ -> p2 pstr
+```
+
+The fourth instance is a `Monad` instance, so we'll be able to chain parsers.
+
+```
+instance Monad Parser where
+  (Parser p1) >>= f =
+    Parser $
+      \str -> case p1 str of
+        Left err -> Left err
+        Right (rs, rest) ->
+          case f rs of
+            Parser parser -> parser rest
+
+```
+
+Next, let's define a way to run a parser and a utility function for failure:
+
+```
+
+runParser :: String -> String -> Parser a -> Either ParseError (a, ParseString)
+runParser name str (Parser parser) = parser $ ParseString name (0,0) str
+
+throwErr :: ParseString -> String -> Either ParseError a
+throwErr ps@(ParseString name (row,col) _) errMsg =
+  Left $ ParseError ps $ unlines
+    [ "*** " ++ name ++ ": " ++ errMsg
+    , "* On row " ++ show row ++ ", column " ++ show col ++ "."
+    ]
+
+```
+
+Now we'll start implementing the combinators which are the API and heart of the EDSL.
+
+First, we'll define `oneOf`. `oneOf` will succeed if one of the characters in the list supplied to it is the next character of the input and will fail otherwise.
+
+```
+oneOf :: [Char] -> Parser Char
+oneOf chars =
+  Parser $ \case
+    ps@(ParseString name (row, col) str) ->
+      case str of
+        [] -> throwErr ps "Cannot read character of empty string"
+        (c:cs) ->
+          if c `elem` chars
+          then Right (c, ParseString name (row, col+1) cs)
+          else throwErr ps $ unlines ["Unexpected character " ++ [c], "Expecting one of: " ++ show chars]
+```
+
+`optional` will stop a parser from throwing an error. It will just return `Nothing` on failure.
+
+```
+optional :: Parser a -> Parser (Maybe a)
+optional (Parser parser) =
+  Parser $
+    \pstr -> case parser pstr of
+      Left _ -> Right (Nothing, pstr)
+      Right (x, rest) -> Right (Just x, rest)
+```
+
+`many` will try to run a parser repeatedly until it fails. When it does, it'll return a list of successful parses. `many1` will do the same, but will throw an error if it fails to parse at least once.
+
+```
+many :: Parser a -> Parser [a]
+many parser = go []
+  where go cs = (parser >>= \c -> go (c:cs)) <|> pure (reverse cs)
+
+many1 :: Parser a -> Parser [a]
+many1 parser =
+  (:) <$> parser <*> many parser
+
+```
+
+These next few parsers use the combinators we defined to make more specific parsers:
+
+```
+char :: Char -> Parser Char
+char c = oneOf [c]
+
+string :: String -> Parser String
+string = traverse char
+
+space :: Parser Char
+space = oneOf " \n"
+
+spaces :: Parser String
+spaces = many space
+
+spaces1 :: Parser String
+spaces1 = many1 space
+
+withSpaces :: Parser a -> Parser a
+withSpaces parser =
+  spaces *> parser <* spaces
+
+parens :: Parser a -> Parser a
+parens parser =
+  (withSpaces $ char '(')
+  *> withSpaces parser
+  <* (spaces *> char ')')
+
+sepBy :: Parser a -> Parser b -> Parser [b]
+sepBy sep parser = do
+  frst <- optional parser
+  rest <- many (sep *> parser)
+  pure $ maybe rest (:rest) frst
+
+```
+
+Now we have everything we need to start defining a parser for our language.
+
+* _Exercise_ : implement an EOF (end of file/input) parser combinator.
+
+### 3\. Implementing a parser for our language
+
+To define our parser, we'll use the top-down method.
+
+```
+parseExpr :: Parser Expr
+parseExpr = fmap ATOM parseAtom <|> fmap LIST parseList
+
+parseList :: Parser [Expr]
+parseList = parens $ sepBy spaces1 parseExpr
+
+parseAtom :: Parser Atom
+parseAtom = parseSymbol <|> parseInt
+
+parseSymbol :: Parser Atom
+parseSymbol = fmap Symbol parseName
+
+```
+
+Notice that these four functions are a very high-level description of our language. This demonstrates why Haskell is so nice for parsing. After defining the high-level parts, we still need to define the lower-level `parseName` and `parseInt`.
+
+What characters can we use as names in our language? Let's decide to use lowercase letters, digits and underscores, where the first character must be a letter.
+
+```
+parseName :: Parser Name
+parseName = do
+  c <- oneOf ['a'..'z']
+  cs <- many $ oneOf $ ['a'..'z'] ++ "0123456789" ++ "_"
+  pure (c:cs)
+```
+
+For integers, we want a sequence of digits optionally preceded by '-':
+
+```
+parseInt :: Parser Atom
+parseInt = do
+  sign <- optional $ char '-'
+  num <- many1 $ oneOf "0123456789"
+  let result = read $ maybe num (:num) sign
+  pure $ Int result
+```
+
+Lastly, we'll define a function to run a parser and get back an `Expr` or an error message.
+
+```
+runExprParser :: Name -> String -> Either String Expr
+runExprParser name str =
+  case runParser name str (withSpaces parseExpr) of
+    Left (ParseError _ errMsg) -> Left errMsg
+    Right (result, _) -> Right result
+```
+
+* _Exercise 1_ : Write a parser for the `Program` type you defined in the first section
+
+* _Exercise 2_ : Rewrite `parseName` in Applicative style
+
+* _Exercise 3_ : Find a way to handle the overflow case in `parseInt` instead of using `read`.
+
+### 4\. Implement a pretty printer for our language
+
+One more thing we'd like to do is be able to print our programs as source code. This is useful for better error messages.
+
+```
+printExpr :: Expr -> String
+printExpr = printExpr' False 0
+
+printAtom :: Atom -> String
+printAtom = \case
+  Symbol s -> s
+  Int i -> show i
+
+printExpr' :: Bool -> Int -> Expr -> String
+printExpr' doindent level = \case
+  ATOM a -> indent (bool 0 level doindent) (printAtom a)
+  LIST (e:es) ->
+    indent (bool 0 level doindent) $
+      concat
+        [ "("
+        , printExpr' False (level + 1) e
+        , bool "\n" "" (null es)
+        , intercalate "\n" $ map (printExpr' True (level + 1)) es
+        , ")"
+        ]
+
+indent :: Int -> String -> String
+indent tabs e = concat (replicate tabs " ") ++ e
+```
+
+* _Exercise_ : Write a pretty printer for the `Program` type you defined in the first section
+
+Okay, we wrote around 200 lines so far of what's typically called the front-end of the compiler.
We have around 150 more lines to go and three more tasks: We need to define a subset of JS for our usage, define the translator from our language to that subset, and glue the whole thing together. Let's go!
+
+### 5\. Define a subset of JavaScript for our usage
+
+First, we'll define the subset of JavaScript we are going to use:
+
+```
+data JSExpr
+  = JSInt Int
+  | JSSymbol Name
+  | JSBinOp JSBinOp JSExpr JSExpr
+  | JSLambda [Name] JSExpr
+  | JSFunCall JSExpr [JSExpr]
+  | JSReturn JSExpr
+  deriving (Eq, Show, Read)
+
+type JSBinOp = String
+```
+
+This data type represents a JavaScript expression. We have two atoms, `JSInt` and `JSSymbol`, to which we'll translate our language's `Atom`; we have `JSBinOp` to represent a binary operation such as `+` or `*`; we have `JSLambda` for anonymous functions, same as our `lambda` expression; we have `JSFunCall`, which we'll use both for calling functions and for introducing new names as in `let`; and we have `JSReturn` to return values from functions, as that's required in JavaScript.
+
+This `JSExpr` type is an **abstract representation** of a JavaScript expression. We will translate our own `Expr`, which is an abstract representation of our language's expressions, to `JSExpr` and from there to JavaScript. But in order to do that we need to take this `JSExpr` and produce JavaScript code from it. We'll do that by pattern matching on `JSExpr` recursively and emitting JS code as a `String`. This is basically the same thing we did in `printExpr`. We'll also track the scoping of elements so we can indent the generated code in a nice way.
+
+```
+printJSOp :: JSBinOp -> String
+printJSOp op = op
+
+printJSExpr :: Bool -> Int -> JSExpr -> String
+printJSExpr doindent tabs = \case
+  JSInt i -> show i
+  JSSymbol name -> name
+  JSLambda vars expr -> (if doindent then indent tabs else id) $ unlines
+    ["function(" ++ intercalate ", " vars ++ ") {"
+    ,indent (tabs+1) $ printJSExpr False (tabs+1) expr
+    ] ++ indent tabs "}"
+  JSBinOp op e1 e2 -> "(" ++ printJSExpr False tabs e1 ++ " " ++ printJSOp op ++ " " ++ printJSExpr False tabs e2 ++ ")"
+  JSFunCall f exprs -> "(" ++ printJSExpr False tabs f ++ ")(" ++ intercalate ", " (fmap (printJSExpr False tabs) exprs) ++ ")"
+  JSReturn expr -> (if doindent then indent tabs else id) $ "return " ++ printJSExpr False tabs expr ++ ";"
+```
+
+* _Exercise 1_ : Add a `JSProgram` type that will hold multiple `JSExpr` and create a function `printJSExprProgram` to generate code for it.
+
+* _Exercise 2_ : Add a new type of `JSExpr` - `JSIf`, and generate code for it.
+
+### 6\. Implement a code translator to the JavaScript subset we defined
+
+We are almost there. In this section we'll create a function to translate `Expr` to `JSExpr`.
+
+The basic idea is simple: we'll translate `ATOM` to `JSSymbol` or `JSInt` and `LIST` to either a function call or a special case we'll translate later.
+
+```
+type TransError = String
+
+translateToJS :: Expr -> Either TransError JSExpr
+translateToJS = \case
+  ATOM (Symbol s) -> pure $ JSSymbol s
+  ATOM (Int i) -> pure $ JSInt i
+  LIST xs -> translateList xs
+
+translateList :: [Expr] -> Either TransError JSExpr
+translateList = \case
+  [] -> Left "translating empty list"
+  ATOM (Symbol s):xs
+    | Just f <- lookup s builtins ->
+      f xs
+  f:xs ->
+    JSFunCall <$> translateToJS f <*> traverse translateToJS xs
+
+```
+
+`builtins` is a list of special cases to translate, like `lambda` and `let`. Every case gets its list of arguments, verifies that it is syntactically valid, and translates it to the equivalent `JSExpr`.
+
+```
+type Builtin = [Expr] -> Either TransError JSExpr
+type Builtins = [(Name, Builtin)]
+
+builtins :: Builtins
+builtins =
+  [("lambda", transLambda)
+  ,("let", transLet)
+  ,("add", transBinOp "add" "+")
+  ,("mul", transBinOp "mul" "*")
+  ,("sub", transBinOp "sub" "-")
+  ,("div", transBinOp "div" "/")
+  ,("print", transPrint)
+  ]
+
+```
+
+In our case, we treat built-in special forms as special and not first class, so we will not be able to use them as first-class functions and such.
+
+We'll translate a Lambda to an anonymous function:
+
+```
+transLambda :: [Expr] -> Either TransError JSExpr
+transLambda = \case
+  [LIST vars, body] -> do
+    vars' <- traverse fromSymbol vars
+    JSLambda vars' <$> (JSReturn <$> translateToJS body)
+
+  vars ->
+    Left $ unlines
+      ["Syntax error: unexpected arguments for lambda."
+      ,"expecting 2 arguments, the first is the list of vars and the second is the body of the lambda."
+      ,"In expression: " ++ show (LIST $ ATOM (Symbol "lambda") : vars)
+      ]
+
+fromSymbol :: Expr -> Either String Name
+fromSymbol (ATOM (Symbol s)) = Right s
+fromSymbol e = Left $ "cannot bind value to non symbol type: " ++ show e
+
+```
+
+We'll translate let to a definition of a function with the relevant named arguments and call it with the values, thus introducing the variables in that scope:
+
+```
+transLet :: [Expr] -> Either TransError JSExpr
+transLet = \case
+  [LIST binds, body] -> do
+    (vars, vals) <- letParams binds
+    vars' <- traverse fromSymbol vars
+    JSFunCall . JSLambda vars' <$> (JSReturn <$> translateToJS body) <*> traverse translateToJS vals
+    where
+      letParams :: [Expr] -> Either Error ([Expr],[Expr])
+      letParams = \case
+        [] -> pure ([],[])
+        LIST [x,y] : rest -> ((x:) *** (y:)) <$> letParams rest
+        x : _ -> Left ("Unexpected argument in let list in expression:\n" ++ printExpr x)
+
+  vars ->
+    Left $ unlines
+      ["Syntax error: unexpected arguments for let."
+ ,"expecting 2 arguments, the first is the list of var/val pairs and the second is the let body." + ,"In expression:\n" ++ printExpr (LIST $ ATOM (Symbol "let") : vars) + ] +``` + +We'll translate an operation that can work on multiple arguments to a chain of binary operations. For example: `(add 1 2 3)` will become `1 + (2 + 3)` + +``` +transBinOp :: Name -> Name -> [Expr] -> Either TransError JSExpr +transBinOp f _ [] = Left $ "Syntax error: '" ++ f ++ "' expected at least 1 argument, got: 0" +transBinOp _ _ [x] = translateToJS x +transBinOp _ f list = foldl1 (JSBinOp f) <$> traverse translateToJS list +``` + +And we'll translate a `print` as a call to `console.log` + +``` +transPrint :: [Expr] -> Either TransError JSExpr +transPrint [expr] = JSFunCall (JSSymbol "console.log") . (:[]) <$> translateToJS expr +transPrint xs = Left $ "Syntax error. print expected 1 arguments, got: " ++ show (length xs) + +``` + +Notice that we could have skipped verifying the syntax if we'd parse those as special cases of `Expr`. + +* _Exercise 1_ : Translate `Program` to `JSProgram` + +* _Exercise 2_ : add a special case for `if Expr Expr Expr` and translate it to the `JSIf` case you implemented in the last exercise + +### 7\. Glue it all together + +Finally, we are going to glue this all together. We'll: + +1. Read a file + +2. Parse it to `Expr` + +3. Translate it to `JSExpr` + +4. Emit JavaScript code to the standard output + +We'll also enable a few flags for testing: + +* `--e` will parse and print the abstract representation of the expression (`Expr`) + +* `--pp` will parse and pretty print + +* `--jse` will parse, translate and print the abstract representation of the resulting JS (`JSExpr`) + +* `--ppc` will parse, pretty print and compile + +``` +main :: IO () +main = getArgs >>= \case + [file] -> + printCompile =<< readFile file + ["--e",file] -> + either putStrLn print . runExprParser "--e" =<< readFile file + ["--pp",file] -> + either putStrLn (putStrLn . printExpr) . 
runExprParser "--pp" =<< readFile file + ["--jse",file] -> + either print (either putStrLn print . translateToJS) . runExprParser "--jse" =<< readFile file + ["--ppc",file] -> + either putStrLn (either putStrLn putStrLn) . fmap (compile . printExpr) . runExprParser "--ppc" =<< readFile file + _ -> + putStrLn $ unlines + ["Usage: runghc Main.hs [ --e, --pp, --jse, --ppc ] " + ,"--e print the Expr" + ,"--pp pretty print Expr" + ,"--jse print the JSExpr" + ,"--ppc pretty print Expr and then compile" + ] + +printCompile :: String -> IO () +printCompile = either putStrLn putStrLn . compile + +compile :: String -> Either Error String +compile str = printJSExpr False 0 <$> (translateToJS =<< runExprParser "compile" str) + +``` + +That's it. We have a compiler from our language to JS. Again, you can view the full source file [here][9]. + +Running our compiler with the example from the first section yields this JavaScript code: + +``` +$ runhaskell Lisp.hs example.lsp +(function(compose, square, add1) { + return (console.log)(((compose)(square, add1))(5)); +})(function(f, g) { + return function(x) { + return (f)((g)(x)); + }; +}, function(x) { + return (x * x); +}, function(x) { + return (x + 1); +}) +``` + +If you have node.js installed on your computer, you can run this code by running: + +``` +$ runhaskell Lisp.hs example.lsp | node -p +36 +undefined +``` + +* _Final exercise_ : instead of compiling an expression, compile a program of multiple expressions. 
+ +-------------------------------------------------------------------------------- + +via: https://gilmi.me/blog/post/2016/10/14/lisp-to-js + +作者:[ Gil Mizrahi ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://gilmi.me/home +[1]:https://gilmi.me/blog/authors/Gil +[2]:https://gilmi.me/blog/tags/compilers +[3]:https://gilmi.me/blog/tags/fp +[4]:https://gilmi.me/blog/tags/haskell +[5]:https://gilmi.me/blog/tags/lisp +[6]:https://gilmi.me/blog/tags/parsing +[7]:https://gist.github.com/soupi/d4ff0727ccb739045fad6cdf533ca7dd +[8]:https://mrkkrp.github.io/megaparsec/ +[9]:https://gist.github.com/soupi/d4ff0727ccb739045fad6cdf533ca7dd +[10]:https://gilmi.me/blog/post/2016/10/14/lisp-to-js From 03693b1e9b4624fd9b37ba4af7804279306af000 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 8 Oct 2018 11:37:58 +0800 Subject: [PATCH 267/736] =?UTF-8?q?20181008-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20180715 Why is Python so slow.md | 203 ++++++++++++++++++ 1 file changed, 203 insertions(+) create mode 100644 sources/tech/20180715 Why is Python so slow.md diff --git a/sources/tech/20180715 Why is Python so slow.md b/sources/tech/20180715 Why is Python so slow.md new file mode 100644 index 0000000000..2e2af0f74e --- /dev/null +++ b/sources/tech/20180715 Why is Python so slow.md @@ -0,0 +1,203 @@ +Why is Python so slow? +============================================================ + +Python is booming in popularity. It is used in DevOps, Data Science, Web Development and Security. + +It does not, however, win any medals for speed. + + +![](https://cdn-images-1.medium.com/max/1200/0*M2qZQsVnDS-4i5zc.jpg) + +> How does Java compare in terms of speed to C or C++ or C# or Python? The answer depends greatly on the type of application you’re running. 
No benchmark is perfect, but The Computer Language Benchmarks Game is [a good starting point][5].
+
+I’ve been referring to the Computer Language Benchmarks Game for over a decade; compared with other languages like Java, C#, Go, JavaScript and C++, Python is [one of the slowest][6]. This includes [JIT][7] (C#, Java) and [AOT][8] (C, C++) compilers, as well as interpreted languages like JavaScript.
+
+ _NB: When I say “Python”, I’m talking about the reference implementation of the language, CPython. I will refer to other runtimes in this article._
+
+> I want to answer this question: When Python completes a comparable application 2–10x slower than another language,  _why is it slow_  and can’t we  _make it faster_ ?
+
+Here are the top theories:
+
+* “ _It’s the GIL (Global Interpreter Lock)_ ”
+
+* “ _It’s because it’s interpreted and not compiled_ ”
+
+* “ _It’s because it’s a dynamically typed language_ ”
+
+Which one of these reasons has the biggest impact on performance?
+
+### “It’s the GIL”
+
+Modern computers come with CPUs that have multiple cores, and sometimes multiple processors. In order to utilise all this extra processing power, the operating system defines a low-level structure called a thread, where a process (e.g. the Chrome browser) can spawn multiple threads, each with its own instructions to execute. That way, if one process is particularly CPU-intensive, that load can be shared across the cores, which effectively makes most applications complete tasks faster.
+
+My Chrome browser, as I’m writing this article, has 44 threads open. Keep in mind that the structure and API of threading are different between POSIX-based systems (e.g. Mac OS and Linux) and Windows. The operating system also handles the scheduling of threads.
+
+If you haven’t done multi-threaded programming before, a concept you’ll need to become familiar with quickly is locks. 
Unlike a single-threaded process, you need to ensure that when changing variables in memory, multiple threads don’t try to access or change the same memory address at the same time.
+
+When CPython creates a variable, it allocates the memory and then counts how many references to that variable exist; this is a concept known as reference counting. If the number of references is 0, then it frees that piece of memory from the system. This is why creating a “temporary” variable within, say, the scope of a for loop doesn’t blow up the memory consumption of your application.
+
+The challenge then becomes: when variables are shared between multiple threads, how does CPython lock the reference count? There is a “global interpreter lock” that carefully controls thread execution. The interpreter can only execute one operation at a time, regardless of how many threads it has.
+
+#### What does this mean for the performance of a Python application?
+
+If you have a single-threaded, single-interpreter application, it will make no difference to the speed. Removing the GIL would have no impact on the performance of your code.
+
+If you wanted to implement concurrency within a single interpreter (Python process) by using threading, and your threads were IO-intensive (e.g. network IO or disk IO), you would see the consequences of GIL-contention.
+
+![](https://cdn-images-1.medium.com/max/1600/0*S_iSksY5oM5H1Qf_.png)
+From David Beazley’s GIL visualised post [http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html][1]
+
+If you have a web application (e.g. Django) and you’re using WSGI, then each request to your web app is handled by a separate Python interpreter, so there is only 1 lock  _per_  request. Because the Python interpreter is slow to start, some WSGI implementations have a “Daemon Mode” [which keeps Python process(es) running for you.][9]
+
+#### What about other Python runtimes?
+
+[PyPy has a GIL][10] and it is typically >3x faster than CPython. 
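Whichever runtime you choose, the GIL behaviour described above is easy to observe from plain CPython. The sketch below is an illustrative micro-benchmark, not a rigorous one (the function name and iteration count are arbitrary): it times the same CPU-bound work run sequentially and then in two threads. On CPython the threaded run is no faster, because only one thread can execute bytecode at a time:

```python
import threading
import time

def countdown(n):
    # Pure CPU-bound work: no IO, so a thread never releases the GIL voluntarily
    while n > 0:
        n -= 1

N = 5_000_000

# Two runs, one after the other
start = time.perf_counter()
countdown(N)
countdown(N)
sequential = time.perf_counter() - start

# Two threads "in parallel": the GIL lets only one run bytecode at a time
threads = [threading.Thread(target=countdown, args=(N,)) for _ in range(2)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```

On a multi-core machine the two timings come out roughly equal, and the threaded one is often slightly worse, which is exactly the behaviour David Beazley’s visualisation shows.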
+
+[Jython does not have a GIL][11] because a Python thread in Jython is represented by a Java thread and benefits from the JVM memory-management system.
+
+#### How does JavaScript do this?
+
+Well, firstly, all JavaScript engines [use mark-and-sweep garbage collection][12]. As stated, the primary need for the GIL is CPython’s memory-management algorithm.
+
+JavaScript does not have a GIL, but it’s also single-threaded so it doesn’t require one. JavaScript’s event loop and Promise/callback pattern are how asynchronous programming is achieved in place of concurrency. Python has a similar thing with the asyncio event loop.
+
+### “It’s because it’s an interpreted language”
+
+I hear this a lot, and I find it a gross oversimplification of the way CPython actually works. If at a terminal you wrote `python myscript.py`, then CPython would start a long sequence of reading, lexing, parsing, compiling, interpreting and executing that code.
+
+If you’re interested in how that process works, I’ve written about it before:
+
+[Modifying the Python language in 6 minutes
+This week I raised my first pull-request to the CPython core project, which was declined :-( but as to not completely…hackernoon.com][13][][14]
+
+An important point in that process is the creation of a `.pyc` file: at the compiler stage, the bytecode sequence is written to a file inside `__pycache__/` on Python 3, or in the same directory in Python 2\. This doesn’t just apply to your script, but to all of the code you imported, including 3rd-party modules.
+
+So most of the time (unless you write code which you only ever run once?), Python is interpreting bytecode and executing it locally. Compare that with Java and C#.NET:
+
+> Java compiles to an “Intermediate Language” and the Java Virtual Machine reads the bytecode and just-in-time compiles it to machine code. The .NET CIL is the same; the .NET Common Language Runtime (CLR) uses just-in-time compilation to machine code. 
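The bytecode stage described above is easy to inspect from Python itself. A small sketch using the standard-library `dis` module (the function is just an arbitrary example) prints the instruction sequence CPython compiled, the same bytecode that gets cached in the `.pyc` files:

```python
import dis

def add(a, b):
    return a + b

# Disassemble the compiled bytecode for the function; this is what the
# interpreter loop executes, and what gets cached under __pycache__/
dis.dis(add)
```

The exact opcodes vary between CPython versions (for example, `BINARY_ADD` became `BINARY_OP` in 3.11), but the point stands: this instruction listing is the program the interpreter actually runs.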
+
+So, why is Python so much slower than both Java and C# in the benchmarks if they all use a virtual machine and some sort of bytecode? Firstly, .NET and Java are JIT-compiled.
+
+JIT, or just-in-time compilation, requires an intermediate language to allow the code to be split into chunks (or frames). Ahead-of-time (AOT) compilers are designed to ensure that the CPU can understand every line in the code before any interaction takes place.
+
+The JIT itself does not make the execution any faster, because it is still executing the same bytecode sequences. However, JIT enables optimizations to be made at runtime. A good JIT optimizer will see which parts of the application are being executed a lot, and call these “hot spots”. It will then make optimizations to those bits of code, by replacing them with more efficient versions.
+
+This means that when your application does the same thing again and again, it can be significantly faster. Also, keep in mind that Java and C# are strongly typed languages, so the optimiser can make many more assumptions about the code.
+
+PyPy has a JIT and, as mentioned in the previous section, is significantly faster than CPython. This performance benchmark article goes into more detail:
+
+[Which is the fastest version of Python?
+Of course, “it depends”, but what does it depend on and how can you assess which is the fastest version of Python for…hackernoon.com][15][][16]
+
+#### So why doesn’t CPython use a JIT?
+
+There are downsides to JITs: one of those is startup time. CPython startup time is already comparatively slow, and PyPy is 2–3x slower to start than CPython. The Java Virtual Machine is notoriously slow to boot. The .NET CLR gets around this by starting at system startup, but the developers of the CLR also develop the operating system on which the CLR runs.
+
+If you have a single Python process running for a long time, with code that can be optimized because it contains “hot spots”, then a JIT makes a lot of sense. 
+
+However, CPython is a general-purpose implementation. So if you were developing command-line applications using Python, having to wait for a JIT to start every time the CLI was called would be horribly slow.
+
+CPython has to try and serve as many use cases as possible. There was the possibility of [plugging a JIT into CPython][17], but this project has largely stalled.
+
+> If you want the benefits of a JIT and you have a workload that suits it, use PyPy.
+
+### “It’s because it’s a dynamically typed language”
+
+In a “statically typed” language, you have to specify the type of a variable when it is declared. Those would include C, C++, Java, C# and Go.
+
+In a dynamically typed language, there is still the concept of types, but the type of a variable is dynamic.
+
+```
+a = 1
+a = "foo"
+```
+
+In this toy example, Python creates a second variable with the same name and a type of `str`, and deallocates the memory created for the first instance of `a`.
+
+Statically typed languages aren’t designed as such to make your life hard; they are designed that way because of the way the CPU operates. If everything eventually needs to equate to a simple binary operation, you have to convert objects and types down to a low-level data structure.
+
+Python does this for you, you just never see it, nor do you need to care.
+
+Not having to declare the type isn’t what makes Python slow; the design of the Python language enables you to make almost anything dynamic. You can replace the methods on objects at runtime, you can monkey-patch low-level system calls to a value declared at runtime. Almost anything is possible.
+
+It’s this design that makes it incredibly hard to optimise Python.
+
+To illustrate my point, I’m going to use a syscall tracing tool for Mac OS called DTrace. CPython distributions do not come with DTrace built in, so you have to recompile CPython. 
I’m using 3.6.6 for my demo:
+
+```
+wget https://github.com/python/cpython/archive/v3.6.6.zip
+unzip v3.6.6.zip
+cd v3.6.6
+./configure --with-dtrace
+make
+```
+
+Now `python.exe` will have DTrace tracers throughout the code. [Paul Ross wrote an awesome Lightning Talk on DTrace][19]. You can [download DTrace starter files][20] for Python to measure function calls, execution time, CPU time, syscalls, and all sorts of fun, e.g.
+
+`sudo dtrace -s toolkit/.d -c ‘../cpython/python.exe script.py’`
+
+The `py_callflow` tracer shows all the function calls in your application:
+
+
+![](https://cdn-images-1.medium.com/max/1600/1*Lz4UdUi4EwknJ0IcpSJ52g.gif)
+
+So, does Python’s dynamic typing make it slow?
+
+* Comparing and converting types is costly; every time a variable is read, written to or referenced, the type is checked
+
+* It is hard to optimise a language that is so dynamic. The reason many alternatives to Python are so much faster is that they make compromises on flexibility in the name of performance
+
+* Looking at [Cython][2], which combines C static types and Python to optimise code where the types are known,[ can provide ][3]an 84x performance improvement.
+
+### Conclusion
+
+> Python is primarily slow because of its dynamic nature and versatility. It can be used as a tool for all sorts of problems, where more optimised and faster alternatives are probably available.
+
+There are, however, ways of optimising your Python applications by leveraging async, understanding the profiling tools, and considering the use of multiple interpreters.
+
+For applications where startup time is unimportant and the code would benefit from a JIT, consider PyPy.
+
+For parts of your code where performance is critical and you have more statically typed variables, consider using [Cython][4]. 
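As a closing illustration of the dynamism discussed earlier, the very property that makes ahead-of-time optimisation so hard, here is a sketch of replacing a method on a class at runtime (the class and method names are invented for the example):

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
print(g.greet())  # hello

def shout(self):
    return "HELLO!"

# Swap the method out at runtime; every existing instance is affected.
# An optimiser can never assume Greeter.greet is still the original code.
Greeter.greet = shout
print(g.greet())  # HELLO!
```

Because any attribute can be rebound like this at any moment, CPython must look methods up dynamically on every call rather than compiling them down to fixed jumps.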
+ +#### Further reading + +Jake VDP’s excellent article (although slightly dated) [https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/][21] + +Dave Beazley’s talk on the GIL [http://www.dabeaz.com/python/GIL.pdf][22] + +All about JIT compilers [https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/][23] + +-------------------------------------------------------------------------------- + +via: https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b + +作者:[Anthony Shaw][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://hackernoon.com/@anthonypjshaw?source=post_header_lockup +[1]:http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html +[2]:http://cython.org/ +[3]:http://notes-on-cython.readthedocs.io/en/latest/std_dev.html +[4]:http://cython.org/ +[5]:http://algs4.cs.princeton.edu/faq/ +[6]:https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/python.html +[7]:https://en.wikipedia.org/wiki/Just-in-time_compilation +[8]:https://en.wikipedia.org/wiki/Ahead-of-time_compilation +[9]:https://www.slideshare.net/GrahamDumpleton/secrets-of-a-wsgi-master +[10]:http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why +[11]:http://www.jython.org/jythonbook/en/1.0/Concurrency.html#no-global-interpreter-lock +[12]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management +[13]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14 +[14]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14 +[15]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b +[16]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b +[17]:https://www.slideshare.net/AnthonyShaw5/pyjion-a-jit-extension-system-for-cpython +[18]:https://github.com/python/cpython/archive/v3.6.6.zip 
+[19]:https://github.com/paulross/dtrace-py#the-lightning-talk
+[20]:https://github.com/paulross/dtrace-py/tree/master/toolkit
+[21]:https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/
+[22]:http://www.dabeaz.com/python/GIL.pdf
+[23]:https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/

From 67ae09583747725407380ffe39ba64cd0a1ca0ed Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 8 Oct 2018 12:53:55 +0800
Subject: [PATCH 268/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Tips=20for=20list?=
 =?UTF-8?q?ing=20files=20with=20ls=20at=20the=20Linux=20command=20line?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...files with ls at the Linux command line.md | 73 +++++++++++++++++++
 1 file changed, 73 insertions(+)
 create mode 100644 sources/tech/20181003 Tips for listing files with ls at the Linux command line.md

diff --git a/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md
new file mode 100644
index 0000000000..d04e94a541
--- /dev/null
+++ b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md
@@ -0,0 +1,73 @@
+Tips for listing files with ls at the Linux command line
+======
+Learn some of the Linux 'ls' command's most useful variations.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
+
+One of the first commands I learned in Linux was `ls`. Knowing what’s in a directory where a file on your system resides is important. Being able to see and modify not just some but all of the files is also important.
+
+My first Linux cheat sheet was the [One Page Linux Manual][1], which was released in 1999 and became my go-to reference. I taped it over my desk and referred to it often as I began to explore Linux. 
Listing files with `ls -l` is introduced on the first page, at the bottom of the first column. + +Later, I would learn other iterations of this most basic command. Through the `ls` command, I began to learn about the complexity of the Linux file permissions and what was mine and what required root or sudo permission to change. I became very comfortable on the command line over time, and while I still use `ls -l` to find files in the directory, I frequently use `ls -al` so I can see hidden files that might need to be changed, like configuration files. + +According to an article by Eric Fischer about the `ls` command in the [Linux Documentation Project][2], the command's roots go back to the `listf` command on MIT’s Compatible Time Sharing System in 1961. When CTSS was replaced by [Multics][3], the command became `list`, with switches like `list -all`. According to [Wikipedia][4], `ls` appeared in the original version of AT&T Unix. The `ls` command we use today on Linux systems comes from the [GNU Core Utilities][5]. + +Most of the time, I use only a couple of iterations of the command. Looking inside a directory with `ls` or `ls -al` is how I generally use the command, but there are many other options that you should be familiar with. + +`$ ls -l` provides a simple list of the directory: + +![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png) + +Using the man pages of my Fedora 28 system, I find that there are many other options to `ls`, all of which provide interesting and useful information about the Linux file system. 
By entering `man ls` at the command prompt, we can begin to explore some of the other options:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png)
+
+To sort the directory by file sizes, use `ls -lS`:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_3_0.png)
+
+To list the contents in reverse order, use `ls -lr`:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_4.png)
+
+To list contents by columns, use `ls -C`:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_5.png)
+
+`ls -al` provides a list of all the files in the directory, including hidden files:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png)
+
+Here are some additional options that I find useful and interesting:
+
+ * List only the .txt files in the directory: `ls *.txt`
+ * Show the allocated size of each file: `ls -s`
+ * Sort by modification time, newest first: `ls -t`
+ * Sort by extension: `ls -X`
+ * Sort by file size: `ls -S`
+ * Long format with file size: `ls -ls`
+
+
+
+To generate a directory list in the specified format and send it to a file for later viewing, enter `ls -al > mydirectorylist`. Finally, one of the more exotic commands I found is `ls -R`, which provides a recursive list of all the directories on your computer and their contents.
+
+For a complete list of all the options of the `ls` command, refer to the [GNU Core Utilities][6]. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/ls-command + +作者:[Don Watkins][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[1]: http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf +[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html +[3]: https://en.wikipedia.org/wiki/Multics +[4]: https://en.wikipedia.org/wiki/Ls +[5]: http://www.gnu.org/s/coreutils/ +[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation From 954873c55469b385326524c9cb606a73f901779c Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 8 Oct 2018 12:55:27 +0800 Subject: [PATCH 269/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20PyTorch=201.0=20P?= =?UTF-8?q?review=20Release:=20Facebook=E2=80=99s=20newest=20Open=20Source?= =?UTF-8?q?=20AI?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...lease- Facebook-s newest Open Source AI.md | 181 ++++++++++++++++++ 1 file changed, 181 insertions(+) create mode 100644 sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md diff --git a/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md new file mode 100644 index 0000000000..6418db9444 --- /dev/null +++ b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md @@ -0,0 +1,181 @@ +PyTorch 1.0 Preview Release: Facebook’s newest Open Source AI +====== +Facebook already uses its own Open Source AI, PyTorch quite extensively in its own artificial intelligence projects. 
Recently, they have gone a step further by releasing a pre-release preview of version 1.0.
+
+For those who are not familiar, [PyTorch][1] is a Python-based library for scientific computing.
+
+PyTorch harnesses the [superior computational power of Graphical Processing Units (GPUs)][2] for carrying out complex [Tensor][3] computations and implementing [deep neural networks][4]. So, it is used widely across the world by numerous researchers and developers.
+
+This new ready-to-use [Preview Release][5] was announced at the [PyTorch Developer Conference][6] at [The Midway][7], San Francisco, CA on Tuesday, October 2, 2018.
+
+### Highlights of PyTorch 1.0 Release Candidate
+
+![PyTorch is a Python-based open source AI framework from Facebook][8]
+
+Some of the main new features in the release candidate are:
+
+#### 1\. JIT
+
+JIT is a set of compiler tools to bring research closer to production. It includes a Python-based language called Torch Script and also ways to make existing code compatible with it.
+
+#### 2\. New torch.distributed library: “C10D”
+
+“C10D” enables asynchronous operation on different backends, with performance improvements on slower networks and more.
+
+#### 3\. C++ frontend (experimental)
+
+Though it has been specifically mentioned as an unstable API (expected in a pre-release), this is a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend. It enables research in high-performance, low-latency C++ applications installed directly on hardware.
+
+To know more, you can take a look at the complete [update notes][9] on GitHub.
+
+The first stable version, PyTorch 1.0, will be released in the summer.
+
+### Installing PyTorch on Linux
+
+To install PyTorch v1.0rc0, the developers recommend using [conda][10], though there are also other ways to do that, as shown on their [local installation page][11], where everything necessary is documented in detail. 
+ +#### Prerequisites + + * Linux + * Pip + * Python + * [CUDA][12] (For Nvidia GPU owners) + + + +As we recently showed you [how to install and use Pip][13], let’s get to know how we can install PyTorch with it. + +Note that PyTorch has GPU and CPU-only variants. You should install the one that suits your hardware. + +#### Installing old and stable version of PyTorch + +If you want the stable release (version 0.4) for your GPU, use: + +``` +pip install torch torchvision + +``` + +Use these two commands in succession for a CPU-only stable release: + +``` +pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl +pip install torchvision + +``` + +#### Installing PyTorch 1.0 Release Candidate + +You install PyTorch 1.0 RC GPU version with this command: + +``` +pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html + +``` + +If you do not have a GPU and would prefer a CPU-only version, use: + +``` +pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html + +``` + +#### Verifying your PyTorch installation + +Startup the python console on a terminal with the following simple command: + +``` +python + +``` + +Now enter the following sample code line by line to verify your installation: + +``` +from __future__ import print_function +import torch +x = torch.rand(5, 3) +print(x) + +``` + +You should get an output like: + +``` +tensor([[0.3380, 0.3845, 0.3217], + [0.8337, 0.9050, 0.2650], + [0.2979, 0.7141, 0.9069], + [0.1449, 0.1132, 0.1375], + [0.4675, 0.3947, 0.1426]]) + +``` + +To check whether you can use PyTorch’s GPU capabilities, use the following sample code: + +``` +import torch +torch.cuda.is_available() + +``` + +The resulting output should be: + +``` +True + +``` + +Support for AMD GPUs for PyTorch is still under development, so complete test coverage is not yet provided as reported [here][14], suggesting this [resource][15] in case you have an AMD GPU. 
+
+Let’s now look into some research projects that extensively use PyTorch:
+
+### Ongoing Research Projects based on PyTorch
+
+ * [Detectron][16]: Facebook AI Research’s software system to intelligently detect and classify objects. It is based on Caffe2. Earlier this year, Caffe2 and PyTorch [joined forces][17] to create the research- and production-ready PyTorch 1.0 we are talking about.
+ * [Unsupervised Sentiment Discovery][18]: Such methods are extensively used with social media algorithms.
+ * [vid2vid][19]: Photorealistic video-to-video translation.
+ * [DeepRecommender][20]: We covered how such systems work in our past [Netflix AI article][21].
+
+
+
+Nvidia, the leading GPU manufacturer, covered more on this with its own [update][22] on this recent development, where you can also read about ongoing collaborative research endeavours.
+
+### How should we react to such PyTorch capabilities?
+
+Considering that Facebook applies such amazingly innovative projects in its social media algorithms, should we appreciate all this or be alarmed? This is almost [Skynet][23]! This newly improved, production-ready pre-release of PyTorch will certainly push things further ahead! Feel free to share your thoughts with us in the comments below! 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/pytorch-open-source-ai-framework/ + +作者:[Avimanyu Bandyopadhyay][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/avimanyu/ +[1]: https://pytorch.org/ +[2]: https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units +[3]: https://en.wikipedia.org/wiki/Tensor +[4]: https://www.techopedia.com/definition/32902/deep-neural-network +[5]: https://code.fb.com/ai-research/facebook-accelerates-ai-development-with-new-partners-and-production-capabilities-for-pytorch-1-0 +[6]: https://pytorch.fbreg.com/ +[7]: https://www.themidwaysf.com/ +[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/pytorch.jpeg +[9]: https://github.com/pytorch/pytorch/releases/tag/v1.0rc0 +[10]: https://conda.io/ +[11]: https://pytorch.org/get-started/locally/ +[12]: https://www.pugetsystems.com/labs/hpc/How-to-install-CUDA-9-2-on-Ubuntu-18-04-1184/ +[13]: https://itsfoss.com/install-pip-ubuntu/ +[14]: https://github.com/pytorch/pytorch/issues/10657#issuecomment-415067478 +[15]: https://rocm.github.io/install.html#installing-from-amd-rocm-repositories +[16]: https://github.com/facebookresearch/Detectron +[17]: https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html +[18]: https://github.com/NVIDIA/sentiment-discovery +[19]: https://github.com/NVIDIA/vid2vid +[20]: https://github.com/NVIDIA/DeepRecommender/ +[21]: https://itsfoss.com/netflix-open-source-ai/ +[22]: https://news.developer.nvidia.com/pytorch-1-0-accelerated-on-nvidia-gpus/ +[23]: https://en.wikipedia.org/wiki/Skynet_(Terminator) From 70a556232a4e96f36643f748973ff63ec1ff6f3a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 8 Oct 2018 12:59:29 +0800 Subject: [PATCH 270/736] 
=?UTF-8?q?=E9=80=89=E9=A2=98:=20Functional=20prog?= =?UTF-8?q?ramming=20in=20Python:=20Immutable=20data=20structures?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ng in Python- Immutable data structures.md | 190 ++++++++++++++++++ 1 file changed, 190 insertions(+) create mode 100644 sources/tech/20181004 Functional programming in Python- Immutable data structures.md diff --git a/sources/tech/20181004 Functional programming in Python- Immutable data structures.md b/sources/tech/20181004 Functional programming in Python- Immutable data structures.md new file mode 100644 index 0000000000..b831ff726f --- /dev/null +++ b/sources/tech/20181004 Functional programming in Python- Immutable data structures.md @@ -0,0 +1,190 @@ +Functional programming in Python: Immutable data structures +====== +Immutability can help us better understand our code. Here's how to achieve it without sacrificing performance. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D) + +In this two-part series, I will discuss how to import ideas from the functional programming methodology into Python in order to have the best of both worlds. + +This first post will explore how immutable data structures can help. The second part will explore higher-level functional programming concepts in Python using the **toolz** library. + +Why functional programming? Because mutation is hard to reason about. If you are already convinced that mutation is problematic, great. If you're not convinced, you will be by the end of this post. + +Let's begin by considering squares and rectangles. If we think in terms of interfaces, neglecting implementation details, are squares a subtype of rectangles? + +The definition of a subtype rests on the [Liskov substitution principle][1]. In order to be a subtype, it must be able to do everything the supertype does. 
+
+How would we define an interface for a rectangle?
+
+```
+from zope.interface import Interface
+
+class IRectangle(Interface):
+    def get_length(self):
+        """Squares can do that"""
+    def get_width(self):
+        """Squares can do that"""
+    def set_dimensions(self, length, width):
+        """Uh oh"""
+```
+
+If this is the definition, then squares cannot be a subtype of rectangles; they cannot respond to a `set_dimensions` method if the length and width are different.
+
+A different approach is to make rectangles immutable.
+
+```
+class IRectangle(Interface):
+    def get_length(self):
+        """Squares can do that"""
+    def get_width(self):
+        """Squares can do that"""
+    def with_dimensions(self, length, width):
+        """Returns a new rectangle"""
+```
+
+Now, a square can be a rectangle. It can return a new rectangle (which would not usually be a square) when `with_dimensions` is called, but it would not stop being a square.
+
+This might seem like an academic problem—until we consider that squares and rectangles are, in a sense, a container for their sides. Once we understand this example, it is easier to see the more realistic case where the same issue comes into play: traditional containers. For example, consider random-access arrays.
+
+We have `ISquare` and `IRectangle`, and `ISquare` is a subtype of `IRectangle`.
+
+We want to put rectangles in a random-access array:
+
+```
+class IArrayOfRectangles(Interface):
+    def get_element(self, i):
+        """Returns Rectangle"""
+    def set_element(self, i, rectangle):
+        """'rectangle' can be any IRectangle"""
+```
+
+We want to put squares in a random-access array too:
+
+```
+class IArrayOfSquare(Interface):
+    def get_element(self, i):
+        """Returns Square"""
+    def set_element(self, i, square):
+        """'square' can be any ISquare"""
+```
+
+Even though `ISquare` is a subtype of `IRectangle`, no array can implement both `IArrayOfSquare` and `IArrayOfRectangles`.
+
+Why not? Assume `bucket` implements both.
+
+```
+>>> rectangle = make_rectangle(3, 4)
+>>> bucket.set_element(0, rectangle) # This is allowed by IArrayOfRectangles
+>>> thing = bucket.get_element(0) # That has to be a square by IArrayOfSquare
+>>> assert thing.get_length() == thing.get_width()
+Traceback (most recent call last):
+  File "<stdin>", line 1, in <module>
+AssertionError
+```
+
+Being unable to implement both means that neither is a subtype of the other, even though `ISquare` is a subtype of `IRectangle`. The problem is the `set_element` method: if we had a read-only array, `IArrayOfSquare` would be a subtype of `IArrayOfRectangles`.
+
+Mutability, in both the mutable `IRectangle` interface and the mutable `IArrayOf*` interfaces, has made thinking about types and subtypes much more difficult—while giving up on the ability to mutate means that the intuitive relationships we expect to hold between the types actually do hold.
+
+Mutation can also have non-local effects. This happens when an object shared between two places is mutated in one of them. The classic example is one thread mutating an object it shares with another thread, but even in a single-threaded program, sharing between places that are far apart is easy. Consider that in Python, most objects are reachable from many places: as a module global, or in a stack trace, or as a class attribute.
+
+If we cannot constrain the sharing, we might think about constraining the mutability.
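The non-local effects described above are easy to demonstrate. Here is a minimal, self-contained sketch (the names `DEFAULT_HEADERS` and `prepare_request` are invented for illustration, not taken from any real library):

```python
# A module-level "configuration" object that many call sites share.
DEFAULT_HEADERS = {"accept": "text/html"}

def prepare_request(headers):
    # An innocent-looking mutation of an argument...
    headers["x-debug"] = "1"
    return headers

request = prepare_request(DEFAULT_HEADERS)
# ...has silently changed the shared object for every other user of it.
assert DEFAULT_HEADERS["x-debug"] == "1"

# Constraining mutability removes the hazard: an immutable structure,
# such as a tuple of pairs, can only be extended into a *new* object.
FROZEN_HEADERS = (("accept", "text/html"),)
extended = FROZEN_HEADERS + (("x-debug", "1"),)
assert FROZEN_HEADERS == (("accept", "text/html"),)  # unchanged
```

The mutation in `prepare_request` is visible to every other piece of code holding a reference to `DEFAULT_HEADERS`, while the tuple version forces the change to stay local.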
+
+Here is an immutable rectangle, taking advantage of the [attrs][2] library:
+
+```
+import attr
+
+@attr.s(frozen=True)
+class Rectangle(object):
+    length = attr.ib()
+    width = attr.ib()
+    @classmethod
+    def with_dimensions(cls, length, width):
+        return cls(length, width)
+```
+
+Here is a square:
+
+```
+@attr.s(frozen=True)
+class Square(object):
+    side = attr.ib()
+    @classmethod
+    def with_dimensions(cls, length, width):
+        return Rectangle(length, width)
+```
+
+Using the `frozen` argument, we can easily have `attrs`-created classes be immutable. All the hard work of writing `__setattr__` correctly has been done by others and is completely invisible to us.
+
+It is still easy to modify objects; it's just nigh impossible to mutate them.
+
+```
+too_long = Rectangle(100, 4)
+reasonable = attr.evolve(too_long, length=10)
+```
+
+The [Pyrsistent][3] package allows us to have immutable containers.
+
+```
+import pyrsistent
+
+# Vector of integers
+a = pyrsistent.v(1, 2, 3)
+# Not a vector of integers
+b = a.set(1, "hello")
+```
+
+While `b` is not a vector of integers, nothing will ever stop `a` from being one.
+
+What if `a` was a million elements long? Is `b` going to copy 999,999 of them? Pyrsistent comes with "big O" performance guarantees: all operations take `O(log n)` time. It also comes with an optional C extension to improve performance beyond the big O.
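As an aside not covered in the original article, the frozen pattern does not require third-party code: Python 3.7+'s standard-library `dataclasses` module offers a rough equivalent of the attrs `Rectangle` above, with `dataclasses.replace` playing the role of `attr.evolve`:

```python
from dataclasses import dataclass, replace, FrozenInstanceError

@dataclass(frozen=True)
class Rectangle:
    length: int
    width: int

too_long = Rectangle(100, 4)
reasonable = replace(too_long, length=10)  # the analogue of attr.evolve

assert reasonable == Rectangle(10, 4)
assert too_long.length == 100  # the original is untouched

try:
    too_long.length = 10  # any in-place mutation is rejected
except FrozenInstanceError:
    pass
```

For persistent containers like the vectors above, though, Pyrsistent remains the tool of choice.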
+
+For modifying nested objects, Pyrsistent comes with a concept of "transformers":
+
+```
+blog = pyrsistent.m(
+    title="My blog",
+    links=pyrsistent.v("github", "twitter"),
+    posts=pyrsistent.v(
+        pyrsistent.m(title="no updates",
+                     content="I'm busy"),
+        pyrsistent.m(title="still no updates",
+                     content="still busy")))
+new_blog = blog.transform(["posts", 1, "content"],
+                          "pretty busy")
+```
+
+`new_blog` will now be the immutable equivalent of
+
+```
+{'links': ['github', 'twitter'],
+ 'posts': [{'content': "I'm busy",
+            'title': 'no updates'},
+           {'content': 'pretty busy',
+            'title': 'still no updates'}],
+ 'title': 'My blog'}
+```
+
+But `blog` is still the same. This means anyone who had a reference to the old object has not been affected: the transformation had only local effects.
+
+This is useful when sharing is rampant. For example, consider default arguments:
+
+```
+def silly_sum(a, b, extra=v(1, 2)):
+    extra = extra.extend([a, b])
+    return sum(extra)
+```
+
+Because `extra` is an immutable Pyrsistent vector, `extend` returns a new vector rather than mutating the default value in place, so `silly_sum` sidesteps the classic Python pitfall of mutable default arguments: the default remains `v(1, 2)` no matter how many times the function is called.
+
+In this post, we have learned why immutability can be useful for thinking about our code, and how to achieve it without an extravagant performance price. Next time, we will learn how immutable objects allow us to use powerful programming constructs.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures + +作者:[Moshe Zadka][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/moshez +[1]: https://en.wikipedia.org/wiki/Liskov_substitution_principle +[2]: https://www.attrs.org/en/stable/ +[3]: https://pyrsistent.readthedocs.io/en/latest/ From 12d2af39c44cbe2c64b606d095d553a597a0930e Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 8 Oct 2018 13:10:53 +0800 Subject: [PATCH 271/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=2013=20tools=20to?= =?UTF-8?q?=20measure=20DevOps=20success?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...1003 13 tools to measure DevOps success.md | 84 +++++++++++++++++++ 1 file changed, 84 insertions(+) create mode 100644 sources/talk/20181003 13 tools to measure DevOps success.md diff --git a/sources/talk/20181003 13 tools to measure DevOps success.md b/sources/talk/20181003 13 tools to measure DevOps success.md new file mode 100644 index 0000000000..26abb21f05 --- /dev/null +++ b/sources/talk/20181003 13 tools to measure DevOps success.md @@ -0,0 +1,84 @@ +13 tools to measure DevOps success +====== +How's your DevOps initiative really going? Find out with open source tools +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-) + +In today's enterprise, business disruption is all about agility with quality. Traditional processes and methods of developing software are challenged to keep up with the complexities that come with these new environments. 
Modern DevOps initiatives aim to help organizations use collaboration among different IT teams to increase agility and accelerate software application deployment.
+
+How is the DevOps initiative going in your organization? Whether or not it's going as well as you expected, you need to do assessments to verify your impressions. Measuring DevOps success is very important because these initiatives target the very processes that determine how IT works. DevOps also values measuring behavior, although the measurements are more about your business processes and less about your development and IT systems.
+
+A metrics-oriented mindset is critical to ensuring that DevOps initiatives deliver the intended results. Data-driven decisions and focused improvement activities lead to increased quality and efficiency. Also, the use of feedback to accelerate delivery is one reason DevOps creates a successful IT culture.
+
+With DevOps, as with any IT initiative, knowing what to measure is always the first step. Let's examine how to use continuous delivery improvement and open source tools to assess your DevOps program on three key metrics: team efficiency, business agility, and security. These will also help you identify what challenges your organization has and what problems you are trying to solve with DevOps.
+
+### 3 tools for measuring team efficiency
+
+Measuring team efficiency—in terms of how the DevOps initiative fits into your organization and how well it works for cultural innovation—is the hardest area to measure. The key metrics that enable the DevOps team to work more effectively on culture and organization are all about agile software development, such as knowledge sharing, prioritizing tasks, resource utilization, issue tracking, cross-functional teams, and collaboration. The following open source tools can help you improve and measure team efficiency:
+
+  * [FunRetro][1] is a simple, intuitive tool that helps you collaborate across teams and improve what you do.
+ * [Kanboard][2] is a [kanban][3] board that helps you visualize your work in progress to focus on your goal. + * [Bugzilla][4] is a popular development tool with issue-tracking capabilities. + + + +### 6 tools for measuring business agility + +Speed is all that matters for accelerating business agility. Because DevOps gives organizations capabilities to deliver software faster with fewer failures, it's fast gaining acceptance. The key metrics are deployment time, change lead time, release frequency, and failover time. Puppet's [2017 State of DevOps Report][5] shows that high-performing DevOps practitioners deploy code updates 46x more frequently and high performers experience change lead times of under an hour, or 440x faster than average. Following are some open source tools to help you measure business agility: + + * [Kubernetes][6] is a container-orchestration system for automating deployment, scaling, and management of containerized applications. (Read more about [Kubernetes][7] on Opensource.com.) + * [CRI-O][8] is a Kubernetes orchestrator used to manage and launch containerized workloads without relying on a traditional container engine. + * [Ansible][9] is a popular automation engine used to automate apps and IT infrastructure and run tasks including installing and configuring applications. + * [Jenkins][10] is an automation tool used to automate the software development process with continuous integration. It facilitates the technical aspects of continuous delivery. + * [Spinnaker][11] is a multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It combines a powerful and flexible pipeline management system with integrations to the major cloud providers. + * [Istio][12] is a service mesh that helps reduce the complexity of deployments and eases the strain on your development teams. 
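To make two of those key metrics concrete, here is a small illustrative sketch that computes mean change lead time and release frequency from a list of deployment records. The data and record layout are invented for the example; in practice these timestamps would come from your CI/CD tooling:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit time, deploy time).
deploys = [
    (datetime(2018, 10, 1, 9, 0), datetime(2018, 10, 1, 9, 40)),
    (datetime(2018, 10, 2, 14, 0), datetime(2018, 10, 2, 14, 25)),
    (datetime(2018, 10, 4, 11, 0), datetime(2018, 10, 4, 11, 55)),
]

# Change lead time: how long a change waits before it is live.
lead_times = [deployed - committed for committed, deployed in deploys]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Release frequency: deployments per day over the observed window.
window_days = (deploys[-1][1] - deploys[0][1]).days or 1
frequency = len(deploys) / window_days

assert mean_lead_time == timedelta(minutes=40)
assert frequency == 1.0
```

Tracking these two numbers over time, rather than as one-off snapshots, is what makes them useful for assessing a DevOps program.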
+ + + +### 4 tools for measuring security + +Security is always the last phase of measuring your DevOps initiative's success. Enterprises that have combined development and operations teams under a DevOps model are generally successful in releasing code at a much faster rate. But this has increased the need for integrating security in the DevOps process (this is known as DevSecOps), because the faster you release code, the faster you release any vulnerabilities in it. + +Measuring security vulnerabilities early ensures that builds are stable before they pass to the next stage in the release pipeline. In addition, measuring security can help overcome resistance to DevOps adoption. You need tools that can help your dev and ops teams identify and prioritize vulnerabilities as they are using software, and teams must ensure they don't introduce vulnerabilities when making changes. These open source tools can help you measure security: + + * [Gauntlt][13] is a ruggedization framework that enables security testing by devs, ops, and security. + * [Vault][14] securely manages secrets and encrypts data in transit, including storing credentials and API keys and encrypting passwords for user signups. + * [Clair][15] is a project for static analysis of vulnerabilities in appc and Docker containers. + * [SonarQube][16] is a platform for continuous inspection of code quality. It performs automatic reviews with static analysis of code to detect bugs, code smells, and security vulnerabilities. + + + +**[See our related security article,[7 open source tools for rugged DevOps][17].]** + +Many DevOps initiatives start small. DevOps requires a commitment to a new culture and process rather than new technologies. That's why organizations looking to implement DevOps will likely need to adopt open source tools for collecting data and using it to optimize business success. 
In that case, highly visible, useful measurements will become an essential part of every DevOps initiative's success.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/devops-measurement-tools
+
+作者:[Daniel Oh][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/daniel-oh
+[1]: https://funretro.io/
+[2]: http://kanboard.net/
+[3]: https://en.wikipedia.org/wiki/Kanban
+[4]: https://www.bugzilla.org/
+[5]: https://puppet.com/resources/whitepaper/state-of-devops-report
+[6]: https://kubernetes.io/
+[7]: https://opensource.com/resources/what-is-kubernetes
+[8]: https://github.com/kubernetes-incubator/cri-o
+[9]: https://github.com/ansible
+[10]: https://jenkins.io/
+[11]: https://www.spinnaker.io/
+[12]: https://istio.io/
+[13]: http://gauntlt.org/
+[14]: https://www.hashicorp.com/blog/vault.html
+[15]: https://github.com/coreos/clair
+[16]: https://www.sonarqube.org/
+[17]: https://opensource.com/article/18/9/open-source-tools-rugged-devops
From ab63aba0a918c079896a7e1baec8240075245672 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 8 Oct 2018 17:04:58 +0800
Subject: [PATCH 272/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Archiving=20web?=
 =?UTF-8?q?=20sites?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 sources/tech/20181004 Archiving web sites.md | 119 +++++++++++++++++++
 1 file changed, 119 insertions(+)
 create mode 100644 sources/tech/20181004 Archiving web sites.md

diff --git a/sources/tech/20181004 Archiving web sites.md b/sources/tech/20181004 Archiving web sites.md
new file mode 100644
index 0000000000..558c057913
--- /dev/null
+++ b/sources/tech/20181004 Archiving web sites.md
@@ -0,0 +1,119 @@
+Archiving web sites
+======
+
+I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web.
+
+### Converting simple sites
+
+The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, a spurious upgrade, or an unpatched vulnerability might lose data. In my previous life as a web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with the "move fast and break things" attitude of web development. Working with the [Drupal][2] content-management system (CMS) was particularly challenging in that regard, as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites, but also for third-party sites that are outside of your control and that you might want to safeguard.
+
+For simple or static sites, the venerable [Wget][3] program works well.
The incantation to mirror a full web site, however, is byzantine:
+
+```
+ $ nice wget --mirror --execute robots=off --no-verbose --convert-links \
+    --backup-converted --page-requisites --adjust-extension \
+    --base=./ --directory-prefix=./ --span-hosts \
+    --domains=www.example.com,example.com http://www.example.com/
+
+```
+
+The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores `robots.txt` rules, as is now [common practice for archivists][4], and hammers the web site as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site.
+
+The above command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site.
+
+That is, when things go well. Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a `--reject-regex` option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well.
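At its core, what a mirroring crawl does with each fetched page is extract the links to follow and the "page requisites" to download. Here is a toy sketch of that extraction step, using only the Python standard library (the sample HTML is made up for the example; a real crawler like Wget handles far more cases):

```python
from html.parser import HTMLParser

class RequisiteExtractor(HTMLParser):
    """Collect the URLs a mirroring crawler would need to fetch."""
    def __init__(self):
        super().__init__()
        self.links, self.requisites = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])       # pages to crawl next
        elif tag in ("img", "script") and "src" in attrs:
            self.requisites.append(attrs["src"])   # page requisites
        elif tag == "link" and "href" in attrs:
            self.requisites.append(attrs["href"])  # style sheets etc.

page = """<html><head><link href="style.css" rel="stylesheet"></head>
<body><a href="/about.html">About</a><img src="logo.png"></body></html>"""

extractor = RequisiteExtractor()
extractor.feed(page)
assert extractor.links == ["/about.html"]
assert extractor.requisites == ["style.css", "logo.png"]
```

A mirroring tool then rewrites each of those URLs to point at the local copy, which is what Wget's `--convert-links` option does.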
+ +### JavaScript doom + +Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. In theory, web sites should be using [progressive enhancement][5] to have content and functionality available without JavaScript but those directives are rarely followed, as anyone using plugins like [NoScript][6] or [uMatrix][7] will confirm. + +Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper ([pamplemousse.ca][8]), I found that WordPress adds query strings (e.g. `?ver=1.12.4`) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right `Content-Type` header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic websites. + +As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl and indeed professional archivists use just such an approach. + +### Creating and displaying WARC files + +At the [Internet Archive][9], Brewster Kahle and Mike Burner designed the [ARC][10] (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") [specification][11] that was released as an ISO standard in 2009 and revised in 2017. 
The standardization effort was led by the [International Internet Preservation Consortium][12] (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based [Heritrix crawler][13]. + +A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the `--warc` parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is [pywb][14], a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. The following set of commands will render a WARC file on `http://localhost:8080/`: + +``` + $ pip install pywb + $ wb-manager init example + $ wb-manager add example crawl.warc.gz + $ wayback + +``` + +This tool was, incidentally, built by the folks behind the [Webrecorder][15] service, which can use a web browser to save dynamic page contents. + +Unfortunately, pywb has trouble loading WARC files generated by Wget because it [followed][16] an [inconsistency in the 1.0 specification][17], which was [fixed in the 1.1 specification][18]. Until Wget or pywb fix those problems, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called [crawl][19]. Here is how it is invoked: + +``` + $ crawl https://example.com/ + +``` + +(It does say "very simple" in the README.) 
The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the `-exclude-related` flag is used), but does not recurse out of the domain. By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the `-c` flag. But, best of all, the resulting WARC files load perfectly in pywb. + +### Future work and alternatives + +There are plenty more [resources][20] for using WARC files. In particular, there's a Wget drop-in replacement called [Wpull][21] that is specifically designed for archiving web sites. It has experimental support for [PhantomJS][22] and [youtube-dl][23] integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called [ArchiveBot][24], which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at [ArchiveTeam][25] in its struggle to "save the history before it's lost forever". It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. For example, [snscrape][26] will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is [crocoite][27], which uses the Chrome browser in headless mode to archive JavaScript-heavy sites. + +This article would also not be complete without a nod to the [HTTrack][28] project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line. 
+
+In the same vein, during my research I found a full rewrite of Wget called [Wget2][29] that has support for multi-threaded operation, which might make it faster than its predecessor. It is, however, [missing some features][30] from Wget, most notably reject patterns, WARC output, and FTP support, but adds RSS, DNS caching, and improved TLS support.
+
+Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in [Wallabag][31], a self-hosted "read it later" service designed as a free-software alternative to [Pocket][32] (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually [unreadable][33], and Wallabag sometimes [fails to parse the article][34]. Instead, other tools like [bookmark-archiver][35] or [reminiscence][36] save a screenshot of the page along with the full HTML but, unfortunately, produce no WARC file that would allow an even more faithful replay.
+
+The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. For those who do not want to go through that trouble, the Internet Archive seems to be here to stay, and Archive Team is obviously [working on a backup of the Internet Archive itself][37].
+ +-------------------------------------------------------------------------------- + +via: https://anarc.at/blog/2018-10-04-archiving-web-sites/ + +作者:[Anarcat][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://anarc.at +[1]: https://anarc.at/blog +[2]: https://drupal.org +[3]: https://www.gnu.org/software/wget/ +[4]: https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/ +[5]: https://en.wikipedia.org/wiki/Progressive_enhancement +[6]: https://noscript.net/ +[7]: https://github.com/gorhill/uMatrix +[8]: https://pamplemousse.ca/ +[9]: https://archive.org +[10]: http://www.archive.org/web/researcher/ArcFileFormat.php +[11]: https://iipc.github.io/warc-specifications/ +[12]: https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium +[13]: https://github.com/internetarchive/heritrix3/wiki +[14]: https://github.com/webrecorder/pywb +[15]: https://webrecorder.io/ +[16]: https://github.com/webrecorder/pywb/issues/294 +[17]: https://github.com/iipc/warc-specifications/issues/23 +[18]: https://github.com/iipc/warc-specifications/pull/24 +[19]: https://git.autistici.org/ale/crawl/ +[20]: https://archiveteam.org/index.php?title=The_WARC_Ecosystem +[21]: https://github.com/chfoo/wpull +[22]: http://phantomjs.org/ +[23]: http://rg3.github.io/youtube-dl/ +[24]: https://www.archiveteam.org/index.php?title=ArchiveBot +[25]: https://archiveteam.org/ +[26]: https://github.com/JustAnotherArchivist/snscrape +[27]: https://github.com/PromyLOPh/crocoite +[28]: http://www.httrack.com/ +[29]: https://gitlab.com/gnuwget/wget2 +[30]: https://gitlab.com/gnuwget/wget2/wikis/home +[31]: https://wallabag.org/ +[32]: https://getpocket.com/ +[33]: https://github.com/wallabag/wallabag/issues/2825 +[34]: 
https://github.com/wallabag/wallabag/issues/2914
+[35]: https://pirate.github.io/bookmark-archiver/
+[36]: https://github.com/kanishka-linux/reminiscence
+[37]: http://iabak.archiveteam.org

From 451ec5e7bc42b3bb352e5779a2ed3d0549770be9 Mon Sep 17 00:00:00 2001
From: dianbanjiu
Date: Mon, 8 Oct 2018 17:54:32 +0800
Subject: [PATCH 273/736] dianbanjiu translated

---
 ...5 The Best Linux Distributions for 2018.md | 134 +++++++++++++++++
 ...5 The Best Linux Distributions for 2018.md | 140 ------------------
 2 files changed, 134 insertions(+), 140 deletions(-)
 create mode 100644 20180105 The Best Linux Distributions for 2018.md
 delete mode 100644 sources/tech/20180105 The Best Linux Distributions for 2018.md

diff --git a/20180105 The Best Linux Distributions for 2018.md b/20180105 The Best Linux Distributions for 2018.md
new file mode 100644
index 0000000000..a01a4a60ab
--- /dev/null
+++ b/20180105 The Best Linux Distributions for 2018.md
@@ -0,0 +1,134 @@
+# 2018 年最好的 Linux 发行版
+
+![Linux distros 2018](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-distros-2018.jpg?itok=Z8sdx4Zu "Linux distros 2018")
+Jack Wallen 分享他挑选的 2018 年最好的 Linux 发行版。
+
+新的一年,Linux 依旧有无限可能。许多发行版在 2017 年经历了重大的改变,我相信在 2018 年,无论是在服务器还是桌面上,它们都会带来更稳定的系统和更大的市场份额。
+
+对于那些期待在新的一年里迁移到开源平台(或者想换一个发行版)的人来说,什么才是最好的选择?如果你去 [Distrowatch][14] 找一下,你可能会因为众多的发行版而感到眼花缭乱,其中一些的排名在上升,而另一些则恰恰相反。
+
+因此,哪些 Linux 发行版将在 2018 年得到青睐?我有我的看法。事实上,我现在就要和你们分享它。
+
+跟我做的 [去年清单][15] 相似,我会把这份清单按类别划分,使之更轻松易读。这些类别至少包括:系统管理员、轻量级发行版、桌面、能够证明自己的发行版,以及面向物联网和服务器的发行版。
+
+根据这些,让我们开始 2018 年最好的 Linux 发行版清单吧。
+
+### 对系统管理员最好的发行版
+
+[Debian][16] 不常出现在“最好的”列表中。但它应该出现,为什么呢?如果你了解到 Ubuntu 是基于 Debian 构建的(其实有很多的发行版都基于 Debian),你就很容易理解为什么这个发行版应该出现在许多“最好”清单中。但它为什么对管理员来说是最好的呢?我想这是由于两个非常重要的原因:

+* 容易使用
+* 非常稳定
+
+因为 Debian 使用 dpkg 和 apt 包管理器,它的使用环境非常简单。而且因为 Debian 提供了最稳定的 Linux 平台之一,它为许多事物提供了理想的环境:桌面、服务器、测试、开发。虽然 Debian 可能不像去年的获奖者那样自带大量应用程序,但添加完成任务所需的全部必要应用程序都非常容易。而且因为 Debian 允许你在安装时选择自己想要的桌面(Cinnamon, GNOME, KDE, LXDE, Mate, 或者
Xfce),你可以确定满足你需要的桌面。
+
+![debian](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/debian.jpg?itok=XkHHG692 "debian")
+图1:在 Debian 9.3 上运行的 GNOME 桌面。[使用][1]
+
+同时,Debian 在 Distrowatch 上名列第二。下载,安装,然后让它为你的工作而服务吧。Debian 尽管不那么华丽,但是对于管理员的工作来说十分有用。
+
+### 最轻量级的发行版
+
+轻量级的发行版对于一些老旧或是性能低下的机器有很好的支持。但是这不意味着这些发行版仅仅只为老旧的硬件机器而生。如果你追求的是速度,你可能也想看看这类发行版在你的现代机器上能跑多快。
+
+在 2018 年上榜的最轻量级的发行版是 [Lubuntu][18]。尽管在这个类别里还有很多选择,而且尽管 Lubuntu 的大小与 Puppy Linux 相接近,但得益于它是 Ubuntu 家族的一员,这弥补了它在易用性上的一些不足。但是不要担心,Lubuntu 对于硬件的要求并不高:
+
++ CPU:奔腾 4 或者奔腾 M 或者 AMD K8 以上
++ 对于本地应用,512 MB 的内存就可以了;对于网络使用(Youtube,Google+,Google Drive,Facebook),建议 1 GB 以上。
+
+Lubuntu 使用的是 LXDE 桌面,这意味着用户在初次使用这个 Linux 发行版时不会有任何问题。这份短清单中包含的应用(例如:Abiword、Gnumeric 和 Firefox)都非常轻量,且对用户友好。
+
+### [lubuntu.jpg][8]
+![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/lubuntu_2.jpg?itok=BkTnh7hU "Lubuntu")
+图2:运行中的 Lubuntu LXDE 桌面。[使用][2]
+
+Lubuntu 能让十年以上的电脑如获新生。
+
+### 最好的桌面发行版
+
+[Elementary OS][19] 连续两年都是我清单中最好的桌面发行版。对于许多人,[Linux Mint][20] 都是桌面发行版的领导者。但是对我来说,它在易用性和稳定性上很难打败 Elementary OS。例如,我曾确信 [Ubuntu][21] 17.10 的发布会让我迁移回 Canonical 的发行版。但在迁移到新的对 GNOME 友好的 Ubuntu 后不久,我就发现自己怀念 Elementary OS 的外观、感觉和可靠性。在使用 Ubuntu 两周以后,我又换回了 Elementary OS。
+
+### [elementaros.jpg][9]
+
+![Elementary OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaros.jpg?itok=SRZC2vkg "Elementary OS")
+图3:Pantheon 桌面是一件像艺术品一样的桌面。[使用][3]
+
+任何用过 Elementary OS 的人都会立即感到宾至如归。Pantheon 桌面将流畅和用户友好结合得恰到好处。每次更新,它都会变得更好。
+
+尽管 Elementary OS 在 Distrowatch 中排名第六,但我预计到 2018 年底,它将至少上升至第三名。Elementary 开发人员非常关注用户的需求。他们倾听并且改进。不过,这个发行版目前的状态已经非常好,似乎剩下能做的只是在细节上再打磨一番。如果你需要一个具有出色可靠性和易用性的桌面,Elementary OS 就是你的发行版。
+
+### 能够证明自己的最好的发行版
+
+很长一段时间内,[Gentoo][22] 都稳坐“展现你技能”的发行版的首座。但是,我认为现在 Gentoo 是时候让出“证明自己”的宝座给 [Linux From Scratch][23] 了。你可能认为这不公平,因为 LFS 实际上不是一个发行版,而是一个帮助用户创建自己的 Linux 发行版的项目。但是,有什么比从头创建一个自己的发行版更能证明你所学的 Linux 知识呢?在 LFS 项目中,你可以完全从源代码开始构建自定义的 Linux 系统。所以,如果你真的有需要证明的东西,请下载 [Linux From Scratch Book][24] 并开始构建吧。
+
+### 对于物联网最好的发行版
+ 
+[Ubuntu Core][25] 已经是第二年赢得该类别的冠军了。Ubuntu Core 是 Ubuntu 的一个小型版本,专为嵌入式和物联网设备而构建。使 Ubuntu Core 如此完美地适用于物联网的原因在于它把重点放在快照包(snap)上,这是一种可以安装到平台上而不会干扰基本系统的通用软件包。这些快照包包含它们运行所需的所有内容(包括依赖项),因此不必担心安装会破坏操作系统(或任何其他已安装的软件)。此外,快照包非常容易升级,并且运行在隔离的沙箱中,这使它们成为物联网的理想解决方案。
+
+Ubuntu Core 内置的另一个安全领域是登录机制。Ubuntu Core 使用 Ubuntu One 的 ssh 密钥,这样登录系统的唯一方法就是使用上传到 [Ubuntu One 帐户][26] 的 ssh 密钥。这为你的物联网设备提供了更高的安全性。
+
+### [ubuntucore.jpg][10]
+![ Ubuntu Core](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntucore.jpg?itok=Ydfq8NKH " Ubuntu Core")
+图4:Ubuntu Core 屏幕,指示已通过 Ubuntu One 用户启用远程访问。[使用][3]
+
+### 最好的服务器发行版
+
+在这个类别上事情变得有些让人困惑,主要原因是支持。如果你需要商业支持,乍一看,你最好的选择可能是 [Red Hat Enterprise Linux][27]。红帽年复一年地证明了自己不仅是全球最强大的企业服务器平台之一,而且是最赚钱的单一开源企业(年收入超过 20 亿美元)。
+
+但是,Red Hat 远远不是唯一的服务器发行版。实际上,Red Hat 甚至并没有主导企业服务器计算的每个方面。如果你关注亚马逊 Elastic Compute Cloud 上的云统计数据,Ubuntu 就会打败红帽企业 Linux。根据[云市场][28]的数据,EC2 统计数据显示 RHEL 的部署量低于 10 万,而 Ubuntu 的部署量超过 20 万。
+
+最终的结果是,Ubuntu 几乎已经成为云计算的领导者。如果再结合 Ubuntu 在使用和管理容器方面的便利,就会发现 Ubuntu Server 是服务器类别的明显赢家。而且,如果你需要商业支持,Canonical 也可以通过 [Ubuntu Advantage][29] 为你提供。
+
+对使用 Ubuntu Server 的一个提醒是它默认为纯文本界面(图5)。如果需要,你可以安装 GUI,但使用 Ubuntu Server 的命令行非常简单(而且是每个 Linux 管理员都应该掌握的)。
+
+### [ubuntuserver.jpg][11]
+
+![Ubuntu server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntuserver_1.jpg?itok=qtFSUlee "Ubuntu server")
+图5:Ubuntu 服务器登录界面,提示有可用更新。[使用][3]
+
+### 选择权在你
+
+正如我之前所说,这些选择都非常主观,但如果你正在寻找一个好的起点,那就试试这些发行版。每一个都可以用于非常特定的目的,并且比大多数同类发行版做得更好。虽然你可能不同意我的特定选择,但你可能会同意 Linux 在每个方面都提供了惊人的可能性。并且,请继续关注下周更多的“最佳发行版”精选。
+
+通过 Linux 基金会和 edX 的免费[“Linux 简介”][13]课程了解有关 Linux 的更多信息。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/best-linux-distributions-2018
+
+作者:[JACK WALLEN ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/jlwallen 
+[1]:https://www.linux.com/licenses/category/used-permission +[2]:https://www.linux.com/licenses/category/used-permission +[3]:https://www.linux.com/licenses/category/used-permission +[4]:https://www.linux.com/licenses/category/used-permission +[5]:https://www.linux.com/licenses/category/used-permission +[6]:https://www.linux.com/licenses/category/creative-commons-zero +[7]:https://www.linux.com/files/images/debianjpg +[8]:https://www.linux.com/files/images/lubuntujpg-2 +[9]:https://www.linux.com/files/images/elementarosjpg +[10]:https://www.linux.com/files/images/ubuntucorejpg +[11]:https://www.linux.com/files/images/ubuntuserverjpg-1 +[12]:https://www.linux.com/files/images/linux-distros-2018jpg +[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux +[14]:https://distrowatch.com/ +[15]:https://www.linux.com/news/learn/sysadmin/best-linux-distributions-2017 +[16]:https://www.debian.org/ +[17]:https://www.parrotsec.org/ +[18]:http://lubuntu.me/ +[19]:https://elementary.io/ +[20]:https://linuxmint.com/ +[21]:https://www.ubuntu.com/ +[22]:https://www.gentoo.org/ +[23]:http://www.linuxfromscratch.org/ +[24]:http://www.linuxfromscratch.org/lfs/download.html +[25]:https://www.ubuntu.com/core +[26]:https://login.ubuntu.com/ +[27]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux +[28]:http://thecloudmarket.com/stats#/by_platform_definition +[29]:https://buy.ubuntu.com/?_ga=2.177313893.113132429.1514825043-1939188204.1510782993 diff --git a/sources/tech/20180105 The Best Linux Distributions for 2018.md b/sources/tech/20180105 The Best Linux Distributions for 2018.md deleted file mode 100644 index cc60350641..0000000000 --- a/sources/tech/20180105 The Best Linux Distributions for 2018.md +++ /dev/null @@ -1,140 +0,0 @@ -[translating by dianbanjiu] The Best Linux Distributions for 2018 -============================================================ - -![Linux distros 
2018](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-distros-2018.jpg?itok=Z8sdx4Zu "Linux distros 2018") -Jack Wallen shares his picks for the best Linux distributions for 2018.[Creative Commons Zero][6]Pixabay - -It’s a new year and the landscape of possibility is limitless for Linux. Whereas 2017 brought about some big changes to a number of Linux distributions, I believe 2018 will bring serious stability and market share growth—for both the server and the desktop. - -For those who might be looking to migrate to the open source platform (or those looking to switch it up), what are the best choices for the coming year? If you hop over to [Distrowatch][14], you’ll find a dizzying array of possibilities, some of which are on the rise, and some that are seeing quite the opposite effect. - -So, which Linux distributions will 2018 favor? I have my thoughts. In fact, I’m going to share them with you now. - -Similar to what I did for[ last year’s list][15], I’m going to make this task easier and break down the list, as follows: sysadmin, lightweight distribution, desktop, distro with more to prove, IoT, and server. These categories should cover the needs of any type of Linux user. - -With that said, let’s get to the list of best Linux distributions for 2018. - -### Best distribution for sysadmins - -[Debian][16] isn’t often seen on “best of” lists. It should be. Why? If you consider that Debian is the foundation for Ubuntu (which is, in turn, the foundation for so many distributions), it’s pretty easy to understand why this distribution should find its way on many a list. But why for administrators? I’ve considered this for two very important reasons: - -* Ease of use - -* Extreme stability - -Because Debian uses the dpkg and apt package managers, it makes for an incredibly easy to use environment. 
And because Debian offers one of the most stable Linux platforms, it makes for an ideal environment for so many things: desktops, servers, testing, development. Although Debian may not include the plethora of applications found in last year’s winner (for this category), [Parrot Linux][17], it is very easy to add any/all the necessary applications you need to get the job done. And because Debian can be installed with your choice of desktop (Cinnamon, GNOME, KDE, LXDE, Mate, or Xfce), you can be sure the interface will meet your needs.
-
-
-![debian](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/debian.jpg?itok=XkHHG692 "debian")
-Figure 1: The GNOME desktop running on top of Debian 9.3. [Used with permission][1]
-
-At the moment, Debian is listed at #2 on Distrowatch. Download it, install it, and then make it serve a specific purpose. It may not be flashy, but Debian is a sysadmin dream come true.
-
-### Best lightweight distribution
-
-Lightweight distributions serve a very specific purpose—giving new life to older, lesser-powered machines. But that doesn’t mean these particular distributions should only be considered for your older hardware. If speed is your ultimate need, you might want to see just how fast this category of distribution will run on your modern machine.
-
-Topping the list of lightweight distributions for 2018 is [Lubuntu][18]. Although there are plenty of options in this category, few come even close to the next-to-zero learning curve found on this distribution. And although Lubuntu’s footprint isn’t quite as small as Puppy Linux’s, thanks to it being a member of the Ubuntu family, the ease of use gained with this distribution makes up for it. But fear not, Lubuntu won’t bog down your older hardware. The requirements are:
-
-* CPU: Pentium 4 or Pentium M or AMD K8
-
-* For local applications, Lubuntu can function with 512MB of RAM. For online usage (Youtube, Google+, Google Drive, and Facebook), 1GB of RAM is recommended. 
-
-Lubuntu makes use of the LXDE desktop (Figure 2), which means users new to Linux won’t have the slightest problem working with this distribution. The included apps on its short list (such as Abiword, Gnumeric, and Firefox) are all lightning fast and user-friendly.
-
-### [lubuntu.jpg][8]
-
-![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/lubuntu_2.jpg?itok=BkTnh7hU "Lubuntu")
-Figure 2: The Lubuntu LXDE desktop in action. [Used with permission][2]
-
-Lubuntu can make short and easy work of breathing life into hardware that is up to ten years old.
-
-### Best desktop distribution
-
-For the second year in a row, [Elementary OS][19] tops my list of best desktop distributions. For many, the leader on the desktop is [Linux Mint][20] (which is a very fine flavor). However, for my money, it’s hard to beat the ease of use and stability of Elementary OS. Case in point, I was certain the release of [Ubuntu][21] 17.10 would have me migrating back to Canonical’s distribution. Very soon after migrating to the new GNOME-friendly Ubuntu, I found myself missing the look, feel, and reliability of Elementary OS (Figure 3). After two weeks with Ubuntu, I was back to Elementary OS.
-
-### [elementaros.jpg][9]
-
-![Elementary OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaros.jpg?itok=SRZC2vkg "Elementary OS")
-Figure 3: The Pantheon desktop is a work of art as a desktop. [Used with permission][3]
-
-Anyone who has given Elementary OS a go immediately feels right at home. The Pantheon desktop is a perfect combination of slickness and user-friendliness. And with each update, it only gets better.
-
-Although Elementary OS stands at #6 on the Distrowatch page hit ranking, I predict it will find itself climbing to at least the third spot by the end of 2018. The Elementary developers are very much in tune with what users want. They listen and they evolve. 
However, the current state of this distribution is so good, it seems all they could do to better it is a bit of polish here and there. Anyone looking for a desktop that offers a unified look and feel throughout the UI, Elementary OS is hard to beat. If you need a desktop that offers an outstanding ratio of reliability and ease of use, Elementary OS is your distribution. - -### Best distro for those with something to prove - -For the longest time [Gentoo][22] sat on top of the “show us your skills” distribution list. However, I think it’s time Gentoo took a backseat to the true leader of “something to prove”: [Linux From Scratch][23]. You may not think this fair, as LFS isn’t actually a distribution, but a project that helps users create their own Linux distribution. But, seriously, if you want to go a very long way to proving your Linux knowledge, what better way than to create your own distribution? From the LFS project, you can build a custom Linux system, from the ground up... entirely from source code. So, if you really have something to prove, download the [Linux From Scratch Book][24] and start building. - -### Best distribution for IoT - -For the second year in a row [Ubuntu Core][25] wins, hands down. Ubuntu Core is a tiny, transactional version of Ubuntu, built specifically for embedded and IoT devices. What makes Ubuntu Core so perfect for IoT is that it places the focus on snap packages—universal packages that can be installed onto a platform, without interfering with the base system. These snap packages contain everything they need to run (including dependencies), so there is no worry the installation will break the operating system (or any other installed software). Also, snaps are very easy to upgrade and run in an isolated sandbox, making them a great solution for IoT. - -Another area of security built into Ubuntu Core is the login mechanism. 
Ubuntu Core works with Ubuntu One ssh keys, such that the only way to log into the system is via ssh keys uploaded to an [Ubuntu One account][26] (Figure 4). This makes for heightened security for your IoT devices.
-
-### [ubuntucore.jpg][10]
-
-![ Ubuntu Core](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntucore.jpg?itok=Ydfq8NKH " Ubuntu Core")
-Figure 4: The Ubuntu Core screen indicating remote access enabled via an Ubuntu One user. [Used with permission][4]
-
-### Best server distribution
-
-This is where things get a bit confusing. The primary reason is support. If you need commercial support, your best choice might be, at first blush, [Red Hat Enterprise Linux][27]. Red Hat has proved itself, year after year, to not only be one of the strongest enterprise server platforms on the planet, but also the single most profitable open source business (with over $2 billion in annual revenue).
-
-However, Red Hat is far from the only server distribution. In fact, Red Hat doesn’t even dominate every aspect of enterprise server computing. If you look at cloud statistics on Amazon’s Elastic Compute Cloud alone, Ubuntu blows away Red Hat Enterprise Linux. According to [The Cloud Market][28], EC2 statistics show RHEL at under 100k deployments, whereas Ubuntu is at over 200k deployments. That’s significant.
-
-The end result is that Ubuntu has pretty much taken over as the leader in the cloud. And if you combine that with Ubuntu’s ease of working with and managing containers, it starts to become clear that Ubuntu Server is the clear winner for the server category. And, if you need commercial support, Canonical has you covered with [Ubuntu Advantage][29].
-
-The one caveat to Ubuntu Server is that it defaults to a text-only interface (Figure 5). You can install a GUI, if needed, but working with the Ubuntu Server command line is pretty straightforward (and something every Linux administrator should know). 
- -### [ubuntuserver.jpg][11] - -![Ubuntu server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntuserver_1.jpg?itok=qtFSUlee "Ubuntu server") -Figure 5: The Ubuntu server login, informing of updates.[Used with permission][5] - -### The choice is yours - -As I said before, these choices are all very subjective … but if you’re looking for a great place to start, give these distributions a try. Each one can serve a very specific purpose and do it better than most. Although you may not agree with my particular picks, chances are you’ll agree that Linux offers amazing possibilities on every front. And, stay tuned for more “best distro” picks next week. - - _Learn more about Linux through the free ["Introduction to Linux" ][13]course from The Linux Foundation and edX._ - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/best-linux-distributions-2018 - -作者:[JACK WALLEN ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[1]:https://www.linux.com/licenses/category/used-permission -[2]:https://www.linux.com/licenses/category/used-permission -[3]:https://www.linux.com/licenses/category/used-permission -[4]:https://www.linux.com/licenses/category/used-permission -[5]:https://www.linux.com/licenses/category/used-permission -[6]:https://www.linux.com/licenses/category/creative-commons-zero -[7]:https://www.linux.com/files/images/debianjpg -[8]:https://www.linux.com/files/images/lubuntujpg-2 -[9]:https://www.linux.com/files/images/elementarosjpg -[10]:https://www.linux.com/files/images/ubuntucorejpg -[11]:https://www.linux.com/files/images/ubuntuserverjpg-1 -[12]:https://www.linux.com/files/images/linux-distros-2018jpg 
-[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux -[14]:https://distrowatch.com/ -[15]:https://www.linux.com/news/learn/sysadmin/best-linux-distributions-2017 -[16]:https://www.debian.org/ -[17]:https://www.parrotsec.org/ -[18]:http://lubuntu.me/ -[19]:https://elementary.io/ -[20]:https://linuxmint.com/ -[21]:https://www.ubuntu.com/ -[22]:https://www.gentoo.org/ -[23]:http://www.linuxfromscratch.org/ -[24]:http://www.linuxfromscratch.org/lfs/download.html -[25]:https://www.ubuntu.com/core -[26]:https://login.ubuntu.com/ -[27]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux -[28]:http://thecloudmarket.com/stats#/by_platform_definition -[29]:https://buy.ubuntu.com/?_ga=2.177313893.113132429.1514825043-1939188204.1510782993 From bdef266f85241a459a9ca2cee14cdde9dc50ec4d Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Mon, 8 Oct 2018 18:40:08 +0800 Subject: [PATCH 274/736] dianbanjiu translated --- .../tech/20180105 The Best Linux Distributions for 2018.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename 20180105 The Best Linux Distributions for 2018.md => translated/tech/20180105 The Best Linux Distributions for 2018.md (100%) diff --git a/20180105 The Best Linux Distributions for 2018.md b/translated/tech/20180105 The Best Linux Distributions for 2018.md similarity index 100% rename from 20180105 The Best Linux Distributions for 2018.md rename to translated/tech/20180105 The Best Linux Distributions for 2018.md From 8396b13862ec23c4177df9c3940aa95580c70350 Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Mon, 8 Oct 2018 18:47:54 +0800 Subject: [PATCH 275/736] dianbanjiu translated --- .../tech/20180105 The Best Linux Distributions for 2018.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20180105 The Best Linux Distributions for 2018.md b/translated/tech/20180105 The Best Linux Distributions for 2018.md index a01a4a60ab..ed373a6f6e 100644 
--- a/translated/tech/20180105 The Best Linux Distributions for 2018.md +++ b/translated/tech/20180105 The Best Linux Distributions for 2018.md @@ -97,7 +97,7 @@ Ubuntu Core 内置的另一个安全领域是登录机制。Ubuntu Core使用Ubu via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/best-linux-distributions-2018 作者:[JACK WALLEN ][a] -译者:[译者ID](https://github.com/译者ID) +译者:[dianbanjiu](https://github.com/dianbanjiu) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2d1fb18a7047f3f463299a86b5722bf6722a927e Mon Sep 17 00:00:00 2001 From: heguangzhi <7731226@qq.com> Date: Mon, 8 Oct 2018 22:02:52 +0800 Subject: [PATCH 276/736] translated An introduction to swap space on Linux systems --- ...oduction to swap space on Linux systems.md | 302 ----------------- ...oduction to swap space on Linux systems.md | 315 ++++++++++++++++++ 2 files changed, 315 insertions(+), 302 deletions(-) delete mode 100644 sources/tech/20180926 An introduction to swap space on Linux systems.md create mode 100644 translated/tech/20180926 An introduction to swap space on Linux systems.md diff --git a/sources/tech/20180926 An introduction to swap space on Linux systems.md b/sources/tech/20180926 An introduction to swap space on Linux systems.md deleted file mode 100644 index da50208533..0000000000 --- a/sources/tech/20180926 An introduction to swap space on Linux systems.md +++ /dev/null @@ -1,302 +0,0 @@ -heguangzhi Translating - -An introduction to swap space on Linux systems -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh) - -Swap space is a common aspect of computing today, regardless of operating system. Linux uses swap space to increase the amount of virtual memory available to a host. It can use one or more dedicated swap partitions or a swap file on a regular filesystem or logical volume. 
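Before changing anything, it helps to see what swap a running system already has. As a quick editorial sketch (not part of the original article, and assuming a Linux host with procfs mounted), the kernel's view of swap can be read directly from `/proc`; exact output varies by host:

```shell
# List the active swap areas the kernel knows about: a header line plus
# one row per swap partition or swap file currently in use.
cat /proc/swaps

# Summarize total and free swap from /proc/meminfo (values there are in KiB).
awk '/^SwapTotal:|^SwapFree:/ { printf "%s %.1f MiB\n", $1, $2 / 1024 }' /proc/meminfo
```

The util-linux `swapon --show` command presents similar information in a friendlier table.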
- -There are two basic types of memory in a typical computer. The first type, random access memory (RAM), is used to store data and programs while they are being actively used by the computer. Programs and data cannot be used by the computer unless they are stored in RAM. RAM is volatile memory; that is, the data stored in RAM is lost if the computer is turned off. - -Hard drives are magnetic media used for long-term storage of data and programs. Magnetic media is nonvolatile; the data stored on a disk remains even when power is removed from the computer. The CPU (central processing unit) cannot directly access the programs and data on the hard drive; it must be copied into RAM first, and that is where the CPU can access its programming instructions and the data to be operated on by those instructions. During the boot process, a computer copies specific operating system programs, such as the kernel and init or systemd, and data from the hard drive into RAM, where it is accessed directly by the computer’s processor, the CPU. - -### Swap space - -Swap space is the second type of memory in modern Linux systems. The primary function of swap space is to substitute disk space for RAM memory when real RAM fills up and more space is needed. - -For example, assume you have a computer system with 8GB of RAM. If you start up programs that don’t fill that RAM, everything is fine and no swapping is required. But suppose the spreadsheet you are working on grows when you add more rows, and that, plus everything else that's running, now fills all of RAM. Without swap space available, you would have to stop working on the spreadsheet until you could free up some of your limited RAM by closing down some other programs. - -The kernel uses a memory management program that detects blocks, aka pages, of memory in which the contents have not been used recently. 
The memory management program swaps enough of these relatively infrequently used pages of memory out to a special partition on the hard drive specifically designated for “paging,” or swapping. This frees up RAM and makes room for more data to be entered into your spreadsheet. Those pages of memory swapped out to the hard drive are tracked by the kernel’s memory management code and can be paged back into RAM if they are needed. - -The total amount of memory in a Linux computer is the RAM plus swap space and is referred to as virtual memory. - -### Types of Linux swap - -Linux provides for two types of swap space. By default, most Linux installations create a swap partition, but it is also possible to use a specially configured file as a swap file. A swap partition is just what its name implies—a standard disk partition that is designated as swap space by the `mkswap` command. - -A swap file can be used if there is no free disk space in which to create a new swap partition or space in a volume group where a logical volume can be created for swap space. This is just a regular file that is created and preallocated to a specified size. Then the `mkswap` command is run to configure it as swap space. I don’t recommend using a file for swap space unless absolutely necessary. - -### Thrashing - -Thrashing can occur when total virtual memory, both RAM and swap space, become nearly full. The system spends so much time paging blocks of memory between swap space and RAM and back that little time is left for real work. The typical symptoms of this are obvious: The system becomes slow or completely unresponsive, and the hard drive activity light is on almost constantly. - -If you can manage to issue a command like `free` that shows CPU load and memory usage, you will see that the CPU load is very high, perhaps as much as 30 to 40 times the number of CPU cores in the system. Another symptom is that both RAM and swap space are almost completely allocated. 
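To make those symptoms concrete, here is a small editorial sketch (not from the article) that samples the kernel's cumulative swap-in/swap-out page counters in `/proc/vmstat`; a delta that keeps growing rapidly while RAM is nearly full is a strong hint that the system is thrashing:

```shell
# Sum the cumulative pswpin/pswpout counters (pages swapped in and out
# since boot), sample twice, and report how many pages moved in between.
swap_pages() {
    awk '/^pswpin |^pswpout / { total += $2 } END { print total + 0 }' /proc/vmstat
}

before=$(swap_pages)
sleep 2
after=$(swap_pages)
echo "pages swapped in/out during the 2-second sample: $(( after - before ))"
```

On a healthy, lightly loaded system the delta is usually zero; sustained large deltas are what the `sar -W` paging reports also capture.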
- -After the fact, looking at SAR (system activity report) data can also show these symptoms. I install SAR on every system I work on and use it for post-repair forensic analysis. - -### What is the right amount of swap space? - -Many years ago, the rule of thumb for the amount of swap space that should be allocated on the hard drive was 2X the amount of RAM installed in the computer (of course, that was when most computers' RAM was measured in KB or MB). So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. This rule took into account the facts that RAM sizes were typically quite small at that time and that allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than actually performing useful work. - -RAM has become an inexpensive commodity and most computers these days have amounts of RAM that extend into tens of gigabytes. Most of my newer computers have at least 8GB of RAM, one has 32GB, and my main workstation has 64GB. My older computers have from 4 to 8 GB of RAM. - -When dealing with computers having huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. The Fedora 28 online Installation Guide, which can be found online at [Fedora Installation Guide][1], defines current thinking about swap space allocation. I have included below some discussion and the table of recommendations from that document. - -The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage. 
- -_Table 1: Recommended system swap space in Fedora 28 documentation_ - -| **Amount of system RAM** | **Recommended swap space** | **Recommended swap with hibernation** | -|--------------------------|-----------------------------|---------------------------------------| -| less than 2 GB | 2 times the amount of RAM | 3 times the amount of RAM | -| 2 GB - 8 GB | Equal to the amount of RAM | 2 times the amount of RAM | -| 8 GB - 64 GB | 0.5 times the amount of RAM | 1.5 times the amount of RAM | -| more than 64 GB | workload dependent | hibernation not recommended | - -At the border between each range listed above (for example, a system with 2 GB, 8 GB, or 64 GB of system RAM), use discretion with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance. - -Of course, most Linux administrators have their own ideas about the appropriate amount of swap space—as well as pretty much everything else. Table 2, below, contains my recommendations based on my personal experiences in multiple environments. These may not work for you, but as with Table 1, they may help you get started. - -_Table 2: Recommended system swap space per the author_ - -| Amount of RAM | Recommended swap space | -|---------------|------------------------| -| ≤ 2GB | 2X RAM | -| 2GB – 8GB | = RAM | -| >8GB | 8GB | - -One consideration in both tables is that as the amount of RAM increases, beyond a certain point adding more swap space simply leads to thrashing well before the swap space even comes close to being filled. If you have too little virtual memory while following these recommendations, you should add more RAM, if possible, rather than more swap space. As with all recommendations that affect system performance, use what works best for your specific environment. This will take time and effort to experiment and make changes based on the conditions in your Linux environment. 
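The rules of thumb in Table 2 are simple enough to capture in a small helper. The function below is an illustrative sketch of the author's table only (the function name is hypothetical, and RAM is taken as a whole number of gigabytes), not an official sizing tool:

```shell
# Sketch of Table 2: <= 2 GB RAM -> 2x RAM; 2-8 GB -> equal to RAM; > 8 GB -> 8 GB.
recommended_swap_gb() {
    ram_gb=$1
    if [ "$ram_gb" -le 2 ]; then
        echo $(( ram_gb * 2 ))
    elif [ "$ram_gb" -le 8 ]; then
        echo "$ram_gb"
    else
        echo 8
    fi
}

recommended_swap_gb 1    # -> 2
recommended_swap_gb 4    # -> 4
recommended_swap_gb 32   # -> 8
```

For the hibernation column of Table 1, the numbers would differ, since the swap space must also hold an image of RAM.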
-
-#### Adding more swap space to a non-LVM disk environment
-
-Due to changing requirements for swap space on hosts with Linux already installed, it may become necessary to modify the amount of swap space defined for the system. This procedure can be used for any general case where the amount of swap space needs to be increased. It assumes sufficient disk space is available. This procedure also assumes that the disks are partitioned in “raw” EXT4 and swap partitions and do not use logical volume management (LVM).
-
-The basic steps to take are simple:
-
- 1. Turn off the existing swap space.
-
- 2. Create a new swap partition of the desired size.
-
- 3. Reread the partition table.
-
- 4. Configure the partition as swap space.
-
- 5. Add the new partition to /etc/fstab.
-
- 6. Turn on swap.
-
-A reboot should not be necessary.
-
-For safety's sake, before turning off swap, at the very least you should ensure that no applications are running and that no swap space is in use. The `free` or `top` commands can tell you whether swap space is in use. To be even safer, you could revert to run level 1 or single-user mode.
-
-Turn off the swap partition with the command which turns off all swap space:
-
-```
-swapoff -a
-```
-
-Now display the existing partitions on the hard drive.
-
-```
-fdisk -l
-```
-
-This displays the current partition tables on each drive. Identify the current swap partition by number.
-
-Start `fdisk` in interactive mode with the command:
-
-```
-fdisk /dev/<device>
-```
-
-For example:
-
-```
-fdisk /dev/sda
-```
-
-At this point, `fdisk` is interactive and will operate only on the specified disk drive.
-
-Use the fdisk `p` sub-command to verify that there is enough free space on the disk to create the new swap partition. 
The space on the hard drive is shown in terms of 512-byte blocks and starting and ending cylinder numbers, so you may have to do some math to determine the available space between and at the end of allocated partitions.
-
-Use the `n` sub-command to create a new swap partition. fdisk will ask you for the starting cylinder. By default, it chooses the lowest-numbered available cylinder. If you wish to change that, type in the number of the starting cylinder.
-
-The `fdisk` command now allows you to enter the size of the partition in a number of formats, including the last cylinder number or the size in bytes, KB, or MB. Type in 4000M, which will give about 4GB of space on the new partition (for example), and press Enter.
-
-Use the `p` sub-command to verify that the partition was created as you specified it. Note that the partition will probably not be exactly what you specified unless you used the ending cylinder number. The `fdisk` command can only allocate disk space in increments of whole cylinders, so your partition may be a little smaller or larger than you specified. If the partition is not what you want, you can delete it and create it again.
-
-Now it is necessary to specify that the new partition is to be a swap partition. The sub-command `t` allows you to specify the type of partition. So enter `t`, specify the partition number, and when it asks for the hex code partition type, type 82, which is the Linux swap partition type, and press Enter.
-
-When you are satisfied with the partition you have created, use the `w` sub-command to write the new partition table to the disk. The `fdisk` program will exit and return you to the command prompt after it completes writing the revised partition table. You will probably receive the following message as `fdisk` completes writing the new partition table:
-
-```
-The partition table has been altered!
-Calling ioctl() to re-read partition table. 
-WARNING: Re-reading the partition table failed with error 16: Device or resource busy. -The kernel still uses the old table. -The new table will be used at the next reboot. -Syncing disks. -``` - -At this point, you use the `partprobe` command to force the kernel to re-read the partition table so that it is not necessary to perform a reboot. - -``` -partprobe -``` - -Now use the command `fdisk -l` to list the partitions and the new swap partition should be among those listed. Be sure that the new partition type is “Linux swap”. - -It will be necessary to modify the /etc/fstab file to point to the new swap partition. The existing line may look like this: - -``` -LABEL=SWAP-sdaX   swap        swap    defaults        0 0 - -``` - -where `X` is the partition number. Add a new line that looks similar this, depending upon the location of your new swap partition: - -``` -/dev/sdaY         swap        swap    defaults        0 0 - -``` - -Be sure to use the correct partition number. Now you can perform the final step in creating the swap partition. Use the `mkswap` command to define the partition as a swap partition. - -``` -mkswap /dev/sdaY - -``` - -The final step is to turn swap on using the command: - -``` -swapon -a - -``` - -Your new swap partition is now online along with the previously existing swap partition. You can use the `free` or `top` commands to verify this. - -#### Adding swap to an LVM disk environment - -If your disk setup uses LVM, changing swap space will be fairly easy. Again, this assumes that space is available in the volume group in which the current swap volume is located. By default, the installation procedures for Fedora Linux in an LVM environment create the swap partition as a logical volume. This makes it easy because you can simply increase the size of the swap volume. - -Here are the steps required to increase the amount of swap space in an LVM environment: - - 1. Turn off all swap. - - 2. 
Increase the size of the logical volume designated for swap. - - 3. Configure the resized volume as swap space. - - 4. Turn on swap. - - - - -First, let’s verify that swap exists and is a logical volume using the `lvs` command (list logical volume). - -``` -[root@studentvm1 ~]# lvs -  LV     VG                Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert -  home   fedora_studentvm1 -wi-ao----  2.00g                                                       -  pool00 fedora_studentvm1 twi-aotz--  2.00g               8.17   2.93                             -  root   fedora_studentvm1 Vwi-aotz--  2.00g pool00        8.17                                   -  swap   fedora_studentvm1 -wi-ao----  8.00g                                                       -  tmp    fedora_studentvm1 -wi-ao----  5.00g                                                       -  usr    fedora_studentvm1 -wi-ao---- 15.00g                                                       -  var    fedora_studentvm1 -wi-ao---- 10.00g                                                       -[root@studentvm1 ~]# -``` - -You can see that the current swap size is 8GB. In this case, we want to add 2GB to this swap volume. First, stop existing swap. You may have to terminate running programs if swap space is in use. - -``` -swapoff -a - -``` - -Now increase the size of the logical volume. - -``` -[root@studentvm1 ~]# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap -  Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents). -  Logical volume fedora_studentvm1/swap successfully resized. -[root@studentvm1 ~]# -``` - -Run the `mkswap` command to make this entire 10GB partition into swap space. - -``` -[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap -mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature. 
-Setting up swapspace version 1, size = 10 GiB (10737414144 bytes) -no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a -[root@studentvm1 ~]# -``` - -Turn swap back on. - -``` -[root@studentvm1 ~]# swapon -a -[root@studentvm1 ~]# -``` - -Now verify the new swap space is present with the list block devices command. Again, a reboot is not required. - -``` -[root@studentvm1 ~]# lsblk -NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT -sda                                    8:0    0   60G  0 disk -|-sda1                                 8:1    0    1G  0 part /boot -`-sda2                                 8:2    0   59G  0 part -  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm   -  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   -  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / -  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   -  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm   -  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   -  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / -  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   -  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP] -  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr -  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home -  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var -  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp -sr0                                   11:0    1 1024M  0 rom   -[root@studentvm1 ~]# -``` - -You can also use the `swapon -s` command, or `top`, `free`, or any of several other commands to verify this. 
- -``` -[root@studentvm1 ~]# free -              total        used        free      shared  buff/cache   available -Mem:        4038808      382404     2754072        4152      902332     3404184 -Swap:      10485756           0    10485756 -[root@studentvm1 ~]# -``` - -Note that the different commands display or require as input the device special file in different forms. There are a number of ways in which specific devices are accessed in the /dev directory. My article, [Managing Devices in Linux][2], includes more information about the /dev directory and its contents. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/swap-space-linux-systems - -作者:[David Both][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dboth -[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/ -[2]: https://opensource.com/article/16/11/managing-devices-linux diff --git a/translated/tech/20180926 An introduction to swap space on Linux systems.md b/translated/tech/20180926 An introduction to swap space on Linux systems.md new file mode 100644 index 0000000000..0a36a44e9f --- /dev/null +++ b/translated/tech/20180926 An introduction to swap space on Linux systems.md @@ -0,0 +1,315 @@ + +Linux 系统上 swap 空间的介绍 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh) + +当今无论什么操作系统 Swap 空间是非常常见的。Linux 使用 Swap 空间来增加主机可用的虚拟内存。它可以是常规文件或逻辑卷上使用一个或多个专用swap 分区或 swap 文件。 + +典型计算机中有两种基本类型的内存。第一种类型,随机存取存储器 (RAM),用于存储计算机使用的数据和程序。只有程序和数据存储在 RAM 中,计算机才能使用它们。随机存储器是易失性存储器;也就是说,如果计算机关闭了,存储在 RAM 中的数据就会丢失。 + + +硬盘是用于长期存储数据和程序的磁性介质。该磁介质可以很好的保存数据;即使计算机断电,存储在磁盘上的数据也会保留下来。CPU (中央处理器)不能直接访问硬盘上的程序和数据;他们必须首先复制到 RAM 中,RAM 是 CPU 
访问代码指令和操作数据的地方。在引导过程中,计算机将特定的操作系统程序(如内核、init 或 systemd)以及硬盘上的数据复制到 RAM 中,在 RAM 中,计算机的处理器 CPU 可以直接访问这些数据。
+
+### Swap 空间
+
+Swap 空间是现代 Linux 系统中的第二种内存类型。Swap 空间的主要功能是当全部的 RAM 被占用并且需要更多内存时,用磁盘空间代替 RAM 内存。
+
+例如,假设你有一台 8GB RAM 的计算机。如果你启动的程序没有填满 RAM,一切都好,不需要 swap 空间。假设你在处理电子表格,当添加更多的行时,你的电子表格会增长,加上所有正在运行的程序,将会占用全部的 RAM。如果这时没有可用的 swap 空间,你将不得不停止处理电子表格,直到关闭一些其他程序来释放一些 RAM。
+
+内核使用一个内存管理程序来检测最近没有使用的内存块,也就是内存页面。内存管理程序将这些相对不经常使用的内存页交换到硬盘上专门指定用于“分页”或 swap 的特殊分区。这样就释放了 RAM,为往电子表格中输入更多数据腾出了空间。那些换出到硬盘的内存页面由内核的内存管理代码跟踪,如果需要,可以被分页回 RAM。
+
+Linux 计算机中的内存总量是 RAM 加上 swap 空间,这个总量被称为虚拟内存。
+
+### Linux swap 分区类型
+
+Linux 提供了两种类型的 swap 空间。默认情况下,大多数 Linux 在安装时都会创建一个 swap 分区,但是也可以使用一个特殊配置的文件作为 swap 文件。swap 分区顾名思义就是一个标准磁盘分区,由 `mkswap` 命令设置为 swap 空间。
+
+如果没有可用磁盘空间来创建新的 swap 分区,或者卷组中没有空间为 swap 空间创建逻辑卷,则可以使用 swap 文件。这只是一个创建并预分配了指定大小的常规文件,然后运行 `mkswap` 命令将其配置为 swap 空间。除非绝对必要,否则我不建议使用文件来做 swap 空间。
+
+### 频繁交换
+
+当总虚拟内存(RAM 和 swap 空间)快要用满时,可能会发生频繁交换。系统花了太多时间在 swap 空间和 RAM 之间做内存页面交换,以至于几乎没有时间用于实际工作。这种情况是显而易见的:系统变得缓慢或完全无反应,硬盘指示灯几乎持续亮起。
+
+使用 `free` 命令来显示 CPU 负载和内存使用情况,你会发现 CPU 负载非常高,可能达到系统中 CPU 内核数量的 30 到 40 倍。另一个迹象是 RAM 和 swap 空间几乎完全被分配了。
+
+事实上,查看 SAR(系统活动报告)数据也可以显示这些内容。我在我的每个系统上都安装了 SAR,并将这些数据用于分析。
+
+### swap 空间的正确大小是多少?
+
+许多年前,硬盘上分配给 swap 空间的大小是计算机 RAM 的两倍(当然,这是大多数计算机的 RAM 以 KB 或 MB 为单位的时代)。因此,如果一台计算机有 64KB 的 RAM,就应该分配 128KB 的 swap 分区。该规则考虑到了当时 RAM 容量非常小的事实,分配超过 2 倍 RAM 的 swap 空间并不能提高性能。如果 swap 空间超过 RAM 的两倍,大多数系统将花费更多的时间进行交换,而不是执行有用的工作。
+
+RAM 现在已经很便宜了,如今大多数计算机的 RAM 都达到了几十 GB。我的大多数新电脑至少有 8GB 内存,一台有 32GB 内存,我的主工作站有 64GB 内存。我的旧电脑有 4 到 8GB 的内存。
+
+当计算机拥有大容量 RAM 时,swap 空间的性能限制系数远低于 2 倍。[Fedora 28 在线安装指南][1] 定义了当前关于 swap 空间分配的方法。下面是我提出的建议。
+
+下表根据系统中的 RAM 大小以及是否有足够的内存让系统休眠,给出了 swap 分区的推荐大小。推荐的 swap 分区大小是在安装过程中自动确定的。但是,为了支持系统休眠,你需要在自定义分区阶段编辑 swap 空间。
+
+_表 1: Fedora 28 文档中推荐的系统 swap 空间_
+
+| **系统内存大小** | **推荐 swap 空间** | **带休眠模式的推荐 swap 大小** |
+|--------------------------|-----------------------------|---------------------------------------|
+| 小于 2 GB | 2 倍 RAM | 3 倍 RAM |
+| 2 GB - 8 GB | 等于 RAM 大小 | 2 倍 RAM |
+| 8 GB - 64 GB | 0.5 倍 RAM | 1.5 倍 RAM |
+| 大于 64 GB | 取决于工作负载 | 不建议使用休眠模式 |
+
+对于处在上面列出的各个范围边界上的系统(例如,具有 2GB、8GB 或 64GB RAM 的系统),可以根据实际情况斟酌选择 swap 空间大小和是否支持休眠。如果你的系统资源允许,增加 swap 空间可能会带来更好的性能。
+
+当然,大多数 Linux 管理员对 swap 空间应该多大有自己的想法。下面的表 2 包含了基于我在多种环境中的个人经验所做出的建议。这些可能不适合你,但是和表 1 一样,它们可能对你有所帮助。
+
+_表 2: 作者推荐的系统 swap 空间_
+
+| RAM 大小 | 推荐 swap 空间 |
+|---------------|------------------------|
+| ≤ 2GB | 2X RAM |
+| 2GB – 8GB | = RAM |
+| >8GB | 8GB |
+
+这两个表的共同点是:随着 RAM 容量的增加,超过某个点之后,增加更多 swap 空间只会导致在 swap 空间远未用满之前就发生频繁交换。根据以上建议,应尽可能添加更多 RAM,而不是增加更多 swap 空间。与其它影响系统性能的因素一样,请采用最适合你的建议,并根据你的 Linux 环境中的实际情况进行测试和调整,这需要时间和精力。
+
+### 向非 LVM 磁盘环境添加更多 swap 空间
+
+对于已安装 Linux 的主机,随着对 swap 空间需求的变化,有时有必要修改系统定义的 swap 空间的大小。此过程可用于需要增加 swap 空间大小的任何情况。它假设有足够的可用磁盘空间,还假设磁盘按“原始的” EXT4 分区和 swap 分区进行分区,并且不使用逻辑卷管理(LVM)。
+
+基本步骤很简单:
+
+ 1. 关闭现有的 swap 空间。
+
+ 2. 创建所需大小的新 swap 分区。
+
+ 3. 重读分区表。
+
+ 4. 将分区配置为 swap 空间。
+
+ 5. 将新分区添加到 /etc/fstab。
+
+ 6. 
打开 swap 空间。
+
+应该不需要重新启动机器。
+
+为了安全起见,在关闭 swap 空间前,至少你应该确保没有应用程序在运行,也没有 swap 空间在使用中。`free` 或 `top` 命令可以告诉你 swap 空间是否正在使用。为了更安全,你可以切换到运行级别 1 或单用户模式。
+
+使用以下命令关闭所有 swap 空间,从而关闭 swap 分区:
+
+```
+swapoff -a
+```
+
+现在查看硬盘上的现有分区。
+
+```
+fdisk -l
+```
+
+这将显示每个驱动器上的分区表。按编号识别当前的 swap 分区。
+
+使用以下命令以交互模式启动 `fdisk`:
+
+```
+fdisk /dev/
+```
+
+例如:
+
+```
+fdisk /dev/sda
+```
+
+此时,`fdisk` 进入交互模式,只对指定的磁盘驱动器进行操作。
+
+使用 fdisk 的 `p` 子命令验证磁盘上是否有足够的可用空间来创建新的 swap 分区。硬盘上的空间以 512 字节的块以及起始和结束柱面编号的形式显示,因此你可能需要做一些计算来确定已分配分区之间和末尾的可用空间。
+
+使用 `n` 子命令创建新的 swap 分区。fdisk 会询问起始柱面。默认情况下,它选择编号最小的可用柱面。如果你想改变这一点,请输入起始柱面的编号。
+
+`fdisk` 命令允许你以多种格式输入分区的大小,包括最后一个柱面号,或者以字节、KB 或 MB 为单位的大小。例如,键入 4000M,这将在新分区上提供大约 4GB 的空间,然后按 Enter 键。
+
+使用 `p` 子命令验证分区是否按照你指定的方式创建。请注意,除非使用结束柱面编号,否则分区可能与你指定的不完全相同。`fdisk` 命令只能以整数个柱面为增量分配磁盘空间,因此你的分区可能比你指定的稍小或稍大。如果分区不是你想要的,你可以删除它并重新创建。
+
+现在需要指定新分区为 swap 分区。子命令 `t` 允许你指定分区的类型。输入 `t`,指定分区号,当它要求输入十六进制分区类型时,输入 82(这是 Linux swap 分区类型),然后按 Enter 键。
+
+当你对创建的分区感到满意时,使用 `w` 子命令将新的分区表写入磁盘。写入修改后的分区表完成后,`fdisk` 程序将退出并返回命令提示符。当 `fdisk` 完成写入新分区表时,你可能会收到以下消息:
+
+```
+The partition table has been altered!
+Calling ioctl() to re-read partition table.
+WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
+The kernel still uses the old table.
+The new table will be used at the next reboot.
+Syncing disks.
+``` + + +此时,你使用 `partprobe` 命令强制内核重新读取分区表,这样就不需要执行重新启动机器。 + +``` +partprobe +``` + + +使用命令 `fdisk -l` 列出分区,新 swap 分区应该在列出的分区中。确保新的分区类型是 “Linux swap”。 + +修改 /etc/fstab 文件以指向新的 swap 分区。如下所示: + +``` +LABEL=SWAP-sdaX   swap        swap    defaults        0 0 + +``` + +其中 `X` 是分区号。根据新 swap 分区的位置,添加以下内容: + +``` +/dev/sdaY         swap        swap    defaults        0 0 + +``` + +请确保使用正确的分区号。现在,可以执行创建 swap 分区的最后一步。使用 `mkswap` 命令将分区定义为 swap 分区。 + +``` +mkswap /dev/sdaY + +``` + +最后一步是使用以下命令启用 swap 空间: + +``` +swapon -a + +``` + +你的新 swap 分区现在与以前存在的 swap 分区一起在线。您可以使用 `free` 或`top` 命令来验证这一点。 + +#### 在 LVM 磁盘环境中添加 swap 空间 + +如果你的磁盘使用 LVM ,更改 swap 空间将相当容易。同样,假设当前 swap 卷所在的卷组中有可用空间。默认情况下,LVM 环境中的 Fedora Linux 在安装过程将 swap 分区创建为逻辑卷。您可以非常简单地增加 swap 卷的大小。 + +以下是在 LVM 环境中增加 swap 空间大小的步骤: + + 1. 关闭所有 swap 。 + + 2. 增加指定用于 swap 的逻辑卷的大小。 + + 3. 为 swap 空间调整大小的卷配置。 + + 4. 启用 swap。 + + + +首先,让我们使用 `lvs` 命令(列出逻辑卷)来验证 swap 是否存在以及 swap 是否是逻辑卷。 + +``` +[root@studentvm1 ~]# lvs +  LV     VG                Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert +  home   fedora_studentvm1 -wi-ao----  2.00g                                                       +  pool00 fedora_studentvm1 twi-aotz--  2.00g               8.17   2.93                             +  root   fedora_studentvm1 Vwi-aotz--  2.00g pool00        8.17                                   +  swap   fedora_studentvm1 -wi-ao----  8.00g                                                       +  tmp    fedora_studentvm1 -wi-ao----  5.00g                                                       +  usr    fedora_studentvm1 -wi-ao---- 15.00g                                                       +  var    fedora_studentvm1 -wi-ao---- 10.00g                                                       +[root@studentvm1 ~]# +``` + +你可以看到当前的 swap 大小为 8GB。在这种情况下,我们希望将 2GB 添加到此 swap 卷中。首先,停止现有的 swap 。如果 swap 空间正在使用,终止正在运行的程序。 + + +``` +swapoff -a + +``` + +现在增加逻辑卷的大小。 + +``` +[root@studentvm1 ~]# lvextend -L +2G 
/dev/mapper/fedora_studentvm1-swap +  Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents). +  Logical volume fedora_studentvm1/swap successfully resized. +[root@studentvm1 ~]# +``` + +运行 `mkswap` 命令将整个 10GB 分区变成 swap 空间。 + +``` +[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap +mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature. +Setting up swapspace version 1, size = 10 GiB (10737414144 bytes) +no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a +[root@studentvm1 ~]# +``` + +重新启用 swap 。 + +``` +[root@studentvm1 ~]# swapon -a +[root@studentvm1 ~]# +``` + +现在,使用 `lsblk ` 命令验证新 swap 空间是否存在。同样,不需要重新启动机器。 + +``` +[root@studentvm1 ~]# lsblk +NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT +sda                                    8:0    0   60G  0 disk +|-sda1                                 8:1    0    1G  0 part /boot +`-sda2                                 8:2    0   59G  0 part +  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm   +  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   +  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / +  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   +  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm   +  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   +  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / +  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   +  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP] +  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr +  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home +  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var +  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp +sr0                                   11:0    1 1024M  0 rom   +[root@studentvm1 ~]# +``` + +您也可以使用`swapon -s` 命令或 `top` 
、`free` 或其他几个命令来验证这一点。 + +``` +[root@studentvm1 ~]# free +              total        used        free      shared  buff/cache   available +Mem:        4038808      382404     2754072        4152      902332     3404184 +Swap:      10485756           0    10485756 +[root@studentvm1 ~]# +``` + +请注意,不同的命令以不同的形式显示或要求输入设备文件。在 /dev 目录中访问特定设备有多种方式。在我的文章[Managing Devices in Linux][2] 中有更多关于 /dev 目录及其内容说明。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/swap-space-linux-systems + +作者:[David Both][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/ +[2]: https://opensource.com/article/16/11/managing-devices-linux From 36a2767495c2ae363913c247e72de0435c9385f1 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 8 Oct 2018 22:03:54 +0800 Subject: [PATCH 277/736] PRF:20180123 Moving to Linux from dated Windows machines.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @bookug 恭喜你完成了第一篇翻译贡献! 
--- ...ng to Linux from dated Windows machines.md | 38 ++++++------------- 1 file changed, 12 insertions(+), 26 deletions(-) diff --git a/translated/talk/20180123 Moving to Linux from dated Windows machines.md b/translated/talk/20180123 Moving to Linux from dated Windows machines.md index b90a166a4d..a9e187ecc3 100644 --- a/translated/talk/20180123 Moving to Linux from dated Windows machines.md +++ b/translated/talk/20180123 Moving to Linux from dated Windows machines.md @@ -1,42 +1,30 @@ 从过时的 Windows 机器迁移到 Linux ====== +> 这是一个当老旧的 Windows 机器退役时,决定迁移到 Linux 的故事。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/1980s-computer-yearbook.png?itok=eGOYEKK-) -每天当我在 ONLYOFFICE 的市场部门工作的时候,我都能看到 Linux 用户在网上讨论我们的办公效率软件。 -我们的产品在 Linux 用户中很受欢迎,这使得我对使用 Linux 作为日常工具的体验非常好奇。 -我的老旧的 Windows XP 机器在性能上非常差,因此我决定了解 Linux 系统(特别是 Ubuntu )并且决定去尝试使用它。 -我的两个同事加入了我的计划。 +我在 ONLYOFFICE 的市场部门工作的每一天,我都能看到 Linux 用户在网上讨论我们的办公软件。我们的产品在 Linux 用户中很受欢迎,这使得我对使用 Linux 作为日常工具的体验非常好奇。我的老旧的 Windows XP 机器在性能上非常差,因此我决定了解 Linux 系统(特别是 Ubuntu)并且决定去尝试使用它。我的两个同事也加入了我的计划。 ### 为何选择 Linux ? 
-我们必须做出改变,首先,我们的老系统在性能方面不够用:我们经历过频繁的崩溃,每当超过两个应用在运行机器就会负载过度,关闭机器时有一半的几率冻结等等。 -这很容易让我们从工作中分心,意味着我们没有我们应有的工作效率了。 +我们必须做出改变,首先,我们的老系统在性能方面不够用:我们经历过频繁的崩溃,每当运行超过两个应用时,机器就会负载过度,关闭机器时有一半的几率冻结等等。这很容易让我们从工作中分心,意味着我们没有我们应有的工作效率了。 -升级到 Windows 更新的版本也是一种选择,但这样可能会带来额外的开销,而且我们的软件本身也是要与 Microsoft 的办公软件竞争。 -因此我们在这方面也存在意识形态的问题。 +升级到 Windows 的新版本也是一种选择,但这样可能会带来额外的开销,而且我们的软件本身也是要与 Microsoft 的办公软件竞争。因此我们在这方面也存在意识形态的问题。 -其次,就像我之前提过的, ONLYOFFICE 产品在 Linux 社区内非常受欢迎。 -通过阅读 Linux 用户在使用我们的软件时的体验,我们也对加入他们很感兴趣。 +其次,就像我之前提过的, ONLYOFFICE 产品在 Linux 社区内非常受欢迎。通过阅读 Linux 用户在使用我们的软件时的体验,我们也对加入他们很感兴趣。 -在我们要求转换到 Linux 系统一周后,我们拿到了崭新的装好了 [Kubuntu][1] 的机器。 -我们选择了 16.04 版本,因为这个版本支持 KDE Plasma 5.5 和包括 Dolphin 在内的很多 KDE 应用,同时也包括 LibreOffice 5.1 和 Firefox 45 。 +在我们要求转换到 Linux 系统一周后,我们拿到了崭新的装好了 [Kubuntu][1] 的机器。我们选择了 16.04 版本,因为这个版本支持 KDE Plasma 5.5 和包括 Dolphin 在内的很多 KDE 应用,同时也包括 LibreOffice 5.1 和 Firefox 45 。 ### Linux 让人喜欢的地方 -我相信 Linux 最大的优势是它的运行速度,比如,从按下机器的电源按钮到开始工作只需要几秒钟时间。 -从一开始,一切看起来都超乎寻常地快:总体的响应速度,图形界面,甚至包括系统更新的速度。 +我相信 Linux 最大的优势是它的运行速度,比如,从按下机器的电源按钮到开始工作只需要几秒钟时间。从一开始,一切看起来都超乎寻常地快:总体的响应速度,图形界面,甚至包括系统更新的速度。 -另一个使我惊奇的事情是跟 Windows 相比, Linux 几乎能让你配置任何东西,包括整个桌面的外观。 -在设置里面,我发现了如何修改各种栏目、按钮和字体的颜色和形状,也可以重新布置任意桌面组件的位置,组合桌面的小工具(甚至包括漫画和颜色选择器) -我相信我还仅仅只是了解了基本的选项,之后还需要探索这个系统更多著名的定制化选项。 +另一个使我惊奇的事情是跟 Windows 相比, Linux 几乎能让你配置任何东西,包括整个桌面的外观。在设置里面,我发现了如何修改各种栏目、按钮和字体的颜色和形状,也可以重新布置任意桌面组件的位置,组合桌面小工具(甚至包括漫画和颜色选择器)。我相信我还仅仅只是了解了基本的选项,之后还需要探索这个系统更多著名的定制化选项。 -Linux 发行版通常是一个非常安全的环境。 -人们很少在 Linux 系统中使用防病毒的软件,因为很少有人会写病毒程序来攻击 Linux 系统。 -因此你可以拥有很好的系统速度,并且节省了时间和金钱。 +Linux 发行版通常是一个非常安全的环境。人们很少在 Linux 系统中使用防病毒的软件,因为很少有人会写病毒程序来攻击 Linux 系统。因此你可以拥有很好的系统速度,并且节省了时间和金钱。 -总之, Linux 已经改变了我们的日常生活,用一系列的新选项和功能大大震惊了我们。 -仅仅通过短时间的使用,我们已经可以给它总结出以下特性: +总之, Linux 已经改变了我们的日常生活,用一系列的新选项和功能大大震惊了我们。仅仅通过短时间的使用,我们已经可以给它总结出以下特性: * 操作很快很顺畅 * 高度可定制 @@ -45,9 +33,7 @@ Linux 发行版通常是一个非常安全的环境。 * 安全可靠 * 对所有想改变工作场所的人来说都是一次绝佳的体验 -你已经从 Windows 或 MacOS 系统换到 Kubuntu 或其他 Linux 变种了么? -或者你是否正在考虑做出改变? 
-请分享你想要采用 Linux 系统的原因,连同你对开源的印象一起写在评论中。 +你已经从 Windows 或 MacOS 系统换到 Kubuntu 或其他 Linux 变种了么?或者你是否正在考虑做出改变?请分享你想要采用 Linux 系统的原因,连同你对开源的印象一起写在评论中。 -------------------------------------------------------------------------------- @@ -55,7 +41,7 @@ via: https://opensource.com/article/18/1/move-to-linux-old-windows 作者:[Michael Korotaev][a] 译者:[bookug](https://github.com/bookug) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c101336d1f5e938ef4f0bf2cb052226131e1021e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 8 Oct 2018 22:04:59 +0800 Subject: [PATCH 278/736] PUB:20180123 Moving to Linux from dated Windows machines.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @bookug 本文首发地址: https://linux.cn/article-10093-1.html 您的 LCTT 专页地址: https://linux.cn/lctt/bookug 请到 LCTT 平台注册领取 LCCN : https://lctt.linux.cn/ --- .../20180123 Moving to Linux from dated Windows machines.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/talk => published}/20180123 Moving to Linux from dated Windows machines.md (100%) diff --git a/translated/talk/20180123 Moving to Linux from dated Windows machines.md b/published/20180123 Moving to Linux from dated Windows machines.md similarity index 100% rename from translated/talk/20180123 Moving to Linux from dated Windows machines.md rename to published/20180123 Moving to Linux from dated Windows machines.md From e0bb162d4fc03f3969849b5336bc79c6b244eb6b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 8 Oct 2018 22:49:09 +0800 Subject: [PATCH 279/736] PRF:20180803 5 Essential Tools for Linux Development.md @HankChow --- ...5 Essential Tools for Linux Development.md | 38 ++++++++----------- 1 file changed, 16 insertions(+), 22 deletions(-) diff --git a/translated/tech/20180803 5 Essential Tools for Linux Development.md b/translated/tech/20180803 5 Essential 
Tools for Linux Development.md index dcb3b3b63e..0f2f26c18a 100644 --- a/translated/tech/20180803 5 Essential Tools for Linux Development.md +++ b/translated/tech/20180803 5 Essential Tools for Linux Development.md @@ -1,57 +1,52 @@ Linux 开发的五大必备工具 ====== +> Linux 上的开发工具如此之多,以至于会担心找不到恰好适合你的。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev-tools.png?itok=kkDNylRg) -Linux 已经成为工作、娱乐和个人生活等多个领域的支柱,人们已经越来越离不开它。在 Linux 的帮助下,技术的发展速度超出了人们的想象,Linux 开发的速度也以指数规模增长。因此,越来越多的开发者也不断地加入开源和学习 Linux 开发地潮流当中。在这个过程之中,合适的工具是必不可少的,可喜的是,随着 Linux 的发展,大量适用于 Linux 的开发工具也不断成熟。甚至可以说,这样的工具已经多得有点惊人。 +Linux 已经成为工作、娱乐和个人生活等多个领域的支柱,人们已经越来越离不开它。在 Linux 的帮助下,技术的变革速度超出了人们的想象,Linux 开发的速度也以指数规模增长。因此,越来越多的开发者也不断地加入开源和学习 Linux 开发地潮流当中。在这个过程之中,合适的工具是必不可少的,可喜的是,随着 Linux 的发展,大量适用于 Linux 的开发工具也不断成熟。甚至可以说,这样的工具已经多得有点惊人。 为了选择更合适自己的开发工具,缩小选择范围是很必要的。但是这篇文章并不会要求你必须使用某个工具,而只是缩小到五个工具类别,然后对每个类别提供一个例子。然而,对于大多数类别,都会有不止一种选择。下面我们来看一下。 ### 容器 -放眼于现实,现在已经是容器的时代了。容器既容易进行部署,又可以方便地构建开发环境。如果你针对的是特定的平台的开发,将开发流程所需要的各种工具都创建到容器映像中是一种很好的方法,只要使用这一个容器映像,就能够快速启动大量运行所需服务的实例。 +放眼于现实,现在已经是容器的时代了。容器既及其容易部署,又可以方便地构建开发环境。如果你针对的是特定的平台的开发,将开发流程所需要的各种工具都创建到容器映像中是一种很好的方法,只要使用这一个容器映像,就能够快速启动大量运行所需服务的实例。 一个使用容器的最佳范例是使用 [Docker][1],使用容器(或 Docker)有这些好处: * 开发环境保持一致 - * 部署后即可运行 - * 易于跨平台部署 - * Docker 映像适用于多种开发环境和语言 - * 部署单个容器或容器集群都并不繁琐 - - -通过 [Docker Hub][2],几乎可以找到适用于任何平台、任何开发环境、任何服务器,任何服务的映像,几乎可以满足任何一种需求。使用 Docker Hub 中的映像,就相当于免除了搭建开发环境的步骤,可以直接开始开发应用程序、服务器、API 或服务。 +通过 [Docker Hub][2],几乎可以找到适用于任何平台、任何开发环境、任何服务器、任何服务的映像,几乎可以满足任何一种需求。使用 Docker Hub 中的映像,就相当于免除了搭建开发环境的步骤,可以直接开始开发应用程序、服务器、API 或服务。 Docker 在所有 Linux 平台上都很容易安装,例如可以通过终端输入以下命令在 Ubuntu 上安装 Docker: + ``` sudo apt-get install docker.io - ``` Docker 安装完毕后,就可以从 Docker 仓库中拉取映像,然后开始开发和部署了(如下图)。 ![Docker images][4] - +*图 1: Docker 镜像准备部署* ### 版本控制工具 -如果你正在开发一个巨大的项目,又或者参与团队开发,版本控制工具是必不可少的,它可以用于记录代码变更、提交代码以及合并代码。如果没有这样的工具,项目几乎无法妥善管理。在 Linux 系统上,[Git][6] 和 [GitHub][7] 的易用性和流行程度是其它版本控制工具无法比拟的。如果你对 Git 和 GitHub 还不太熟悉,可以简单理解为 Git 是在本地计算机上安装的版本控制系统,而 GitHub 则是用于上传和管理项目的远程存储库。 Git 
可以安装在大多数的 Linux 发行版上。例如在基于 Debian 的系统上,只需要通过以下这一条简单的命令就可以安装: +如果你正在开发一个大型项目,又或者参与团队开发,版本控制工具是必不可少的,它可以用于记录代码变更、提交代码以及合并代码。如果没有这样的工具,项目几乎无法妥善管理。在 Linux 系统上,[Git][6] 和 [GitHub][7] 的易用性和流行程度是其它版本控制工具无法比拟的。如果你对 Git 和 GitHub 还不太熟悉,可以简单理解为 Git 是在本地计算机上安装的版本控制系统,而 GitHub 则是用于上传和管理项目的远程存储库。 Git 可以安装在大多数的 Linux 发行版上。例如在基于 Debian 的系统上,只需要通过以下这一条简单的命令就可以安装: + ``` sudo apt-get install git - ``` 安装完毕后,就可以使用 Git 来实施版本控制了(如下图)。 ![Git installed][9] - +*图 2:Git 已经安装,可以用于很多重要任务* Github 会要求用户创建一个帐户。用户可以免费使用 GitHub 来管理非商用项目,当然也可以使用 GitHub 的付费模式(更多相关信息,可以参阅[价格矩阵][10])。 @@ -63,23 +58,23 @@ Github 会要求用户创建一个帐户。用户可以免费使用 GitHub 来 ![Bluefish][13] - +*图 3:运行在 Ubuntu 18.04 上的 Bluefish* ### IDE -集成开发环境(Integrated Development Environment, IDE)是包含一整套全面的工具、可以实现一站式功能的开发环境。 开发者除了可以使用 IDE 编写代码,还可以编写文档和构建软件。在 Linux 上也有很多适用的 IDE,其中 [Geany][14] 就包含在标准软件库中,它对用户非常友好,功能也相当强大。 Geany 具有语法高亮、代码折叠、自动完成,构建代码片段、自动关闭 XML 和 HTML 标签、调用提示、支持多种文件类型、符号列表、代码导航、构建编译,简单的项目管理和内置的插件系统等强大功能。 +集成开发环境Integrated Development Environment(IDE)是包含一整套全面的工具、可以实现一站式功能的开发环境。 开发者除了可以使用 IDE 编写代码,还可以编写文档和构建软件。在 Linux 上也有很多适用的 IDE,其中 [Geany][14] 就包含在标准软件库中,它对用户非常友好,功能也相当强大。 Geany 具有语法高亮、代码折叠、自动完成,构建代码片段、自动关闭 XML 和 HTML 标签、调用提示、支持多种文件类型、符号列表、代码导航、构建编译,简单的项目管理和内置的插件系统等强大功能。 Geany 也能在系统上轻松安装,例如执行以下命令在基于 Debian 的 Linux 发行版上安装 Geany: + ``` sudo apt-get install geany - ``` 安装完毕后,就可以快速上手这个易用且强大的 IDE 了(如下图)。 ![Geany][16] - +*图 4:Geany 可以作为你的 IDE* ### 文本比较工具 @@ -89,19 +84,18 @@ Meld 可以打开两个文件进行比较,并突出显示文件之间的差异 ![Comparing two files][19] +*图 5: 以简单差异的模式比较两个文件* +Meld 也可以通过大多数标准的软件库安装,在基于 Debian 的系统上,执行以下命令就可以安装: -Meld 也可以通过标准软件如安装,在基于 Debian 的系统上,执行以下命令就可以安装: ``` sudo apt-get install meld - ``` ### 高效地工作 以上提到的五个工具除了帮助你完成工作,而且有助于提高效率。尽管适用于 Linux 开发者的工具有很多,但对于以上几个类别,你最好分别使用一个对应的工具。 - -------------------------------------------------------------------------------- via: https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-development @@ -109,7 +103,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-d 
作者:[Jack Wallen][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5282d11aa156d0a1a8048badda6805602bf5e5e5 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 8 Oct 2018 22:49:45 +0800 Subject: [PATCH 280/736] PUB:20180803 5 Essential Tools for Linux Development.md @HankChow https://linux.cn/article-10094-1.html --- .../20180803 5 Essential Tools for Linux Development.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180803 5 Essential Tools for Linux Development.md (100%) diff --git a/translated/tech/20180803 5 Essential Tools for Linux Development.md b/published/20180803 5 Essential Tools for Linux Development.md similarity index 100% rename from translated/tech/20180803 5 Essential Tools for Linux Development.md rename to published/20180803 5 Essential Tools for Linux Development.md From 0b9de23fbe6a829819446b903df1f087fa12e11e Mon Sep 17 00:00:00 2001 From: Liang Chen Date: Mon, 8 Oct 2018 23:39:01 +0800 Subject: [PATCH 281/736] translated by Flowsnow --- ...eautiful And Cross-platform Podcast App.md | 114 ------------------ ...eautiful And Cross-platform Podcast App.md | 108 +++++++++++++++++ 2 files changed, 108 insertions(+), 114 deletions(-) delete mode 100644 sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md create mode 100644 translated/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md diff --git a/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md b/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md deleted file mode 100644 index 628a805144..0000000000 --- a/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md +++ /dev/null @@ -1,114 +0,0 @@ -translating by Flowsnow - -A Simple, 
Beautiful And Cross-platform Podcast App -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/cpod-720x340.png) - -Podcasts have become very popular in the last few years. Podcasts are what’s called “infotainment”, they are generally light-hearted, but they generally give you valuable information. Podcasts have blown up in the last few years, and if you like something, chances are there is a podcast about it. There are a lot of podcast players out there for the Linux desktop, but if you want something that is visually beautiful, has slick animations, and works on every platform, there aren’t a lot of alternatives to **CPod**. CPod (formerly known as **Cumulonimbus** ) is an open source and slickest podcast app that works on Linux, MacOS and Windows. - -CPod runs on something called **Electron** – a tool that allows developers to build cross-platform (E.g Windows, MacOs and Linux) desktop GUI applications. In this brief guide, we will be discussing – how to install and use CPod podcast app in Linux. - -### Installing CPod - -Go to the [**releases page**][1] of CPod. Download and Install the binary for your platform of choice. If you use Ubuntu/Debian, you can just download and install the .deb file from the releases page as shown below. - -``` -$ wget https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb - -$ sudo apt update - -$ sudo apt install gdebi - -$ sudo gdebi CPod_1.25.7_amd64.deb -``` - -If you use any other distribution, you probably should use the **AppImage** in the releases page. - -Download the AppImage file from the releases page. - -Open your terminal, and go to the directory where the AppImage file has been stored. Change the permissions to allow execution: - -``` -$ chmod +x CPod-1.25.7-x86_64.AppImage -``` - -Execute the AppImage File: - -``` -$ ./CPod-1.25.7-x86_64.AppImage -``` - -You’ll be presented a dialog asking whether to integrate the app with the system. Click **Yes** if you want to do so. 
- -### Features - -**Explore Tab** - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-features-tab.png) - -CPod uses the Apple iTunes database to find podcasts. This is good, because the iTunes database is the biggest one out there. If there is a podcast out there, chances are it’s on iTunes. To find podcasts, just use the top search bar in the Explore section. The Explore Section also shows a few popular podcasts. - -**Home Tab** - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/CPod-home-tab.png) - -The Home Tab is the tab that opens by default when you open the app. The Home Tab shows a chronological list of all the episodes of all the podcasts that you have subscribed to. - -From the home tab, you can: - - 1. Mark episodes read. - 2. Download them for offline playing - 3. Add them to the queue. - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/The-podcasts-queue.png) - -**Subscriptions Tab** - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-subscriptions-tab.png) - -You can of course, subscribe to podcasts that you like. A few other things you can do in the Subscriptions Tab is: - - 1. Refresh Podcast Artwork - 2. Export and Import Subscriptions to/from an .OPML file. - - - -**The Player** - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-Podcast-Player.png) - -The player is perhaps the most beautiful part of CPod. The app changes the overall look and feel according to the banner of the podcast. There’s a sound visualiser at the bottom. To the right, you can see and search for other episodes of this podcast. - -**Cons/Missing Features** - -While I love this app, there are a few features and disadvantages that CPod does have: - - 1. Poor MPRIS Integration – You can play/pause the podcast from the media player dialog of your desktop environment, but not much more. The name of the podcast is not shown, and you can go to the next/previous episode. - 2. No support for chapters. - 3. 
No auto-downloading – you have to manually download episodes. - 4. CPU usage during use is pretty high (even for an Electron app). - - - -### Verdict - -While it does have its cons, CPod is clearly the most aesthetically pleasing podcast player app out there, and it has most basic features down. If you love using visually beautiful apps, and don’t need the advanced features, this is the perfect app for you. I know for a fact that I’m going to use it. - -Do you like CPod? Please put your opinions on the comments below! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/ - -作者:[EDITOR][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/editor/ -[1]: https://github.com/z-------------/CPod/releases diff --git a/translated/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md b/translated/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md new file mode 100644 index 0000000000..1fdba14a5f --- /dev/null +++ b/translated/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md @@ -0,0 +1,108 @@ +一个简单,美观和跨平台的播客应用程序 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/cpod-720x340.png) + +播客在过去几年中变得非常流行。 播客就是所谓的“信息娱乐”,它们通常是轻松的,但它通常会为你提供有价值的信息。 播客在过去几年中已经非常火爆了,如果你喜欢某些东西,很可能存在一个相关的播客。 Linux 桌面版上有很多播客播放器,但是如果你想要一些视觉上美观,有光滑动画并且可以在每个平台上运行的东西,那就并没有很多替代品可以替代 **CPod** 了。 CPod(以前称为 **Cumulonimbus**)是一个开源的最简单的播客应用程序,适用于 Linux,MacOS 和 Windows。 + +CPod 运行在一个名为 **Electron** 的东西上 - 这个工具允许开发人员构建跨平台(例如 Windows,MacOs 和 Linux)的桌面图形化应用程序。 在本简要指南中,我们将讨论如何在 Linux 中安装和使用 CPod 播客应用程序。 + +### 安装 CPod + +转到 CPod 的[**发布页面**][1]。 下载并安装所选平台的二进制文件。 如果你使用 Ubuntu / Debian,你只需从发布页面下载并安装 .deb 文件,如下所示。 + +``` +$ wget 
https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb + +$ sudo apt update + +$ sudo apt install gdebi + +$ sudo gdebi CPod_1.25.7_amd64.deb +``` + +如果你使用任何其他发行版,你可能需要在发行版页面中使用 **AppImage**。 + +从发布页面下载 AppImage 文件。 + +打开终端,然后转到存储 AppImage 文件的目录。 更改权限以允许执行: + +``` +$ chmod +x CPod-1.25.7-x86_64.AppImage +``` + +执行 AppImage 文件: + +``` +$ ./CPod-1.25.7-x86_64.AppImage +``` + +你将看到一个对话框询问是否将应用程序与系统集成。 如果要执行此操作,请单击**是**。 + +### 特征 + +**探索标签页** + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-features-tab.png) + +CPod 使用 Apple iTunes 数据库查找播客。 这很好,因为 iTunes 数据库是最大的数据库。 如果那里有一个播客,很可能是在 iTunes 上。 要查找播客,只需使用探索部分中的顶部搜索栏即可。 探索部分还展示了一些受欢迎的播客。 + +**主标签页** + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/CPod-home-tab.png) + +主标签页在打开应用程序时是默认打开的。 主标签页显示你已订阅的所有播客的所有剧集的时间顺序列表。 + +在主页选项卡中,你可以: + +1. 标记剧集阅读。 +2. 下载它们进行离线播放 +3. 将它们添加到播放队列中。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/The-podcasts-queue.png) + +**订阅标签页** + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-subscriptions-tab.png) + +你当然可以订阅你喜欢的播客。 你可以在订阅标签页中执行的其他一些操作是: + +1.刷新播客艺术作品 +2.导出订阅到 .OPML 文件中,从 .OPML 文件中导入订阅。 + + +**播放器** + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-Podcast-Player.png) + +播放器可能是 CPod 最美观的部分。 该应用程序根据播客的横幅更改整体外观。 底部有一个声音可视化器。 在右侧,你可以查看和搜索此播客的其他剧集。 + +**缺点/缺失功能** + +虽然我喜欢这个应用程序,但 CPod 确实有一些特性和缺点: + +1. 可怜的 MPRIS 集成 - 你可以从桌面环境的媒体播放器对话框中播放或者暂停播客,但这是不够的。 播客的名称未显示,你可以转到下一个或者上一个剧集。 +2. 不支持章节。 +3. 没有自动下载 - 你必须手动下载剧集。 +4. 使用过程中的 CPU 使用率非常高(即使对于 Electron 应用程序)。 + + +### Verdict + +虽然它确实有它的缺点,但 CPod 显然是最美观的播客播放器应用程序,并且它具有最基本的功能。 如果你喜欢使用视觉上美观的应用程序,并且不需要高级功能,那么这就是你的完美款 app。 我知道我马上就要使用它。 + +你喜欢 CPod 吗? 
请将你的意见发表在下面的评论中。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/ + +作者:[EDITOR][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[1]: https://github.com/z-------------/CPod/releases \ No newline at end of file From 4b520853d3395164f40791d48eb7a8de5bed549b Mon Sep 17 00:00:00 2001 From: Liang Chen Date: Tue, 9 Oct 2018 00:37:22 +0800 Subject: [PATCH 282/736] translated by Flowsnow --- ...ython library for data science projects.md | 260 ------------------ ...ython library for data science projects.md | 238 ++++++++++++++++ 2 files changed, 238 insertions(+), 260 deletions(-) delete mode 100644 sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md create mode 100644 translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md diff --git a/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md b/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md deleted file mode 100644 index e8b108720e..0000000000 --- a/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md +++ /dev/null @@ -1,260 +0,0 @@ -translating by Flowsnow - -How to use the Scikit-learn Python library for data science projects -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) - -The Scikit-learn Python library, initially released in 2007, is commonly used in solving machine learning and data science problems—from the beginning to the end. 
The versatile library offers an uncluttered, consistent, and efficient API and thorough online documentation. - -### What is Scikit-learn? - -[Scikit-learn][1] is an open source Python library that has powerful tools for data analysis and data mining. It's available under the BSD license and is built on the following machine learning libraries: - - * **NumPy**, a library for manipulating multi-dimensional arrays and matrices. It also has an extensive compilation of mathematical functions for performing various calculations. - * **SciPy**, an ecosystem consisting of various libraries for completing technical computing tasks. - * **Matplotlib**, a library for plotting various charts and graphs. - - - -Scikit-learn offers an extensive range of built-in algorithms that make the most of data science projects. - -Here are the main ways the Scikit-learn library is used. - -#### 1. Classification - -The [classification][2] tools identify the category associated with provided data. For example, they can be used to categorize email messages as either spam or not. - -Classification algorithms in Scikit-learn include: - - * Support vector machines (SVMs) - * Nearest neighbors - * Random forest - - - -#### 2. Regression - -Regression involves creating a model that tries to comprehend the relationship between input and output data. For example, regression tools can be used to understand the behavior of stock prices. - -Regression algorithms include: - - * SVMs - * Ridge regression - * Lasso - - - -#### 3. Clustering - -The Scikit-learn clustering tools are used to automatically group data with the same characteristics into sets. For example, customer data can be segmented based on their localities. - -Clustering algorithms include: - - * K-means - * Spectral clustering - * Mean-shift - - - -#### 4. Dimensionality reduction - -Dimensionality reduction lowers the number of random variables for analysis. 
For example, to increase the efficiency of visualizations, outlying data may not be considered. - -Dimensionality reduction algorithms include: - - * Principal component analysis (PCA) - * Feature selection - * Non-negative matrix factorization - - - -#### 5. Model selection - -Model selection algorithms offer tools to compare, validate, and select the best parameters and models to use in your data science projects. - -Model selection modules that can deliver enhanced accuracy through parameter tuning include: - - * Grid search - * Cross-validation - * Metrics - - - -#### 6. Preprocessing - -The Scikit-learn preprocessing tools are important in feature extraction and normalization during data analysis. For example, you can use these tools to transform input data—such as text—and apply their features in your analysis. - -Preprocessing modules include: - - * Preprocessing - * Feature extraction - - - -### A Scikit-learn library example - -Let's use a simple example to illustrate how you can use the Scikit-learn library in your data science projects. - -We'll use the [Iris flower dataset][3], which is incorporated in the Scikit-learn library. The Iris flower dataset contains 150 records covering three flower species: - - * Setosa—labeled 0 - * Versicolor—labeled 1 - * Virginica—labeled 2 - - - -The dataset includes the following characteristics of each flower species (in centimeters): - - * Sepal length - * Sepal width - * Petal length - * Petal width - - - -#### Step 1: Importing the library - -Since the Iris dataset is included in the Scikit-learn data science library, we can load it into our workspace as follows: - -``` -from sklearn import datasets -iris = datasets.load_iris() -``` - -These commands import the **datasets** module from **sklearn**, then use the **load_iris()** method from **datasets** to include the data in the workspace. 
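The object returned when loading the dataset behaves like a dictionary, so before digging into specific keys it can help to see what the dataset exposes. Here is a small sketch of that, assuming Scikit-learn's standard Bunch interface; the exact key list may vary between versions:

```python
from sklearn import datasets

iris = datasets.load_iris()

# The Bunch object is dictionary-like: list its keys, then check the
# shape of the sample array (150 flowers, 4 measurements each).
print(sorted(iris.keys()))
print(iris.data.shape)
```

The two most important keys, **data** and **target**, are what the rest of the walkthrough works with.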
- -#### Step 2: Getting dataset characteristics - -The **datasets** module contains several methods that make it easier to get acquainted with handling data. - -In Scikit-learn, a dataset refers to a dictionary-like object that has all the details about the data. The data is stored using the **.data** key, which is an array list. - -For instance, we can utilize **iris.data** to output information about the Iris flower dataset. - -``` -print(iris.data) -``` - -Here is the output (the results have been truncated): - -``` -[[5.1 3.5 1.4 0.2] - [4.9 3.  1.4 0.2] - [4.7 3.2 1.3 0.2] - [4.6 3.1 1.5 0.2] - [5.  3.6 1.4 0.2] - [5.4 3.9 1.7 0.4] - [4.6 3.4 1.4 0.3] - [5.  3.4 1.5 0.2] - [4.4 2.9 1.4 0.2] - [4.9 3.1 1.5 0.1] - [5.4 3.7 1.5 0.2] - [4.8 3.4 1.6 0.2] - [4.8 3.  1.4 0.1] - [4.3 3.  1.1 0.1] - [5.8 4.  1.2 0.2] - [5.7 4.4 1.5 0.4] - [5.4 3.9 1.3 0.4] - [5.1 3.5 1.4 0.3] -``` - -Let's also use **iris.target** to give us information about the different labels of the flowers. - -``` -print(iris.target) -``` - -Here is the output: - -``` -[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 - 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 - 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 - 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 - 2 2] - -``` - -If we use **iris.target_names** , we'll output an array of the names of the labels found in the dataset. - -``` -print(iris.target_names) -``` - -Here is the result after running the Python code: - -``` -['setosa' 'versicolor' 'virginica'] -``` - -#### Step 3: Visualizing the dataset - -We can use the [box plot][4] to produce a visual depiction of the Iris flower dataset. The box plot illustrates how the data is distributed over the plane through their quartiles. 
- -Here's how to achieve this: - -``` -import seaborn as sns -box_data = iris.data #variable representing the data array -box_target = iris.target #variable representing the labels array -sns.boxplot(data = box_data,width=0.5,fliersize=5) -sns.set(rc={'figure.figsize':(2,15)}) -``` - -Let's see the result: - -![](https://opensource.com/sites/default/files/uploads/scikit_boxplot.png) - -On the horizontal axis: - - * 0 is sepal length - * 1 is sepal width - * 2 is petal length - * 3 is petal width - - - -The vertical axis is dimensions in centimeters. - -### Wrapping up - -Here is the entire code for this simple Scikit-learn data science tutorial. - -``` -from sklearn import datasets -iris = datasets.load_iris() -print(iris.data) -print(iris.target) -print(iris.target_names) -import seaborn as sns -box_data = iris.data #variable representing the data array -box_target = iris.target #variable representing the labels array -sns.boxplot(data = box_data,width=0.5,fliersize=5) -sns.set(rc={'figure.figsize':(2,15)}) -``` - -Scikit-learn is a versatile Python library you can use to efficiently complete data science projects. - -If you want to learn more, check out the tutorials on [LiveEdu][5], such as Andrey Bulezyuk's video on using the Scikit-learn library to create a [machine learning application][6]. - -Do you have any questions or comments? Feel free to share them below. 
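To go one step beyond visualization, the same dataset can also drive a quick classification experiment. The following is a minimal sketch, not part of the original walkthrough, that holds out a test set and fits a nearest-neighbors model with Scikit-learn's standard API:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()

# Hold out 30% of the samples so the model is scored on unseen data
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
print("Test accuracy: {:.2f}".format(accuracy))
```

The Iris classes are well separated, so a simple model like this typically scores above 90% accuracy; the exact figure depends on the random split.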
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects - -作者:[Dr.Michael J.Garbade][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/drmjg -[1]: http://scikit-learn.org/stable/index.html -[2]: https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/ -[3]: https://en.wikipedia.org/wiki/Iris_flower_data_set -[4]: https://en.wikipedia.org/wiki/Box_plot -[5]: https://www.liveedu.tv/guides/data-science/ -[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/ diff --git a/translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md b/translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md new file mode 100644 index 0000000000..6f94cb8327 --- /dev/null +++ b/translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md @@ -0,0 +1,238 @@ +如何将Scikit-learn Python库用于数据科学项目 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) + +Scikit-learn Python库最初于2007年发布,从头到尾都通常用于解决机器学习和数据科学问题。 多功能库提供整洁,一致,高效的API和全面的在线文档。 + +### 什么是Scikit-learn? + +[Scikit-learn][1]是一个开源Python库,拥有强大的数据分析和数据挖掘工具。 在BSD许可下可用,并建立在以下机器学习库上: + +- **NumPy**,一个用于操作多维数组和矩阵的库。 它还具有广泛的数学函数汇集,可用于执行各种计算。 +- **SciPy**,一个由各种库组成的生态系统,用于完成技术计算任务。 +- **Matplotlib**,一个用于绘制各种图表和图形的库。 + +Scikit-learn提供了广泛的内置算法,可以充分用于数据科学项目。 + +以下是使用Scikit-learn库的主要方法。 + +#### 1. 分类 + +[分类][2]工具识别与提供的数据相关联的类别。 例如,它们可用于将电子邮件分类为垃圾邮件或非垃圾邮件。 + +Scikit-learn中的分类算法包括: + +- 支持向量机(SVM) +- 最邻近 +- 随机森林 + +#### 2. 
回归 + +回归涉及到创建一个模型去试图理解输入和输出数据之间的关系。 例如,回归工具可用于了解股票价格的行为。 + +回归算法包括: + +- SVM +- 岭回归Ridge regression +- Lasso(LCTT译者注:Lasso 即 least absolute shrinkage and selection operator,又译最小绝对值收敛和选择算子、套索算法) + +#### 3. 聚类 + +Scikit-learn聚类工具用于自动将具有相同特征的数据分组。 例如,可以根据客户数据的地点对客户数据进行细分。 + +聚类算法包括: + +- K-means +- 谱聚类Spectral clustering +- Mean-shift + +#### 4. 降维 + +降维降低了用于分析的随机变量的数量。 例如,为了提高可视化效率,可能不会考虑外围数据。 + +降维算法包括: + +- 主成分分析Principal component analysis(PCA) +- 功能选择Feature selection +- 非负矩阵分解Non-negative matrix factorization + +#### 5. 模型选择 + +模型选择算法提供了用于比较,验证和选择要在数据科学项目中使用的最佳参数和模型的工具。 + +通过参数调整能够增强精度的模型选择模块包括: + +- 网格搜索Grid search +- 交叉验证Cross-validation +- 指标Metrics + +#### 6. 预处理 + +Scikit-learn预处理工具在数据分析期间的特征提取和规范化中非常重要。 例如,您可以使用这些工具转换输入数据(如文本)并在分析中应用其特征。 + +预处理模块包括: + +- 预处理 +- 特征提取 + +### Scikit-learn库示例 + +让我们用一个简单的例子来说明如何在数据科学项目中使用Scikit-learn库。 + +我们将使用[鸢尾花花卉数据集][3],该数据集包含在Scikit-learn库中。 鸢尾花数据集包含有关三种花种的150个细节,三种花种分别为: + +- Setosa-标记为0 +- Versicolor-标记为1 +- Virginica-标记为2 + +数据集包括每种花种的以下特征(以厘米为单位): + +- 萼片长度 +- 萼片宽度 +- 花瓣长度 +- 花瓣宽度 + +#### 第1步:导入库 + +由于Iris数据集包含在Scikit-learn数据科学库中,我们可以将其加载到我们的工作区中,如下所示: + +``` +from sklearn import datasets +iris = datasets.load_iris() +``` + +这些命令从**sklearn**导入数据集**datasets**模块,然后使用**datasets**中的**load_iris()**方法将数据包含在工作空间中。 + +#### 第2步:获取数据集特征 + +数据集**datasets**模块包含几种方法,使您更容易熟悉处理数据。 + +在Scikit-learn中,数据集指的是类似字典的对象,其中包含有关数据的所有详细信息。 使用**.data**键存储数据,该数据列是一个数组列表。 + +例如,我们可以利用**iris.data**输出有关Iris花卉数据集的信息。 + +``` +print(iris.data) +``` + +这是输出(结果已被截断): + +``` +[[5.1 3.5 1.4 0.2] + [4.9 3.  1.4 0.2] + [4.7 3.2 1.3 0.2] + [4.6 3.1 1.5 0.2] + [5.  3.6 1.4 0.2] + [5.4 3.9 1.7 0.4] + [4.6 3.4 1.4 0.3] + [5.  3.4 1.5 0.2] + [4.4 2.9 1.4 0.2] + [4.9 3.1 1.5 0.1] + [5.4 3.7 1.5 0.2] + [4.8 3.4 1.6 0.2] + [4.8 3.  1.4 0.1] + [4.3 3.  1.1 0.1] + [5.8 4.  
1.2 0.2] + [5.7 4.4 1.5 0.4] + [5.4 3.9 1.3 0.4] + [5.1 3.5 1.4 0.3] +``` + +我们还使用**iris.target**向我们提供有关花朵不同标签的信息。 + +``` +print(iris.target) +``` + +这是输出: + +``` +[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 + 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 + 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 + 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 + 2 2] + +``` + +如果我们使用**iris.target_names**,我们将输出数据集中找到的标签名称的数组。 + +``` +print(iris.target_names) +``` + +以下是运行Python代码后的结果: + +``` +['setosa' 'versicolor' 'virginica'] +``` + +#### 第3步:可视化数据集 + +我们可以使用[箱形图][4]来生成鸢尾花数据集的视觉描绘。 箱形图说明了数据如何通过四分位数在平面上分布的。 + +以下是如何实现这一目标: + +``` +import seaborn as sns +box_data = iris.data # 表示数据数组的变量 +box_target = iris.target # 表示标签数组的变量 +sns.boxplot(data = box_data,width=0.5,fliersize=5) +sns.set(rc={'figure.figsize':(2,15)}) +``` + +让我们看看结果: + +![](https://opensource.com/sites/default/files/uploads/scikit_boxplot.png) + +在横轴上: + + * 0是萼片长度 + * 1是萼片宽度 + * 2是花瓣长度 + * 3是花瓣宽度 + +垂直轴的尺寸以厘米为单位。 + +### 总结 + +以下是这个简单的Scikit-learn数据科学教程的完整代码。 + +``` +from sklearn import datasets +iris = datasets.load_iris() +print(iris.data) +print(iris.target) +print(iris.target_names) +import seaborn as sns +box_data = iris.data # 表示数据数组的变量 +box_target = iris.target # 表示标签数组的变量 +sns.boxplot(data = box_data,width=0.5,fliersize=5) +sns.set(rc={'figure.figsize':(2,15)}) +``` + +Scikit-learn是一个多功能的Python库,可用于高效完成数据科学项目。 + +如果您想了解更多信息,请查看[LiveEdu][5]上的教程,例如Andrey Bulezyuk关于使用Scikit-learn库创建[机器学习应用程序][6]的视频。 + +有什么评价或者疑问吗? 
欢迎在下面分享。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects + +作者:[Dr.Michael J.Garbade][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/drmjg +[1]: http://scikit-learn.org/stable/index.html +[2]: https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/ +[3]: https://en.wikipedia.org/wiki/Iris_flower_data_set +[4]: https://en.wikipedia.org/wiki/Box_plot +[5]: https://www.liveedu.tv/guides/data-science/ +[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/ \ No newline at end of file From 5d2e506e94539b1dfd6e7774fcff6c74db0be7e3 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 9 Oct 2018 08:52:13 +0800 Subject: [PATCH 283/736] translated --- ...tem Monitor Application Written In Rust.md | 80 ------------------- ...tem Monitor Application Written In Rust.md | 78 ++++++++++++++++++ 2 files changed, 78 insertions(+), 80 deletions(-) delete mode 100644 sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md create mode 100644 translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md diff --git a/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md deleted file mode 100644 index a75c1f3e9a..0000000000 --- a/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md +++ /dev/null @@ -1,80 +0,0 @@ -translating---geekpi - -Hegemon – A Modular System Monitor Application Written In Rust -====== - 
-![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png) - -When it comes to monitoring running processes in Unix-like systems, the most commonly used applications are **top** and **htop**, which is an enhanced version of top. My personal favorite is htop. However, the developers release a few alternatives to these applications every now and then. One such alternative to the top and htop utilities is **Hegemon**. It is a modular system monitor application written in the **Rust** programming language. - -Concerning the features of Hegemon, we can list the following: - - * Hegemon will monitor the usage of CPU, memory and swap. - * It monitors the system's temperature and fan speed. - * The update interval is adjustable. The default value is 3 seconds. - * We can reveal more detailed graphs and additional information by expanding the data streams. - * Unit tests - * Clean interface - * Free and open source. - - - -### Installing Hegemon - -Make sure you have installed **Rust 1.26** or a later version. To install Rust in your Linux distribution, refer to the following guide: - -[Install Rust Programming Language In Linux][2] - -Also, install the [libsensors][1] library. It is available in the default repositories of most Linux distributions. For example, you can install it on RPM-based systems such as Fedora using the following command: - -``` -$ sudo dnf install lm_sensors-devel -``` - -On Debian-based systems like Ubuntu and Linux Mint, it can be installed using this command: - -``` -$ sudo apt-get install libsensors4-dev -``` - -Once you have installed Rust and libsensors, install Hegemon using the command: - -``` -$ cargo install hegemon -``` - -Once hegemon is installed, start monitoring the running processes in your Linux system using the command: - -``` -$ hegemon -``` - -Here is sample output from my Arch Linux desktop. - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif) - -To exit, press **Q**. 
- - -Please be mindful that hegemon is still in its early development stage and it is not complete replacement for **top** command. There might be bugs and missing features. If you came across any bugs, report them in the project’s github page. The developer is planning to bring more features in the upcoming versions. So, keep an eye on this project. - -And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://github.com/lm-sensors/lm-sensors -[2]: https://www.ostechnix.com/install-rust-programming-language-in-linux/ diff --git a/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md new file mode 100644 index 0000000000..71aace4ce4 --- /dev/null +++ b/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md @@ -0,0 +1,78 @@ +Hegemon - 使用 Rust 编写的模块化系统监视程序 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png) + +在类 Unix 系统中监视运行进程时,最常用的程序是 **top** 和 top 的增强版 **htop**。我个人最喜欢的是 htop。但是,开发人员不时会发布这些程序的替代品。top 和 htop 工具的一个替代品是 **Hegemon**。它是使用 **Rust** 语言编写的模块化系统监视程序。 + +关于 Hegemon 的功能,我们可以列出以下这些: + + * Hegemon 会监控 CPU、内存和交换页的使用情况。 +  * 它监控系统的温度和风扇速度。 +  * 更新间隔时间可以调整。默认值为 3 秒。 +  * 我们可以通过扩展数据流来展示更详细的图表和其他信息。 +  * 单元测试 +  * 干净的界面 +  * 免费且开源。 + + + +### 安装 Hegemon + +确保已安装 **Rust 1.26** 或更高版本。要在 Linux 发行版中安装 Rust,请参阅以下指南: + +[Install Rust Programming Language In Linux][2] + +另外要安装 
[libsensors][1] 库。它在大多数 Linux 发行版的默认仓库中都有。例如,你可以使用以下命令将其安装在基于 RPM 的系统(如 Fedora)中: + +``` +$ sudo dnf install lm_sensors-devel +``` + +在像 Ubuntu、Linux Mint 这样的基于 Debian 的系统上,可以使用这个命令安装它: + +``` +$ sudo apt-get install libsensors4-dev +``` + +在安装 Rust 和 libsensors 后,使用命令安装 Hegemon: + +``` +$ cargo install hegemon +``` + +安装 hegemon 后,使用以下命令开始监视 Linux 系统中正在运行的进程: + +``` +$ hegemon +``` + +以下是 Arch Linux 桌面的示例输出。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif) + +要退出,请按 **Q**。 + + +请注意,hegemon 仍处于早期开发阶段,并不能完全取代 **top** 命令。它可能存在 bug 和功能缺失。如果你遇到任何 bug,请在项目的 github 页面中报告它们。开发人员计划在即将推出的版本中引入更多功能。所以,请关注这个项目。 + +就是这些了。希望这篇文章有用。还有更多的好东西。敬请关注! + +干杯! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://github.com/lm-sensors/lm-sensors +[2]: https://www.ostechnix.com/install-rust-programming-language-in-linux/ From ba79fbe9809a926087866c629780670ab13ba6e7 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 9 Oct 2018 08:59:15 +0800 Subject: [PATCH 284/736] translating --- sources/tech/20180412 A Desktop GUI Application For NPM.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180412 A Desktop GUI Application For NPM.md b/sources/tech/20180412 A Desktop GUI Application For NPM.md index 4eabc40672..5c87aad3c0 100644 --- a/sources/tech/20180412 A Desktop GUI Application For NPM.md +++ b/sources/tech/20180412 A Desktop GUI Application For NPM.md @@ -1,3 +1,5 @@ +translating---geekpi + A Desktop GUI Application For NPM ====== From ca17f21198361ab3e3c51b72ac967c57626fad76 Mon Sep 17 00:00:00 2001 From: darksun Date: 
Tue, 9 Oct 2018 09:30:03 +0800 Subject: [PATCH 285/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Play=20Windows=20?= =?UTF-8?q?games=20on=20Fedora=20with=20Steam=20Play=20and=20Proton?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...es on Fedora with Steam Play and Proton.md | 103 ++++++++++++++++++ 1 file changed, 103 insertions(+) create mode 100644 sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md diff --git a/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md new file mode 100644 index 0000000000..22b4cc8558 --- /dev/null +++ b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md @@ -0,0 +1,103 @@ +Play Windows games on Fedora with Steam Play and Proton +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/steam-proton-816x345.jpg) + +Some weeks ago, Steam [announced][1] a new addition to Steam Play with Linux support for Windows games using Proton, a fork from WINE. This capability is still in beta, and not all games work. Here are some more details about Steam and Proton. + +According to the Steam website, there are new features in the beta release: + + * Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support. + * DirectX 11 and 12 implementations are now based on Vulkan, which improves game compatibility and reduces performance impact. + * Fullscreen support has been improved. Fullscreen games seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop. + * Improved game controller support. Games automatically recognize all controllers supported by Steam. 
Expect more out-of-the-box controller compatibility than even the original version of the game. + * Performance for multi-threaded games has been greatly improved compared to vanilla WINE. + + + +### Installation + +If you’re interested in trying Steam with Proton out, just follow these easy steps. (Note that you can ignore the first steps to enable the Steam Beta if you have the [latest updated version of Steam installed][2]. In that case you no longer need Steam Beta to use Proton.) + +Open up Steam and log in to your account. This example screenshot shows support for only 22 games before enabling Proton. + +![][3] + +Now click on Steam option on top of the client. This displays a drop down menu. Then select Settings. + +![][4] + +Now the settings window pops up. Select the Account option and next to Beta participation, click on change. + +![][5] + +Now change None to Steam Beta Update. + +![][6] + +Click on OK and a prompt asks you to restart. + +![][7] + +Let Steam download the update. This can take a while depending on your internet speed and computer resources. + +![][8] + +After restarting, go back to the Settings window. This time you’ll see a new option. Make sure the check boxes for Enable Steam Play for supported titles, Enable Steam Play for all titles and Use this tool instead of game-specific selections from Steam are enabled. The compatibility tool should be Proton. + +![][9] + +The Steam client asks you to restart. Do so, and once you log back into your Steam account, your game library for Linux should be extended. + +![][10] + +### Installing a Windows game using Steam Play + +Now that you have Proton enabled, install a game. Select the title you want and you’ll find the process is similar to installing a normal game on Steam, as shown in these screenshots. + +![][11] + +![][12] + +![][13] + +![][14] + +After the game is done downloading and installing, you can play it. 
+ +![][15] + +![][16] + +Some games may be affected by the beta nature of Proton. The game in this example, Chantelise, had no audio and a low frame rate. Keep in mind this capability is still in beta and Fedora is not responsible for results. If you’d like to read further, the community has created a [Google doc][17] with a list of games that have been tested. + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/play-windows-games-steam-play-proton/ + +作者:[Francisco J. Vergara Torres][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/patxi/ +[1]: https://steamcommunity.com/games/221410/announcements/detail/1696055855739350561 +[2]: https://fedoramagazine.org/third-party-repositories-fedora/ +[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-300x197.png +[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-300x169.png +[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-300x196.png +[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4-300x272.png +[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6-300x237.png +[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7-300x126.png +[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10-300x237.png +[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-300x196.png +[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-300x196.png +[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-300x195.png +[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-300x196.png +[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-300x195.png +[15]: 
https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-300x169.png
+[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-300x169.png
+[17]: https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/edit#gid=1003113831

From 7137fce86dc0f40b371ab6e8e7d4615a79815969 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Oct 2018 09:32:09 +0800
Subject: [PATCH 286/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Terminalizer=20?=
 =?UTF-8?q?=E2=80=93=20A=20Tool=20To=20Record=20Your=20Terminal=20And=20Ge?=
 =?UTF-8?q?nerate=20Animated=20Gif=20Images?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...rminal And Generate Animated Gif Images.md | 171 ++++++++++++++++++
 1 file changed, 171 insertions(+)
 create mode 100644 sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md

diff --git a/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md
new file mode 100644
index 0000000000..26d1941cc1
--- /dev/null
+++ b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md
@@ -0,0 +1,171 @@
+Terminalizer – A Tool To Record Your Terminal And Generate Animated Gif Images
+======
+Recording terminal sessions is a familiar topic for most of us, and we have already written many articles about it, so we won’t go into detailed background here.
+
+The script command is one of the standard tools to record Linux terminal sessions. Today we are going to discuss a similar kind of tool called Terminalizer.
+
+This tool will help us record a user’s terminal activity and extract other useful information from the recording.
+
+### What Is Terminalizer
+
+Terminalizer allows users to record their terminal activity and generate animated gif images from it. It’s a highly customizable CLI tool that also lets users share a recording file through a link to an online web player.
+
+**Suggested Read :**
+**(#)** [Script – A Simple Command To Record Your Terminal Session Activity][1]
+**(#)** [Automatically Record/Capture All Users Terminal Sessions Activity In Linux][2]
+**(#)** [Teleconsole – A Tool To Share Your Terminal Session Instantly To Anyone In Seconds][3]
+**(#)** [tmate – Instantly Share Your Terminal Session To Anyone In Seconds][4]
+**(#)** [Peek – Create a Animated GIF Recorder in Linux][5]
+**(#)** [Kgif – A Simple Shell Script to Create a Gif File from Active Window][6]
+**(#)** [Gifine – Quickly Create An Animated GIF Video In Ubuntu/Debian][7]
+
+There is no official distribution package for this utility, but we can easily install it using Node.js.
+
+### How To Install Node.js in Linux
+
+Node.js can be installed in multiple ways. Here, we are going to show you the standard method.
+
+For Ubuntu/LinuxMint, use [APT-GET Command][8] or [APT Command][9] to install Node.js:
+
+```
+$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
+$ sudo apt-get install -y nodejs
+
+```
+
+For Debian, use [APT-GET Command][8] or [APT Command][9] to install Node.js:
+
+```
+# curl -sL https://deb.nodesource.com/setup_8.x | bash -
+# apt-get install -y nodejs
+
+```
+
+For **`RHEL/CentOS`**, use [YUM Command][10] to install Node.js:
+
+```
+$ sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
+$ sudo yum install epel-release
+$ sudo yum -y install nodejs
+
+```
+
+For **`Fedora`**, use [DNF Command][11] to install Node.js:
+
+```
+$ sudo dnf install nodejs
+
+```
+
+For **`Arch Linux`**, use [Pacman Command][12] to install Node.js:
+
+```
+$ sudo pacman -S nodejs npm
+
+```
+
+For **`openSUSE`**, use [Zypper Command][13] to install Node.js:
+
+```
+$ sudo zypper in nodejs6
+
+```
+
+### How to Install Terminalizer
+
+As you have already installed the prerequisite package, Node.js, it’s now time to install Terminalizer on your system. Simply run the below npm command to install Terminalizer.
+
+```
+$ sudo npm install -g terminalizer
+
+```
+
+### How to Use Terminalizer
+
+To record your session activity using Terminalizer, just run the following command. Once the recording has started, play around in the terminal and finally hit `CTRL+D` to exit and save the recording.
+
+```
+# terminalizer record 2g-session
+
+defaultConfigPath
+The recording session is started
+Press CTRL+D to exit and save the recording
+
+```
+
+This will save your recorded session as a YAML file; in this case, my file is named 2g-session.yml.
+![][15]
+
+Just type a few commands to verify the recording and finally hit `CTRL+D` to exit the current capture. When you hit `CTRL+D` on the terminal, you will get the below output.
+
+```
+# logout
+Successfully Recorded
+The recording data is saved into the file:
+/home/daygeek/2g-session.yml
+You can edit the file and even change the configurations.
+
+```
+
+![][16]
+
+### How to Play the Recorded File
+
+Use the below command format to play your recorded YAML file. Make sure to input your own recording file name instead of ours.
+
+```
+# terminalizer play 2g-session
+
+```
+
+To render a recording file as an animated gif image, run:
+
+```
+# terminalizer render 2g-session
+
+```
+
+`Note:` The below two commands are not implemented yet in the current version and will be available in the next version.
+
+If you would like to share your recording with others, upload the recording file and get a link for an online player, then share that link.
+ +``` +terminalizer share 2g-session + +``` + +Generate a web player for a recording file + +``` +# terminalizer generate 2g-session + +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/terminalizer-a-tool-to-record-your-terminal-and-generate-animated-gif-images/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[1]: https://www.2daygeek.com/script-command-record-save-your-terminal-session-activity-linux/ +[2]: https://www.2daygeek.com/automatically-record-all-users-terminal-sessions-activity-linux-script-command/ +[3]: https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/ +[4]: https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/ +[5]: https://www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/ +[6]: https://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/ +[7]: https://www.2daygeek.com/gifine-create-animated-gif-vedio-recorder-linux-mint-debian-ubuntu/ +[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[13]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[14]: 
data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[15]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-record-2g-session-1.gif +[16]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-play-2g-session.gif From 9c1ba66792f1589c6f0ca7c93de97c7d7aa75075 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 09:35:03 +0800 Subject: [PATCH 287/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20KeeWeb=20?= =?UTF-8?q?=E2=80=93=20An=20Open=20Source,=20Cross=20Platform=20Password?= =?UTF-8?q?=20Manager?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Source, Cross Platform Password Manager.md | 110 ++++++++++++++++++ 1 file changed, 110 insertions(+) create mode 100644 sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md diff --git a/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md new file mode 100644 index 0000000000..a9b20ac54d --- /dev/null +++ b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md @@ -0,0 +1,110 @@ +KeeWeb – An Open Source, Cross Platform Password Manager +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/keeweb-720x340.png) + +If you’ve been using the internet for any amount of time, chances are, you have a lot of accounts on a lot of websites. All of those accounts must have passwords, and you have to remember all those passwords. Either that, or write them down somewhere. Writing down passwords on paper may not be secure, and remembering them won’t be practically possible if you have more than a few passwords. This is why Password Managers have exploded in popularity in the last few years. A password Manager is like a central repository where you store all your passwords for all your accounts, and you lock it with a master password. 
With this approach, the only thing you need to remember is the Master password. + +**KeePass** is one such open source password manager. KeePass has an official client, but it’s pretty barebones. But there are a lot of other apps, both for your computer and for your phone, that are compatible with the KeePass file format for storing encrypted passwords. One such app is **KeeWeb**. + +KeeWeb is an open source, cross platform password manager with features like cloud sync, keyboard shortcuts and plugin support. KeeWeb uses Electron, which means it runs on Windows, Linux, and Mac OS. + +### Using KeeWeb Password Manager + +When it comes to using KeeWeb, you actually have 2 options. You can either use KeeWeb webapp without having to install it on your system and use it on the fly or simply install KeeWeb client in your local system. + +**Using the KeeWeb webapp** + +If you don’t want to bother installing a desktop app, you can just go to [**https://app.keeweb.info/**][1] and use it as a password manager. + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-webapp.png) + +It has all the features of the desktop app. Obviously, this requires you to be online when using the app. + +**Installing KeeWeb on your Desktop** + +If you like the comfort and offline availability of using a desktop app, you can also install it on your desktop. + +If you use Ubuntu/Debian, you can just go to [**releases pages**][2] and download KeeWeb latest **.deb** file, which you can install via this command: + +``` +$ sudo dpkg -i KeeWeb-1.6.3.linux.x64.deb + +``` + +If you’re on Arch, it is available in the [**AUR**][3], so you can install using any helper programs like [**Yay**][4]: + +``` +$ yay -S keeweb + +``` + +Once installed, launch it from Menu or application launcher. 
This is how the default KeeWeb interface looks:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-desktop-client.png)
+
+### General Layout
+
+KeeWeb basically shows a list of all your passwords, along with all your tags to the left. Clicking on a tag will filter the list to only passwords of that tag. To the right, all the fields for the selected account are shown. You can set username, password, website, or just add a custom note. You can even create your own fields and mark them as secure fields, which is great when storing things like credit card information. You can copy passwords by just clicking on them. KeeWeb also shows the date when an account was created and modified. Deleted passwords are kept in the trash, where they can be restored or permanently deleted.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-general-layout.png)
+
+### KeeWeb Features
+
+**Cloud Sync**
+
+One of the main features of KeeWeb is the support for a wide variety of remote locations and cloud services.
+Other than loading local files, you can open files from:
+
+ 1. WebDAV Servers
+ 2. Google Drive
+ 3. Dropbox
+ 4. OneDrive
+
+
+
+This means that if you use multiple computers, you can synchronize the password files between them, so you don’t have to worry about not having all the passwords available on all devices.
+
+**Password Generator**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-password-generator.png)
+
+Along with encrypting your passwords, it’s also important to create new, strong passwords for every single account. This means that if one of your accounts gets hacked, the attacker won’t be able to get into your other accounts using the same password.
+
+To achieve this, KeeWeb has a built-in password generator that lets you generate a custom password of a specific length, including specific types of characters.
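+The generator is built into the KeeWeb app itself, but the underlying idea is easy to illustrate. As a rough, hypothetical sketch (this is not KeeWeb’s actual code; the function name and character pools are my own), such a generator draws each character from the enabled classes using a cryptographically secure random source:

```python
# Illustrative sketch only -- not KeeWeb's implementation.
# Generates a password of a given length from the selected
# character classes, using a cryptographically secure source.
import secrets
import string

def generate_password(length=16, upper=True, lower=True,
                      digits=True, symbols=True):
    pool = ""
    if upper:
        pool += string.ascii_uppercase
    if lower:
        pool += string.ascii_lowercase
    if digits:
        pool += string.digits
    if symbols:
        pool += "!@#$%^&*()-_=+"
    if not pool:
        raise ValueError("at least one character class must be enabled")
    # secrets.choice draws from the OS CSPRNG, unlike random.choice
    return "".join(secrets.choice(pool) for _ in range(length))

print(generate_password(20))
```

+The key design point is using the `secrets` module rather than `random`: the latter is a predictable pseudo-random generator and is not suitable for passwords.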
+ +**Plugins** + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-plugins.png) + +You can extend KeeWeb functionality with plugins. Some of these plugins are translations for other languages, while others add new functionality, like checking **** for exposed passwords. + +**Local Backups** + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-backup.png) + +Regardless of where your password file is stored, you should probably keep local backups of the file on your computer. Luckily, KeeWeb has this feature built-in. You can backup to a specific path, and set it to backup periodically, or just whenever the file is changed. + + +### Verdict + +I have actually been using KeeWeb for several years now. It completely changed the way I store my passwords. The cloud sync is basically the feature that makes it a done deal for me. I don’t have to worry about keeping multiple unsynchronized files on multiple devices. If you want a great looking password manager that has cloud sync, KeeWeb is something you should look at. 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/ + +作者:[EDITOR][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[1]: https://app.keeweb.info/ +[2]: https://github.com/keeweb/keeweb/releases/latest +[3]: https://aur.archlinux.org/packages/keeweb/ +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ From 03655e66325e40f05864203e774fe620a4eb8f9b Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 09:36:12 +0800 Subject: [PATCH 288/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Interview=20With?= =?UTF-8?q?=20Peter=20Ganten,=20CEO=20of=20Univention=20GmbH?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...th Peter Ganten, CEO of Univention GmbH.md | 97 +++++++++++++++++++ 1 file changed, 97 insertions(+) create mode 100644 sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md diff --git a/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md b/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md new file mode 100644 index 0000000000..5a0c1aabdd --- /dev/null +++ b/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md @@ -0,0 +1,97 @@ +Interview With Peter Ganten, CEO of Univention GmbH +====== +I have been asking the Univention team to share the behind-the-scenes story of [**Univention**][1] for a couple of months. Finally, today we got the interview of **Mr. Peter H. Ganten** , CEO of Univention GmbH. 
Despite his busy schedule, in this interview, he shares what he thinks of the Univention project and its impact on open source ecosystem, what open source developers and companies will need to do to keep thriving and what are the biggest challenges for open source projects. + +**OSTechNix: What’s your background and why have you founded Univention?** + +**Peter Ganten:** I studied physics and psychology. In psychology I was a research assistant and coded evaluation software. I realized how important it is that results have to be disclosed in order to verify or falsify them. The same goes for the code that leads to the results. This brought me into contact with Open Source Software (OSS) and Linux. + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/peter-ganten-interview.jpg) + +I was a kind of technical lab manager and I had the opportunity to try out a lot, which led to my book about Debian. That was still in the New Economy era where the first business models emerged on how to make money with Open Source. When the bubble burst, I had the plan to make OSS a solid business model without venture capital but with Hanseatic business style – seriously, steadily, no bling bling. + +**What were the biggest challenges at the beginning?** + +When I came from the university, the biggest challenge clearly was to gain entrepreneurial and business management knowledge. I quickly learned that it’s not about Open Source software as an end to itself but always about customer value, and the benefits OSS offers its customers. We all had to learn a lot. + +In the beginning, we expected that Linux on the desktop would become established in a similar way as Linux on the server. However, this has not yet been proven true. The replacement has happened with Android and the iPhone. Our conclusion then was to change our offerings towards ID management and enterprise servers. + +**Why does UCS matter? 
And for whom does it make sense to use it?**
+
+There is cool OSS in all areas, but many organizations are not capable of combining it all and making it manageable. For the basic infrastructure (Windows desktops, users, user rights, roles, ID management, apps) we need a central instance to which groupware, CRM etc. is connected. Without Univention this would have to be laboriously assembled and maintained manually. This is possible for very large companies, but far too complex for many other organizations.
+
+[**UCS**][2] can be used out of the box and is scalable. That’s why it’s becoming more and more popular – more than 10,000 organizations are using UCS already today.
+
+**Who are your users and most important clients? What do they love most about UCS?**
+
+The Core Edition is free of charge and used by organizations from all sectors and industries such as associations, micro-enterprises, universities or large organizations with thousands of users. In the enterprise environment, where Long Term Servicing (LTS) and professional support are particularly important, we have organizations ranging in size from 30-50 users to several thousand users. One of the target groups is the education system in Germany. UCS is used in many large cities and within their school administrations, for example, in Cologne, Hannover, Bremen, Kassel and in several federal states. They are looking for manageable IT and apps for schools. That’s what we offer, because we can guarantee these authorities full control over their users’ identities.
+
+Also, more and more cloud service providers and MSPs want to use UCS to deliver a selection of cloud-based app solutions.
+
+**Is UCS 100% Open Source? If so, how can you run a profitable business selling it?**
+
+Yes, UCS is 100% Open Source, every line, the whole code is OSS. You can download and use UCS Core Edition for **FREE!**
+
+We know that in large, complex organizations, vendor support and liability is needed for LTS, SLAs, and we offer that with our Enterprise subscriptions and consulting services. We don’t offer these in the Core Edition.
+
+**And what are you giving back to the OS community?**
+
+A lot. We are involved in the Debian team and co-finance the LTS maintenance for Debian. For important OS components in UCS like [**OpenLDAP**][3], Samba or KVM we co-finance the development or have co-developed them ourselves. We make it all freely available.
+
+We are also involved on the political level in ensuring that OSS is used. We are engaged, for example, in the [**Free Software Foundation Europe (FSFE)**][4] and the [**German Open Source Business Alliance**][5], of which I am the chairman. We are working hard to make OSS more successful.
+
+**How can I get started with UCS?**
+
+It’s easy to get started with the Core Edition, which, like the Enterprise Edition, has an App Center and can be easily installed on your own hardware or as an appliance in a virtual machine. Just [**download Univention ISO**][6] and install it as described in the below link.
+
+Alternatively, you can try the [**UCS Online Demo**][7] to get a first impression of Univention Corporate Server without actually installing it on your system.
+
+**What do you think are the biggest challenges for Open Source?**
+
+There is a certain attitude you can see over and over again even in bigger projects: OSS alone is viewed as an almost mandatory prerequisite for a good, sustainable, secure and trustworthy IT solution – but just having decided to use OSS is no guarantee for success. You have to carry out projects professionally and cooperate with the manufacturers. A danger is that in complex projects people think: “Oh, OSS is free, I just put it all together by myself”. But normally you do not have the know-how to successfully implement complex software solutions. You would never proceed like this with Closed Source. There people think: “Oh, the software costs $3 million, so it’s okay if I have to spend another $300,000 on consultants.”
+
+With OSS this is different. If such projects fail and leave burnt ground behind, we have to explain again and again that the failure of such projects is not due to the nature of OSS but to its poor implementation and organization in a specific project: You have to conclude reasonable contracts and involve partners as in the proprietary world, but you’ll gain a better solution.
+
+Another challenge: We must stay innovative, move forward, attract new people who are enthusiastic about working on projects. That’s sometimes a challenge. For example, there are a number of proprietary cloud services that are good but lead to extremely high dependency. There are approaches to alternatives in OSS, but no suitable business models yet. So it’s hard to find and fund developers. For example, I can think of Evernote and OneNote, for which there is no reasonable OSS alternative.
+
+**And what will the future bring for Univention?**
+
+I don’t have a crystal ball, but we are extremely optimistic. We see a very high growth potential in the education market. More OSS is being made in the public sector, because we have repeatedly experienced the dead ends that can be reached if we solely rely on Closed Source.
+
+Overall, we will continue our organic growth at double-digit rates year after year.
+
+UCS and its core functionalities of identity management, infrastructure management and app center will increasingly be offered and used from the cloud as a managed service. We will support our technology in this direction, e.g., through containers, so that a hypervisor or bare metal is not always necessary for operation.
+
+**You have been the CEO of Univention for a long time. What keeps you motivated?**
+
+I have been the CEO of Univention for more than 16 years now. My biggest motivation is to realize that something is moving. That we offer the better way for IT. That the people who go this way with us are excited to work with us. I go home satisfied in the evening (of course not every evening). It’s totally cool to work with the team I have. It motivates and pushes me whenever I need it.
+
+I’m a techie and nerd at heart, I enjoy dealing with technology. So I’m totally happy at this place and I’m grateful to the world that I can do whatever I want every day. Not everyone can say that.
+
+**Who gives you inspiration?**
+
+My employees, the customers and the Open Source projects. The exchange with other people.
+
+The motivation behind everything is that we want to make sure that mankind will be able to influence and change the IT that surrounds us today and in the future just the way we want it and think is good. We want to make a contribution to this. That is why Univention is there. That is important to us every day.
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/interview-with-peter-ganten-ceo-of-univention-gmbh/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/introduction-univention-corporate-server/ +[2]: https://www.univention.com/products/ucs/ +[3]: https://www.ostechnix.com/redhat-and-suse-announced-to-withdraw-support-for-openldap/ +[4]: https://fsfe.org/ +[5]: https://osb-alliance.de/ +[6]: https://www.univention.com/downloads/download-ucs/ +[7]: https://www.univention.com/downloads/ucs-online-demo/ From 5b6ded10b26c9ef1af920c8475c5aebe4e347bd3 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 09:38:38 +0800 Subject: [PATCH 289/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Taking=20notes=20?= =?UTF-8?q?with=20Laverna,=20a=20web-based=20information=20organizer?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...erna, a web-based information organizer.md | 128 ++++++++++++++++++ 1 file changed, 128 insertions(+) create mode 100644 sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md diff --git a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md new file mode 100644 index 0000000000..27616a9f6e --- /dev/null +++ b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md @@ -0,0 +1,128 @@ +Taking notes with Laverna, a web-based information organizer +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/notebook-writing-pen.jpg?itok=uA3dCfu_) + +I don’t know anyone who 
doesn’t take notes. Most of the people I know use an online note-taking application like Evernote, Simplenote, or Google Keep.
+
+All of those are good tools, but they’re proprietary. And you have to wonder about the privacy of your information—especially in light of [Evernote’s great privacy flip-flop of 2016][1]. If you want more control over your notes and your data, you need to turn to an open source tool—preferably one that you can host yourself.
+
+And there are a number of good [open source alternatives to Evernote][2]. One of these is Laverna. Let’s take a look at it.
+
+### Getting Laverna
+
+You can [host Laverna yourself][3] or use the [web version][4].
+
+Since I have nowhere to host the application, I’ll focus here on using the web version of Laverna. Aside from the installation and setting up storage (more on that below), I’m told that the experience with a self-hosted version of Laverna is the same.
+
+### Setting up Laverna
+
+To start using Laverna right away, click the **Start using now** button on the front page of [Laverna.cc][5].
+
+On the welcome screen, click **Next**. You’ll be asked to enter an encryption password to secure your notes and get to them when you need to. You’ll also be asked to choose a way to synchronize your notes. I’ll discuss synchronization in a moment, so just enter a password and click **Next**.
+
+![](https://opensource.com/sites/default/files/uploads/laverna-set-password.png)
+
+When you log in, you'll see a blank canvas:
+
+![](https://opensource.com/sites/default/files/uploads/laverna-main-window.png)
+
+### Storing your notes
+
+Before diving into how to use Laverna, let’s walk through how to store your notes.
+
+Out of the box, Laverna stores your notes in your browser’s cache. The problem with that is when you clear the cache, you lose your notes.
You can also store your notes using: + + * Dropbox, a popular and proprietary web-based file syncing and storing service + * [remoteStorage][6], which offers a way for web applications to store information in the cloud. + + + +Using Dropbox is convenient, but it’s proprietary. There are also concerns about [privacy and surveillance][7]. Laverna encrypts your notes before saving them, but not all encryption is foolproof. Even if you don’t have anything illegal or sensitive in your notes, they’re no one’s business but your own. + +remoteStorage, on the other hand, is kind of techie to set up. There are few hosted storage services out there. I use [5apps][8]. + +To change how Laverna stores your notes, click the hamburger menu in the top-left corner. Click **Settings** and then **Sync**. + +![](https://opensource.com/sites/default/files/uploads/laverna-sync.png) + +Select the service you want to use, then click **Save**. After that, click the left arrow in the top-left corner. You’ll be asked to authorize Laverna with the service you chose. + +### Using Laverna + +With that out of the way, let’s get down to using Laverna. Create a new note by clicking the **New Note** icon, which opens the note editor: + +![](https://opensource.com/sites/default/files/uploads/laverna-new-note.png) + +Type a title for your note, then start typing the note in the left pane of the editor. The right pane displays a preview of your note: + +![](https://opensource.com/sites/default/files/uploads/laverna-writing-note.png) + +You can format your notes using Markdown; add formatting using your keyboard or the toolbar at the top of the window. + +You can also embed an image or file from your computer into a note, or link to one on the web. When you embed an image, it’s stored with your note. + +When you’re done, click **Save**. + +### Organizing your notes + +Like some other note-taking tools, Laverna lists the last note that you created or edited at the top. 
If you have a lot of notes, it can take a bit of work to find the one you're looking for. + +To better organize your notes, you can group them into notebooks, where you can quickly filter them based on a topic or a grouping. + +When you’re creating or editing a note, you can select a notebook from the **Select notebook** list in the top-left corner of the window. If you don’t have any notebooks, select **Add a new notebook** from the list and type the notebook’s name. + +You can also make that notebook a child of another notebook. Let’s say, for example, you maintain three blogs. You can create a notebook called **Blog Post Notes** and name children for each blog. + +To filter your notes by notebook, click the hamburger menu, followed by the name of a notebook. Only the notes in the notebook you choose will appear in the list. + +![](https://opensource.com/sites/default/files/uploads/laverna-notebook.png) + +### Using Laverna across devices + +I use Laverna on my laptop and on an eight-inch tablet running [LineageOS][9]. Getting the two devices to use the same storage and display the same notes takes a little work. + +First, you’ll need to export your settings. Log into wherever you’re using Laverna and click the hamburger menu. Click **Settings** , then **Import & Export**. Under **Settings** , click **Export settings**. Laverna saves a file named laverna-settings.json to your device. + +Copy that file to the other device or devices on which you want to use Laverna. You can do that by emailing it to yourself or by syncing the file across devices using an application like [ownCloud][10] or [Nextcloud][11]. + +On the other device, click **Import** on the splash screen. Otherwise, click the hamburger menu and then **Settings > Import & Export**. Click **Import settings**. Find the JSON file with your settings, click **Open** and then **Save**. + +Laverna will ask you to: + + * Log back in using your password. + * Register with the storage service you’re using. 
+ + + +Repeat this process for each device that you want to use. It’s cumbersome, I know. I’ve done it. You should need to do it only once per device, though. + +### Final thoughts + +Once you set up Laverna, it’s easy to use and has just the right features for what I need to do. I’m hoping that the developers can expand the storage and syncing options to include open source applications like Nextcloud and ownCloud. + +While Laverna doesn’t have all the bells and whistles of a note-taking application like Evernote, it does a great job of letting you take and organize your notes. The fact that Laverna is open source and supports Markdown are two additional great reasons to use it. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/taking-notes-laverna + +作者:[Scott Nesbitt][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[1]: https://blog.evernote.com/blog/2016/12/15/evernote-revisits-privacy-policy/ +[2]: https://opensource.com/life/16/8/open-source-alternatives-evernote +[3]: https://github.com/Laverna/laverna +[4]: https://laverna.cc/ +[5]: http://laverna.cc/ +[6]: https://remotestorage.io/ +[7]: https://www.zdnet.com/article/dropbox-faces-questions-over-claims-of-improper-data-sharing/ +[8]: https://5apps.com/storage/beta +[9]: https://lineageos.org/ +[10]: https://owncloud.com/ +[11]: https://nextcloud.com/ From 57eeb7ba3a93dc1092145858a2b0c447ed50d9e0 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 09:44:48 +0800 Subject: [PATCH 290/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Python=20at=20the?= =?UTF-8?q?=20pump:=20A=20script=20for=20filling=20your=20gas=20tank?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- 
...ump- A script for filling your gas tank.md | 101 ++++++++++++++++++
 1 file changed, 101 insertions(+)
 create mode 100644 sources/tech/20181008 Python at the pump- A script for filling your gas tank.md

diff --git a/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md b/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md
new file mode 100644
index 0000000000..493a906b3f
--- /dev/null
+++ b/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md
@@ -0,0 +1,101 @@
+Python at the pump: A script for filling your gas tank
+======
+Here's how I used Python to discover a strategy for cost-effective fill-ups.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB)
+
+I recently began driving a car that had traditionally used premium gas (93 octane). According to the maker, though, it requires only 91 octane. The thing is, in the US, you can buy only 87, 89, or 93 octane. Where I live, gas prices jump 30 cents per gallon from one grade to the next, so premium costs 60 cents more than regular. So why not try to save some money?
+
+It’s easy enough to wait until the gas gauge shows that the tank is half full and then fill it with 89 octane, and there you have 91 octane. But it gets tricky to know what to do next—half a tank of 91 octane plus half a tank of 93 ends up being 92, and where do you go from there? You can make continuing calculations, but they get increasingly messy. This is where Python came into the picture.
+
+I wanted to come up with a simple scheme in which I could fill the tank at some level with 93 octane, then at the same or some other level with 89 octane, with the primary goal of never letting the final mixture drop below 91 octane. What I needed was a recurring calculation that uses the octane value from the preceding fill-up.
I suppose some polynomial equation would solve this, but in Python, this sounds like a loop.
+
+```
+#!/usr/bin/env python3
+# octane.py
+
+o = 93.0        # the current octane mixture in the tank
+newgas = 93.0   # the octane of the last fill-up
+
+for i in range(1, 21):          # 20 iterations (trips to the pump)
+    if newgas == 89.0:          # if the last fill-up was with 89 octane,
+        newgas = 93.0           # switch to 93
+    else:                       # if it wasn't 89 octane, switch to that
+        newgas = 89.0
+    o = newgas/2 + o/2          # fill when the gauge reads 1/2 full
+    print(str(i) + ': ' + str(o))
+```
+
+As you can see, I am initializing the variable o (the current octane mixture in the tank) and the variable newgas (what I last filled the tank with) at the same value of 93. The loop then repeats 20 times, for 20 fill-ups, switching between 89 and 93 octane on every other trip to the station. (The output below was produced under Python 2, whose str() trims floats to 12 significant digits; under Python 3 you will see a few more digits on the later lines, but the same values.)
+
+```
+1: 91.0
+2: 92.0
+3: 90.5
+4: 91.75
+5: 90.375
+6: 91.6875
+7: 90.34375
+8: 91.671875
+9: 90.3359375
+10: 91.66796875
+11: 90.333984375
+12: 91.6669921875
+13: 90.3334960938
+14: 91.6667480469
+15: 90.3333740234
+16: 91.6666870117
+17: 90.3333435059
+18: 91.6666717529
+19: 90.3333358765
+20: 91.6666679382
+```
+
+This shows that I probably need only 10 or 15 loops to see stabilization. It also shows that, soon enough, I undershoot my 91 octane target. It’s also interesting to see that the alternating mixture values stabilize, and it turns out this happens with any scheme in which you choose the same amounts each time. In fact, it is true even if the fill-up amounts for 89 and 93 octane are different.
+
+So at this point, I began playing with fractions, reasoning that I would probably need a bigger 93 octane fill-up than the 89 fill-up. I also didn’t want to make frequent trips to the gas station.
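To experiment with fractions, it helps to parameterize the fill amounts. Here is a small sketch of my own (the `simulate` helper and its argument names are not part of the original script) that reports where any fixed alternating scheme settles:

```python
# Sketch: generalize the loop above so the fill fractions are parameters.
# frac93 is the fraction of the tank replaced with 93 octane;
# frac89 is the fraction replaced with 89 octane on the alternate trips.
def simulate(frac93, frac89, n=40):
    o = 93.0                       # start from a full tank of 93 octane
    history = []
    for i in range(n):
        if i % 2 == 0:             # odd-numbered trips: fill with 89
            o = frac89 * 89.0 + (1.0 - frac89) * o
        else:                      # even-numbered trips: fill with 93
            o = frac93 * 93.0 + (1.0 - frac93) * o
        history.append(o)
    return history[-2], history[-1]   # the two steady-state values

# Half-tank fill-ups settle at the pair from the table above:
print(simulate(0.5, 0.5))          # roughly (90.333, 91.667)
# A 3/4-tank fill-up of 93 alternating with a 5/12 fill-up of 89
# keeps both steady-state values above 91:
print(simulate(0.75, 5.0 / 12.0))  # roughly (91.049, 92.512)
```

Because each fill-up is just a weighted average, any fixed pair of fractions converges to a two-value cycle; the loop only has to run long enough for the early transient to die out.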
What I ended up with (which seemed pretty good to me) was to wait until the tank was about 7⁄12 full and fill it with 89 octane, then wait until it was ¼ full and fill it with 93 octane.
+
+Here is what the changes in the loop look like:
+
+```
+    if newgas == 89.0:            # last fill-up was 89 octane,
+                                  # so switch to 93
+        newgas = 93.0
+        o = 3*newgas/4 + o/4      # fill when the gauge reads 1/4 full
+    else:                         # otherwise, switch to 89
+        newgas = 89.0
+        o = 5*newgas/12 + 7*o/12  # fill when the gauge reads 7/12 full
+```
+
+Here are the numbers, starting with the tenth fill-up:
+
+```
+10: 92.5122272978
+11: 91.0487992571
+12: 92.5121998143
+13: 91.048783225
+14: 92.5121958062
+15: 91.048780887
+```
+
+As you can see, this keeps the final octane very slightly above 91 all the time. Of course, my gas gauge isn’t marked in twelfths, but 7⁄12 is slightly less than 5⁄8, and I can handle that.
+
+An alternative simple solution might have been to run the tank to empty and fill it with 93 octane, then the next time only half-fill it with 89—and perhaps this will be my default plan. Personally, I’m not a fan of running the tank all the way down, since that isn’t always convenient. On the other hand, it could easily work on a long trip. And sometimes I buy gas because of a sudden drop in prices. So in the end, this scheme is one of a series of options I can consider.
+
+The most important thing for Python users: Don’t code while driving!
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/python-gas-pump + +作者:[Greg Pittman][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/greg-p From 386b3d4c2de5d05c90c06bd911e155ab572fa3f3 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 09:46:31 +0800 Subject: [PATCH 291/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20use=20?= =?UTF-8?q?the=20SSH=20and=20SFTP=20protocols=20on=20your=20home=20network?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...and SFTP protocols on your home network.md | 76 +++++++++++++++++++ 1 file changed, 76 insertions(+) create mode 100644 sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md diff --git a/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md b/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md new file mode 100644 index 0000000000..5c24e87901 --- /dev/null +++ b/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md @@ -0,0 +1,76 @@ +How to use the SSH and SFTP protocols on your home network +====== + +Use the SSH and SFTP protocols to access other devices, efficiently and securely transfer files, and more. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab) + +Years ago, I decided to set up an extra computer (I always have extra computers) so that I could access it from work to transfer files I might need. To do this, the basic first step is to have your ISP assign a fixed IP address. 
+ +The not-so-basic but much more important next step is to set up your accessible system safely. In this particular case, I was planning to access it only from work, so I could restrict access to that IP address. Even so, you want to use all possible security features. What is amazing—and scary—is that as soon as you set this up, people from all over the world will immediately attempt to access your system. You can discover this by checking the logs. I presume there are bots constantly searching for open doors wherever they can find them. + +Not long after I set up my computer, I decided my access was more a toy than a need, so I turned it off and gave myself one less thing to worry about. Nonetheless, there is another use for SSH and SFTP inside your home network, and it is more or less already set up for you. + +One requirement, of course, is that the other computer in your home must be turned on, although it doesn’t matter whether someone is logged on or not. You also need to know its IP address. There are two ways to find this out. One is to get access to the router, which you can do through a browser. Typically, its address is something like **192.168.1.254**. With some searching, it should be easy enough to find out what is currently on and hooked up to the system by eth0 or WiFi. What can be challenging is recognizing the computer you’re interested in. + +I find it easier to go to the computer in question, bring up a shell, and type: + +``` +ifconfig + +``` + +This spits out a lot of information, but the bit you want is right after `inet` and might look something like **192.168.1.234**. After you find that, go back to the client computer you want to access this host, and on the command line, type: + +``` +ssh gregp@192.168.1.234 + +``` + +For this to work, **gregp** must be a valid user on that system. You will then be asked for his password, and if you enter it correctly, you will be connected to that other computer in a shell environment. 
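One caveat of my own, not from the original article: many newer distributions no longer install `ifconfig` by default. If it's missing, the `ip` command from the iproute2 package reports the same information:

```shell
# ifconfig's modern replacement; look for the "inet" line under
# your wired or WiFi interface (e.g., inet 192.168.1.234/24)
ip -4 addr show
```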
I confess that I don’t use SSH in this way very often. I have used it at times to run `dnf` to upgrade a computer other than the one I’m sitting at. Usually, I use SFTP:
+
+```
+sftp gregp@192.168.1.234
+```
+
+because I have a greater need for an easy method of transferring files from one computer to another. It’s certainly more convenient and less time-consuming than using a USB stick or an external drive.
+
+Once you’re connected, the two basic commands for SFTP are `get`, to receive files from the host, and `put`, to send files to the host. Before I connect, I usually move to the directory on my client where I want to save files from the host, or which holds the files I will send to the host. When you connect, you will be in the top-level directory—in this example, **home/gregp**. Once connected, you can then use `cd` just as you would on your client, except now you’re changing your working directory on the host. You may need to use `ls` to make sure you know where you are.
+
+If you need to change the working directory on your client, use the command `lcd` (as in **local change directory**). Similarly, use `lls` to show the working directory contents on your client system.
+
+What if the host doesn’t have a directory with the name you would like? Use `mkdir` to make a new directory on it. Or you might copy a whole directory of files to the host with this:
+
+```
+put -r ThisDir/
+```
+
+which creates the directory and then copies all of its files and subdirectories to the host.
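Putting it together, a typical session might look something like this (an illustrative transcript; the directory and file names are made up):

```
$ sftp gregp@192.168.1.234
sftp> lcd Projects          # local: where received files will land
sftp> cd Documents          # remote: working directory on the host
sftp> get report.odt        # copy host -> client
sftp> put -r ThisDir/       # copy a whole directory client -> host
sftp> bye
```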
These transfers are extremely fast, as fast as your hardware allows, and have none of the bottlenecks you might encounter on the internet. To see a list of commands you can use in an SFTP session, check:
+
+```
+man sftp
+```
+
+I have also been able to put SFTP to use on a Windows VM on my computer, yet another advantage of setting up a VM rather than a dual-boot system. This lets me move files to or from the Linux part of the system. So far I have only done this using a client in Windows.
+
+You can also use SSH and SFTP to access any devices connected to your router by wire or WiFi. For a while, I used an app called [SSHDroid][1], which runs SSH in a passive mode. In other words, you use your computer to access the Android device that is the host. Recently I found another app, [Admin Hands][2], where the tablet or phone is the client and can be used for either SSH or SFTP operations. This app is great for backing up or sharing photos from your phone.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/ssh-sftp-home-network

作者:[Greg Pittman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/greg-p
[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid
[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US
From 44c9651f4ba06ebf57370feca296522122945b5c Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 9 Oct 2018 09:49:32 +0800
Subject: [PATCH 292/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Oomox=20=E2=80=93?=
 =?UTF-8?q?=20Customize=20And=20Create=20Your=20Own=20GTK2,=20GTK3=20Theme?=
 =?UTF-8?q?s?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...e And Create Your Own GTK2, GTK3
Themes.md | 128 ++++++++++++
 1 file changed, 128 insertions(+)
 create mode 100644 sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md

diff --git a/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md b/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md
new file mode 100644
index 0000000000..e45d96470f
--- /dev/null
+++ b/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md
@@ -0,0 +1,128 @@
+Oomox – Customize And Create Your Own GTK2, GTK3 Themes
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-720x340.png)
+
+Theming and visual customization is one of the main advantages of Linux. Since all the code is open, you can change how your Linux system looks and behaves to a far greater degree than you ever could with Windows or Mac OS. GTK theming is perhaps the most popular way people customize their Linux desktops. The GTK toolkit is used by a wide variety of desktop environments, such as GNOME, Cinnamon, Unity, XFCE, and Budgie. This means that a single theme made for GTK can be applied to any of these desktop environments with few changes.
+
+There are a lot of very high-quality, popular GTK themes out there, such as **Arc**, **Numix**, and **Adapta**. But if you want to customize these themes and create your own visual design, you can use **Oomox**.
+
+Oomox is a graphical app for customizing and creating your own GTK theme, complete with your own color, icon, and terminal styles. It comes with several presets, which you can apply to a Numix, Arc, or Materia style theme to create your own GTK theme.
+
+### Installing Oomox
+
+On Arch Linux and its variants:
+
+Oomox is available on [**AUR**][1], so you can install it using any AUR helper program, like [**Yay**][2].
+
+```
+$ yay -S oomox
+```
+
+On Debian/Ubuntu/Linux Mint, download the `oomox.deb` package from [**here**][3] and install it as shown below.
As of this writing, the latest version was **oomox_1.7.0.5.deb**.
+
+```
+$ sudo dpkg -i oomox_1.7.0.5.deb
+$ sudo apt install -f
+```
+
+On Fedora, Oomox is available in the third-party **COPR** repository.
+
+```
+$ sudo dnf copr enable tcg/themes
+$ sudo dnf install oomox
+```
+
+Oomox is also available as a [**Flatpak app**][4]. Make sure you have installed Flatpak as described in [**this guide**][5]. Then, install and run Oomox using the following commands:
+
+```
+$ flatpak install flathub com.github.themix_project.Oomox
+
+$ flatpak run com.github.themix_project.Oomox
+```
+
+For other Linux distributions, go to the Oomox project page on GitHub (the link is given at the end of this guide) and compile and install it manually from source.
+
+### Customize And Create Your Own GTK2, GTK3 Themes
+
+**Theme Customization**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-1-1.png)
+
+You can change the colour of practically every UI element, like:
+
+  1. Headers
+  2. Buttons
+  3. Buttons inside Headers
+  4. Menus
+  5. Selected Text
+
+To the left, there are a number of presets, like the Cars theme, modern themes like Materia and Numix, and retro themes. Then, at the top of the main window, there’s an option called **Theme Style**, which lets you set the overall visual style of the theme. You can choose between Numix, Arc, and Materia.
+
+With certain styles, like Numix, you can even change things like the header gradient, outline width, and panel opacity. You can also add a dark mode for your theme that will be automatically created from the default theme.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-2.png)
+
+**Iconset Customization**
+
+You can customize the iconset that will be used for the theme icons. There are two options – Gnome Colors and Archdroid. You can change the base and stroke colours of the iconset.
+
+**Terminal Customization**
+
+You can also customize the terminal colours.
The app has several presets for this, but you can customize the exact colour code for each colour value, like red, green, black, and so on. You can also auto-swap the foreground and background colours.
+
+**Spotify Theme**
+
+A unique feature of this app is that you can theme the Spotify app to your liking. You can change the foreground, background, and accent colours of the Spotify app to match the overall GTK theme.
+
+Then, just press the **Apply Spotify Theme** button, and you’ll get this window:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-3.png)
+
+Just hit **Apply**, and you’re done.
+
+**Exporting your Theme**
+
+Once you’re done customizing the theme to your liking, you can rename it by clicking the rename button at the top left:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-4.png)
+
+And then, just hit **Export Theme** to export the theme to your system.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-5.png)
+
+You can also export just the iconset or the terminal theme.
+
+After this, you can open any visual customization app for your desktop environment, like Tweaks for GNOME-based DEs or the **XFCE Appearance Settings**, and select your exported GTK and shell theme.
+
+### Verdict
+
+If you are a Linux theme junkie who knows exactly how each button and each header in your system should look, Oomox is worth a look. For extreme customizers, it lets you change virtually everything about how your system looks. For people who just want to tweak an existing theme a little bit, it has many, many presets, so you can get what you want without a lot of effort.
+
+Have you tried it? What are your thoughts on Oomox? Put them in the comments below!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/oomox-customize-and-create-your-own-gtk2-gtk3-themes/ + +作者:[EDITOR][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[1]: https://aur.archlinux.org/packages/oomox/ +[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[3]: https://github.com/themix-project/oomox/releases +[4]: https://flathub.org/apps/details/com.github.themix_project.Oomox +[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/ From 9f3d9e84fb74a9f5250f8c1338bfb8cec67d8702 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 09:51:20 +0800 Subject: [PATCH 293/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Introducing=20Swi?= =?UTF-8?q?ft=20on=20Fedora?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20181003 Introducing Swift on Fedora.md | 70 +++++++++++++++++++ 1 file changed, 70 insertions(+) create mode 100644 sources/tech/20181003 Introducing Swift on Fedora.md diff --git a/sources/tech/20181003 Introducing Swift on Fedora.md b/sources/tech/20181003 Introducing Swift on Fedora.md new file mode 100644 index 0000000000..6b975be8f6 --- /dev/null +++ b/sources/tech/20181003 Introducing Swift on Fedora.md @@ -0,0 +1,70 @@ +Introducing Swift on Fedora +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/swift-816x345.jpg) + +Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. It aims to be the best language for a variety of programming projects, ranging from systems programming to desktop applications and scaling up to cloud services. 
Read more about it and how to try it out in Fedora.
+
+### Safe, Fast, Expressive
+
+Like many modern programming languages, Swift was designed to be safer than C-based languages. For example, variables are always initialized before they can be used. Arrays and integers are checked for overflow. Memory is automatically managed.
+
+Swift puts intent right in the syntax. To declare a variable, use the `var` keyword. To declare a constant, use `let`.
+
+Swift also guarantees that objects can never be nil; in fact, trying to use an object known to be nil will cause a compile-time error. When using a nil value is appropriate, it supports a mechanism called **optionals**. An optional may contain nil, but it is safely unwrapped using the **?** operator.
+
+Some additional features include:
+
+  * Closures unified with function pointers
+  * Tuples and multiple return values
+  * Generics
+  * Fast and concise iteration over a range or collection
+  * Structs that support methods, extensions, and protocols
+  * Functional programming patterns, e.g., map and filter
+  * Powerful error handling built in
+  * Advanced control flow with the `do`, `guard`, `defer`, and `repeat` keywords
+
+### Try Swift out
+
+Swift is available in Fedora 28 under the package name **swift-lang**. Once installed, run `swift` and the REPL console starts up.
+
+```
+$ swift
+Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance.
+ 1> let greeting="Hello world!"
+greeting: String = "Hello world!"
+ 2> print(greeting)
+Hello world!
+ 3> greeting = "Hello universe!"
+error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant
+greeting = "Hello universe!"
+~~~~~~~~ ^
+
+ 3>
+```
+
+Swift has a growing community and, in particular, a [work group][1] dedicated to making it an efficient and effective server-side programming language. Be sure to visit [its home page][2] for more ways to get involved.
+
+Photo by [Uillian Vargas][3] on [Unsplash][4].
+ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/introducing-swift-fedora/ + +作者:[Link Dupont][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/linkdupont/ +[1]: https://swift.org/server/ +[2]: http://swift.org +[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText From 5faeab123a946a94ed4d27d771316759e80ff7f1 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 09:58:49 +0800 Subject: [PATCH 294/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=204=20open=20source?= =?UTF-8?q?=20invoicing=20tools=20for=20small=20businesses?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ce invoicing tools for small businesses.md | 76 +++++++++++++++++++ 1 file changed, 76 insertions(+) create mode 100644 sources/tech/20181002 4 open source invoicing tools for small businesses.md diff --git a/sources/tech/20181002 4 open source invoicing tools for small businesses.md b/sources/tech/20181002 4 open source invoicing tools for small businesses.md new file mode 100644 index 0000000000..29589a6ad1 --- /dev/null +++ b/sources/tech/20181002 4 open source invoicing tools for small businesses.md @@ -0,0 +1,76 @@ +4 open source invoicing tools for small businesses +====== +Manage your billing and get paid with easy-to-use, web-based invoicing software. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory2.png?itok=AvneLxFp) + +No matter what your reasons for starting a small business, the key to keeping that business going is getting paid. 
Getting paid usually means sending a client an invoice.
+
+It's easy enough to whip up an invoice using LibreOffice Writer or LibreOffice Calc, but sometimes you need a bit more. A more professional look. A way of keeping track of your invoices. Reminders about when to follow up on the invoices you've sent.
+
+There's a wide range of commercial and closed-source invoicing tools out there. But the offerings on the open source side of the fence are just as good, and maybe even more flexible, than their closed source counterparts.
+
+Let's take a look at four web-based open source invoicing tools that are great choices for freelancers and small businesses on a tight budget. I reviewed two of them in 2014, in an [earlier version][1] of this article. These four picks are easy to use, and you can use them on just about any device.
+
+### Invoice Ninja
+
+I've never been a fan of the term ninja. Despite that, I like [Invoice Ninja][2]. A lot. It melds a simple interface with a set of features that let you create, manage, and send invoices to clients and customers.
+
+You can easily configure multiple clients, track payments and outstanding invoices, generate quotes, and email invoices. What sets Invoice Ninja apart from its competitors is its [integration with][3] more than 40 popular online payment gateways, including PayPal, Stripe, WePay, and Apple Pay.
+
+[Download][4] a version that you can install on your own server, or get an account with the [hosted version][5] of Invoice Ninja. There's a free version and a paid tier that will set you back US$8 a month.
+
+### InvoicePlane
+
+Once upon a time, there was a nifty open source invoicing tool called FusionInvoice. One day, its creators took the latest version of the code proprietary. That didn't end happily, as FusionInvoice's doors were shut for good in 2018. But that wasn't the end of the application.
An old version of the code stayed open source and morphed into [InvoicePlane][6], which packs all of FusionInvoice's goodness. + +Creating an invoice takes just a couple of clicks. You can make them as minimal or detailed as you need. When you're ready, you can email your invoices or output them as PDFs. You can also create recurring invoices for clients or customers you regularly bill. + +InvoicePlane does more than generate and track invoices. You can also create quotes for jobs or goods, track products you sell, view and enter payments, and run reports on your invoices. + +[Grab the code][7] and install it on your web server. Or, if you're not quite ready to do that, [take the demo][8] for a spin. + +### OpenSourceBilling + +Described by its developer as "beautifully simple billing software," [OpenSourceBilling][9] lives up to the description. It has one of the cleanest interfaces I've seen, which makes configuring and using the tool a breeze. + +OpenSourceBilling stands out because of its dashboard, which tracks your current and past invoices, as well as any outstanding amounts. Your information is broken up into graphs and tables, which makes it easy to follow. + +You do much of the configuration on the invoice itself. You can add items, tax rates, clients, and even payment terms with a click and a few keystrokes. OpenSourceBilling saves that information across all of your invoices, both new and old. + +As with some of the other tools we've looked at, OpenSourceBilling has a [demo][10] you can try. + +### BambooInvoice + +When I was a full-time freelance writer and consultant, I used [BambooInvoice][11] to bill my clients. When its original developer stopped working on the software, I was a bit disappointed. But BambooInvoice is back, and it's as good as ever. + +What attracted me to BambooInvoice is its simplicity. It does one thing and does it well. 
You can create and edit invoices, and BambooInvoice keeps track of them by client and by the invoice numbers you assign to them. It also lets you know which invoices are open or overdue. You can email the invoices from within the application or generate PDFs. You can also run reports to keep tabs on your income. + +To [install][12] and use BambooInvoice, you'll need a web server running PHP 5 or newer as well as a MySQL database. Chances are you already have access to one, so you're good to go. + +Do you have a favorite open source invoicing tool? Feel free to share it by leaving a comment. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/open-source-invoicing-tools + +作者:[Scott Nesbitt][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[1]: https://opensource.com/business/14/9/4-open-source-invoice-tools +[2]: https://www.invoiceninja.org/ +[3]: https://www.invoiceninja.com/integrations/ +[4]: https://github.com/invoiceninja/invoiceninja +[5]: https://www.invoiceninja.com/invoicing-pricing-plans/ +[6]: https://invoiceplane.com/ +[7]: https://wiki.invoiceplane.com/en/1.5/getting-started/installation +[8]: https://demo.invoiceplane.com/ +[9]: http://www.opensourcebilling.org/ +[10]: http://demo.opensourcebilling.org/ +[11]: https://www.bambooinvoice.net/ +[12]: https://sourceforge.net/projects/bambooinvoice/ From ab8c4289a06ef682321cd3a2be8dabccf54fb677 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 10:04:35 +0800 Subject: [PATCH 295/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Turn=20your=20boo?= =?UTF-8?q?k=20into=20a=20website=20and=20an=20ePub=20using=20Pandoc?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...into 
a website and an ePub using Pandoc.md | 263 ++++++++++++
 1 file changed, 263 insertions(+)
 create mode 100644 sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md

diff --git a/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md b/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md
new file mode 100644
index 0000000000..bd79cb3c04
--- /dev/null
+++ b/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md
@@ -0,0 +1,263 @@
+Turn your book into a website and an ePub using Pandoc
+======
+Write once, publish twice using Markdown and Pandoc.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
+
+Pandoc is a command-line tool for converting files from one markup language to another. In my [introduction to Pandoc][1], I explained how to convert text written in Markdown into a website, a slideshow, and a PDF.
+
+In this follow-up article, I'll dive deeper into [Pandoc][2], showing how to produce a website and an ePub book from the same Markdown source file. I'll use my upcoming e-book, [GRASP Principles for the Object-Oriented Mind][3], which I created using this process, as an example.
+
+First I will explain the file structure used for the book, then how to use Pandoc to generate a website and deploy it on GitHub. Finally, I'll demonstrate how to generate its companion ePub book.
+
+You can find the code in my [Programming Fight Club][4] GitHub repository.
+
+### Setting up the writing structure
+
+I do all of my writing in Markdown syntax. You can also use HTML, but the more HTML you introduce, the higher the risk that problems will arise when Pandoc converts Markdown to an ePub document. My books follow the one-chapter-per-file pattern. Declare chapters using the Markdown H1 heading (**#**).
You can put more than one chapter in each file, but putting them in separate files makes it easier to find content and do updates later.
+
+The meta-information follows a similar pattern: each output format has its own meta-information file. Meta-information files define information about your documents, such as text to add to your HTML or the license of your ePub. I store all of my Markdown documents in a folder named parts (this is important for the Makefile that generates the website and ePub). As an example, let's take the table of contents, the preface, and the about chapters (divided into the files toc.md, preface.md, and about.md) and, for clarity, we will leave out the remaining chapters.
+
+My about file might begin like this:
+
+```
+# About this book {-}
+
+## Who should read this book {-}
+
+Before creating a complex software system one needs to create a solid foundation.
+General Responsibility Assignment Software Principles (GRASP) are guidelines to assign
+responsibilities to software classes in object-oriented programming.
+```
+
+Once the chapters are finished, the next step is to add meta-information to set up the formats for the website and the ePub.
+
+### Generating the website
+
+#### Create the HTML meta-information file
+
+The meta-information file (web-metadata.yaml) for my website is a simple YAML file that contains information about the author, title, rights, content for the **< head>** tag, and content for the beginning and end of the HTML file.
+
+I recommend (at minimum) including the following fields in the web-metadata.yaml file:
+
+```
+---
+title: GRASP principles for the Object-oriented mind
+author: Kiko Fernandez-Reyes
+rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
+header-includes:
+- |
+  \```{=html}
+  
+  \```
+include-before:
+- |
+  \```{=html}
+ 

If you like this book, please consider +      spreading the word or +      +        buying me a coffee +     

+  \```
+include-after:
+- |
+  \```{=html}
+ 
+   
+   
+        +   
+  \``` +--- +``` + +Some variables to note: + + * The **header-includes** variable contains HTML that will be embedded inside the **< head>** tag. + * The line after calling a variable must be **\- |**. The next line must begin with triple backquotes that are aligned with the **|** or Pandoc will reject it. **{=html}** tells Pandoc that this is raw text and should not be processed as Markdown. (For this to work, you need to check that the **raw_attribute** extension in Pandoc is enabled. To check, type **pandoc --list-extensions | grep raw** and make sure the returned list contains an item named **+raw_html** ; the plus sign indicates it is enabled.) + * The variable **include-before** adds some HTML at the beginning of your website, and I ask readers to consider spreading the word or buying me a coffee. + * The **include-after** variable appends raw HTML at the end of the website and shows my book's license. + + + +These are only some of the fields available; take a look at the template variables in HTML (my article [introduction to Pandoc][1] covered this for LaTeX but the process is the same for HTML) to learn about others. + +#### Split the website into chapters + +The website can be generated as a whole, resulting in a long page with all the content, or split into chapters, which I think is easier to read. I'll explain how to divide the website into chapters so the reader doesn't get intimidated by a long website. + +To make the website easy to deploy on GitHub Pages, we need to create a root folder called docs (which is the root folder that GitHub Pages uses by default to render a website). Then we need to create folders for each chapter under docs, place the HTML chapters in their own folders, and the file content in a file named index.html. + +For example, the about.md file is converted to a file named index.html that is placed in a folder named about (about/index.html). 
This way, when users browse to **/about/** on your domain, the index.html file from the about folder will be displayed in their browser.
+
+The following Makefile does all of this:
+
+```
+# Your book files
+DEPENDENCIES= toc preface about
+
+# Placement of your HTML files
+DOCS=docs
+
+all: web
+
+web: setup $(DEPENDENCIES)
+        @cp $(DOCS)/toc/index.html $(DOCS)
+
+
+# Creation and copy of stylesheet and images into
+# the assets folder. This is important to deploy the
+# website to GitHub Pages.
+setup:
+        @mkdir -p $(DOCS)
+        @cp -r assets $(DOCS)
+
+
+# Creation of folder and index.html file on a
+# per-chapter basis
+
+$(DEPENDENCIES):
+        @mkdir -p $(DOCS)/$@
+        @pandoc -s --toc web-metadata.yaml parts/$@.md \
+        -c /assets/pandoc.css -o $(DOCS)/$@/index.html
+
+clean:
+        @rm -rf $(DOCS)
+
+.PHONY: all clean web setup
+```
+
+The option **-c /assets/pandoc.css** declares which CSS stylesheet to use; it will be fetched from **/assets/pandoc.css**. In other words, inside the **< head>** HTML tag, Pandoc adds the following line:
+
+```
+<link rel="stylesheet" href="/assets/pandoc.css" />
+```
+
+To generate the website, type:
+
+```
+make
+```
+
+The root folder should now contain the following structure and files:
+
+```
+.---parts
+|    |--- toc.md
+|    |--- preface.md
+|    |--- about.md
+|
+|---docs
+    |--- assets/
+    |--- index.html
+    |--- toc
+    |     |--- index.html
+    |
+    |--- preface
+    |     |--- index.html
+    |
+    |--- about
+          |--- index.html
+    
+```
+
+#### Deploy the website
+
+To deploy the website on GitHub, follow these steps:
+
+ 1. Create a new repository
+ 2. Push your content to the repository
+ 3. Go to the GitHub Pages section in the repository's Settings and select the option for GitHub to use the content from the Master branch
+
+
+
+You can get more details on the [GitHub Pages][5] site.
+
+Check out [my book's website][6], generated using this process, to see the result.
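The chapter-to-pretty-URL layout that the Makefile builds can be sketched with plain shell. This is a minimal sketch with a placeholder page standing in for the real pandoc call (and a scratch directory under /tmp), so the structure is visible even without pandoc installed:

```shell
# Simulate the docs/<chapter>/index.html layout produced by the Makefile.
# Assumption: a printf placeholder stands in for the real pandoc invocation.
demo="/tmp/pandoc-site-demo"
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"

DEPENDENCIES="toc preface about"
DOCS=docs

mkdir -p "$DOCS"
for chapter in $DEPENDENCIES; do
  mkdir -p "$DOCS/$chapter"
  # The real per-chapter rule runs:
  #   pandoc -s --toc web-metadata.yaml parts/$chapter.md \
  #     -c /assets/pandoc.css -o docs/$chapter/index.html
  printf '<html><body>%s</body></html>\n' "$chapter" > "$DOCS/$chapter/index.html"
done

# The 'web' target then promotes the table of contents to the site root:
cp "$DOCS/toc/index.html" "$DOCS/index.html"

find "$DOCS" -name index.html | sort
```

GitHub Pages serves docs/about/index.html at /about/, which is exactly what gives each chapter its clean URL.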
+
+### Generating the ePub book
+
+#### Create the ePub meta-information file
+
+The ePub meta-information file, epub-meta.yaml, is similar to the HTML meta-information file. The main difference is that ePub offers other template variables, such as **publisher** and **cover-image**. Your ePub book's stylesheet will probably differ from your website's; mine uses one named epub.css.
+
+```
+---
+title: 'GRASP principles for the Object-oriented Mind'
+publisher: 'Programming Language Fight Club'
+author: Kiko Fernandez-Reyes
+rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
+cover-image: assets/cover.png
+stylesheet: assets/epub.css
+...
+```
+
+Add the following content to the previous Makefile:
+
+```
+epub:
+        @pandoc -s --toc epub-meta.yaml \
+        $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub
+```
+
+The command for the ePub target takes all the dependencies from the HTML version (your chapter names), appends the Markdown extension to them, and prepends the path to the parts folder so Pandoc knows how to process them. For example, if **$(DEPENDENCIES)** were only **preface about**, then the Makefile would call:
+
+```
+@pandoc -s --toc epub-meta.yaml \
+parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub
+```
+
+Pandoc would take these two chapters, combine them, generate an ePub, and place the book under the assets folder.
+
+Here's an [example][7] of an ePub created using this process.
+
+### Summarizing the process
+
+The process to create a website and an ePub from a Markdown file isn't difficult, but there are a lot of details. The following outline may make it easier for you to follow.
+ + * HTML book: + * Write chapters in Markdown + * Add metadata + * Create a Makefile to glue pieces together + * Set up GitHub Pages + * Deploy + * ePub book: + * Reuse chapters from previous work + * Add new metadata file + * Create a Makefile to glue pieces together + * Set up GitHub Pages + * Deploy + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc + +作者:[Kiko Fernandez-Reyes][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/kikofernandez +[1]: https://opensource.com/article/18/9/intro-pandoc +[2]: https://pandoc.org/ +[3]: https://www.programmingfightclub.com/ +[4]: https://github.com/kikofernandez/programmingfightclub +[5]: https://pages.github.com/ +[6]: https://www.programmingfightclub.com/grasp-principles/ +[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub From b6f2c585ab996e0c2b447a86e73717b31e269823 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 10:07:41 +0800 Subject: [PATCH 296/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=2016=20iptables=20t?= =?UTF-8?q?ips=20and=20tricks=20for=20sysadmins?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
iptables tips and tricks for sysadmins.md | 261 ++++++++++++++++++ 1 file changed, 261 insertions(+) create mode 100644 sources/tech/20181001 16 iptables tips and tricks for sysadmins.md diff --git a/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md new file mode 100644 index 0000000000..9e07971c81 --- /dev/null +++ b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md @@ -0,0 +1,261 @@ +16 iptables tips and tricks for sysadmins +====== +Iptables provides powerful capabilities to control traffic coming in and out of your system. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg) + +Modern Linux kernels come with a packet-filtering framework named [Netfilter][1]. Netfilter enables you to allow, drop, and modify traffic coming in and going out of a system. The **iptables** userspace command-line tool builds upon this functionality to provide a powerful firewall, which you can configure by adding rules to form a firewall policy. [iptables][2] can be very daunting with its rich set of capabilities and baroque command syntax. Let's explore some of them and develop a set of iptables tips and tricks for many situations a system administrator might encounter. + +### Avoid locking yourself out + +Scenario: You are going to make changes to the iptables policy rules on your company's primary server. You want to avoid locking yourself—and potentially everybody else—out. (This costs time and money and causes your phone to ring off the wall.) + +#### Tip #1: Take a backup of your iptables configuration before you start working on it. + +Back up your configuration with the command: + +``` +/sbin/iptables-save > /root/iptables-works + +``` +#### Tip #2: Even better, include a timestamp in the filename. 
+
+Add the timestamp with the command:
+
+```
+/sbin/iptables-save > /root/iptables-works-`date +%F`
+
+```
+
+You get a file with a name like:
+
+```
+/root/iptables-works-2018-09-11
+
+```
+
+If you do something that prevents your system from working, you can quickly restore it:
+
+```
+/sbin/iptables-restore < /root/iptables-works-2018-09-11
+
+```
+
+#### Tip #3: Every time you create a backup copy of the iptables policy, create a link to the file with 'latest' in the name.
+
+```
+ln -s /root/iptables-works-`date +%F` /root/iptables-works-latest
+
+```
+
+#### Tip #4: Put specific rules at the top of the policy and generic rules at the bottom.
+
+Avoid generic rules like this at the top of the policy rules:
+
+```
+iptables -A INPUT -p tcp --dport 22 -j DROP
+
+```
+
+The more criteria you specify in the rule, the less chance you will have of locking yourself out. Instead of the very generic rule above, use something like this:
+
+```
+iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 192.168.100.101 -j DROP
+
+```
+
+This rule appends ( **-A** ) to the **INPUT** chain a rule that will **DROP** any packets originating from the CIDR block **10.0.0.0/8** on TCP ( **-p tcp** ) port 22 ( **\--dport 22** ) destined for IP address 192.168.100.101 ( **-d 192.168.100.101** ).
+
+There are plenty of ways you can be more specific. For example, using **-i eth0** will limit the processing to a single NIC in your server. This way, the filtering actions will not apply the rule to **eth1**.
+
+#### Tip #5: Whitelist your IP address at the top of your policy rules.
+
+This is a very effective method of not locking yourself out. Everybody else, not so much.
+
+```
+iptables -I INPUT -s <your IP address> -j ACCEPT
+
+```
+
+You need to put this as the first rule for it to work properly. Remember, **-I** inserts it as the first rule; **-A** appends it to the end of the list.
+
+#### Tip #6: Know and understand all the rules in your current policy.
+
+Not making a mistake in the first place is half the battle. If you understand the inner workings behind your iptables policy, it will make your life easier. Draw a flowchart if you must. Also remember: What the policy does and what it is supposed to do can be two different things.
+
+### Set up a workstation firewall policy
+
+Scenario: You want to set up a workstation with a restrictive firewall policy.
+
+#### Tip #1: Set the default policy as DROP.
+
+```
+# Set a default policy of DROP
+*filter
+:INPUT DROP [0:0]
+:FORWARD DROP [0:0]
+:OUTPUT DROP [0:0]
+```
+
+#### Tip #2: Allow users the minimum number of services needed to get their work done.
+
+The iptables rules need to allow the workstation to get an IP address, netmask, and other important information via DHCP ( **-p udp --dport 67:68 --sport 67:68** ). For remote management, the rules need to allow inbound SSH ( **\--dport 22** ), outbound mail ( **\--dport 25** ), DNS ( **\--dport 53** ), outbound ping ( **-p icmp** ), Network Time Protocol ( **\--dport 123 --sport 123** ), and outbound HTTP ( **\--dport 80** ) and HTTPS ( **\--dport 443** ).
+
+```
+# Set a default policy of DROP
+*filter
+:INPUT DROP [0:0]
+:FORWARD DROP [0:0]
+:OUTPUT DROP [0:0]
+
+# Accept any related or established connections
+-I INPUT  1 -m state --state RELATED,ESTABLISHED -j ACCEPT
+-I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
+
+# Allow all traffic on the loopback interface
+-A INPUT -i lo -j ACCEPT
+-A OUTPUT -o lo -j ACCEPT
+
+# Allow outbound DHCP request
+-A OUTPUT -o eth0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT
+
+# Allow inbound SSH
+-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW  -j ACCEPT
+
+# Allow outbound email
+-A OUTPUT -o eth0 -p tcp -m tcp --dport 25 -m state --state NEW  -j ACCEPT
+
+# Outbound DNS lookups
+-A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT
+
+# Outbound PING requests
+-A OUTPUT -o eth0 -p icmp -j ACCEPT
+
+# Outbound Network Time Protocol (NTP) requests
+-A OUTPUT -o eth0 -p udp --dport 123 --sport 123 -j ACCEPT
+
+# Outbound HTTP
+-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
+-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
+
+COMMIT
+```
+
+### Restrict an IP address range
+
+Scenario: The CEO of your company thinks the employees are spending too much time on Facebook and not getting any work done. The CEO tells the CIO to do something about the employees wasting time on Facebook. The CIO tells the CISO to do something about employees wasting time on Facebook. Eventually, you are told the employees are wasting too much time on Facebook, and you have to do something about it. You decide to block all access to Facebook. First, find out Facebook's IP address by using the **host** and **whois** commands.
+
+```
+host -t a www.facebook.com
+www.facebook.com is an alias for star.c10r.facebook.com.
+star.c10r.facebook.com has address 31.13.65.17
+whois 31.13.65.17 | grep inetnum
+inetnum:        31.13.64.0 - 31.13.127.255
+```
+
+Then convert that range to CIDR notation by using the [CIDR to IPv4 Conversion][3] page. You get **31.13.64.0/18**. To prevent outgoing access to [www.facebook.com][4], enter:
+
+```
+iptables -A OUTPUT -p tcp -i eth0 -o eth1 -d 31.13.64.0/18 -j DROP
+```
+
+### Regulate by time
+
+Scenario: The backlash from the company's employees over denying access to Facebook causes the CEO to relent a little (that and his administrative assistant's reminding him that she keeps HIS Facebook page up-to-date). The CEO decides to allow access to Facebook.com only at lunchtime (12PM to 1PM). Assuming the default policy is DROP, use iptables' time features to open up access.
+
+```
+iptables -A OUTPUT -p tcp -m multiport --dport http,https -i eth0 -o eth1 -m time --timestart 12:00 --timestop 13:00 -d 31.13.64.0/18 -j ACCEPT
+```
+
+This command sets the policy to allow ( **-j ACCEPT** ) http and https ( **-m multiport --dport http,https** ) between noon ( **\--timestart 12:00** ) and 1PM ( **\--timestop 13:00** ) to Facebook.com ( **-d 31.13.64.0/18** ).
+
+### Regulate by time—Take 2
+
+Scenario: During planned downtime for system maintenance, you need to deny all TCP and UDP traffic between the hours of 2AM and 3AM so maintenance tasks won't be disrupted by incoming traffic. This will take two iptables rules:
+
+```
+iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 -j DROP
+iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 -j DROP
+```
+
+With these rules, TCP and UDP traffic ( **-p tcp and -p udp** ) are denied ( **-j DROP** ) between the hours of 2AM ( **\--timestart 02:00** ) and 3AM ( **\--timestop 03:00** ) on input ( **-A INPUT** ).
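Paired rules like these are easy to fat-finger, so it can pay to generate the commands behind a dry-run switch and review them before they touch a live firewall. A small sketch, with `echo` standing in for the real binary:

```shell
# Sketch: emit the maintenance-window rules without touching a live firewall.
# Assumption: set IPT=/sbin/iptables (and run as root) to actually apply them.
IPT="echo iptables"

for proto in tcp udp; do
  $IPT -A INPUT -p "$proto" -m time --timestart 02:00 --timestop 03:00 -j DROP
done
```

Running it prints the two commands so you can eyeball the options before flipping the switch to the real binary.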
+
+### Limit connections with iptables
+
+Scenario: Your internet-connected web servers are under attack by bad actors from around the world attempting to DoS (Denial of Service) them. To mitigate these attacks, you restrict the number of connections a single IP address can have to your web server:
+
+```
+iptables -A INPUT -p tcp --syn -m multiport --dport http,https -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
+```
+
+Let's look at what this rule does. If a host tries to open a new connection ( **-p tcp --syn** ) when it already has more than 20 connections ( **\--connlimit-above 20** ) to the web servers ( **\--dport http,https** ), reject the new connection ( **-j REJECT** ) and tell the connecting host the connection was rejected ( **\--reject-with tcp-reset** ).
+
+### Monitor iptables rules
+
+Scenario: Since iptables operates on a "first match wins" basis as packets traverse the rules in a chain, frequently matched rules should be near the top of the policy and less frequently matched rules should be near the bottom. How do you know which rules are traversed the most or the least so they can be ordered nearer the top or the bottom?
+
+#### Tip #1: See how many times each rule has been hit.
+
+Use this command:
+
+```
+iptables -L -v -n --line-numbers
+```
+
+The command will list all the rules in the chain ( **-L** ). Since no chain was specified, all the chains will be listed with verbose output ( **-v** ) showing packet and byte counters in numeric format ( **-n** ) with line numbers at the beginning of each rule corresponding to that rule's position in the chain.
+
+Using the packet and byte counts, you can order the most frequently traversed rules toward the top and the least frequently traversed rules toward the bottom.
+
+#### Tip #2: Remove unnecessary rules.
+
+Which rules aren't getting any matches at all? These would be good candidates for removal from the policy.
You can find that out with this command:
+
+```
+iptables -nvL | grep -v "0     0"
+```
+
+Note: that's not a tab between the zeros; there are five spaces between the zeros.
+
+#### Tip #3: Monitor what's going on.
+
+You would like to monitor what's going on with iptables in real time, like with **top**. Use this command to monitor iptables activity dynamically and show only the rules that are actively being traversed:
+
+```
+watch --interval=5 'iptables -nvL | grep -v "0     0"'
+```
+
+**watch** runs **'iptables -nvL | grep -v "0 0"'** every five seconds and displays the first screen of its output. This allows you to watch the packet and byte counts change over time.
+
+### Report on iptables
+
+Scenario: Your manager thinks this iptables firewall stuff is just great, but a daily activity report would be even better. Sometimes it's more important to write a report than to do the work.
+
+Use the packet filter/firewall/IDS log analyzer [FWLogwatch][6] to create reports based on the iptables firewall logs. FWLogwatch supports many log formats and offers many analysis options. It generates daily and monthly summaries of the log files, allowing the security administrator to free up substantial time, maintain better control over network security, and reduce unnoticed attacks.
+
+Here is sample output from FWLogwatch:
+
+![](https://opensource.com/sites/default/files/uploads/fwlogwatch.png)
+
+### More than just ACCEPT and DROP
+
+We've covered many facets of iptables, all the way from making sure you don't lock yourself out when working with iptables to monitoring iptables to visualizing the activity of an iptables firewall. These will get you started down the path to realizing even more iptables tips and tricks.
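Tips #1 through #3 from the first section combine naturally into one small backup routine. A sketch, with a stub ruleset standing in for iptables-save (which needs root):

```shell
# Sketch: timestamped iptables backup plus a rolling 'latest' symlink.
# Assumption: on a real system, replace the echo line with
#   /sbin/iptables-save > "$backup_dir/iptables-works-$stamp"
backup_dir="/tmp/iptables-backup-demo"
mkdir -p "$backup_dir"
stamp="$(date +%F)"

echo '# stub ruleset' > "$backup_dir/iptables-works-$stamp"
# -f replaces yesterday's link in place; -n avoids following an existing link
ln -sfn "$backup_dir/iptables-works-$stamp" "$backup_dir/iptables-works-latest"

readlink "$backup_dir/iptables-works-latest"
```

Dropped into root's crontab, a routine like this means /root/iptables-works-latest always points at the most recent known-good policy.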
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/iptables-tips-and-tricks + +作者:[Gary Smith][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/greptile +[1]: https://en.wikipedia.org/wiki/Netfilter +[2]: https://en.wikipedia.org/wiki/Iptables +[3]: http://www.ipaddressguide.com/cidr +[4]: http://www.facebook.com +[5]: http://31.13.64.0/18 +[6]: http://fwlogwatch.inside-security.de/ From d57f939300d537f9af0b090983136a6143d0f29c Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 10:20:09 +0800 Subject: [PATCH 297/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Instal?= =?UTF-8?q?l=20Pip=20on=20Ubuntu?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20181001 How to Install Pip on Ubuntu.md | 179 ++++++++++++++++++ 1 file changed, 179 insertions(+) create mode 100644 sources/tech/20181001 How to Install Pip on Ubuntu.md diff --git a/sources/tech/20181001 How to Install Pip on Ubuntu.md b/sources/tech/20181001 How to Install Pip on Ubuntu.md new file mode 100644 index 0000000000..8751dc50f9 --- /dev/null +++ b/sources/tech/20181001 How to Install Pip on Ubuntu.md @@ -0,0 +1,179 @@ +How to Install Pip on Ubuntu +====== +**Pip is a command line tool that allows you to install software packages written in Python. Learn how to install Pip on Ubuntu and how to use it for installing Python applications.** + +There are numerous ways to [install software on Ubuntu][1]. You can install applications from the software center, from downloaded DEB files, from PPA, from [Snap packages][2], [using Flatpak][3], using [AppImage][4] and even from the good old source code. + +There is one more way to install packages in [Ubuntu][5]. 
It’s called Pip and you can use it to install Python-based applications.
+
+### What is Pip
+
+[Pip][6] stands for “Pip Installs Packages”. [Pip][7] is a command line based package management system. It is used to install and manage software written in the [Python language][8].
+
+You can use Pip to install packages listed in the Python Package Index ([PyPI][9]).
+
+As a software developer, you can use pip to install various Python modules and packages for your own Python projects.
+
+As an end user, you may need pip in order to install some applications that are developed using Python and can be installed easily using pip. One such example is the [Stress Terminal][10] application that you can easily install with pip.
+
+Let’s see how you can install pip on Ubuntu and other Ubuntu-based distributions.
+
+### How to install Pip on Ubuntu
+
+![Install pip on Ubuntu Linux][11]
+
+Pip is not installed on Ubuntu by default. You’ll have to install it. Installing pip on Ubuntu is really easy. I’ll show it to you in a moment.
+
+Ubuntu 18.04 has both Python 2 and Python 3 installed by default. Hence, you should install pip for both Python versions.
+
+By default, the pip command refers to Python 2; the Python 3 version of the tool is called pip3.
+
+Note: I am using Ubuntu 18.04 in this tutorial, but the instructions here should be valid for other versions like Ubuntu 16.04, 18.10, etc. You may also use the same commands on other Linux distributions based on Ubuntu such as Linux Mint, Linux Lite, Xubuntu, Kubuntu, etc.
+
+#### Install pip for Python 2
+
+First, make sure that you have Python 2 installed. On Ubuntu, use the command below to verify.
+
+```
+python2 --version
+
+```
+
+If there is no error and a valid output that shows the Python version, you have Python 2 installed. So now you can install pip for Python 2 using this command:
+
+```
+sudo apt install python-pip
+
+```
+
+It will install pip and a number of other dependencies with it.
Once installed, verify that you have pip installed correctly.
+
+```
+pip --version
+
+```
+
+It should show you a version number, something like this:
+
+```
+pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)
+
+```
+
+This means that you have successfully installed pip on Ubuntu.
+
+#### Install pip for Python 3
+
+You have to make sure that Python 3 is installed on Ubuntu. To check that, use this command:
+
+```
+python3 --version
+
+```
+
+If it shows you a number like Python 3.6.6, Python 3 is installed on your Linux system.
+
+Now, you can install pip3 using the command below:
+
+```
+sudo apt install python3-pip
+
+```
+
+You should verify that pip3 has been installed correctly using this command:
+
+```
+pip3 --version
+
+```
+
+It should show you a number like this:
+
+```
+pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
+
+```
+
+It means that pip3 is successfully installed on your system.
+
+### How to use the Pip command
+
+Now that you have installed pip, let’s quickly see some of the basic pip commands. These commands will help you search for, install, and remove Python packages.
+
+To search packages from the Python Package Index, you can use the following pip command:
+
+```
+pip search <search_term>
+
+```
+
+For example, if you search for stress, it will show all the packages that have the string ‘stress’ in their name or description.
+
+```
+pip search stress
+stress (1.0.0) - A trivial utility for consuming system resources.
+s-tui (0.8.2) - Stress Terminal UI stress test and monitoring tool
+stressypy (0.0.12) - A simple program for calling stress and/or stress-ng from python
+fuzzing (0.3.2) - Tools for stress testing applications.
+stressant (0.4.1) - Simple stress-test tool
+stressberry (0.1.7) - Stress tests for the Raspberry Pi
+mobbage (0.2) - A HTTP stress test and benchmark tool
+stresser (0.2.1) - A large-scale stress testing framework.
+cyanide (1.3.0) - Celery stress testing and integration test support.
+pysle (1.5.7) - An interface to ISLEX, a pronunciation dictionary with stress markings.
+ggf (0.3.2) - global geometric factors and corresponding stresses of the optical stretcher
+pathod (0.17) - A pathological HTTP/S daemon for testing and stressing clients.
+MatPy (1.0) - A toolbox for intelligent material design, and automatic yield stress determination
+netblow (0.1.2) - Vendor agnostic network testing framework to stress network failures
+russtress (0.1.3) - Package that helps you to put lexical stress in russian text
+switchy (0.1.0a1) - A fast FreeSWITCH control library purpose-built on traffic theory and stress testing.
+nx4_selenium_test (0.1) - Provides a Python class and apps which monitor and/or stress-test the NoMachine NX4 web interface
+physical_dualism (1.0.0) - Python library that approximates the natural frequency from stress via physical dualism, and vice versa.
+fsm_effective_stress (1.0.0) - Python library that uses the rheological-dynamical analogy (RDA) to compute damage and effective buckling stress in prismatic shell structures.
+processpathway (0.3.11) - A nifty little toolkit to create stress-free, frustrationless image processing pathways from your webcam for computer vision experiments. Or observing your cat.
+
+```
+
+If you want to install an application using pip, you can use it in the following manner:
+
+```
+pip install <package_name>
+
+```
+
+Pip doesn’t support tab completion, so the package name should be exact. It will download all the necessary files and install that package.
+
+If you want to remove a Python package installed via pip, use the uninstall option in pip.
+
+```
+pip uninstall <package_name>
+
+```
+
+You can use pip3 instead of pip in the above commands.
+
+I hope this quick tip helped you to install pip on Ubuntu. If you have any questions or suggestions, please let me know in the comment section below.
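One habit worth adopting alongside the commands above: on a system with both Python versions installed, calling pip through the interpreter removes any doubt about which Python a package lands in. A small sketch:

```shell
# Call pip via the interpreter it belongs to; `python3 -m pip` always means
# "the pip of this python3", with no pip-vs-pip3 ambiguity.
python3 -m pip --version
python3 -m pip list 2>/dev/null | head -n 5   # a few installed packages
```

The same pattern works as `python2 -m pip` on systems where Python 2’s pip is installed.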
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-pip-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://itsfoss.com/how-to-add-remove-programs-in-ubuntu/ +[2]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/ +[3]: https://itsfoss.com/flatpak-guide/ +[4]: https://itsfoss.com/use-appimage-linux/ +[5]: https://www.ubuntu.com/ +[6]: https://en.wikipedia.org/wiki/Pip_(package_manager) +[7]: https://pypi.org/project/pip/ +[8]: https://www.python.org/ +[9]: https://pypi.org/ +[10]: https://itsfoss.com/stress-terminal-ui/ +[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/install-pip-ubuntu.png From 98ed397ccfe93cb9df1345b59d8a44be35552873 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 10:21:26 +0800 Subject: [PATCH 298/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Troubleshooting?= =?UTF-8?q?=20Node.js=20Issues=20with=20llnode?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ubleshooting Node.js Issues with llnode.md | 75 +++++++++++++++++++ 1 file changed, 75 insertions(+) create mode 100644 sources/talk/20180925 Troubleshooting Node.js Issues with llnode.md diff --git a/sources/talk/20180925 Troubleshooting Node.js Issues with llnode.md b/sources/talk/20180925 Troubleshooting Node.js Issues with llnode.md new file mode 100644 index 0000000000..3565b0270d --- /dev/null +++ b/sources/talk/20180925 Troubleshooting Node.js Issues with llnode.md @@ -0,0 +1,75 @@ +Troubleshooting Node.js Issues with llnode +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/node_1920.jpg?itok=Cwd2YtPd) + +The llnode plugin lets you inspect Node.js 
processes and core dumps; it adds the ability to inspect JavaScript stack frames, objects, source code and more. At [Node+JS Interactive][1], Matheus Marchini, Node.js Collaborator and Lead Software Engineer at Sthima, will host a [workshop][2] on how to use llnode to find and fix issues quickly and reliably, without bloating your application with logs or compromising performance. He explains more in this interview.
+
+**Linux.com: What are some common issues that happen with a Node.js application in production?**
+
+**Matheus Marchini:** One of the most common issues Node.js developers might experience -- either in production or during development -- is unhandled exceptions. They happen when your code throws an error, and this error is not properly handled. There's a variation of this issue with Promises, although in this case, the problem is worse: if a Promise is rejected but there's no handler for that rejection, the application might enter an undefined state and start to misbehave.
+
+The application might also crash when it's using too much memory. This usually happens when there's a memory leak in the application, although we usually don't have classic memory leaks in Node.js. Instead of unreferenced objects, we might have objects that are not used anymore but are still retained by another object, so the Garbage Collector cannot reclaim them. If this happens with several objects, we can quickly exhaust our available memory.
+
+Memory is not the only resource that might get exhausted. Given the asynchronous nature of Node.js and how it scales for a large number of requests, the application might start to run out of other resources, such as open file descriptors and concurrent connections to a database.
+
+Infinite loops are not that common because we usually catch those during development, but every once in a while one manages to slip through our tests and get into our production servers.
These are pretty catastrophic because they will block the main thread, rendering the entire application unresponsive.
+
+The last issues I'd like to point out are performance issues. Those can happen for a variety of reasons, ranging from unoptimized functions to I/O latency.
+
+**Linux.com: Are there any quick tests you can do to determine what might be happening with your Node.js application?**
+
+**Marchini:** Node.js and V8 have several tools and features built in that developers can use to find issues faster. For example, if you're facing performance issues, you might want to use the built-in [V8 CpuProfiler][3]. Memory issues can be tracked down with the [V8 Sampling Heap Profiler][4]. All of these options are interesting because you can open their results in Chrome DevTools and get some nice graphical visualizations by default.
+
+If you are using native modules in your project, V8's built-in tools might not give you enough insight, since they focus only on JavaScript metrics. As an alternative to the V8 CpuProfiler, you can use system profilers such as [perf for Linux][5] and DTrace for FreeBSD / OS X. You can take the results from these tools and turn them into flamegraphs, making it easier to find which functions are taking the most time to process.
+
+You can use third-party tools as well: [node-report][6] is an amazing first-failure data capture tool which doesn't introduce significant overhead. When your application crashes, it will generate a report with detailed information about the state of the system, including environment variables, flags used, operating system details, etc. You can also generate this report on demand, and it is extremely useful when asking for help in forums, for example. The best part is that, after installing it through npm, you can enable it with a flag -- no need to make changes in your code!
+
+But one of the tools I'm most amazed by is [llnode][7].
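Before turning to llnode, it helps to see the failure modes Marchini described earlier in concrete form. Here is a minimal, hypothetical sketch (illustrative code, not from the interview) of an unhandled Promise rejection and of a "retained, not unreferenced" memory leak:

```javascript
// Hypothetical illustration of two failure modes discussed above:
// an unhandled Promise rejection and a "retained, not unreferenced" leak.

// 1. A rejected Promise with no .catch() handler. A process-level listener
//    lets us observe the rejection instead of letting the app misbehave.
const rejections = [];
process.on('unhandledRejection', (reason) => {
  rejections.push(String(reason));
});
Promise.reject(new Error('nobody handles me'));

// 2. Not a classic leak: these objects are still referenced. The cache
//    retains every entry forever, so the GC can never reclaim them.
const cache = new Map();
function handleRequest(id, payload) {
  cache.set(id, payload); // nothing ever evicts old entries
  return payload.length;
}
for (let i = 0; i < 1000; i++) {
  handleRequest(i, 'x'.repeat(100));
}

// Once the event loop drains, the rejection has been recorded and the
// cache still holds every payload.
setImmediate(() => {
  console.log(`unhandled rejections: ${rejections.length}, retained entries: ${cache.size}`);
});
```

In a long-running server, the second pattern is exactly the kind of growth that is invisible in logs but obvious in a heap snapshot or, after a crash, in a core dump.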
+
+**Linux.com: When would you want to use something like llnode; and what exactly is it?**
+
+**Marchini:** llnode is useful when debugging infinite loops, uncaught exceptions or out-of-memory issues, since it allows you to inspect the state of your application when it crashed. How does llnode do this? You can tell Node.js and your operating system to take a core dump of your application when it crashes and load it into llnode. llnode will analyze this core dump and give you useful information such as how many objects were allocated in the heap, the complete stack trace for the process (including native calls and V8 internals), pending requests and handlers in the event loop queue, etc.
+
+The most impressive feature llnode has is its ability to inspect objects and functions: you can see which variables are available for a given function, look at the function's code and inspect which properties your objects have, with their respective values. For example, you can look up which variables are available for your HTTP handler function and which parameters it received. You can also look at the headers and the payload of a given request.
+
+llnode is a plugin for [lldb][8], and it uses lldb features alongside hints provided by V8 and Node.js to recreate the process heap. It uses a few heuristics, too, so results might not always be entirely correct. But most of the time the results are good enough -- and way better than not using any tool at all.
+
+This technique -- which is called post-mortem debugging -- is not something new, though, and it has been part of the Node.js project since 2012. This is a common technique used by C and C++ developers, but not many dynamic runtimes support it. I'm happy we can say Node.js is one of those runtimes.
+
+**Linux.com: What are some key items folks should know before adding llnode to their environment?**
+
+**Marchini:** To install and use llnode you'll need to have lldb installed on your system.
If you're on OS X, lldb is installed as part of Xcode. On Linux, you can install it from your distribution's repository. We recommend using LLDB 3.9 or later.
+
+You'll also have to set up your environment to generate core dumps. First, remember to set the `--abort-on-uncaught-exception` flag when running a Node.js application; otherwise, Node.js won't generate a core dump when an uncaught exception happens. You'll also need to tell your operating system to generate core dumps when an application crashes. The most common way to do that is by running `ulimit -c unlimited`, but this will only apply to your current shell session. If you're using a process manager such as systemd, I suggest looking at the process manager's docs. You can also generate on-demand core dumps of a running process with tools such as gcore.
+
+**Linux.com: What can we expect from llnode in the future?**
+
+**Marchini:** llnode collaborators are working on several features and improvements to make the project more accessible for developers less familiar with native debugging tools. To accomplish that, we're improving the overall user experience as well as the project's documentation and installation process. Future versions will include colorized output, more reliable output for some commands and a simplified mode focused on JavaScript information. We are also working on a JavaScript API which can be used to automate some analysis, create graphical user interfaces, etc.
+
+If this project sounds interesting to you, and you would like to get involved, feel free to join the conversation in [our issues tracker][9] or contact me on social media at [@mmarkini][10]. I would love to help you get started!
+
+Learn more at [Node+JS Interactive][1], coming up October 10-12, 2018 in Vancouver, Canada.
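Pulling those setup steps together, the basic post-mortem workflow looks roughly like this. This is a sketch, not a definitive session: the application name and core file path are placeholders, and the exact `v8` subcommands available may vary between llnode versions.

```
# Allow core dumps in the current shell session
$ ulimit -c unlimited

# Abort (and dump core) on an uncaught exception instead of just exiting
$ node --abort-on-uncaught-exception app.js

# After a crash, load the core dump into lldb via the llnode wrapper
$ llnode node -c /path/to/core

# JavaScript-aware commands inside the session
(llnode) v8 bt             # backtrace, including JavaScript frames
(llnode) v8 findjsobjects  # heap objects grouped by type
(llnode) v8 inspect <addr> # properties and values of a given object
```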
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/9/troubleshooting-nodejs-issues-llnode + +作者:[The Linux Foundation][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/ericstephenbrown +[1]: https://events.linuxfoundation.org/events/node-js-interactive-2018/?utm_source=Linux.com&utm_medium=article&utm_campaign=jsint18 +[2]: http://sched.co/G285 +[3]: https://nodejs.org/api/inspector.html#inspector_cpu_profiler +[4]: https://github.com/v8/sampling-heap-profiler +[5]: http://www.brendangregg.com/blog/2014-09-17/node-flame-graphs-on-linux.html +[6]: https://github.com/nodejs/node-report +[7]: https://github.com/nodejs/llnode +[8]: https://lldb.llvm.org/ +[9]: https://github.com/nodejs/llnode/issues +[10]: https://twitter.com/mmarkini From c9c96f794bed6ffa9267a917872074515037e714 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 9 Oct 2018 10:24:17 +0800 Subject: [PATCH 299/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Creator=20of=20th?= =?UTF-8?q?e=20World=20Wide=20Web=20is=20Creating=20a=20New=20Decentralize?= =?UTF-8?q?d=20Web?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Web is Creating a New Decentralized Web.md | 44 +++++++++++++++++++ 1 file changed, 44 insertions(+) create mode 100644 sources/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md diff --git a/sources/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md b/sources/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md new file mode 100644 index 0000000000..df8804cbac --- /dev/null +++ b/sources/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md @@ -0,0 +1,44 @@ 
+Creator of the World Wide Web is Creating a New Decentralized Web
+======
+**Creator of the world wide web, Tim Berners-Lee has unveiled his plans to create a new decentralized web where the data will be controlled by the users.**
+
+[Tim Berners-Lee][1] is known for creating the world wide web, i.e., the internet you know today. More than two decades later, Tim is working to free the internet from the clutches of corporate giants and give the power back to the people via a decentralized web.
+
+Berners-Lee was unhappy with the way ‘powerful forces’ of the internet handle users’ data for their own agendas. So he [started working on his own open source project][2] Solid “to restore the power and agency of individuals on the web.”
+
+> Solid changes the current model where users have to hand over personal data to digital giants in exchange for perceived value. As we’ve all discovered, this hasn’t been in our best interests. Solid is how we evolve the web in order to restore balance — by giving every one of us complete control over data, personal or not, in a revolutionary way.
+
+![Tim Berners-Lee is creating a decentralized web with open source project Solid][3]
+
+Basically, [Solid][4] is a platform built on the existing web where you create your own ‘pods’ (personal data stores). You decide where this pod will be hosted, who will access which data elements and how the data will be shared through this pod.
+
+Berners-Lee believes that Solid “will empower individuals, developers and businesses with entirely new ways to conceive, build and find innovative, trusted and beneficial applications and services.”
+
+Developers need to integrate Solid into their apps and sites.
Solid is still in the early stages so there are no apps for now but the project website claims that “the first wave of Solid apps are being created now.” + +Berners-Lee has created a startup called [Inrupt][5] and has taken a sabbatical from MIT to work full-time on Solid and to take it “from the vision of a few to the reality of many.” + +If you are interested in Solid, [learn how to create apps][6] or [contribute to the project][7] in your own way. Of course, it will take a lot of effort to build and drive the broad adoption of Solid so every bit of contribution will count to the success of a decentralized web. + +Do you think a [decentralized web][8] will be a reality? What do you think of decentralized web in general and project Solid in particular? + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/solid-decentralized-web/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://en.wikipedia.org/wiki/Tim_Berners-Lee +[2]: https://medium.com/@timberners_lee/one-small-step-for-the-web-87f92217d085 +[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/tim-berners-lee-solid-project.jpeg +[4]: https://solid.inrupt.com/ +[5]: https://www.inrupt.com/ +[6]: https://solid.inrupt.com/docs/getting-started +[7]: https://solid.inrupt.com/community +[8]: https://tech.co/decentralized-internet-guide-2018-02 From 1754895a48b425cfaeb1f9eeafd68878655c621a Mon Sep 17 00:00:00 2001 From: qhwdw Date: Tue, 9 Oct 2018 10:52:04 +0800 Subject: [PATCH 300/736] Translating by qhwdw --- ...180413 The df Command Tutorial With Examples For Beginners.md | 1 + ...lete Sed Command Guide [Explained with Practical Examples].md | 1 + ...tall Oracle VirtualBox On Ubuntu 
18.04 LTS Headless Server.md | 1 + ...adless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md | 1 + ...1 A checklist for submitting your first Linux kernel patch.md | 1 + sources/tech/20180824 What Stable Kernel Should I Use.md | 1 + 6 files changed, 6 insertions(+) diff --git a/sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md b/sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md index e72be14659..bb368b3958 100644 --- a/sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md +++ b/sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md @@ -1,3 +1,4 @@ +Translating by qhwdw The df Command Tutorial With Examples For Beginners ====== diff --git a/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md index e548213483..d2c50b6029 100644 --- a/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md +++ b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md @@ -1,3 +1,4 @@ +Translating by qhwdw Complete Sed Command Guide [Explained with Practical Examples] ====== In a previous article, I showed the [basic usage of Sed][1], the stream editor, on a practical use case. Today, be prepared to gain more insight about Sed as we will take an in-depth tour of the sed execution model. This will be also an opportunity to make an exhaustive review of all Sed commands and to dive into their details and subtleties. So, if you are ready, launch a terminal, [download the test files][2] and sit comfortably before your keyboard: we will start our exploration right now! 
diff --git a/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md index dd8c3cdb13..20ce979026 100644 --- a/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md +++ b/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md @@ -1,3 +1,4 @@ +Translating by qhwdw Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server ====== diff --git a/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md index a85a637830..3125a1a4ee 100644 --- a/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md +++ b/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md @@ -1,3 +1,4 @@ +Translating by qhwdw Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS ====== diff --git a/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md b/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md index 1fc4677491..b6974cde0b 100644 --- a/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md +++ b/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md @@ -1,3 +1,4 @@ +Translating by qhwdw A checklist for submitting your first Linux kernel patch ====== diff --git a/sources/tech/20180824 What Stable Kernel Should I Use.md b/sources/tech/20180824 What Stable Kernel Should I Use.md index bfd64a2ec2..52b77498c5 100644 --- a/sources/tech/20180824 What Stable Kernel Should I Use.md +++ b/sources/tech/20180824 What Stable Kernel Should I Use.md @@ -1,3 +1,4 @@ +Translating by qhwdw What Stable Kernel Should I Use? 
====== I get a lot of questions about people asking me about what stable kernel should they be using for their product/device/laptop/server/etc. all the time. Especially given the now-extended length of time that some kernels are being supported by me and others, this isn’t always a very obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use what ever kernel version you want, but here’s what I recommend. From 81843eeee0b4f8a24c4679cf7bd5914d9e768f32 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 9 Oct 2018 12:14:54 +0800 Subject: [PATCH 301/736] PRF:20180907 How to Use the Netplan Network Configuration Tool on Linux.md @LuuMing --- ...lan Network Configuration Tool on Linux.md | 128 +++++++----------- 1 file changed, 47 insertions(+), 81 deletions(-) diff --git a/translated/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/translated/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md index 0027aafb6f..c4691e9651 100644 --- a/translated/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md +++ b/translated/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md @@ -1,21 +1,17 @@ 如何在 Linux 上使用网络配置工具 Netplan ====== +> netplan 是一个命令行工具,用于在某些 Linux 发行版上配置网络。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan.jpg?itok=Gu_ZfNGa) -多年以来 Linux 管理员和用户们使用相同的方式配置他们的网络接口。例如,如果你是 Ubuntu 用户,你能够用桌面 GUI 配置网络连接,也可以在 /etc/network/interfaces 文件里配置。配置相当简单且从未失败。在文件中配置看起来就像这样: +多年以来 Linux 管理员和用户们以相同的方式配置他们的网络接口。例如,如果你是 Ubuntu 用户,你能够用桌面 GUI 配置网络连接,也可以在 `/etc/network/interfaces` 文件里配置。配置相当简单且可以奏效。在文件中配置看起来就像这样: ``` auto enp10s0 - iface enp10s0 inet static - address 192.168.1.162 - netmask 255.255.255.0 - gateway 192.168.1.100 - dns-nameservers 1.0.0.1,1.1.1.1 ``` @@ -25,7 +21,7 @@ dns-nameservers 1.0.0.1,1.1.1.1 sudo systemctl restart networking ``` -或者,如果你使用不带systemd 的发行版,你可以通过老办法来重启网络: 
+或者,如果你使用不带 systemd 的发行版,你可以通过老办法来重启网络: ``` sudo /etc/init.d/networking restart @@ -33,13 +29,13 @@ sudo /etc/init.d/networking restart 你的网络将会重新启动,新的配置将会生效。 -这就是多年以来的做法。但是现在,在某些发行版上(例如 Ubuntu Linux 18.04),网络的配置与控制发生了很大的变化。不需要那个 interfaces 文件和 /etc/init.d/networking 脚本,我们现在转向使用 [Netplan][1]。Netplan 是一个在某些 Linux 发行版上配置网络连接的命令行工具。Netplan 使用 YAML 描述文件来配置网络接口,然后,通过这些描述为任何给定的呈现工具生成必要的配置选项。 +这就是多年以来的做法。但是现在,在某些发行版上(例如 Ubuntu Linux 18.04),网络的配置与控制发生了很大的变化。不需要那个 `interfaces` 文件和 `/etc/init.d/networking` 脚本,我们现在转向使用 [Netplan][1]。Netplan 是一个在某些 Linux 发行版上配置网络连接的命令行工具。Netplan 使用 YAML 描述文件来配置网络接口,然后,通过这些描述为任何给定的呈现工具生成必要的配置选项。 -我将向你展示如何在 Linux 上使用 Netplan 配置静态 IP 地址和 DHCP 地址。我会在 Ubuntu Server 18.04 上演示。有句忠告,你创建的 .yaml 文件中的间距必须保持一致,否则将会失败。你不用为每行使用特定的间距,只需保持一致就行了。 +我将向你展示如何在 Linux 上使用 Netplan 配置静态 IP 地址和 DHCP 地址。我会在 Ubuntu Server 18.04 上演示。有句忠告,你创建的 .yaml 文件中的缩进必须保持一致,否则将会失败。你不用为每行使用特定的缩进间距,只需保持一致就行了。 ### 新的配置文件 -打开终端窗口(或者通过 SSH 登录进 Ubuntu 服务器)。你会在 /etc/netplan 文件夹下发现 Netplan 的新配置文件。使用 cd/etc/netplan 命令进入到那个文件夹下。一旦进到了那个文件夹,也许你就能够看到一个文件: +打开终端窗口(或者通过 SSH 登录进 Ubuntu 服务器)。你会在 `/etc/netplan` 文件夹下发现 Netplan 的新配置文件。使用 `cd /etc/netplan` 命令进入到那个文件夹下。一旦进到了那个文件夹,也许你就能够看到一个文件: ``` 01-netcfg.yaml @@ -55,13 +51,11 @@ sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak ### 网络设备名称 -在你开始配置静态 IP 之前,你需要知道设备名称。要做到这一点,你可以使用命令 ip a,然后找出哪一个设备将会被用到(图 1)。 +在你开始配置静态 IP 之前,你需要知道设备名称。要做到这一点,你可以使用命令 `ip a`,然后找出哪一个设备将会被用到(图 1)。 ![netplan][3] -图 1:使用 ip a 命令找出设备名称 - -[Used with permission][4] (译注:这是什么鬼?) 
+*图 1:使用 ip a 命令找出设备名称* 我将为 ens5 配置一个静态的 IP。 @@ -75,67 +69,46 @@ sudo nano /etc/netplan/01-netcfg.yaml 文件的布局看起来就像这样: +``` network: - -Version: 2 - -Renderer: networkd - -ethernets: - -DEVICE_NAME: - -Dhcp4: yes/no - -Addresses: [IP/NETMASK] - -Gateway: GATEWAY - -Nameservers: - -Addresses: [NAMESERVER, NAMESERVER] + Version: 2 + Renderer: networkd + ethernets: + DEVICE_NAME: + Dhcp4: yes/no + Addresses: [IP/NETMASK] + Gateway: GATEWAY + Nameservers: + Addresses: [NAMESERVER, NAMESERVER] +``` 其中: - * DEVICE_NAME 是需要配置设备的实际名称。 - - * yes/no 代表是否启用 dhcp4。 - - * IP 是设备的 IP 地址。 - - * NETMASK 是 IP 地址的掩码。 - - * GATEWAY 是网关的地址。 - - * NAMESERVER 是由逗号分开的 DNS 服务器列表。 + * `DEVICE_NAME` 是需要配置设备的实际名称。 + * `yes`/`no` 代表是否启用 dhcp4。 + * `IP` 是设备的 IP 地址。 + * `NETMASK` 是 IP 地址的掩码。 + * `GATEWAY` 是网关的地址。 + * `NAMESERVER` 是由逗号分开的 DNS 服务器列表。 这是一份 .yaml 文件的样例: ``` network: - - version: 2 - - renderer: networkd - - ethernets: - - ens5: - - dhcp4: no - - addresses: [192.168.1.230/24] - - gateway4: 192.168.1.254 - - nameservers: - - addresses: [8.8.4.4,8.8.8.8] + version: 2 + renderer: networkd + ethernets: + ens5: + dhcp4: no + addresses: [192.168.1.230/24] + gateway4: 192.168.1.254 + nameservers: + addresses: [8.8.4.4,8.8.8.8] ``` 编辑上面的文件以达到你想要的效果。保存并关闭文件。 -注意,掩码已经不用再配置为 255.255.255.0 这种形式。取而代之的是,掩码已被添加进了 IP 地址中。 +注意,掩码已经不用再配置为 `255.255.255.0` 这种形式。取而代之的是,掩码已被添加进了 IP 地址中。 ### 测试配置 @@ -165,20 +138,13 @@ sudo netplan apply ``` network: - - version: 2 - - renderer: networkd - - ethernets: - - ens5: - - Addresses: [] - - dhcp4: true - - optional: true + version: 2 + renderer: networkd + ethernets: + ens5: + Addresses: [] + dhcp4: true + optional: true ``` 保存并退出。用下面命令来测试文件: @@ -187,15 +153,15 @@ network: sudo netplan try ``` -Netplan 应该会成功配置 DHCP 服务。这时你可以使用 ip a 命令得到动态分配的地址,然后重新配置静态地址。或者,你可以直接使用 DHCP 分配的地址(但看看这是一个服务器,你可能不想这样做)。 +Netplan 应该会成功配置 DHCP 服务。这时你可以使用 `ip a` 命令得到动态分配的地址,然后重新配置静态地址。或者,你可以直接使用 DHCP 分配的地址(但看看这是一个服务器,你可能不想这样做)。 -也许你有不只一个的网络接口,你可以命名第二个 .yaml 文件为 02-netcfg.yaml 。Netplan 
会按照数字顺序应用配置文件,因此 01 会在 02 之前使用。根据你的需要创建多个配置文件。 +也许你有不只一个的网络接口,你可以命名第二个 .yaml 文件为 `02-netcfg.yaml` 。Netplan 会按照数字顺序应用配置文件,因此 01 会在 02 之前使用。根据你的需要创建多个配置文件。 ### 就是这些了 -不管你信不信,那些就是所有关于使用 Netplan 的东西了。虽然它对于我们习惯性的配置网络地址来说是一个相当大的改变,但并不是所有人都用的惯。但这种配置方式值得一提...因此你会适应的。 +不管怎样,那些就是所有关于使用 Netplan 的东西了。虽然它对于我们习惯性的配置网络地址来说是一个相当大的改变,但并不是所有人都用的惯。但这种配置方式值得一提……因此你会适应的。 -在 Linux Foundation 和 edX 上通过 ["Introduction to Linux"] 课程学习更多关于 Linux 的内容。 +在 Linux Foundation 和 edX 上通过 [“Introduction to Linux”][5] 课程学习更多关于 Linux 的内容。 -------------------------------------------------------------------------------- @@ -204,7 +170,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-c 作者:[Jack Wallen][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[LuuMing](https://github.com/LuuMing) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 952ef866611261e8cdc34baba4239e9a735d3cf1 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 9 Oct 2018 12:15:15 +0800 Subject: [PATCH 302/736] PUB:20180907 How to Use the Netplan Network Configuration Tool on Linux.md @LuuMing https://linux.cn/article-10095-1.html --- ... 
How to Use the Netplan Network Configuration Tool on Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180907 How to Use the Netplan Network Configuration Tool on Linux.md (100%) diff --git a/translated/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/published/20180907 How to Use the Netplan Network Configuration Tool on Linux.md similarity index 100% rename from translated/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md rename to published/20180907 How to Use the Netplan Network Configuration Tool on Linux.md From 62ec04ddfccf3543a4fb238e7851e078da0eb2d1 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 9 Oct 2018 13:49:09 +0800 Subject: [PATCH 303/736] Update 20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md --- ...1014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md b/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md index 6d3040626b..7c315546fa 100644 --- a/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md +++ b/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md @@ -617,12 +617,14 @@ undefined via: https://gilmi.me/blog/post/2016/10/14/lisp-to-js 作者:[ Gil Mizrahi ][a] +选题:[oska874][b] 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://gilmi.me/home +[b]:https://github.com/oska874 [1]:https://gilmi.me/blog/authors/Gil [2]:https://gilmi.me/blog/tags/compilers [3]:https://gilmi.me/blog/tags/fp From 02e360e1be88d5e22b2a2493aa1ce35607e9d15c Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 9 Oct 2018 13:50:00 +0800 Subject: [PATCH 304/736] Update 20180715 Why is Python so slow.md --- sources/tech/20180715 Why 
is Python so slow.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180715 Why is Python so slow.md b/sources/tech/20180715 Why is Python so slow.md index 2e2af0f74e..931d32a4b2 100644 --- a/sources/tech/20180715 Why is Python so slow.md +++ b/sources/tech/20180715 Why is Python so slow.md @@ -172,12 +172,14 @@ All about JIT compilers [https://hacks.mozilla.org/2017/02/a-crash-course-in-ju via: https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b 作者:[Anthony Shaw][a] +选题:[oska874][b] 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://hackernoon.com/@anthonypjshaw?source=post_header_lockup +[b]:https://github.com/oska874 [1]:http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html [2]:http://cython.org/ [3]:http://notes-on-cython.readthedocs.io/en/latest/std_dev.html From bf034bce79e4837e0836bd69574bfc5831d4f418 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Tue, 9 Oct 2018 17:07:17 +0800 Subject: [PATCH 305/736] Translated by qhwdw --- ...nd Tutorial With Examples For Beginners.md | 75 ++++++++++--------- 1 file changed, 38 insertions(+), 37 deletions(-) rename {sources => translated}/tech/20180413 The df Command Tutorial With Examples For Beginners.md (52%) diff --git a/sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md b/translated/tech/20180413 The df Command Tutorial With Examples For Beginners.md similarity index 52% rename from sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md rename to translated/tech/20180413 The df Command Tutorial With Examples For Beginners.md index bb368b3958..08f3860661 100644 --- a/sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md +++ b/translated/tech/20180413 The df Command Tutorial With Examples For Beginners.md @@ -1,22 +1,22 @@ -Translating by qhwdw -The df Command Tutorial With Examples For Beginners 
+df 命令的新手教程 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/04/df-command-1-720x340.png) -In this guide, we are going to learn to use **df** command. The df command, stands for **D** isk **F** ree, reports file system disk space usage. It displays the amount of disk space available on the file system in a Linux system. The df command is not to be confused with **du** command. Both serves different purposes. The df command reports **how much disk space we have** (i.e free space) whereas the du command reports **how much disk space is being consumed** by the files and folders. Hope I made myself clear. Let us go ahead and see some practical examples of df command, so you can understand it better. +在本指南中,我们将学习如何使用 **df** 命令。df 命令是 `Disk Free` 的首字母组合,它报告文件系统磁盘空间的使用情况。它显示一个 Linux 系统中文件系统上可用磁盘空间的数量。df 命令很容易与 **du** 命令混淆。它们的用途不同。df 命令报告 **我们拥有多少磁盘空间**(空闲磁盘空间),而 du 命令报告 **被文件和目录占用了多少磁盘空间**。希望我这样的解释你能更清楚。在继续之前,我们来看一些 df 命令的实例,以便于你更好地理解它。 -### The df Command Tutorial With Examples +### df 命令使用举例 -**1\. View entire file system disk space usage** +**1、查看整个文件系统磁盘空间使用情况** -Run df command without any arguments to display the entire file system disk space. +无需任何参数来运行 df 命令,以显示整个文件系统磁盘空间使用情况。 ``` $ df ``` -**Sample output:** +**示例输出:** + ``` Filesystem 1K-blocks Used Available Use% Mounted on dev 4033216 0 4033216 0% /dev @@ -33,20 +33,20 @@ tmpfs 807776 28 807748 1% /run/user/1000 ![][2] -As you can see, the result is divided into six columns. Let us see what each column means. +正如你所见,输出结果分为六列。我们来看一下每一列的含义。 - * **Filesystem** – the filesystem on the system. - * **1K-blocks** – the size of the filesystem, measured in 1K blocks. - * **Used** – the amount of space used in 1K blocks. - * **Available** – the amount of available space in 1K blocks. - * **Use%** – the percentage that the filesystem is in use. - * **Mounted on** – the mount point where the filesystem is mounted. 
+ * **Filesystem** – Linux 系统中的文件系统 + * **1K-blocks** – 文件系统的大小,用 1K 大小的块来表示。 + * **Used** – 以 1K 大小的块所表示的已使用数量。 + * **Available** – 以 1K 大小的块所表示的可用空间的数量。 + * **Use%** – 文件系统中已使用的百分比。 + * **Mounted on** – 已挂载的文件系统的挂载点。 -**2\. Display file system disk usage in human readable format** +**2、以人类友好格式显示文件系统硬盘空间使用情况** -As you may noticed in the above examples, the usage is showed in 1k blocks. If you want to display them in human readable format, use **-h** flag. +在上面的示例中你可能已经注意到了,它使用 1K 大小的块为单位来表示使用情况,如果你以人类友好格式来显示它们,可以使用 **-h** 标志。 ``` $ df -h Filesystem Size Used Avail Use% Mounted on @@ -62,11 +62,11 @@ tmpfs 789M 28K 789M 1% /run/user/1000 ``` -Now look at the **Size** and **Avail** columns, the usage is shown in GB and MB. +现在,在 **Size** 列和 **Avail** 列,使用情况是以 GB 和 MB 为单位来显示的。 **3\. Display disk space usage only in MB** -To view file system disk space usage only in Megabytes, use **-m** flag. +如果仅以 MB 为单位来显示文件系统磁盘空间使用情况,使用 **-m** 标志。 ``` $ df -m Filesystem 1M-blocks Used Available Use% Mounted on @@ -82,9 +82,9 @@ tmpfs 789 1 789 1% /run/user/1000 ``` -**4\. List inode information instead of block usage** +**4、列出节点而不是块的使用情况** -We can list inode information instead of block usage by using **-i** flag as shown below. +如下所示,我们可以通过使用 **-i** 标记来列出节点而不是块的使用情况。 ``` $ df -i Filesystem Inodes IUsed IFree IUse% Mounted on @@ -100,9 +100,9 @@ tmpfs 1009720 29 1009691 1% /run/user/1000 ``` -**5\. Display the file system type** +**5、显示文件系统类型** -To display the file system type, use **-T** flag. +使用 **-T** 标志显示文件系统类型。 ``` $ df -T Filesystem Type 1K-blocks Used Available Use% Mounted on @@ -118,11 +118,11 @@ tmpfs tmpfs 807776 28 807748 1% /run/user/1000 ``` -As you see, there is an extra column (second from left) that shows the file system type. +正如你所见,现在出现了显示文件系统类型的额外的列(从左数的第二列)。 -**6\. Display only the specific file system type** +**6、仅显示指定类型的文件系统** -We can limit the listing to a certain file systems. for example **ext4**. To do so, we use **-t** flag. 
+我们可以限制仅列出某些文件系统。比如,只列出 **ext4** 文件系统。我们使用 **-t** 标志。 ``` $ df -t ext4 Filesystem 1K-blocks Used Available Use% Mounted on @@ -131,11 +131,11 @@ Filesystem 1K-blocks Used Available Use% Mounted on ``` -See? This command shows only the ext4 file system disk space usage. +看到了吗?这个命令仅显示了 ext4 文件系统的磁盘空间使用情况。 -**7\. Exclude specific file system type** +**7、不列出指定类型的文件系统** -Some times, you may want to exclude a specific file system from the result. This can be achieved by using **-x** flag. +有时,我们可能需要从结果中去排除指定类型的文件系统。我们可以使用 **-x** 标记达到我们的目的。 ``` $ df -x ext4 Filesystem 1K-blocks Used Available Use% Mounted on @@ -149,11 +149,11 @@ tmpfs 807776 28 807748 1% /run/user/1000 ``` -The above command will display all file systems usage, except **ext4**. +上面的命令列出了除 **ext4** 类型以外的全部文件系统。 -**8\. Display usage for a folder** +**8、显示一个目录的磁盘使用情况** -To display the disk space available and where it is mounted for a folder, for example **/home/sk/** , use this command: +去显示某个目录的硬盘空间使用情况以及它的挂载点,例如 **/home/sk/** 目录,可以使用如下的命令: ``` $ df -hT /home/sk/ Filesystem Type Size Used Avail Use% Mounted on @@ -161,19 +161,19 @@ Filesystem Type Size Used Avail Use% Mounted on ``` -This command shows the file system type, used and available space in human readable form and where it is mounted. If you don’t to display the file system type, just ignore the **-t** flag. +这个命令显示文件系统类型、以人类友好格式显示已使用和可用磁盘空间、以及它的挂载点。如果你不想去显示文件系统类型,只需要忽略 **-t** 标志即可。 -For more details, refer the man pages. +更详细的使用情况,请参阅 man 手册页。 ``` $ man df ``` -**Recommended read:** +**建议阅读:** -And, that’s all for today! I hope this was useful. More good stuffs to come. Stay tuned! +今天就到此这止!我希望对你有用。还有更多更好玩的东西即将奉上。请继续关注! -Cheers! +再见! @@ -182,7 +182,7 @@ Cheers! 
via: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/ 作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) +译者:[qhwdw](https://github.com/qhwdw) 校对:[校对者ID](https://github.com/校对者ID) 选题:[lujun9972](https://github.com/lujun9972) @@ -191,3 +191,4 @@ via: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginne [a]:https://www.ostechnix.com/author/sk/ [1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 [2]:http://www.ostechnix.com/wp-content/uploads/2018/04/df-command.png + From c757358df6a3a86ab001a04ed3263015772ad332 Mon Sep 17 00:00:00 2001 From: jrg Date: Tue, 9 Oct 2018 22:34:13 +0800 Subject: [PATCH 306/736] Update 20180724 Building a network attached storage device with a Raspberry Pi.md --- ...ing a network attached storage device with a Raspberry Pi.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md b/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md index 3144efd4ee..7c039e8238 100644 --- a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md +++ b/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md @@ -1,3 +1,5 @@ +[翻译中]translating by jrg + Building a network attached storage device with a Raspberry Pi ====== From be8ae00165d0a570a56aada177fcf348f52b862e Mon Sep 17 00:00:00 2001 From: qhwdw Date: Tue, 9 Oct 2018 22:40:27 +0800 Subject: [PATCH 307/736] Translated by qhwdw --- ...ubmitting your first Linux kernel patch.md | 171 ----------------- ...ubmitting your first Linux kernel patch.md | 173 ++++++++++++++++++ 2 files changed, 173 insertions(+), 171 deletions(-) delete mode 100644 sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md create mode 100644 translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md diff --git 
a/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md b/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md deleted file mode 100644 index b6974cde0b..0000000000 --- a/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md +++ /dev/null @@ -1,171 +0,0 @@ -Translating by qhwdw -A checklist for submitting your first Linux kernel patch -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22) - -One of the biggest—and the fastest moving—open source projects, the Linux kernel, is composed of about 53,600 files and nearly 20-million lines of code. With more than 15,600 programmers contributing to the project worldwide, the Linux kernel follows a maintainer model for collaboration. - -![](https://opensource.com/sites/default/files/karnik_figure1.png) - -In this article, I'll provide a quick checklist of steps involved with making your first kernel contribution, and look at what you should know before submitting a patch. For a more in-depth look at the submission process for contributing your first patch, read the [KernelNewbies First Kernel Patch tutorial][1]. - -### Contributing to the kernel - -#### Step 1: Prepare your system. - -Steps in this article assume you have the following tools on your system: - -+ Text editor -+ Email client -+ Version control system (e.g., git) - -#### Step 2: Download the Linux kernel code repository`:` -``` -git clone -b staging-testing - -git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git - -``` - -### Copy your current config: ```` -``` -cp /boot/config-`uname -r`* .config - -``` - -### Step 3: Build/install your kernel. -``` -make -jX - -sudo make modules_install install - -``` - -### Step 4: Make a branch and switch to it. -``` -git checkout -b first-patch - -``` - -### Step 5: Update your kernel to point to the latest code base. 
-``` -git fetch origin - -git rebase origin/staging-testing - -``` - -### Step 6: Make a change to the code base. - -Recompile using `make` command to ensure that your change does not produce errors. - -### Step 7: Commit your changes and create a patch. -``` -git add - -git commit -s -v - -git format-patch -o /tmp/ HEAD^ - -``` - -![](https://opensource.com/sites/default/files/karnik_figure2.png) - -The subject consists of the path to the file name separated by colons, followed by what the patch does in the imperative tense. After a blank line comes the description of the patch and the mandatory signed off tag and, lastly, a diff of your patch. - -Here is another example of a simple patch: - -![](https://opensource.com/sites/default/files/karnik_figure3.png) - -Next, send the patch [using email from the command line][2] (in this case, Mutt): `` -``` -mutt -H /tmp/0001- - -``` - -To know the list of maintainers to whom to send the patch, use the [get_maintainer.pl script][11]. - - -### What to know before submitting your first patch - - * [Greg Kroah-Hartman][3]'s [staging tree][4] is a good place to submit your [first patch][1] as he accepts easy patches from new contributors. When you get familiar with the patch-sending process, you could send subsystem-specific patches with increased complexity. - - * You also could start with correcting coding style issues in the code. To learn more, read the [Linux kernel coding style documentation][5]. - - * The script [checkpatch.pl][6] detects coding style errors for you. For example, run: - ``` - perl scripts/checkpatch.pl -f drivers/staging/android/* | less - - ``` - - * You could complete TODOs left incomplete by developers: - ``` - find drivers/staging -name TODO - ``` - - * [Coccinelle][7] is a helpful tool for pattern matching. - - * Read the [kernel mailing archives][8]. - - * Go through the [linux.git log][9] to see commits by previous authors for inspiration. 
- - * Note: Do not top-post to communicate with the reviewer of your patch! Here's an example: - -**Wrong way:** - -Chris, -_Yes let’s schedule the meeting tomorrow, on the second floor._ -> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: -> Hey John, I had some questions: -> 1\. Do you want to schedule the meeting tomorrow? -> 2\. On which floor in the office? -> 3\. What time is suitable to you? - -(Notice that the last question was unintentionally left unanswered in the reply.) - -**Correct way:** - -Chris, -See my answers below... -> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: -> Hey John, I had some questions: -> 1\. Do you want to schedule the meeting tomorrow? -_Yes tomorrow is fine._ -> 2\. On which floor in the office? -_Let's keep it on the second floor._ -> 3\. What time is suitable to you? -_09:00 am would be alright._ - -(All questions were answered, and this way saves reading time.) - - * The [Eudyptula challenge][10] is a great way to learn kernel basics. - - -To learn more, read the [KernelNewbies First Kernel Patch tutorial][1]. After that, if you still have any questions, ask on the [kernelnewbies mailing list][12] or in the [#kernelnewbies IRC channel][13]. 
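The clone/branch/commit/format-patch cycle in the checklist above can be rehearsed safely in a disposable repository before touching the real staging tree. The sketch below is illustrative only — the file name and commit messages are invented stand-ins, not part of the actual kernel workflow:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "Test Dev"
git config user.email "dev@example.com"

# A stand-in for the kernel tree: one tracked source file on the base branch.
echo 'int counter;' > driver.c
git add driver.c
git commit -q -m "staging: example: add driver stub"

# Step 4: always work on a topic branch, never on the base branch.
git checkout -q -b first-patch

# Steps 6-7: change the code, commit with a Signed-off-by line (-s),
# then export the commit as a mailable patch file.
echo 'int counter = 0;' > driver.c
git commit -q -s -a -m "staging: example: initialize counter explicitly"
git format-patch -o patches/ HEAD^
```

Opening the generated file under `patches/` shows the same layout the article describes: a `Subject:` built from the commit's first line, the body, the Signed-off-by tag, and the diff.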
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/first-linux-kernel-patch - -作者:[Sayli Karnik][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/sayli -[1]:https://kernelnewbies.org/FirstKernelPatch -[2]:https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients -[3]:https://twitter.com/gregkh -[4]:https://www.kernel.org/doc/html/v4.15/process/2.Process.html -[5]:https://www.kernel.org/doc/html/v4.10/process/coding-style.html -[6]:https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl -[7]:http://coccinelle.lip6.fr/ -[8]:linux-kernel@vger.kernel.org -[9]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/ -[10]:http://eudyptula-challenge.org/ -[11]:https://github.com/torvalds/linux/blob/master/scripts/get_maintainer.pl -[12]:https://kernelnewbies.org/MailingList -[13]:https://kernelnewbies.org/IRC diff --git a/translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md b/translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md new file mode 100644 index 0000000000..bf23f20674 --- /dev/null +++ b/translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md @@ -0,0 +1,173 @@ +提交你的第一个 Linux 内核补丁时的一个检查列表 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22) + +Linux 内核是最大的且变动最快的开源项目之一,它由大约 53,600 个文件和近 2,000 万行代码组成。在全世界范围内超过 15,600 位程序员为它贡献代码,Linux 内核项目的维护者使用了如下的协作模型。 + +![](https://opensource.com/sites/default/files/karnik_figure1.png) + +本文中,为了便于在 Linux 内核中提交你的第一个贡献,我将为你提供一个必需的快速检查列表,以告诉你在提交补丁时,应该去查看和了解的内容。对于你贡献的第一个补丁的提交流程方面的更多内容,请阅读 [KernelNewbies 第一个内核补丁教程][1]。 + +### 为内核作贡献 + 
+#### 第 1 步:准备你的系统
+
+本文开始之前,假设你的系统已经具备了如下的工具:
+
++ 文本编辑器
++ Email 客户端
++ 版本控制系统(例如 git)
+
+#### 第 2 步:下载 Linux 内核代码仓库:
+```
+git clone -b staging-testing
+
+git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
+
+```
+
+### 复制你的当前配置:
+```
+cp /boot/config-`uname -r`* .config
+
+```
+
+### 第 3 步:构建/安装你的内核
+```
+make -jX
+
+sudo make modules_install install
+
+```
+
+### 第 4 步:创建一个分支并切换到它
+```
+git checkout -b first-patch
+
+```
+
+### 第 5 步:将你的内核更新到最新的代码
+```
+git fetch origin
+
+git rebase origin/staging-testing
+
+```
+
+### 第 6 步:在最新的代码基础上作出一个变更
+
+使用 `make` 命令重新编译,确保你的变更没有错误。
+
+### 第 7 步:提交你的变更并创建一个补丁
+```
+git add
+
+git commit -s -v
+
+git format-patch -o /tmp/ HEAD^
+
+```
+
+![](https://opensource.com/sites/default/files/karnik_figure2.png)
+
+主题由以冒号分隔的文件路径组成,后面用祈使语气描述这个补丁做了什么。空行之后是补丁的描述和必须的 Signed-off-by 标记,最后是你的补丁的 `diff`。
+
+下面是另外一个简单补丁的示例:
+
+![](https://opensource.com/sites/default/files/karnik_figure3.png)
+
+接下来,[使用 email 从命令行][2](在本例子中使用的是 Mutt)发送这个补丁:
+```
+mutt -H /tmp/0001-
+
+```
+
+使用 [get_maintainer.pl 脚本][11] 来获取你的补丁应该发送给的维护者列表。
+
+
+### 提交你的第一个补丁之前,你应该知道的事情
+
+ * [Greg Kroah-Hartman][3] 的 [staging tree][4] 是提交你的 [第一个补丁][1] 的最好的地方,因为他会接受新贡献者提交的简单补丁。在你熟悉了补丁发送流程以后,你就可以去发送复杂度更高的子系统专用的补丁。
+
+ * 你也可以从纠正代码中的编码风格开始。想学习更多关于这方面的内容,请阅读 [Linux 内核编码风格文档][5]。
+
+ * [checkpatch.pl][6] 脚本可以检测你的编码风格方面的错误。例如,运行如下的命令:
+
+ ```
+ perl scripts/checkpatch.pl -f drivers/staging/android/* | less
+
+ ```
+
+ * 你可以去补全开发者留下的 TODO 注释中未完成的内容:
+ ```
+ find drivers/staging -name TODO
+ ```
+
+ * [Coccinelle][7] 是一个模式匹配的有用工具。
+
+ * 阅读 [归档的内核邮件][8]。
+
+ * 为找到灵感,你可以去浏览 [linux.git log][9],查看以前作者的提交内容。
+
+ * 注意:与你的补丁的审阅者沟通时,不要使用置顶回复(top-posting)!下面是一个示例:
+
+**错误的方式:**
+
+Chris,
+_Yes let’s schedule the meeting tomorrow, on the second floor._
+> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
+> Hey John, I had some questions:
+> 1\. Do you want to schedule the meeting tomorrow?
+> 2\. On which floor in the office?
+> 3\. What time is suitable to you? 
+ +(注意那最后一个问题,在回复中无意中落下了。) + +**正确的方式:** + +Chris, +See my answers below... + +> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: +> Hey John, I had some questions: +> 1\. Do you want to schedule the meeting tomorrow? +_Yes tomorrow is fine._ +> 2\. On which floor in the office? +_Let's keep it on the second floor._ +> 3\. What time is suitable to you? +_09:00 am would be alright._ + +(所有问题全部回复,并且这种方式还保存了阅读的时间。) + + * [Eudyptula challenge][10] 是学习内核基础知识的非常好的方式。 + + +想学习更多内容,阅读 [KernelNewbies 第一个内核补丁教程][1]。之后如果你还有任何问题,可以在 [kernelnewbies 邮件列表][12] 或者 [#kernelnewbies IRC channel][13] 中提问。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/8/first-linux-kernel-patch + +作者:[Sayli Karnik][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/sayli +[1]:https://kernelnewbies.org/FirstKernelPatch +[2]:https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients +[3]:https://twitter.com/gregkh +[4]:https://www.kernel.org/doc/html/v4.15/process/2.Process.html +[5]:https://www.kernel.org/doc/html/v4.10/process/coding-style.html +[6]:https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl +[7]:http://coccinelle.lip6.fr/ +[8]:linux-kernel@vger.kernel.org +[9]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/ +[10]:http://eudyptula-challenge.org/ +[11]:https://github.com/torvalds/linux/blob/master/scripts/get_maintainer.pl +[12]:https://kernelnewbies.org/MailingList +[13]:https://kernelnewbies.org/IRC From 6539e94d0579394cd6e04d89da821d677fc71baf Mon Sep 17 00:00:00 2001 From: jrg Date: Wed, 10 Oct 2018 00:03:07 +0800 Subject: [PATCH 308/736] Create 20180724 Building a network attached storage device with a Raspberry Pi.md --- ...ched storage device with a 
Raspberry Pi.md | 298 ++++++++++++++++++ 1 file changed, 298 insertions(+) create mode 100644 translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md diff --git a/translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md b/translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md new file mode 100644 index 0000000000..21c0c20cd5 --- /dev/null +++ b/translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md @@ -0,0 +1,298 @@ +树莓派自建 NAS 云盘之-树莓派搭建网络存储盘 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl) + +我将在接下来的三篇文章中讲述如何搭建一个简便、实用的 NAS 云盘系统。我在这个中心化的存储系统中存储数据,并且让它每晚都会自动的备份增量数据。本系列文章将利用 NFS 文件系统将磁盘挂载到同一网络下的不同设备上,使用 [Nextcloud][1] 来离线访问数据、分享数据。 + +本文主要讲述将数据盘挂载到远程设备上的软硬件步骤。本系列第二篇文章将讨论数据备份策略、如何添加定时备份数据任务。最后一篇文章中我们将会安装 Nextcloud 软件,用户通过Nextcloud 提供的 web 接口可以方便的离线或在线访问数据。本系列教程最终搭建的 NAS 云盘支持多用户操作、文件共享等功能,所以你可以通过它方便的分享数据,比如说你可以发送一个加密链接,跟朋友分享你的照片等等。 + +最终的系统架构如下图所示: + + +![](https://opensource.com/sites/default/files/uploads/nas_part1.png) + +### 硬件 + +首先需要准备硬件。本文所列方案只是其中一种示例,你也可以按不同的硬件方案进行采购。 + +最主要的就是[树莓派3][2],它带有四核 CPU,1G RAM,以及(有些)快速的网络接口。数据将存储在两个 USB 磁盘驱动器上(这里使用 1TB 磁盘);其中一个磁盘用于每天数据存储,另一个用于数据备份。请务必使用有源 USB 磁盘驱动器或者带附加电源的 USB 集线器,因为树莓派无法为两个 USB 磁盘驱动器供电。 + +### 软件 + +社区中最活跃的操作系统当属 [Raspbian][3],便于定制个性化项目。已经有很多 [操作指南][4] 讲述如何在树莓派中安装 Raspbian 系统,所以这里不再赘述。在撰写本文时,最新的官方支持版本是 [Raspbian Stretch][5],它对我来说很好使用。 + +到此,我将假设你已经配置好了基本的 Raspbian 系统并且可以通过 `ssh` 访问到你的树莓派。 + +### 准备 USB 磁盘驱动器 + +为了更好地读写数据,我建议使用 ext4 文件系统去格式化磁盘。首先,你必须先找到连接到树莓派的磁盘。你可以在 `/dev/sd/` 中找到磁盘设备。使用命令 `fdisk -l`,你可以找到刚刚连接的两块 USB 磁盘驱动器。请注意,操作下面的步骤将会清除 USB 磁盘驱动器上的所有数据,请做好备份。 + +``` +pi@raspberrypi:~ $ sudo fdisk -l + + + +<...> + + + +Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors + +Units: sectors of 1 * 512 = 512 bytes + +Sector size (logical/physical): 512 bytes / 512 bytes + +I/O size 
(minimum/optimal): 512 bytes / 512 bytes + +Disklabel type: dos + +Disk identifier: 0xe8900690 + + + +Device     Boot Start        End    Sectors   Size Id Type + +/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux + + + + + +Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors + +Units: sectors of 1 * 512 = 512 bytes + +Sector size (logical/physical): 512 bytes / 512 bytes + +I/O size (minimum/optimal): 512 bytes / 512 bytes + +Disklabel type: dos + +Disk identifier: 0x6aa4f598 + + + +Device     Boot Start        End    Sectors   Size Id Type + +/dev/sdb1  *     2048 1953521663 1953519616 931.5G  83 Linux + +``` + +由于这些设备是连接到树莓派的唯一的 1TB 的磁盘,所以我们可以很容易的辨别出 `/dev/sda` 和 `/dev/sdb` 就是那两个 USB 磁盘驱动器。每个磁盘末尾的分区表提示了在执行以下的步骤后如何查看,这些步骤将会格式化磁盘并创建分区表。为每个 USB 磁盘驱动器按以下步骤进行操作(假设你的磁盘也是 `/dev/sda` 和 `/dev/sdb`,第二次操作你只要替换命令中的 `sda` 为 `sdb` 即可)。 + +首先,删除磁盘分区表,创建一个新的并且只包含一个分区的新分区表。在 `fdisk` 中,你可以使用交互单字母命令来告诉程序你想要执行的操作。只需要在提示符 `Command(m for help):` 后输入相应的字母即可(可以使用 `m` 命令获得更多详细信息): + +``` +pi@raspberrypi:~ $ sudo fdisk /dev/sda + + + +Welcome to fdisk (util-linux 2.29.2). + +Changes will remain in memory only, until you decide to write them. + +Be careful before using the write command. + + + + + +Command (m for help): o + +Created a new DOS disklabel with disk identifier 0x9c310964. + + + +Command (m for help): n + +Partition type + +   p   primary (0 primary, 0 extended, 4 free) + +   e   extended (container for logical partitions) + +Select (default p): p + +Partition number (1-4, default 1): + +First sector (2048-1953525167, default 2048): + +Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167): + + + +Created a new partition 1 of type 'Linux' and of size 931.5 GiB. 
+ + + +Command (m for help): p + + + +Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors + +Units: sectors of 1 * 512 = 512 bytes + +Sector size (logical/physical): 512 bytes / 512 bytes + +I/O size (minimum/optimal): 512 bytes / 512 bytes + +Disklabel type: dos + +Disk identifier: 0x9c310964 + + + +Device     Boot Start        End    Sectors   Size Id Type + +/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux + + + +Command (m for help): w + +The partition table has been altered. + +Syncing disks. + +``` + +现在,我们将用 ext4 文件系统格式化新创建的分区 `/dev/sda1`: + +``` +pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1 + +mke2fs 1.43.4 (31-Jan-2017) + +Discarding device blocks: done + + + +<...> + + + +Allocating group tables: done + +Writing inode tables: done + +Creating journal (1024 blocks): done + +Writing superblocks and filesystem accounting information: done + +``` + +重复以上步骤后,让我们根据用途来对它们建立标签: + +``` +pi@raspberrypi:~ $ sudo e2label /dev/sda1 data + +pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup + +``` + +现在,让我们安装这些磁盘并存储一些数据。以我运营该系统超过一年的经验来看,当树莓派启动时(例如在断电后),USB 磁盘驱动器并不是总被安装,因此我建议使用 autofs 在需要的时候进行安装。 + +首先,安装 autofs 并创建挂载点: + +``` +pi@raspberrypi:~ $ sudo apt install autofs + +pi@raspberrypi:~ $ sudo mkdir /nas + +``` + +然后添加下面这行来挂载设备 +`/etc/auto.master`: +``` +/nas    /etc/auto.usb + +``` + +如果不存在以下内容,则创建 `/etc/auto.usb`,然后重新启动 autofs 服务: + +``` +data -fstype=ext4,rw :/dev/disk/by-label/data + +backup -fstype=ext4,rw :/dev/disk/by-label/backup + +pi@raspberrypi3:~ $ sudo service autofs restart + +``` + +现在你应该可以分别访问 `/nas/data` 以及 `/nas/backup` 磁盘了。显然,到此还不会令人太兴奋,因为你只是擦除了磁盘中的数据。不过,你可以执行以下命令来确认设备是否已经挂载成功: + +``` +pi@raspberrypi3:~ $ cd /nas/data + +pi@raspberrypi3:/nas/data $ cd /nas/backup + +pi@raspberrypi3:/nas/backup $ mount + +<...> + +/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect) + +<...> + +/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered) + +/dev/sdb1 on /nas/backup type ext4 
(rw,relatime,data=ordered) + +``` + +首先进入对应目录以确保 autofs 能够挂载设备。Autofs 会跟踪文件系统的访问记录,并随时挂载所需要的设备。然后 `mount` 命令会显示这两个 USB 磁盘驱动器已经挂载到我们想要的位置了。 + +设置 autofs 的过程容易出错,如果第一次尝试失败,请不要沮丧。你可以上网搜索有关教程。 + +### 挂载网络存储 + +现在你已经设置了基本的网络存储,我们希望将它安装到远程 Linux 机器上。这里使用 NFS 文件系统,首先在树莓派上安装 NFS 服务器: + +``` +pi@raspberrypi:~ $ sudo apt install nfs-kernel-server + +``` + +然后,需要告诉 NFS 服务器公开 `/nas/data` 目录,这是从树莓派外部可以访问的唯一设备(另一个用于备份)。编辑 `/etc/exports` 添加如下内容以允许所有可以访问 NAS 云盘的设备挂载存储: + +``` +/nas/data *(rw,sync,no_subtree_check) + +``` + +更多有关限制挂载到单个设备的详细信息,请参阅 `man exports`。经过上面的配置,任何人都可以访问数据,只要他们可以访问 NFS 所需的端口:`111`和`2049`。我通过上面的配置,只允许通过路由器防火墙访问到我的家庭网络的 22 和 443 端口。这样,只有在家庭网络中的设备才能访问 NFS 服务器。 + +如果要在 Linux 计算机挂载存储,运行以下命令: + +``` +you@desktop:~ $ sudo mkdir /nas/data + +you@desktop:~ $ sudo mount -t nfs :/nas/data /nas/data + +``` + +同样,我建议使用 autofs 来挂载该网络设备。如果需要其他帮助,请参看 [如何使用 Autofs 来挂载 NFS 共享][6]。 + +现在你可以在远程设备上通过 NFS 系统访问位于你树莓派 NAS 云盘上的数据了。在后面一篇文章中,我将介绍如何使用 `rsync` 自动将数据备份到第二个 USB 磁盘驱动器。你将会学到如何使用 `rsync` 创建增量备份,在进行日常备份的同时还能节省设备空间。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi + +作者:[Manuel Dewald][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[jrg](https://github.com/jrglinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ntlx +[1]: https://nextcloud.com/ +[2]: https://www.raspberrypi.org/products/raspberry-pi-3-model-b/ +[3]: https://www.raspbian.org/ +[4]: https://www.raspberrypi.org/documentation/installation/installing-images/ +[5]: https://www.raspberrypi.org/blog/raspbian-stretch/ +[6]: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares + From ffc851b34b031e32cbb32f272b7a1f27a45b1b8f Mon Sep 17 00:00:00 2001 From: jrg Date: Wed, 10 Oct 2018 00:04:08 +0800 Subject: [PATCH 309/736] Delete 20180724 Building a network attached 
storage device with a Raspberry Pi.md --- ...ched storage device with a Raspberry Pi.md | 286 ------------------ 1 file changed, 286 deletions(-) delete mode 100644 sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md diff --git a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md b/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md deleted file mode 100644 index 7c039e8238..0000000000 --- a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md +++ /dev/null @@ -1,286 +0,0 @@ -[翻译中]translating by jrg - -Building a network attached storage device with a Raspberry Pi -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl) - -In this three-part series, I'll explain how to set up a simple, useful NAS (network attached storage) system. I use this kind of setup to store my files on a central system, creating incremental backups automatically every night. To mount the disk on devices that are located in the same network, NFS is installed. To access files offline and share them with friends, I use [Nextcloud][1]. - -This article will cover the basic setup of software and hardware to mount the data disk on a remote device. In the second article, I will discuss a backup strategy and set up a cron job to create daily backups. In the third and last article, we will install Nextcloud, a tool for easy file access to devices synced offline as well as online using a web interface. It supports multiple users and public file-sharing so you can share pictures with friends, for example, by sending a password-protected link. - -The target architecture of our system looks like this: -![](https://opensource.com/sites/default/files/uploads/nas_part1.png) - -### Hardware - -Let's get started with the hardware you need. 
You might come up with a different shopping list, so consider this one an example. - -The computing power is delivered by a [Raspberry Pi 3][2], which comes with a quad-core CPU, a gigabyte of RAM, and (somewhat) fast ethernet. Data will be stored on two USB hard drives (I use 1-TB disks); one is used for the everyday traffic, the other is used to store backups. Be sure to use either active USB hard drives or a USB hub with an additional power supply, as the Raspberry Pi will not be able to power two USB drives. - -### Software - -The operating system with the highest visibility in the community is [Raspbian][3] , which is excellent for custom projects. There are plenty of [guides][4] that explain how to install Raspbian on a Raspberry Pi, so I won't go into details here. The latest official supported version at the time of this writing is [Raspbian Stretch][5] , which worked fine for me. - -At this point, I will assume you have configured your basic Raspbian and are able to connect to the Raspberry Pi by `ssh`. - -### Prepare the USB drives - -To achieve good performance reading from and writing to the USB hard drives, I recommend formatting them with ext4. To do so, you must first find out which disks are attached to the Raspberry Pi. You can find the disk devices in `/dev/sd/`. Using the command `fdisk -l`, you can find out which two USB drives you just attached. Please note that all data on the USB drives will be lost as soon as you follow these steps. 
-``` -pi@raspberrypi:~ $ sudo fdisk -l - - - -<...> - - - -Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors - -Units: sectors of 1 * 512 = 512 bytes - -Sector size (logical/physical): 512 bytes / 512 bytes - -I/O size (minimum/optimal): 512 bytes / 512 bytes - -Disklabel type: dos - -Disk identifier: 0xe8900690 - - - -Device     Boot Start        End    Sectors   Size Id Type - -/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux - - - - - -Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors - -Units: sectors of 1 * 512 = 512 bytes - -Sector size (logical/physical): 512 bytes / 512 bytes - -I/O size (minimum/optimal): 512 bytes / 512 bytes - -Disklabel type: dos - -Disk identifier: 0x6aa4f598 - - - -Device     Boot Start        End    Sectors   Size Id Type - -/dev/sdb1  *     2048 1953521663 1953519616 931.5G  83 Linux - -``` - -As those devices are the only 1TB disks attached to the Raspberry Pi, we can easily see that `/dev/sda` and `/dev/sdb` are the two USB drives. The partition table at the end of each disk shows how it should look after the following steps, which create the partition table and format the disks. To do this, repeat the following steps for each of the two devices by replacing `sda` with `sdb` the second time (assuming your devices are also listed as `/dev/sda` and `/dev/sdb` in `fdisk`). - -First, delete the partition table of the disk and create a new one containing only one partition. In `fdisk`, you can use interactive one-letter commands to tell the program what to do. Simply insert them after the prompt `Command (m for help):` as follows (you can also use the `m` command anytime to get more information): -``` -pi@raspberrypi:~ $ sudo fdisk /dev/sda - - - -Welcome to fdisk (util-linux 2.29.2). - -Changes will remain in memory only, until you decide to write them. - -Be careful before using the write command. 
- - - - - -Command (m for help): o - -Created a new DOS disklabel with disk identifier 0x9c310964. - - - -Command (m for help): n - -Partition type - -   p   primary (0 primary, 0 extended, 4 free) - -   e   extended (container for logical partitions) - -Select (default p): p - -Partition number (1-4, default 1): - -First sector (2048-1953525167, default 2048): - -Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167): - - - -Created a new partition 1 of type 'Linux' and of size 931.5 GiB. - - - -Command (m for help): p - - - -Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors - -Units: sectors of 1 * 512 = 512 bytes - -Sector size (logical/physical): 512 bytes / 512 bytes - -I/O size (minimum/optimal): 512 bytes / 512 bytes - -Disklabel type: dos - -Disk identifier: 0x9c310964 - - - -Device     Boot Start        End    Sectors   Size Id Type - -/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux - - - -Command (m for help): w - -The partition table has been altered. - -Syncing disks. - -``` - -Now we will format the newly created partition `/dev/sda1` using the ext4 filesystem: -``` -pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1 - -mke2fs 1.43.4 (31-Jan-2017) - -Discarding device blocks: done - - - -<...> - - - -Allocating group tables: done - -Writing inode tables: done - -Creating journal (1024 blocks): done - -Writing superblocks and filesystem accounting information: done - -``` - -After repeating the above steps, let's label the new partitions according to their usage in your system: -``` -pi@raspberrypi:~ $ sudo e2label /dev/sda1 data - -pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup - -``` - -Now let's get those disks mounted to store some data. My experience, based on running this setup for over a year now, is that USB drives are not always available to get mounted when the Raspberry Pi boots up (for example, after a power outage), so I recommend using autofs to mount them when needed. 
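Before wiring the labelled disks into autofs, the labelling step itself can be tried without risking a real disk: the ext4 tools also operate on plain files, so a scratch image is enough to see what `e2label` does before pointing it at `/dev/sda1`. A sketch (it assumes `e2fsprogs` is installed, as on any stock Ubuntu or Raspbian system):

```shell
set -e
img=$(mktemp)
truncate -s 16M "$img"

# -F lets mkfs operate on a regular file instead of a block device,
# so nothing here touches an actual disk.
mkfs.ext4 -q -F "$img"

e2label "$img" data        # same call the article runs against /dev/sda1
echo "label: $(e2label "$img")"
```

With no second argument, `e2label` prints the current label — on a real disk that label is what resolves the `/dev/disk/by-label/data` path used in `/etc/auto.usb`.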
- -First install autofs and create the mount point for the storage: -``` -pi@raspberrypi:~ $ sudo apt install autofs - -pi@raspberrypi:~ $ sudo mkdir /nas - -``` - -Then mount the devices by adding the following line to `/etc/auto.master`: -``` -/nas    /etc/auto.usb - -``` - -Create the file `/etc/auto.usb` if not existing with the following content, and restart the autofs service: -``` -data -fstype=ext4,rw :/dev/disk/by-label/data - -backup -fstype=ext4,rw :/dev/disk/by-label/backup - -pi@raspberrypi3:~ $ sudo service autofs restart - -``` - -Now you should be able to access the disks at `/nas/data` and `/nas/backup`, respectively. Clearly, the content will not be too thrilling, as you just erased all the data from the disks. Nevertheless, you should be able to verify the devices are mounted by executing the following commands: -``` -pi@raspberrypi3:~ $ cd /nas/data - -pi@raspberrypi3:/nas/data $ cd /nas/backup - -pi@raspberrypi3:/nas/backup $ mount - -<...> - -/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect) - -<...> - -/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered) - -/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered) - -``` - -First move into the directories to make sure autofs mounts the devices. Autofs tracks access to the filesystems and mounts the needed devices on the go. Then the `mount` command shows that the two devices are actually mounted where we wanted them. - -Setting up autofs is a bit fault-prone, so do not get frustrated if mounting doesn't work on the first try. Give it another chance, search for more detailed resources (there is plenty of documentation online), or leave a comment. - -### Mount network storage - -Now that you have set up the basic network storage, we want it to be mounted on a remote Linux machine. We will use the network file system (NFS) for this. 
First, install the NFS server on the Raspberry Pi: -``` -pi@raspberrypi:~ $ sudo apt install nfs-kernel-server - -``` - -Next we need to tell the NFS server to expose the `/nas/data` directory, which will be the only device accessible from outside the Raspberry Pi (the other one will be used for backups only). To export the directory, edit the file `/etc/exports` and add the following line to allow all devices with access to the NAS to mount your storage: -``` -/nas/data *(rw,sync,no_subtree_check) - -``` - -For more information about restricting the mount to single devices and so on, refer to `man exports`. In the configuration above, anyone will be able to mount your data as long as they have access to the ports needed by NFS: `111` and `2049`. I use the configuration above and allow access to my home network only for ports 22 and 443 using the routers firewall. That way, only devices in the home network can reach the NFS server. - -To mount the storage on a Linux computer, run the commands: -``` -you@desktop:~ $ sudo mkdir /nas/data - -you@desktop:~ $ sudo mount -t nfs :/nas/data /nas/data - -``` - -Again, I recommend using autofs to mount this network device. For extra help, check out [How to use autofs to mount NFS shares][6]. - -Now you are able to access files stored on your own RaspberryPi-powered NAS from remote devices using the NFS mount. In the next part of this series, I will cover how to automatically back up your data to the second hard drive using `rsync`. To save space on the device while still doing daily backups, you will learn how to create incremental backups with `rsync`. 
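The space-saving idea behind those incremental backups can be previewed with nothing but coreutils: hard-link a new snapshot against the previous one, which is what `rsync --link-dest` automates. A toy sketch in throwaway directories (the dates and file names are invented for illustration):

```shell
set -e
work=$(mktemp -d)
data="$work/data"; backup="$work/backup"
mkdir -p "$data" "$backup"
echo "photo-1" > "$data/pic1.jpg"

# Day 1: a full copy of the data disk.
cp -a "$data" "$backup/2018-07-23"

# Day 2: hard-link yesterday's snapshot, then copy in only what changed.
# Unchanged files share their disk blocks between the two snapshots.
cp -al "$backup/2018-07-23" "$backup/2018-07-24"
echo "photo-2" > "$data/pic2.jpg"
cp -a "$data/pic2.jpg" "$backup/2018-07-24/"

stat -c '%h' "$backup/2018-07-24/pic1.jpg"   # prints 2: one inode, two snapshots
```

Each dated directory looks like a complete backup, but a file that never changes is stored only once — the same property the `rsync`-based nightly job in the next article relies on.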
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi - -作者:[Manuel Dewald][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/ntlx -[1]:https://nextcloud.com/ -[2]:https://www.raspberrypi.org/products/raspberry-pi-3-model-b/ -[3]:https://www.raspbian.org/ -[4]:https://www.raspberrypi.org/documentation/installation/installing-images/ -[5]:https://www.raspberrypi.org/blog/raspbian-stretch/ -[6]:https://opensource.com/article/18/6/using-autofs-mount-nfs-shares From 31720c7daadf80866064802a87244fb8ae3e92e3 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 10 Oct 2018 08:55:58 +0800 Subject: [PATCH 310/736] translated --- ...0180928 10 handy Bash aliases for Linux.md | 118 ------------------ ...0180928 10 handy Bash aliases for Linux.md | 115 +++++++++++++++++ 2 files changed, 115 insertions(+), 118 deletions(-) delete mode 100644 sources/tech/20180928 10 handy Bash aliases for Linux.md create mode 100644 translated/tech/20180928 10 handy Bash aliases for Linux.md diff --git a/sources/tech/20180928 10 handy Bash aliases for Linux.md b/sources/tech/20180928 10 handy Bash aliases for Linux.md deleted file mode 100644 index 7ae1070997..0000000000 --- a/sources/tech/20180928 10 handy Bash aliases for Linux.md +++ /dev/null @@ -1,118 +0,0 @@ -translating---geekpi - -10 handy Bash aliases for Linux -====== -Get more efficient by using condensed versions of long Bash commands. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U) - -How many times have you repeatedly typed out a long command on the command line and wished there was a way to save it for later? 
This is where Bash aliases come in handy. They allow you to condense long, cryptic commands down to something easy to remember and use. Need some examples to get you started? No problem! - -To use a Bash alias you've created, you need to add it to your .bash_profile file, which is located in your home folder. Note that this file is hidden and accessible only from the command line. The easiest way to work with this file is to use something like Vi or Nano. - -### 10 handy Bash aliases - - 1. How many times have you needed to unpack a .tar file and couldn't remember the exact arguments needed? Aliases to the rescue! Just add the following to your .bash_profile file and then use **untar FileName** to unpack any .tar file. - - - -``` -alias untar='tar -zxvf ' - -``` - - 2. Want to download something but be able to resume if something goes wrong? - - - -``` -alias wget='wget -c ' - -``` - - 3. Need to generate a random, 20-character password for a new online account? No problem. - - - -``` -alias getpass="openssl rand -base64 20" - -``` - - 4. Downloaded a file and need to test the checksum? We've got that covered too. - - - -``` -alias sha='shasum -a 256 ' - -``` - - 5. A normal ping will go on forever. We don't want that. Instead, let's limit that to just five pings. - - - -``` -alias ping='ping -c 5' - -``` - - 6. Start a web server in any folder you'd like. - - - -``` -alias www='python -m SimpleHTTPServer 8000' - -``` - - 7. Want to know how fast your network is? Just download Speedtest-cli and use this alias. You can choose a server closer to your location by using the **speedtest-cli --list** command. - - - -``` -alias speed='speedtest-cli --server 2406 --simple' - -``` - - 8. How many times have you needed to know your external IP address and had no idea how to get that info? Yeah, me too. - - - -``` -alias ipe='curl ipinfo.io/ip' - -``` - - 9. Need to know your local IP address? - - - -``` -alias ipi='ipconfig getifaddr en0' - -``` - - 10. 
Finally, let's clear the screen. - - - -``` -alias c='clear' - -``` - -As you can see, Bash aliases are a super-easy way to simplify your life on the command line. Want more info? I recommend a quick Google search for "Bash aliases" or a trip to GitHub. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/handy-bash-aliases - -作者:[Patrick H.Mullins][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/pmullins diff --git a/translated/tech/20180928 10 handy Bash aliases for Linux.md b/translated/tech/20180928 10 handy Bash aliases for Linux.md new file mode 100644 index 0000000000..8706e56e8a --- /dev/null +++ b/translated/tech/20180928 10 handy Bash aliases for Linux.md @@ -0,0 +1,115 @@ +10 个 Linux 中方便的 Bash 别名 +====== +对 Bash 长命令使用压缩的版本来更有效率。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U) + +你有多少次在命令行上输入一个长命令,并希望有一种方法可以保存它以供日后使用?这就是 Bash 别名派上用场的地方。它们允许你将长而神秘的命令压缩为易于记忆和使用的东西。需要一些例子来帮助你入门吗?没问题! + +要使用你创建的 Bash 别名,你需要将其添加到 .bash_profile 中,该文件位于你的主文件夹中。请注意,此文件是隐藏的,并只能从命令行访问。编辑此文件的最简单方法是使用 Vi 或 Nano 之类的东西。 + +### 10 个方便的 Bash 别名 + + 1. 你有几次遇到需要解压 .tar 文件但无法记住所需的确切参数?别名可以帮助你!只需将以下内容添加到 .bash_profile 中,然后使用 **untar FileName** 解压缩任何 .tar 文件。 + + +``` +alias untar='tar -zxvf ' + +``` + + 2. 想要下载的东西,但如果出现问题可以恢复吗? + + + +``` +alias wget='wget -c ' + +``` + + 3. 是否需要为新的网络帐户生成随机的 20 个字符的密码?没问题。 + + + +``` +alias getpass="openssl rand -base64 20" + +``` + + 4. 下载文件并需要测试校验和?我们也可做到。 + + + +``` +alias sha='shasum -a 256 ' + +``` + + 5. 普通的 ping 将永远持续下去。我们不希望这样。相反,让我们将其限制在五个 ping。 + + + +``` +alias ping='ping -c 5' + +``` + + 6. 在任何你想要的文件夹中启动 Web 服务器。 + + + +``` +alias www='python -m SimpleHTTPServer 8000' + +``` + + 7. 
想知道你的网络有多快?只需下载 Speedtest-cli 并使用此别名即可。你可以使用 **speedtest-cli --list** 命令选择离你所在位置更近的服务器。 + + + +``` +alias speed='speedtest-cli --server 2406 --simple' + +``` + + 8. 你有多少次需要知道你的外部 IP 地址,但是不知道如何获取?我也是。 + + + +``` +alias ipe='curl ipinfo.io/ip' + +``` + + 9. 需要知道你的本地 IP 地址? + + + +``` +alias ipi='ipconfig getifaddr en0' + +``` + + 10. 最后,让我们清空屏幕。 + + + +``` +alias c='clear' + +``` + +如你所见,Bash 别名是一种在命令行上简化生活的超级简便方法。想了解更多信息?我建议你 Google 搜索“Bash 别名”或在 Github 中看下。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/handy-bash-aliases + +作者:[Patrick H.Mullins][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/pmullins From 21b04f244713b0f99868f833edf3ab9d671ed8bc Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 10 Oct 2018 09:01:33 +0800 Subject: [PATCH 311/736] translating --- sources/tech/20181003 Introducing Swift on Fedora.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181003 Introducing Swift on Fedora.md b/sources/tech/20181003 Introducing Swift on Fedora.md index 6b975be8f6..186117cd7c 100644 --- a/sources/tech/20181003 Introducing Swift on Fedora.md +++ b/sources/tech/20181003 Introducing Swift on Fedora.md @@ -1,3 +1,5 @@ +translating---geekpi + Introducing Swift on Fedora ====== From 38f0b05e8f8adef2125cccc26e1cdc9a563edae1 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Wed, 10 Oct 2018 09:33:28 +0800 Subject: [PATCH 312/736] Translated by qhwdw --- ...Box On Ubuntu 18.04 LTS Headless Server.md | 321 ------------------ ...Box On Ubuntu 18.04 LTS Headless Server.md | 319 +++++++++++++++++ 2 files changed, 319 insertions(+), 321 deletions(-) delete mode 100644 sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md create mode 100644 
translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md diff --git a/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md deleted file mode 100644 index 20ce979026..0000000000 --- a/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md +++ /dev/null @@ -1,321 +0,0 @@ -Translating by qhwdw -Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server -====== - -![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Oracle-VirtualBox-On-Ubuntu-18.04-720x340.png) - -This step by step tutorial walk you through how to install **Oracle VirtualBox** on Ubuntu 18.04 LTS headless server. And, this guide also describes how to manage the VirtualBox headless instances using **phpVirtualBox** , a web-based front-end tool for VirtualBox. The steps described below might also work on Debian, and other Ubuntu derivatives such as Linux Mint. Let us get started. - -### Prerequisites - -Before installing Oracle VirtualBox, we need to do the following prerequisites in our Ubuntu 18.04 LTS server. - -First of all, update the Ubuntu server by running the following commands one by one. -``` -$ sudo apt update - -$ sudo apt upgrade - -$ sudo apt dist-upgrade - -``` - -Next, install the following necessary packages: -``` -$ sudo apt install build-essential dkms unzip wget - -``` - -After installing all updates and necessary prerequisites, restart the Ubuntu server. -``` -$ sudo reboot - -``` - -### Install Oracle VirtualBox on Ubuntu 18.04 LTS server - -Add Oracle VirtualBox official repository. To do so, edit **/etc/apt/sources.list** file: -``` -$ sudo nano /etc/apt/sources.list - -``` - -Add the following lines. - -Here, I will be using Ubuntu 18.04 LTS, so I have added the following repository. 
-```
-deb http://download.virtualbox.org/virtualbox/debian bionic contrib
-
-```
-
-![][2]
-
-Replace the word **‘bionic’** with your Ubuntu distribution’s code name, such as ‘xenial’, ‘vivid’, ‘utopic’, ‘trusty’, ‘raring’, ‘quantal’, ‘precise’, ‘lucid’, ‘jessie’, ‘wheezy’, or ‘squeeze’.
-
-Then, run the following command to add the Oracle public key:
-```
-$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
-
-```
-
-For older VirtualBox versions, add the following key:
-```
-$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
-
-```
-
-Next, update the software sources using the command:
-```
-$ sudo apt update
-
-```
-
-Finally, install the latest Oracle VirtualBox version using the command:
-```
-$ sudo apt install virtualbox-5.2
-
-```
-
-### Adding users to VirtualBox group
-
-We need to create and add our system user to the **vboxusers** group. You can either create a separate user and assign it to the vboxusers group, or use an existing user. I don’t want to create a new user, so I added my existing user to this group. Please note that if you use a separate user for VirtualBox, you must log out and log in as that particular user and do the rest of the steps.
-
-I am going to use my username, **sk**, so I ran the following command to add it to the vboxusers group.
-```
-$ sudo usermod -aG vboxusers sk
-
-```
-
-Now, run the following command to check whether the VirtualBox kernel modules are loaded:
-```
-$ sudo systemctl status vboxdrv
-
-```
-
-![][3]
-
-As you can see in the above screenshot, the vboxdrv module is loaded and running!
-
-For older Ubuntu versions, run:
-```
-$ sudo /etc/init.d/vboxdrv status
-
-```
-
-If the virtualbox module doesn’t start, run the following command to start it.
-```
-$ sudo /etc/init.d/vboxdrv setup
-
-```
-
-Great! We have successfully installed VirtualBox and started the virtualbox module.
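Before moving on, it may help to confirm that the group change actually landed in /etc/group. The helper below is a hypothetical illustration: it checks a group line in /etc/group format against a hardcoded sample line, not the live system.

```shell
# Hypothetical helper: report whether a user appears in the member list of a
# group line in /etc/group format (fields: name:password:gid:members).
in_group() {
  # $1 = user name, $2 = a group line
  members=$(printf '%s' "$2" | cut -d: -f4)
  case ",${members}," in
    *",$1,"*) echo "yes" ;;
    *)        echo "no"  ;;
  esac
}

in_group sk "vboxusers:x:130:sk"       # the user added above  -> yes
in_group alice "vboxusers:x:130:sk"    # a user never added    -> no
```

On a live system the same check could be run against the real entry, e.g. `in_group "$USER" "$(grep '^vboxusers:' /etc/group)"`.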
Now, let us go ahead and install the Oracle VirtualBox extension pack.
-
-### Install VirtualBox Extension pack
-
-The VirtualBox Extension pack provides the following functionalities to the VirtualBox guests.
-
- * The virtual USB 2.0 (EHCI) device
- * VirtualBox Remote Desktop Protocol (VRDP) support
- * Host webcam passthrough
- * Intel PXE boot ROM
- * Experimental support for PCI passthrough on Linux hosts
-
-
-
-Download the latest Extension pack for VirtualBox 5.2.x from [**here**][4].
-```
-$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
-
-```
-
-Install the Extension pack using the command:
-```
-$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
-
-```
-
-Congratulations! We have successfully installed Oracle VirtualBox with the extension pack on the Ubuntu 18.04 LTS server. It is time to deploy virtual machines. Refer to the [**virtualbox official guide**][5] to start creating and managing virtual machines from the command line.
-
-Not everyone is a command-line expert. Some of you might want to create and use virtual machines graphically. No worries! Here is where **phpVirtualBox** comes in handy!
-
-### About phpVirtualBox
-
-**phpVirtualBox** is a free, web-based front-end to Oracle VirtualBox. It is written in PHP. Using phpVirtualBox, we can easily create, delete, manage and administer virtual machines via a web browser from any remote system on the network.
-
-### Install phpVirtualBox in Ubuntu 18.04 LTS
-
-Since it is a web-based tool, we need to install the Apache web server, PHP and some PHP modules.
-
-To do so, run:
-```
-$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml
-
-```
-
-Then, download the phpVirtualBox 5.2.x version from the [**releases page**][6]. Please note that we have installed VirtualBox 5.2, so we must install phpVirtualBox version 5.2 as well.
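The version-matching rule just stated can be scripted. A minimal sketch follows; the version string below is a hardcoded sample rather than one queried from a live system, and the `-0` archive suffix simply follows the release naming used by the download link in this article.

```shell
# Hypothetical example: derive the phpVirtualBox release archive to fetch
# from a VirtualBox version string (major.minor series must match).
vbox_version="5.2.14r123301"   # sample value, not queried live
branch=$(printf '%s\n' "$vbox_version" | sed 's/^\([0-9]*\.[0-9]*\).*/\1/')
echo "https://github.com/phpvirtualbox/phpvirtualbox/archive/${branch}-0.zip"
```

On a real install, the version string could come from `VBoxManage --version` instead of the hardcoded sample.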
-
-To download it, run:
-```
-$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip
-
-```
-
-Extract the downloaded archive with the command:
-```
-$ unzip 5.2-0.zip
-
-```
-
-This command will extract the contents of the 5.2-0.zip file into a folder named “phpvirtualbox-5.2-0”. Now, copy or move the contents of this folder to your Apache web server root folder.
-```
-$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox
-
-```
-
-Assign the proper permissions to the phpvirtualbox folder.
-```
-$ sudo chmod 777 /var/www/html/phpvirtualbox/
-
-```
-
-Next, let us configure phpVirtualBox.
-
-Copy the sample config file as shown below.
-```
-$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php
-
-```
-
-Edit the phpVirtualBox **config.php** file:
-```
-$ sudo nano /var/www/html/phpvirtualbox/config.php
-
-```
-
-Find the following lines and replace the username and password with those of your system user (the same username we used in the “Adding users to VirtualBox group” section).
-
-In my case, my Ubuntu system username is **sk**, and its password is **ubuntu**.
-```
-var $username = 'sk';
-var $password = 'ubuntu';
-
-```
-
-![][7]
-
-Save and close the file.
-
-Next, create a new file called **/etc/default/virtualbox**:
-```
-$ sudo nano /etc/default/virtualbox
-
-```
-
-Add the following line. Replace ‘sk’ with your own username.
-```
-VBOXWEB_USER=sk
-
-```
-
-Finally, reboot your system or simply restart the following services to complete the configuration.
-```
-$ sudo systemctl restart vboxweb-service
-
-$ sudo systemctl restart vboxdrv
-
-$ sudo systemctl restart apache2
-
-```
-
-### Adjust firewall to allow Apache web server
-
-By default, the Apache web server can’t be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow HTTP and HTTPS traffic through UFW by following the steps below.
-
-First, let us view which applications have installed a profile, using the command:
-```
-$ sudo ufw app list
-Available applications:
-Apache
-Apache Full
-Apache Secure
-OpenSSH
-
-```
-
-As you can see, the Apache and OpenSSH applications have installed UFW profiles.
-
-If you look into the **“Apache Full”** profile, you will see that it enables traffic to the ports **80** and **443**:
-```
-$ sudo ufw app info "Apache Full"
-Profile: Apache Full
-Title: Web Server (HTTP,HTTPS)
-Description: Apache v2 is the next generation of the omnipresent Apache web
-server.
-
-Ports:
-80,443/tcp
-
-```
-
-Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile:
-```
-$ sudo ufw allow in "Apache Full"
-Rules updated
-Rules updated (v6)
-
-```
-
-If you want to allow only HTTP (80) traffic, not HTTPS, look at the plain “Apache” profile instead:
-```
-$ sudo ufw app info "Apache"
-
-```
-
-### Access phpVirtualBox Web console
-
-Now, go to any remote system that has a graphical web browser.
-
-In the address bar, type: ****.
-
-In my case, I navigated to this link – ****
-
-You should see the following screen. Enter the phpVirtualBox administrative user credentials.
-
-The default username and password of phpVirtualBox are **admin** / **admin**.
-
-![][8]
-
-Congratulations! You will now be greeted with the phpVirtualBox dashboard.
-
-![][9]
-
-Now, start creating your VMs and manage them from the phpVirtualBox dashboard. As I mentioned earlier, you can access phpVirtualBox from any system in the same network. All you need is a web browser and the username and password of phpVirtualBox.
-
-If you haven’t enabled virtualization support in the BIOS of the host system (not the guest), phpVirtualBox allows you to create 32-bit guests only. To install 64-bit guest systems, you must enable virtualization in your host system’s BIOS. Look for an option named something like “virtualization” or “hypervisor” in your BIOS and make sure it is enabled.
-
-That’s it. Hope this helps.
If you find this guide useful, please share it on your social networks and support us. - -More good stuffs to come. Stay tuned! - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/Add-VirtualBox-repository.png -[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/vboxdrv-service.png -[4]:https://www.virtualbox.org/wiki/Downloads -[5]:http://www.virtualbox.org/manual/ch08.html -[6]:https://github.com/phpvirtualbox/phpvirtualbox/releases -[7]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-config.png -[8]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-1.png -[9]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-2.png diff --git a/translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md new file mode 100644 index 0000000000..3a9f28e2c3 --- /dev/null +++ b/translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md @@ -0,0 +1,319 @@ +在 Ubuntu 18.04 LTS 无头服务器上安装 Oracle VirtualBox +====== + +![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Oracle-VirtualBox-On-Ubuntu-18.04-720x340.png) + +本教程将指导你在 Ubuntu 18.04 LTS 无头服务器上,一步一步地安装 **Oracle VirtualBox**。同时,本教程也将介绍如何使用 **phpVirtualBox** 去管理安装在无头服务器上的 **VirtualBox** 实例。**phpVirtualBox** 是 VirtualBox 的一个基于 Web 的后端工具。这个教程也可以工作在 Debian 和其它 Ubuntu 衍生版本上,如 Linux Mint。现在,我们开始。 + 
+### 前提条件 + +在安装 Oracle VirtualBox 之前,我们的 Ubuntu 18.04 LTS 服务器上需要满足如下的前提条件。 + +首先,逐个运行如下的命令来更新 Ubuntu 服务器。 +``` +$ sudo apt update + +$ sudo apt upgrade + +$ sudo apt dist-upgrade + +``` + +接下来,安装如下的必需的包: +``` +$ sudo apt install build-essential dkms unzip wget + +``` + +安装完成所有的更新和必需的包之后,重启动 Ubuntu 服务器。 +``` +$ sudo reboot + +``` + +### 在 Ubuntu 18.04 LTS 服务器上安装 VirtualBox + +添加 Oracle VirtualBox 官方仓库。为此你需要去编辑 **/etc/apt/sources.list** 文件: +``` +$ sudo nano /etc/apt/sources.list + +``` + +添加下列的行。 + +在这里,我将使用 Ubuntu 18.04 LTS,因此我添加下列的仓库。 +``` +deb http://download.virtualbox.org/virtualbox/debian bionic contrib + +``` + +![][2] + +用你的 Ubuntu 发行版的代码名字替换关键字 **‘bionic’**,比如,**‘xenial’、‘vivid’、‘utopic’、‘trusty’、‘raring’、‘quantal’、‘precise’、‘lucid’、‘jessie’、‘wheezy’、或 ‘squeeze‘**。 + +然后,运行下列的命令去添加 Oracle 公钥: +``` +$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add - + +``` + +对于 VirtualBox 的老版本,添加如下的公钥: +``` +$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add - + +``` + +接下来,使用如下的命令去更新软件源: +``` +$ sudo apt update + +``` + +最后,使用如下的命令去安装最新版本的 Oracle VirtualBox: +``` +$ sudo apt install virtualbox-5.2 + +``` + +### 添加用户到 VirtualBox 组 + +我们需要去创建并添加我们的系统用户到 **vboxusers** 组中。你也可以单独创建用户,然后将它分配到 **vboxusers** 组中,也可以使用已有的用户。我不想去创建新用户,因此,我添加已存在的用户到这个组中。请注意,如果你为 virtualbox 使用一个单独的用户,那么你必须注销当前用户,并使用那个特定的用户去登入,来完成剩余的步骤。 + +我使用的是我的用户名 **sk**,因此,我运行如下的命令将它添加到 **vboxusers** 组中。 +``` +$ sudo usermod -aG vboxusers sk + +``` + +现在,运行如下的命令去检查 virtualbox 内核模块是否已加载。 +``` +$ sudo systemctl status vboxdrv + +``` + +![][3] + +正如你在上面的截屏中所看到的,vboxdrv 模块已加载,并且是已运行的状态! 
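如果想在脚本里自动判断服务状态,可以解析 status 输出中的 Active 一行。下面是一个最小示例(示例中的状态行是假设的样本文本,并非来自实时系统):

```shell
# 假设性示例:从一段已保存的 `systemctl status vboxdrv` 输出行中
# 提取 Active 字段的状态值。status_line 是样本数据,不是实时查询结果。
status_line="   Active: active (exited) since Wed 2018-10-10 09:33:28 CST; 1min ago"
state=$(printf '%s\n' "$status_line" | sed -n 's/.*Active: \([a-z]*\).*/\1/p')
echo "$state"
```

在真实系统上,更直接的做法是运行 `systemctl is-active vboxdrv`,它本身就只输出状态值。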
+ +对于老的 Ubuntu 版本,运行: +``` +$ sudo /etc/init.d/vboxdrv status + +``` + +如果 virtualbox 模块没有启动,运行如下的命令去启动它。 +``` +$ sudo /etc/init.d/vboxdrv setup + +``` + +很好!我们已经成功安装了 VirtualBox 并启动了 virtualbox 模块。现在,我们继续来安装 Oracle VirtualBox 的扩展包。 + +### 安装 VirtualBox 扩展包 + +VirtualBox 扩展包为 VirtualBox 访客系统提供了如下的功能。 + + * 虚拟的 USB 2.0 (EHCI) 驱动 + * VirtualBox 远程桌面协议(VRDP)支持 + * 宿主机网络摄像头直通 + * Intel PXE 引导 ROM + * 对 Linux 宿主机上的 PCI 直通提供支持 + + + +从[**这里**][4]为 VirtualBox 5.2.x 下载最新版的扩展包。 +``` +$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack + +``` + +使用如下的命令去安装扩展包: +``` +$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack + +``` + +恭喜!我们已经成功地在 Ubuntu 18.04 LTS 服务器上安装了 Oracle VirtualBox 的扩展包。现在已经可以去部署虚拟机了。参考 [**virtualbox 官方指南**][5],在命令行中开始创建和管理虚拟机。 + +然而,并不是每个人都擅长使用命令行。有些人可能希望在图形界面中去创建和使用虚拟机。不用担心!下面我们为你带来非常好用的 **phpVirtualBox** 工具! + +### 关于 phpVirtualBox + +**phpVirtualBox** 是一个免费的、基于 web 的 Oracle VirtualBox 后端。它是使用 PHP 开发的。用 phpVirtualBox 我们可以通过 web 浏览器从网络上的任意一个系统上,很轻松地创建、删除、管理、和执行虚拟机。 + +### 在 Ubuntu 18.04 LTS 上安装 phpVirtualBox + +由于它是基于 web 的工具,我们需要安装 Apache web 服务器、PHP 和一些 php 模块。 + +为此,运行如下命令: +``` +$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml + +``` + +然后,从 [**下载页面**][6] 上下载 phpVirtualBox 5.2.x 版。请注意,由于我们已经安装了 VirtualBox 5.2 版,因此,同样的我们必须去安装 phpVirtualBox 的 5.2 版本。 + +运行如下的命令去下载它: +``` +$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip + +``` + +使用如下命令解压下载的安装包: +``` +$ unzip 5.2-0.zip + +``` + +这个命令将解压 5.2.0.zip 文件的内容到一个命名为 “phpvirtualbox-5.2-0” 的文件夹中。现在,复制或移动这个文件夹的内容到你的 apache web 服务器的根文件夹中。 +``` +$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox + +``` + +给 phpvirtualbox 文件夹分配适当的权限。 +``` +$ sudo chmod 777 /var/www/html/phpvirtualbox/ + +``` + +接下来,我们开始配置 phpVirtualBox。 + +像下面这样复制示例配置文件。 +``` +$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php + +``` + +编辑 phpVirtualBox 
的 **config.php** 文件: +``` +$ sudo nano /var/www/html/phpvirtualbox/config.php + +``` + +找到下列行,并且用你的系统用户名和密码去替换它(就是前面的“添加用户到 VirtualBox 组中”节中使用的用户名)。 + +在我的案例中,我的 Ubuntu 系统用户名是 **sk** ,它的密码是 **ubuntu**。 +``` +var $username = 'sk'; +var $password = 'ubuntu'; + +``` + +![][7] + +保存并关闭这个文件。 + +接下来,创建一个名为 **/etc/default/virtualbox** 的新文件: +``` +$ sudo nano /etc/default/virtualbox + +``` + +添加下列行。用你自己的系统用户替换 ‘sk’。 +``` +VBOXWEB_USER=sk + +``` + +最后,重引导你的系统或重启下列服务去完成整个配置工作。 +``` +$ sudo systemctl restart vboxweb-service + +$ sudo systemctl restart vboxdrv + +$ sudo systemctl restart apache2 + +``` + +### 调整防火墙允许连接 Apache web 服务器 + +如果你在 Ubuntu 18.04 LTS 上启用了 UFW,那么在默认情况下,apache web 服务器是不能被任何远程系统访问的。你必须通过下列的步骤让 http 和 https 流量允许通过 UFW。 + +首先,我们使用如下的命令来查看在策略中已经安装了哪些应用: +``` +$ sudo ufw app list +Available applications: +Apache +Apache Full +Apache Secure +OpenSSH + +``` + +正如你所见,Apache 和 OpenSSH 应该已经在 UFW 的策略文件中安装了。 + +如果你在策略中看到的是 **“Apache Full”**,说明它允许流量到达 **80** 和 **443** 端口: +``` +$ sudo ufw app info "Apache Full" +Profile: Apache Full +Title: Web Server (HTTP,HTTPS) +Description: Apache v2 is the next generation of the omnipresent Apache web +server. 
+ +Ports: +80,443/tcp + +``` + +现在,运行如下的命令去启用这个策略中的 HTTP 和 HTTPS 的入站流量: +``` +$ sudo ufw allow in "Apache Full" +Rules updated +Rules updated (v6) + +``` + +如果你希望允许 https 流量,但是仅是 http (80) 的流量,运行如下的命令: +``` +$ sudo ufw app info "Apache" + +``` + +### 访问 phpVirtualBox 的 Web 控制台 + +现在,用任意一台远程系统的 web 浏览器来访问。 + +在地址栏中,输入:****。 + +在我的案例中,我导航到这个链接 – **** + +你将看到如下的屏幕输出。输入 phpVirtualBox 管理员用户凭据。 + +phpVirtualBox 的默认管理员用户名和密码是 **admin** / **admin**。 + +![][8] + +恭喜!你现在已经进入了 phpVirtualBox 管理面板了。 + +![][9] + +现在,你可以从 phpvirtualbox 的管理面板上,开始去创建你的 VM 了。正如我在前面提到的,你可以从同一网络上的任意一台系统上访问 phpVirtualBox 了,而所需要的仅仅是一个 web 浏览器和 phpVirtualBox 的用户名和密码。 + +如果在你的宿主机系统(不是访客机)的 BIOS 中没有启用虚拟化支持,phpVirtualBox 将只允许你去创建 32 位的访客系统。要安装 64 位的访客系统,你必须在你的宿主机的 BIOS 中启用虚拟化支持。在你的宿主机的 BIOS 中你可以找到一些类似于 “virtualization” 或 “hypervisor” 字眼的选项,然后确保它是启用的。 + +本文到此结束了,希望能帮到你。如果你找到了更有用的指南,共享出来吧。 + +还有一大波更好玩的东西即将到来,请继续关注! + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/Add-VirtualBox-repository.png +[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/vboxdrv-service.png +[4]:https://www.virtualbox.org/wiki/Downloads +[5]:http://www.virtualbox.org/manual/ch08.html +[6]:https://github.com/phpvirtualbox/phpvirtualbox/releases +[7]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-config.png +[8]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-1.png +[9]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-2.png From 
1af6fbeb0bed0be363189dfb588a0b95c44bafa5 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Wed, 10 Oct 2018 10:39:16 +0800 Subject: [PATCH 313/736] Translated by qhwdw --- ...on Server Using KVM In Ubuntu 18.04 LTS.md | 333 ----------------- ...on Server Using KVM In Ubuntu 18.04 LTS.md | 346 ++++++++++++++++++ 2 files changed, 346 insertions(+), 333 deletions(-) delete mode 100644 sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md create mode 100644 translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md diff --git a/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md deleted file mode 100644 index 3125a1a4ee..0000000000 --- a/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md +++ /dev/null @@ -1,333 +0,0 @@ -Translating by qhwdw -Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS -====== - -![](https://www.ostechnix.com/wp-content/uploads/2016/11/kvm-720x340.jpg) - -We already have covered [**setting up Oracle VirtualBox on Ubuntu 18.04**][1] headless server. In this tutorial, we will be discussing how to setup headless virtualization server using **KVM** and how to manage the guest machines from a remote client. As you may know already, KVM ( **K** ernel-based **v** irtual **m** achine) is an open source, full virtualization for Linux. Using KVM, we can easily turn any Linux server in to a complete virtualization environment in minutes and deploy different kind of VMs such as GNU/Linux, *BSD, Windows etc. - -### Setup Headless Virtualization Server Using KVM - -I tested this guide on Ubuntu 18.04 LTS server, however this tutorial will work on other Linux distributions such as Debian, CentOS, RHEL and Scientific Linux. 
This method will be perfectly suitable for those who wants to setup a simple virtualization environment in a Linux server that doesn’t have any graphical environment. - -For the purpose of this guide, I will be using two systems. - -**KVM virtualization server:** - - * **Host OS** – Ubuntu 18.04 LTS minimal server (No GUI) - * **IP Address of Host OS** : 192.168.225.22/24 - * **Guest OS** (Which we are going to host on Ubuntu 18.04) : Ubuntu 16.04 LTS server - - - -**Remote desktop client :** - - * **OS** – Arch Linux - - - -### Install KVM - -First, let us check if our system supports hardware virtualization. To do so, run the following command from the Terminal: -``` -$ egrep -c '(vmx|svm)' /proc/cpuinfo - -``` - -If the result is **zero (0)** , the system doesn’t support hardware virtualization or the virtualization is disabled in the Bios. Go to your bios and check for the virtualization option and enable it. - -if the result is **1** or **more** , the system will support hardware virtualization. However, you still need to enable the virtualization option in Bios before running the above commands. - -Alternatively, you can use the following command to verify it. You need to install kvm first as described below, in order to use this command. -``` -$ kvm-ok - -``` - -**Sample output:** -``` -INFO: /dev/kvm exists -KVM acceleration can be used - -``` - -If you got the following error instead, you still can run guest machines in KVM, but the performance will be very poor. -``` -INFO: Your CPU does not support KVM extensions -INFO: For more detailed results, you should run this as root -HINT: sudo /usr/sbin/kvm-ok - -``` - -Also, there are other ways to find out if your CPU supports Virtualization or not. Refer the following guide for more details. - -Next, Install KVM and other required packages to setup a virtualization environment in Linux. 
-
-On Ubuntu and other DEB-based systems, run:
-```
-$ sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker
-
-```
-
-Once KVM is installed, start the libvirtd service (if it is not started already):
-```
-$ sudo systemctl enable libvirtd
-
-$ sudo systemctl start libvirtd
-
-```
-
-### Create Virtual machines
-
-All virtual machine files and other related files will be stored under **/var/lib/libvirt/**. The default path for ISO images is **/var/lib/libvirt/boot/**.
-
-First, let us see if there are any virtual machines. To view the list of available virtual machines, run:
-```
-$ sudo virsh list --all
-
-```
-
-**Sample output:**
-```
-Id Name State
----------------------------------------------------
-
-```
-
-![][3]
-
-As you see above, there are no virtual machines available right now.
-
-Now, let us create one.
-
-For example, let us create an Ubuntu 16.04 virtual machine with 512 MB RAM, 1 CPU core and an 8 GB hard disk.
-```
-$ sudo virt-install --name Ubuntu-16.04 --ram=512 --vcpus=1 --cpu host --hvm --disk path=/var/lib/libvirt/images/ubuntu-16.04-vm1,size=8 --cdrom /var/lib/libvirt/boot/ubuntu-16.04-server-amd64.iso --graphics vnc
-
-```
-
-Please make sure you have the Ubuntu 16.04 ISO image in the path **/var/lib/libvirt/boot/**, or in whatever other path you have given in the above command.
-
-**Sample output:**
-```
-WARNING Graphics requested but DISPLAY is not set. Not running virt-viewer.
-WARNING No console to launch for the guest, defaulting to --wait -1
-
-Starting install...
-Creating domain... | 0 B 00:00:01
-Domain installation still in progress. Waiting for installation to complete.
-Domain has shutdown. Continuing.
-Domain creation completed.
-Restarting guest.
-
-```
-
-![][4]
-
-Let us break down the above command and see what each option does.
-
- * **–name** : This option defines the name of the virtual machine. In our case, the name of the VM is **Ubuntu-16.04**.
- * **–ram=512** : Allocates 512 MB RAM to the VM.
- * **–vcpus=1** : Indicates the number of CPU cores in the VM. - * **–cpu host** : Optimizes the CPU properties for the VM by exposing the host’s CPU’s configuration to the guest. - * **–hvm** : Request the full hardware virtualization. - * **–disk path** : The location to save VM’s hdd and it’s size. In our example, I have allocated 8GB hdd size. - * **–cdrom** : The location of installer ISO image. Please note that you must have the actual ISO image in this location. - * **–graphics vnc** : Allows VNC access to the VM from a remote client. - - - -### Access Virtual machines using VNC client - -Now, go to the remote Desktop system. SSH to the Ubuntu server(Virtualization server) as shown below. - -Here, **sk** is my Ubuntu server’s user name and **192.168.225.22** is its IP address. - -Run the following command to find out the VNC port number. We need this to access the Vm from a remote system. -``` -$ sudo virsh dumpxml Ubuntu-16.04 | grep vnc - -``` - -**Sample output:** -``` - - -``` - -![][5] - -Note down the port number **5900**. Install any VNC client application. For this guide, I will be using TigerVnc. TigerVNC is available in the Arch Linux default repositories. To install it on Arch based systems, run: -``` -$ sudo pacman -S tigervnc - -``` - -Type the following SSH port forwarding command from your remote client system that has VNC client application installed. - -Again, **192.168.225.22** is my Ubuntu server’s (virtualization server) IP address. - -Then, open the VNC client from your Arch Linux (client). - -Type **localhost:5900** in the VNC server field and click **Connect** button. - -![][6] - -Then start installing the Ubuntu VM as the way you do in the physical system. - -![][7] - -![][8] - -Similarly, you can setup as many as virtual machines depending upon server hardware specifications. - -Alternatively, you can use **virt-viewer** utility in order to install operating system in the guest machines. 
virt-viewer is available in the most Linux distribution’s default repositories. After installing virt-viewer, run the following command to establish VNC access to the VM. -``` -$ sudo virt-viewer --connect=qemu+ssh://192.168.225.22/system --name Ubuntu-16.04 - -``` - -### Manage virtual machines - -Managing VMs from the command-line using virsh management user interface is very interesting and fun. The commands are very easy to remember. Let us see some examples. - -To view the list of running VMs, run: -``` -$ sudo virsh list - -``` - -Or, -``` -$ sudo virsh list --all - -``` - -**Sample output:** -``` - Id Name State ----------------------------------------------------- - 2 Ubuntu-16.04 running - -``` - -![][9] - -To start a VM, run: -``` -$ sudo virsh start Ubuntu-16.04 - -``` - -Alternatively, you can use the VM id to start it. - -![][10] - -As you see in the above output, Ubuntu 16.04 virtual machine’s Id is 2. So, in order to start it, just specify its Id like below. -``` -$ sudo virsh start 2 - -``` - -To restart a VM, run: -``` -$ sudo virsh reboot Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 is being rebooted - -``` - -![][11] - -To pause a running VM, run: -``` -$ sudo virsh suspend Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 suspended - -``` - -To resume the suspended VM, run: -``` -$ sudo virsh resume Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 resumed - -``` - -To shutdown a VM, run: -``` -$ sudo virsh shutdown Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 is being shutdown - -``` - -To completely remove a VM, run: -``` -$ sudo virsh undefine Ubuntu-16.04 - -$ sudo virsh destroy Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 destroyed - -``` - -![][12] - -For more options, I recommend you to look into the man pages. -``` -$ man virsh - -``` - -That’s all for now folks. Start playing with your new virtualization environment. 
KVM virtualization is well suited to research & development and testing purposes, but is not limited to them. If you have sufficient hardware, you can use it for large production environments. Have fun and don't forget to leave your valuable comments in the comment section below.
-
-Cheers!
-
-
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ubuntu/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/
-[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_001.png
-[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_008-1.png
-[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_002.png
-[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/VNC-Viewer-Connection-Details_005.png
-[7]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_006.png
-[8]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_007.png
-[9]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-1.png
-[10]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-2.png
-[11]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_011-1.png
-[12]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_012.png
diff --git a/translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md
new file mode 100644
index 
0000000000..c65e756ff4 --- /dev/null +++ b/translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md @@ -0,0 +1,346 @@ +在 Ubuntu 18.04 LTS 上使用 KVM 配置无头虚拟化服务器 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2016/11/kvm-720x340.jpg) + +我们已经讲解了 [在 Ubuntu 18.04 上配置 Oracle VirtualBox][1] 无头服务器。在本教程中,我们将讨论如何使用 **KVM** 去配置无头虚拟化服务器,以及如何从一个远程客户端去管理访客系统。正如你所知道的,KVM(**K** ernel-based **v** irtual **m** achine)是开源的,是对 Linux 的完全虚拟化。使用 KVM,我们可以在几分钟之内,很轻松地将任意 Linux 服务器转换到一个完全的虚拟化环境中,以及部署不同种类的虚拟机,比如 GNU/Linux、*BSD、Windows 等等。 + +### 使用 KVM 配置无头虚拟化服务器 + +我在 Ubuntu 18.04 LTS 服务器上测试了本指南,但是它在其它的 Linux 发行版上也可以使用,比如,Debian、CentOS、RHEL 以及 Scientific Linux。这个方法完全适合哪些希望在没有任何图形环境的 Linux 服务器上,去配置一个简单的虚拟化环境。 + +基于本指南的目的,我将使用两个系统。 + +**KVM 虚拟化服务器:** + + * **宿主机操作系统** – 最小化安装的 Ubuntu 18.04 LTS(没有 GUI) + * **宿主机操作系统的 IP 地址**:192.168.225.22/24 + * **访客操作系统**(它将运行在 Ubuntu 18.04 的宿主机上):Ubuntu 16.04 LTS server + + + +**远程桌面客户端:** + + * **操作系统** – Arch Linux + + + +### 安装 KVM + +首先,我们先检查一下我们的系统是否支持硬件虚拟化。为此,需要在终端中运行如下的命令: +``` +$ egrep -c '(vmx|svm)' /proc/cpuinfo + +``` + +假如结果是 **zero (0)**,说明系统不支持硬件虚拟化,或者在 BIOS 中禁用了虚拟化。进入你的系统 BIOS 并检查虚拟化选项,然后启用它。 + +假如结果是 **1** 或者 **更大的数**,说明系统将支持硬件虚拟化。然而,在你运行上面的命令之前,你需要始终保持 BIOS 中的虚拟化选项是启用的。 + +或者,你也可以使用如下的命令去验证它。但是为了使用这个命令你需要先安装 KVM。 +``` +$ kvm-ok + +``` + +**示例输出:** + +``` +INFO: /dev/kvm exists +KVM acceleration can be used + +``` + +如果输出的是如下这样的错误,你仍然可以在 KVM 中运行访客虚拟机,但是它的性能将非常差。 +``` +INFO: Your CPU does not support KVM extensions +INFO: For more detailed results, you should run this as root +HINT: sudo /usr/sbin/kvm-ok + +``` + +当然,还有其它的方法来检查你的 CPU 是否支持虚拟化。更多信息参考接下来的指南。 + +接下来,安装 KVM 和在 Linux 中配置虚拟化环境所需要的其它包。 + +在 Ubuntu 和其它基于 DEB 的系统上,运行如下命令: +``` +$ sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker + +``` + +KVM 安装完成后,启动 libvertd 服务(如果它没有启动的话): +``` +$ sudo systemctl enable libvirtd + +$ sudo systemctl start libvirtd + +``` + +### 创建虚拟机 + +所有的虚拟机文件和其它的相关文件都保存在 **/var/lib/libvirt/** 
下。ISO 镜像的默认路径是 **/var/lib/libvirt/boot/**。 + +首先,我们先检查一下是否有虚拟机。查看可用的虚拟机列表,运行如下的命令: +``` +$ sudo virsh list --all + +``` + +**示例输出:** + +``` +Id Name State +---------------------------------------------------- + +``` + +![][3] + +正如上面的截屏,现在没有可用的虚拟机。 + +现在,我们来创建一个。 + +例如,我们来创建一个有 512 MB 内存、1 个 CPU 核心、8 GB 硬盘的 Ubuntu 16.04 虚拟机。 +``` +$ sudo virt-install --name Ubuntu-16.04 --ram=512 --vcpus=1 --cpu host --hvm --disk path=/var/lib/libvirt/images/ubuntu-16.04-vm1,size=8 --cdrom /var/lib/libvirt/boot/ubuntu-16.04-server-amd64.iso --graphics vnc + +``` + +请确保在路径 **/var/lib/libvirt/boot/** 中有一个 Ubuntu 16.04 的 ISO 镜像文件,或者在上面命令中给定的其它路径中有相应的镜像文件。 + +**示例输出:** + +``` +WARNING Graphics requested but DISPLAY is not set. Not running virt-viewer. +WARNING No console to launch for the guest, defaulting to --wait -1 + +Starting install... +Creating domain... | 0 B 00:00:01 +Domain installation still in progress. Waiting for installation to complete. +Domain has shutdown. Continuing. +Domain creation completed. +Restarting guest. 
+ +``` + +![][4] + +我们来分别讲解以上的命令和看到的每个选项的作用。 + + * **–name** : 这个选项定义虚拟机名字。在我们的案例中,这个虚拟机的名字是 **Ubuntu-16.04**。 + * **–ram=512** : 给虚拟机分配 512MB 内存。 + * **–vcpus=1** : 指明虚拟机中 CPU 核心的数量。 + * **–cpu host** : 通过暴露宿主机 CPU 的配置给访客系统来优化 CPU 属性。 + * **–hvm** : 要求完整的硬件虚拟化。 + * **–disk path** : 虚拟机硬盘的位置和大小。在我们的示例中,我分配了 8GB 的硬盘。 + * **–cdrom** : 安装 ISO 镜像的位置。请注意你必须在这个位置真的有一个 ISO 镜像。 + * **–graphics vnc** : 允许 VNC 从远程客户端访问虚拟机。 + + + +### 使用 VNC 客户端访问虚拟机 + +现在,我们在远程桌面系统上使用 SSH 登入到 Ubuntu 服务器上(虚拟化服务器),如下所示。 + +在这里,**sk** 是我的 Ubuntu 服务器的用户名,而 **192.168.225.22** 是它的 IP 地址。 + +运行如下的命令找出 VNC 的端口号。我们从一个远程系统上访问虚拟机需要它。 +``` +$ sudo virsh dumpxml Ubuntu-16.04 | grep vnc + +``` + +**示例输出:** + +``` + + +``` + +![][5] + +记下那个端口号 **5900**。安装任意的 VNC 客户端应用程序。在本指南中,我们将使用 TigerVnc。TigerVNC 是 Arch Linux 默认仓库中可用的客户端。在 Arch 上安装它,运行如下命令: +``` +$ sudo pacman -S tigervnc + +``` + +在安装有 VNC 客户端的远程客户端系统上输入如下的 SSH 端口转发命令。 + +``` +$ ssh sk@192.168.225.22 -L 5900:127.0.0.1:5900 +``` + +再强调一次,**192.168.225.22** 是我的 Ubuntu 服务器(虚拟化服务器)的 IP 地址。 + +然后,从你的 Arch Linux(客户端)打开 VNC 客户端。 + +在 VNC 服务器框中输入 **localhost:5900**,然后点击 **Connect** 按钮。 + +![][6] + +然后就像你在物理机上安装系统一样的方法开始安装 Ubuntu 虚拟机。 + +![][7] + +![][8] + +同样的,你可以根据你的服务器的硬件情况配置多个虚拟机。 + +或者,你可以使用 **virt-viewer** 实用程序在访客机器中安装操作系统。virt-viewer 在大多数 Linux 发行版的默认仓库中都可以找到。安装完 virt-viewer 之后,运行下列的命令去建立到虚拟机的访问连接。 +``` +$ sudo virt-viewer --connect=qemu+ssh://192.168.225.22/system --name Ubuntu-16.04 + +``` + +### 管理虚拟机 + +使用管理用户接口 virsh 从命令行去管理虚拟机是非常有趣的。命令非常容易记。我们来看一些例子。 + +查看运行的虚拟机,运行如下命令: +``` +$ sudo virsh list + +``` + +或者, +``` +$ sudo virsh list --all + +``` + +**示例输出:** + +``` + Id Name State +---------------------------------------------------- + 2 Ubuntu-16.04 running + +``` + +![][9] + +启动一个虚拟机,运行如下命令: +``` +$ sudo virsh start Ubuntu-16.04 + +``` + +或者,也可以使用虚拟机 id 去启动它。 + +![][10] + +正如在上面的截图所看到的,Ubuntu 16.04 虚拟机的 Id 是 2。因此,启动它时,你也可以像下面一样只指定它的 ID。 +``` +$ sudo virsh start 2 + +``` + +重启动一个虚拟机,运行如下命令: +``` +$ sudo virsh reboot Ubuntu-16.04 + +``` + +**示例输出:** 
+ +``` +Domain Ubuntu-16.04 is being rebooted + +``` + +![][11] + +暂停一个运行中的虚拟机,运行如下命令: +``` +$ sudo virsh suspend Ubuntu-16.04 + +``` + +**示例输出:** + +``` +Domain Ubuntu-16.04 suspended + +``` + +让一个暂停的虚拟机重新运行,运行如下命令: +``` +$ sudo virsh resume Ubuntu-16.04 + +``` + +**示例输出:** + +``` +Domain Ubuntu-16.04 resumed + +``` + +关闭一个虚拟机,运行如下命令: +``` +$ sudo virsh shutdown Ubuntu-16.04 + +``` + +**示例输出:** + +``` +Domain Ubuntu-16.04 is being shutdown + +``` + +完全移除一个虚拟机,运行如下的命令: +``` +$ sudo virsh undefine Ubuntu-16.04 + +$ sudo virsh destroy Ubuntu-16.04 + +``` + +**示例输出:** + +``` +Domain Ubuntu-16.04 destroyed + +``` + +![][12] + +关于它的更多选项,建议你去查看 man 手册页: +``` +$ man virsh + +``` + +今天就到这里吧。开始在你的新的虚拟化环境中玩吧。对于研究和开发者、以及测试目的,KVM 虚拟化将是很好的选择,但它能做的远不止这些。如果你有充足的硬件资源,你可以将它用于大型的生产环境中。如果你还有其它好玩的发现,不要忘记在下面的评论区留下你的高见。 + +谢谢! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ubuntu/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/ +[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_001.png +[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_008-1.png +[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_002.png +[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/VNC-Viewer-Connection-Details_005.png +[7]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_006.png +[8]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_007.png 
+[9]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-1.png +[10]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-2.png +[11]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_011-1.png +[12]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_012.png From ad04b594a596e77841b1394ba01dc2a690b616fe Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 10 Oct 2018 13:00:47 +0800 Subject: [PATCH 314/736] PRF:20180413 The df Command Tutorial With Examples For Beginners.md @qhwdw --- ...nd Tutorial With Examples For Beginners.md | 79 +++++++++---------- 1 file changed, 38 insertions(+), 41 deletions(-) diff --git a/translated/tech/20180413 The df Command Tutorial With Examples For Beginners.md b/translated/tech/20180413 The df Command Tutorial With Examples For Beginners.md index 08f3860661..7a46f07032 100644 --- a/translated/tech/20180413 The df Command Tutorial With Examples For Beginners.md +++ b/translated/tech/20180413 The df Command Tutorial With Examples For Beginners.md @@ -1,21 +1,21 @@ -df 命令的新手教程 +df 命令新手教程 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/04/df-command-1-720x340.png) -在本指南中,我们将学习如何使用 **df** 命令。df 命令是 `Disk Free` 的首字母组合,它报告文件系统磁盘空间的使用情况。它显示一个 Linux 系统中文件系统上可用磁盘空间的数量。df 命令很容易与 **du** 命令混淆。它们的用途不同。df 命令报告 **我们拥有多少磁盘空间**(空闲磁盘空间),而 du 命令报告 **被文件和目录占用了多少磁盘空间**。希望我这样的解释你能更清楚。在继续之前,我们来看一些 df 命令的实例,以便于你更好地理解它。 +在本指南中,我们将学习如何使用 `df` 命令。df 命令是 “Disk Free” 的首字母组合,它报告文件系统磁盘空间的使用情况。它显示一个 Linux 系统中文件系统上可用磁盘空间的数量。`df` 命令很容易与 `du` 命令混淆。它们的用途不同。`df` 命令报告我们拥有多少磁盘空间(空闲磁盘空间),而 `du` 命令报告被文件和目录占用了多少磁盘空间。希望我这样的解释你能更清楚。在继续之前,我们来看一些 `df` 命令的实例,以便于你更好地理解它。 ### df 命令使用举例 -**1、查看整个文件系统磁盘空间使用情况** +#### 1、查看整个文件系统磁盘空间使用情况 + +无需任何参数来运行 `df` 命令,以显示整个文件系统磁盘空间使用情况。 -无需任何参数来运行 df 命令,以显示整个文件系统磁盘空间使用情况。 ``` $ df - ``` -**示例输出:** +示例输出: ``` Filesystem 1K-blocks Used Available Use% Mounted on @@ -28,25 +28,23 @@ tmpfs 4038880 11636 4027244 1% /tmp /dev/loop0 84096 84096 0 100% 
/var/lib/snapd/snap/core/4327 /dev/sda1 95054 55724 32162 64% /boot tmpfs 807776 28 807748 1% /run/user/1000 - ``` ![][2] 正如你所见,输出结果分为六列。我们来看一下每一列的含义。 - * **Filesystem** – Linux 系统中的文件系统 - * **1K-blocks** – 文件系统的大小,用 1K 大小的块来表示。 - * **Used** – 以 1K 大小的块所表示的已使用数量。 - * **Available** – 以 1K 大小的块所表示的可用空间的数量。 - * **Use%** – 文件系统中已使用的百分比。 - * **Mounted on** – 已挂载的文件系统的挂载点。 + * `Filesystem` – Linux 系统中的文件系统 + * `1K-blocks` – 文件系统的大小,用 1K 大小的块来表示。 + * `Used` – 以 1K 大小的块所表示的已使用数量。 + * `Available` – 以 1K 大小的块所表示的可用空间的数量。 + * `Use%` – 文件系统中已使用的百分比。 + * `Mounted on` – 已挂载的文件系统的挂载点。 +#### 2、以人类友好格式显示文件系统硬盘空间使用情况 +在上面的示例中你可能已经注意到了,它使用 1K 大小的块为单位来表示使用情况,如果你以人类友好格式来显示它们,可以使用 `-h` 标志。 -**2、以人类友好格式显示文件系统硬盘空间使用情况** - -在上面的示例中你可能已经注意到了,它使用 1K 大小的块为单位来表示使用情况,如果你以人类友好格式来显示它们,可以使用 **-h** 标志。 ``` $ df -h Filesystem Size Used Avail Use% Mounted on @@ -62,11 +60,12 @@ tmpfs 789M 28K 789M 1% /run/user/1000 ``` -现在,在 **Size** 列和 **Avail** 列,使用情况是以 GB 和 MB 为单位来显示的。 +现在,在 `Size` 列和 `Avail` 列,使用情况是以 GB 和 MB 为单位来显示的。 -**3\. 
Display disk space usage only in MB** +#### 3、仅以 MB 为单位来显示文件系统磁盘空间使用情况 + +如果仅以 MB 为单位来显示文件系统磁盘空间使用情况,使用 `-m` 标志。 -如果仅以 MB 为单位来显示文件系统磁盘空间使用情况,使用 **-m** 标志。 ``` $ df -m Filesystem 1M-blocks Used Available Use% Mounted on @@ -79,12 +78,12 @@ tmpfs 3945 12 3933 1% /tmp /dev/loop0 83 83 0 100% /var/lib/snapd/snap/core/4327 /dev/sda1 93 55 32 64% /boot tmpfs 789 1 789 1% /run/user/1000 - ``` -**4、列出节点而不是块的使用情况** +#### 4、列出节点而不是块的使用情况 + +如下所示,我们可以通过使用 `-i` 标记来列出节点而不是块的使用情况。 -如下所示,我们可以通过使用 **-i** 标记来列出节点而不是块的使用情况。 ``` $ df -i Filesystem Inodes IUsed IFree IUse% Mounted on @@ -97,12 +96,12 @@ tmpfs 1009720 3008 1006712 1% /tmp /dev/loop0 12829 12829 0 100% /var/lib/snapd/snap/core/4327 /dev/sda1 25688 390 25298 2% /boot tmpfs 1009720 29 1009691 1% /run/user/1000 - ``` -**5、显示文件系统类型** +#### 5、显示文件系统类型 + +使用 `-T` 标志显示文件系统类型。 -使用 **-T** 标志显示文件系统类型。 ``` $ df -T Filesystem Type 1K-blocks Used Available Use% Mounted on @@ -115,27 +114,27 @@ tmpfs tmpfs 4038880 11984 4026896 1% /tmp /dev/loop0 squashfs 84096 84096 0 100% /var/lib/snapd/snap/core/4327 /dev/sda1 ext4 95054 55724 32162 64% /boot tmpfs tmpfs 807776 28 807748 1% /run/user/1000 - ``` 正如你所见,现在出现了显示文件系统类型的额外的列(从左数的第二列)。 -**6、仅显示指定类型的文件系统** +#### 6、仅显示指定类型的文件系统 + +我们可以限制仅列出某些文件系统。比如,只列出 ext4 文件系统。我们使用 `-t` 标志。 -我们可以限制仅列出某些文件系统。比如,只列出 **ext4** 文件系统。我们使用 **-t** 标志。 ``` $ df -t ext4 Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda2 478425016 428790896 25308436 95% / /dev/sda1 95054 55724 32162 64% /boot - ``` 看到了吗?这个命令仅显示了 ext4 文件系统的磁盘空间使用情况。 -**7、不列出指定类型的文件系统** +#### 7、不列出指定类型的文件系统 + +有时,我们可能需要从结果中去排除指定类型的文件系统。我们可以使用 `-x` 标记达到我们的目的。 -有时,我们可能需要从结果中去排除指定类型的文件系统。我们可以使用 **-x** 标记达到我们的目的。 ``` $ df -x ext4 Filesystem 1K-blocks Used Available Use% Mounted on @@ -146,30 +145,28 @@ tmpfs 4038880 0 4038880 0% /sys/fs/cgroup tmpfs 4038880 11984 4026896 1% /tmp /dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327 tmpfs 807776 28 807748 1% /run/user/1000 - ``` -上面的命令列出了除 **ext4** 类型以外的全部文件系统。 +上面的命令列出了除 ext4 
类型以外的全部文件系统。 -**8、显示一个目录的磁盘使用情况** +#### 8、显示一个目录的磁盘使用情况 + +去显示某个目录的硬盘空间使用情况以及它的挂载点,例如 `/home/sk/` 目录,可以使用如下的命令: -去显示某个目录的硬盘空间使用情况以及它的挂载点,例如 **/home/sk/** 目录,可以使用如下的命令: ``` $ df -hT /home/sk/ Filesystem Type Size Used Avail Use% Mounted on /dev/sda2 ext4 457G 409G 25G 95% / - ``` -这个命令显示文件系统类型、以人类友好格式显示已使用和可用磁盘空间、以及它的挂载点。如果你不想去显示文件系统类型,只需要忽略 **-t** 标志即可。 +这个命令显示文件系统类型、以人类友好格式显示已使用和可用磁盘空间、以及它的挂载点。如果你不想去显示文件系统类型,只需要忽略 `-t` 标志即可。 更详细的使用情况,请参阅 man 手册页。 + ``` $ man df - ``` -**建议阅读:** 今天就到此这止!我希望对你有用。还有更多更好玩的东西即将奉上。请继续关注! @@ -182,9 +179,9 @@ $ man df via: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/ 作者:[SK][a] -译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) 选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From ad81dfde5cbf9afaaf894813c63e22aa22984edc Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 10 Oct 2018 13:01:08 +0800 Subject: [PATCH 315/736] PUB:20180413 The df Command Tutorial With Examples For Beginners.md @qhwdw https://linux.cn/article-10096-1.html --- ...0180413 The df Command Tutorial With Examples For Beginners.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180413 The df Command Tutorial With Examples For Beginners.md (100%) diff --git a/translated/tech/20180413 The df Command Tutorial With Examples For Beginners.md b/published/20180413 The df Command Tutorial With Examples For Beginners.md similarity index 100% rename from translated/tech/20180413 The df Command Tutorial With Examples For Beginners.md rename to published/20180413 The df Command Tutorial With Examples For Beginners.md From dcaaf918e663fc991e613f8eac1705fffe3800c0 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 10 Oct 2018 13:12:43 +0800 Subject: [PATCH 316/736] PRF:20170926 Managing users on Linux systems.md 
@dianbanjiu --- ...0170926 Managing users on Linux systems.md | 77 ++++++++++--------- 1 file changed, 39 insertions(+), 38 deletions(-) diff --git a/translated/tech/20170926 Managing users on Linux systems.md b/translated/tech/20170926 Managing users on Linux systems.md index 719b0575b6..aeb6e2904b 100644 --- a/translated/tech/20170926 Managing users on Linux systems.md +++ b/translated/tech/20170926 Managing users on Linux systems.md @@ -1,10 +1,13 @@ -# 管理 Linux 系统中的用户 +管理 Linux 系统中的用户 +====== +![](https://images.idgesg.net/images/article/2017/09/charging-bull-100735753-large.jpg) 也许你的 Lniux 用户并不是愤怒的公牛,但是当涉及管理他们的账户的时候,能让他们一直开心也是一种挑战。监控他们当前正在访问的东西,追踪他们他们遇到问题时的解决方案,并且保证能把他们在使用系统时出现的重要变动记录下来。这里有一些方法和工具可以使这份工作轻松一点。 ### 配置账户 -添加和移除账户是管理用户中最简单的一项,但是这里面仍然有很多需要考虑的选项。无论你是用桌面工具或是命令行选项,这都是一个非常自动化的过程。你可以使用命令添加一个新用户,像是 **adduser jdoe**,这同时会触发一系列的事情。使用下一个可用的 UID 可以创建 John 的账户,或许还会被许多用以配置账户的文件所填充。当你运行 adduser 命令加一个新的用户名的时候,它将会提示一些额外的信息,同时解释这是在干什么。 +添加和移除账户是管理用户中最简单的一项,但是这里面仍然有很多需要考虑的选项。无论你是用桌面工具或是命令行选项,这都是一个非常自动化的过程。你可以使用命令添加一个新用户,像是 `adduser jdoe`,这同时会触发一系列的事情。使用下一个可用的 UID 可以创建 John 的账户,或许还会被许多用以配置账户的文件所填充。当你运行 `adduser` 命令加一个新的用户名的时候,它将会提示一些额外的信息,同时解释这是在干什么。 + ``` $ sudo adduser jdoe Adding user 'jdoe' ... @@ -23,14 +26,14 @@ Enter the new value, or press ENTER for the default Home Phone []: Other []: Is the information correct? 
[Y/n] Y - ``` -像你看到的那样,adduser 将添加用户的信息(到 /etc/passwd 和 /etc/shadow 文件中),创建新的家目录,并用 /etc/skel 里设置的文件填充家目录,提示你分配初始密码和认定信息,然后确认这些信息都是正确的,如果你在最后的提示 “Is the information correct” 处的答案是 “n”,它将回溯你之前所有的回答,允许修改任何你想要修改的地方。 +像你看到的那样,`adduser` 将添加用户的信息(到 `/etc/passwd` 和 `/etc/shadow` 文件中),创建新的家目录,并用 `/etc/skel` 里设置的文件填充家目录,提示你分配初始密码和认定信息,然后确认这些信息都是正确的,如果你在最后的提示 “Is the information correct” 处的答案是 “n”,它将回溯你之前所有的回答,允许修改任何你想要修改的地方。 -创建好一个用户后,你可能会想要确认一下它是否是你期望的样子,更好的方法是确保在添加第一个帐户**之前**,“自动”选择与您想要查看的内容相匹配。默认有默认的好处,它对于你想知道他们定义在哪里有所用处,以防你想作出一些变动 —— 例如,你不想家目录在 /home 里,你不想用户 UIDs 从 1000 开始,或是你不想家目录下的文件被系统上的**每个人**都可读。 +创建好一个用户后,你可能会想要确认一下它是否是你期望的样子,更好的方法是确保在添加第一个帐户**之前**,“自动”选择与您想要查看的内容相匹配。默认有默认的好处,它对于你想知道他们定义在哪里有所用处,以防你想作出一些变动 —— 例如,你不想家目录在 `/home` 里,你不想用户 UID 从 1000 开始,或是你不想家目录下的文件被系统上的**每个人**都可读。 + +`adduser` 如何工作的一些细节设置在 `/etc/adduser.conf` 文件里。这个文件包含的一些设置决定了一个新的账户如何配置,以及它之后的样子。注意,注释和空白行将会在输出中被忽略,因此我们可以更加集中注意在设置上面。 -adduser 如何工作的一些细节设置在 /etc/adduser.conf 文件里。这个文件包含的一些设置决定了一个新的账户如何配置,以及它之后的样子。注意,注释和空白行将会在输出中被忽略,因此我们可以更加集中注意在设置上面。 ``` $ cat /etc/adduser.conf | grep -v "^#" | grep -v "^$" DSHELL=/bin/bash @@ -52,78 +55,78 @@ DIR_MODE=0755 SETGID_HOME=no QUOTAUSER="" SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)" - ``` -可以看到,我们有了一个默认的 shell(DSHELL),UIDs(FIRST_UID)的开始数值,家目录(DHOME)的位置,以及启动文件(SKEL)的来源位置。这个文件也会指定分配给家目录(DIR_HOME)的权限。 +可以看到,我们有了一个默认的 shell(`DSHELL`),UID(`FIRST_UID`)的开始数值,家目录(`DHOME`)的位置,以及启动文件(`SKEL`)的来源位置。这个文件也会指定分配给家目录(`DIR_HOME`)的权限。 -其中 DIR_HOME 是最重要的设置,它决定了每个家目录被使用的权限。这个设置分配给用户创建的目录权限是 755,家目录的权限将会设置为 rwxr-xr-x。用户可以读其他用户的文件,但是不能修改和移除他们。如果你想要更多的限制,你可以更改这个设置为 750(用户组外的任何人都不可访问)甚至是 700(除用户自己外的人都不可访问)。 +其中 `DIR_HOME` 是最重要的设置,它决定了每个家目录被使用的权限。这个设置分配给用户创建的目录权限是 `755`,家目录的权限将会设置为 `rwxr-xr-x`。用户可以读其他用户的文件,但是不能修改和移除他们。如果你想要更多的限制,你可以更改这个设置为 `750`(用户组外的任何人都不可访问)甚至是 `700`(除用户自己外的人都不可访问)。 -任何用户账号在创建之前都可以进行手动修改。例如,你可以编辑 /etc/passwd 或者修改家目录的权限,开始在新服务器上添加用户之前配置 /etc/adduser.conf 可以确保一定的一致性,从长远来看可以节省时间和避免一些麻烦。 +任何用户账号在创建之前都可以进行手动修改。例如,你可以编辑 `/etc/passwd` 或者修改家目录的权限,开始在新服务器上添加用户之前配置 
`/etc/adduser.conf` 可以确保一定的一致性,从长远来看可以节省时间和避免一些麻烦。 -/etc/adduser.conf 的修改将会在之后创建的用户上生效。如果你想以不同的方式设置某个特定账户,除了用户名之外,你还可以选择使用 adduser 命令提供账户配置选项。或许你想为某些账户分配不同的 shell,请求特殊的 UID,完全禁用登录。adduser 的帮助页将会为你显示一些配置个人账户的选择。 +`/etc/adduser.conf` 的修改将会在之后创建的用户上生效。如果你想以不同的方式设置某个特定账户,除了用户名之外,你还可以选择使用 `adduser` 命令提供账户配置选项。或许你想为某些账户分配不同的 shell,请求特殊的 UID,完全禁用登录。`adduser` 的帮助页将会为你显示一些配置个人账户的选择。 ``` adduser [options] [--home DIR] [--shell SHELL] [--no-create-home] [--uid ID] [--firstuid ID] [--lastuid ID] [--ingroup GROUP | --gid ID] [--disabled-password] [--disabled-login] [--gecos GECOS] [--add_extra_groups] [--encrypt-home] user - ``` -每个 Linux 系统现在都会默认把每个用户放入对应的组中。作为一个管理员,你可能会选择以不同的方式去做事。你也许会发现把用户放在一个共享组中可以让你的站点工作的更好,这时,选择使用 adduser 的 --gid 选项去选择一个特定的组。当然,用户总是许多组的成员,因此也有一些选项去管理主要和次要的组。 +每个 Linux 系统现在都会默认把每个用户放入对应的组中。作为一个管理员,你可能会选择以不同的方式去做事。你也许会发现把用户放在一个共享组中可以让你的站点工作的更好,这时,选择使用 `adduser` 的 `--gid` 选项去选择一个特定的组。当然,用户总是许多组的成员,因此也有一些选项去管理主要和次要的组。 ### 处理用户密码 一直以来,知道其他人的密码都是一个不好的念头,在设置账户时,管理员通常使用一个临时的密码,然后在用户第一次登录时会运行一条命令强制他修改密码。这里是一个例子: + ``` $ sudo chage -d 0 jdoe ``` 当用户第一次登录的时候,会看到像这样的事情: + ``` WARNING: Your password has expired. You must change your password now and login again! Changing password for jdoe. 
(current) UNIX password: - ``` ### 添加用户到副组 -添加用户到副组中,你可能会用如下所示的 usermod 命令 —— 添加用户到组中并确认已经做出变动。 +添加用户到副组中,你可能会用如下所示的 `usermod` 命令 —— 添加用户到组中并确认已经做出变动。 + ``` $ sudo usermod -a -G sudo jdoe $ sudo grep sudo /etc/group sudo:x:27:shs,jdoe - ``` -记住在一些组,像是 sudo 或者 wheel 组中,意味着包含特权,一定要特别注意这一点。 +记住在一些组,像是 `sudo` 或者 `wheel` 组中,意味着包含特权,一定要特别注意这一点。 ### 移除用户,添加组等 -Linux 系统也提供了命令去移除账户,添加新的组,移除组等。例如,**deluser** 命令,将会从 /etc/passwd 和 /etc/shadow 中移除用户登录入口,但是会完整保留他的家目录,除非你添加了 --remove-home 或者 --remove-all-files 选项。**addgroup** 命令会添加一个组,按目前组的次序给他下一个 id(在用户组范围内),除非你使用 --gid 选项指定 id。 +Linux 系统也提供了命令去移除账户、添加新的组、移除组等。例如,`deluser` 命令,将会从 `/etc/passwd` 和 `/etc/shadow` 中移除用户登录入口,但是会完整保留他的家目录,除非你添加了 `--remove-home` 或者 `--remove-all-files` 选项。`addgroup` 命令会添加一个组,按目前组的次序给他下一个 ID(在用户组范围内),除非你使用 `--gid` 选项指定 ID。 + ``` $ sudo addgroup testgroup --gid=131 Adding group `testgroup' (GID 131) ... Done. - ``` ### 管理特权账户 -一些 Linux 系统中有一个 wheel 组,它给组中成员赋予了像 root 一样运行命令的能力。在这种情况下,/etc/sudoers 将会引用该组。在 Debian 系统中,这个组被叫做 sudo,但是以相同的方式工作,你在 /etc/sudoers 中可以看到像这样的引用: +一些 Linux 系统中有一个 wheel 组,它给组中成员赋予了像 root 一样运行命令的能力。在这种情况下,`/etc/sudoers` 将会引用该组。在 Debian 系统中,这个组被叫做 `sudo`,但是以相同的方式工作,你在 `/etc/sudoers` 中可以看到像这样的引用: + ``` %sudo ALL=(ALL:ALL) ALL - ``` -这个基础的设定意味着,任何在 wheel 或者 sudo 组中的成员,只要在他们运行的命令之前添加 sudo,就可以以 root 的权限去运行命令。 +这个基础的设定意味着,任何在 wheel 或者 sudo 组中的成员,只要在他们运行的命令之前添加 `sudo`,就可以以 root 的权限去运行命令。 -你可以向 sudoers 文件中添加更多有限的特权 —— 也许给特定用户运行一两个 root 的命令。如果这样做,您还应定期查看 /etc/sudoers 文件以评估用户拥有的权限,以及仍然需要提供的权限。 +你可以向 `sudoers` 文件中添加更多有限的特权 —— 也许给特定用户运行一两个 root 的命令。如果这样做,您还应定期查看 `/etc/sudoers` 文件以评估用户拥有的权限,以及仍然需要提供的权限。 + +在下面显示的命令中,我们看到在 `/etc/sudoers` 中匹配到的行。在这个文件中最有趣的行是,包含能使用 `sudo` 运行命令的路径设置,以及两个允许通过 `sudo` 运行命令的组。像刚才提到的那样,单个用户可以通过包含在 `sudoers` 文件中来获得权限,但是更有实际意义的方法是通过组成员来定义各自的权限。 -在下面显示的命令中,我们看到在 /etc/sudoers 中匹配到的行。在这个文件中最有趣的行是,包含能使用 sudo 运行命令的路径设置,以及两个允许通过 sudo 运行命令的组。像刚才提到的那样,单个用户可以通过包含在 sudoers 文件中来获得权限,但是更有实际意义的方法是通过组成员来定义各自的权限。 ``` # cat /etc/sudoers | grep -v "^#" | grep -v "^$" Defaults env_reset @@ 
-132,21 +135,21 @@ Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/b root ALL=(ALL:ALL) ALL %admin ALL=(ALL) ALL <== admin group %sudo ALL=(ALL:ALL) ALL <== sudo group - ``` ### 登录检查 你可以通过以下命令查看用户的上一次登录: + ``` # last jdoe jdoe pts/18 192.168.0.11 Thu Sep 14 08:44 - 11:48 (00:04) jdoe pts/18 192.168.0.11 Thu Sep 14 13:43 - 18:44 (00:00) jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:00) - ``` -如果你想查看每一个用户上一次的登录情况,你可以通过一个像这样的循环来运行 last 命令: +如果你想查看每一个用户上一次的登录情况,你可以通过一个像这样的循环来运行 `last` 命令: + ``` $ for user in `ls /home`; do last $user | head -1; done @@ -154,11 +157,10 @@ jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:03) rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 (00:00) shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged in - - ``` -此命令仅显示自当前 wtmp 文件变为活跃状态以来已登录的用户。空白行表示用户自那以后从未登录过,但没有将其调出。一个更好的命令是过滤掉在这期间从未登录过的用户的显示: +此命令仅显示自当前 `wtmp` 文件变为活跃状态以来已登录的用户。空白行表示用户自那以后从未登录过,但没有将其调出。一个更好的命令是过滤掉在这期间从未登录过的用户的显示: + ``` $ for user in `ls /home`; do echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}'; done dhayes @@ -167,39 +169,38 @@ peanut pts/19 192.168.0.29 Mon Sep 11 09:15 - 17:11 rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged tsmith - ``` 这个命令会打印很多,但是可以通过一个脚本使它更加清晰易用。 + ``` #!/bin/bash for user in `ls /home` do - echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}' + echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}' done - ``` 有时,此类信息可以提醒您用户角色的变动,表明他们可能不再需要相关帐户。 ### 与用户沟通 -Linux 提供了许多方法和用户沟通。你可以向 /etc/motd 文件中添加信息,当用户从终端登录到服务器时,将会显示这些信息。你也可以通过例如 write(通知单个用户)或者 wall(write 给所有已登录的用户)命令发送通知。 +Linux 提供了许多方法和用户沟通。你可以向 `/etc/motd` 文件中添加信息,当用户从终端登录到服务器时,将会显示这些信息。你也可以通过例如 `write`(通知单个用户)或者 `wall`(`write` 给所有已登录的用户)命令发送通知。 + ``` $ wall System will go down in one hour Broadcast message from shs@stinkbug (pts/17) (Thu Sep 14 14:04:16 2017): System will go down in one hour - ``` 
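作为一个简单的示意(非原文内容),可以用一小段 Python 把这类停机通知统一格式化,再分别交给 `wall`、motd 或邮件等不同渠道使用;其中的函数名 `build_notice` 和消息模板都是假设的,并非某个现成工具:

```python
# 示意:把停机通知格式化成一条统一的消息,
# 之后可以传给 wall、追加到 /etc/motd,或作为邮件正文发送。
# build_notice 及消息模板均为示例虚构。

def build_notice(minutes, reason=""):
    """返回一条英文停机通知;minutes 为距停机的分钟数,reason 可选。"""
    msg = f"System will go down in {minutes} minutes"
    if reason:
        msg += f" ({reason})"
    return msg

if __name__ == "__main__":
    print(build_notice(60))
    print(build_notice(30, "kernel upgrade"))
```

这样同一条消息就可以在各个通知渠道中复用,避免不同渠道的文案不一致。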
-重要的通知应该通过多个管道传递,因为很难预测用户实际会注意到什么。mesage-of-the-day(motd),wall 和 email 通知可以吸引用户大部分的注意力。 +重要的通知应该通过多个管道传递,因为很难预测用户实际会注意到什么。mesage-of-the-day(motd),`wall` 和 email 通知可以吸引用户大部分的注意力。 ### 注意日志文件 -更多地注意日志文件上也可以帮你理解用户活动。事实上,/var/log/auth.log 文件将会为你显示用户的登录和注销活动,组的创建等。/var/log/message 或者 /var/log/syslog 文件将会告诉你更多有关系统活动的事情。 +更多地注意日志文件上也可以帮你理解用户活动。事实上,`/var/log/auth.log` 文件将会为你显示用户的登录和注销活动,组的创建等。`/var/log/message` 或者 `/var/log/syslog` 文件将会告诉你更多有关系统活动的事情。 ### 追踪问题和请求 @@ -215,7 +216,7 @@ via: https://www.networkworld.com/article/3225109/linux/managing-users-on-linux- 作者:[Sandra Henry-Stocker][a] 译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 88a359c9726634527d2331a3152f2a0949fe5d16 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 10 Oct 2018 13:13:05 +0800 Subject: [PATCH 317/736] PUB:20170926 Managing users on Linux systems.md @dianbanjiu https://linux.cn/article-10097-1.html --- .../20170926 Managing users on Linux systems.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170926 Managing users on Linux systems.md (100%) diff --git a/translated/tech/20170926 Managing users on Linux systems.md b/published/20170926 Managing users on Linux systems.md similarity index 100% rename from translated/tech/20170926 Managing users on Linux systems.md rename to published/20170926 Managing users on Linux systems.md From 1738a1234e1f72423ab63857c562e42f89cde2a7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 10 Oct 2018 13:20:00 +0800 Subject: [PATCH 318/736] PRF:20180928 10 handy Bash aliases for Linux.md @geekpi --- ...0180928 10 handy Bash aliases for Linux.md | 56 +++++-------------- 1 file changed, 13 insertions(+), 43 deletions(-) diff --git a/translated/tech/20180928 10 handy Bash aliases for Linux.md b/translated/tech/20180928 10 handy Bash aliases for 
Linux.md index 8706e56e8a..fe1c5098a1 100644 --- a/translated/tech/20180928 10 handy Bash aliases for Linux.md +++ b/translated/tech/20180928 10 handy Bash aliases for Linux.md @@ -1,102 +1,72 @@ 10 个 Linux 中方便的 Bash 别名 ====== -对 Bash 长命令使用压缩的版本来更有效率。 +> 对 Bash 长命令使用压缩的版本来更有效率。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U) 你有多少次在命令行上输入一个长命令,并希望有一种方法可以保存它以供日后使用?这就是 Bash 别名派上用场的地方。它们允许你将长而神秘的命令压缩为易于记忆和使用的东西。需要一些例子来帮助你入门吗?没问题! -要使用你创建的 Bash 别名,你需要将其添加到 .bash_profile 中,该文件位于你的主文件夹中。请注意,此文件是隐藏的,并只能从命令行访问。编辑此文件的最简单方法是使用 Vi 或 Nano 之类的东西。 +要使用你创建的 Bash 别名,你需要将其添加到 `.bash_profile` 中,该文件位于你的家目录中。请注意,此文件是隐藏的,并只能从命令行访问。编辑此文件的最简单方法是使用 Vi 或 Nano 之类的东西。 ### 10 个方便的 Bash 别名 - 1. 你有几次遇到需要解压 .tar 文件但无法记住所需的确切参数?别名可以帮助你!只需将以下内容添加到 .bash_profile 中,然后使用 **untar FileName** 解压缩任何 .tar 文件。 - +1、 你有几次遇到需要解压 .tar 文件但无法记住所需的确切参数?别名可以帮助你!只需将以下内容添加到 `.bash_profile` 中,然后使用 `untar FileName` 解压缩任何 .tar 文件。 ``` alias untar='tar -zxvf ' - ``` - - 2. 想要下载的东西,但如果出现问题可以恢复吗? - - +2、 想要下载的东西,但如果出现问题可以恢复吗? ``` alias wget='wget -c ' - ``` - 3. 是否需要为新的网络帐户生成随机的 20 个字符的密码?没问题。 - - +3、 是否需要为新的网络帐户生成随机的 20 个字符的密码?没问题。 ``` alias getpass="openssl rand -base64 20" - ``` - 4. 下载文件并需要测试校验和?我们也可做到。 - - +4、 下载文件并需要测试校验和?我们也可做到。 ``` alias sha='shasum -a 256 ' - ``` - 5. 普通的 ping 将永远持续下去。我们不希望这样。相反,让我们将其限制在五个 ping。 - - +5、 普通的 `ping` 将永远持续下去。我们不希望这样。相反,让我们将其限制在五个 `ping`。 ``` alias ping='ping -c 5' - ``` - 6. 在任何你想要的文件夹中启动 Web 服务器。 - - +6、 在任何你想要的文件夹中启动 Web 服务器。 ``` alias www='python -m SimpleHTTPServer 8000' - ``` - 7. 想知道你的网络有多快?只需下载 Speedtest-cli 并使用此别名即可。你可以使用 **speedtest-cli --list** 命令选择离你所在位置更近的服务器。 - - +7、 想知道你的网络有多快?只需下载 Speedtest-cli 并使用此别名即可。你可以使用 `speedtest-cli --list` 命令选择离你所在位置更近的服务器。 ``` alias speed='speedtest-cli --server 2406 --simple' - ``` - 8. 你有多少次需要知道你的外部 IP 地址,但是不知道如何获取?我也是。 - - +8、 你有多少次需要知道你的外部 IP 地址,但是不知道如何获取?我也是。 ``` alias ipe='curl ipinfo.io/ip' - ``` - 9. 需要知道你的本地 IP 地址? - - +9、 需要知道你的本地 IP 地址? 
``` alias ipi='ipconfig getifaddr en0' - ``` - 10. 最后,让我们清空屏幕。 - - +10、 最后,让我们清空屏幕。 ``` alias c='clear' - ``` 如你所见,Bash 别名是一种在命令行上简化生活的超级简便方法。想了解更多信息?我建议你 Google 搜索“Bash 别名”或在 Github 中看下。 @@ -108,7 +78,7 @@ via: https://opensource.com/article/18/9/handy-bash-aliases 作者:[Patrick H.Mullins][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c422b4847c2624b1fc60605be228a6cb1ba6931e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 10 Oct 2018 13:20:18 +0800 Subject: [PATCH 319/736] PUB:20180928 10 handy Bash aliases for Linux.md @geekpi https://linux.cn/article-10098-1.html --- .../20180928 10 handy Bash aliases for Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180928 10 handy Bash aliases for Linux.md (100%) diff --git a/translated/tech/20180928 10 handy Bash aliases for Linux.md b/published/20180928 10 handy Bash aliases for Linux.md similarity index 100% rename from translated/tech/20180928 10 handy Bash aliases for Linux.md rename to published/20180928 10 handy Bash aliases for Linux.md From fd78afd629345b099b9502bac2ed8b6a348b01ab Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 10 Oct 2018 13:39:56 +0800 Subject: [PATCH 320/736] PRF:20180724 75 Most Used Essential Linux Applications of 2018.md @HankChow --- ...ed Essential Linux Applications of 2018.md | 274 +++++++++--------- 1 file changed, 138 insertions(+), 136 deletions(-) diff --git a/translated/tech/20180724 75 Most Used Essential Linux Applications of 2018.md b/translated/tech/20180724 75 Most Used Essential Linux Applications of 2018.md index 96ca929009..7d0b586129 100644 --- a/translated/tech/20180724 75 Most Used Essential Linux Applications of 2018.md +++ b/translated/tech/20180724 75 Most Used Essential Linux Applications of 
2018.md @@ -1,7 +1,9 @@ -2018 年 75 个最常用的 Linux 应用程序 +75 个最常用的 Linux 应用程序(2018 年) ====== -对于许多应用程序来说,2018年是非常好的一年,尤其是免费开源的应用程序。尽管各种 Linux 发行版都自带了很多默认的应用程序,但用户也可以自由地选择使用它们或者其它任何免费或付费替代方案。 +![](https://www.fossmint.com/wp-content/uploads/2018/07/Most-Used-Ubuntu-Applications.png) + +对于许多应用程序来说,2018 年是非常好的一年,尤其是自由开源的应用程序。尽管各种 Linux 发行版都自带了很多默认的应用程序,但用户也可以自由地选择使用它们或者其它任何免费或付费替代方案。 下面汇总了[一系列的 Linux 应用程序][3],这些应用程序都能够在 Linux 系统上安装,尽管还有很多其它选择。以下汇总中的任何应用程序都属于其类别中最常用的应用程序,如果你还没有用过,欢迎试用一下! @@ -9,10 +11,10 @@ #### Rsync -[Rsync][4] 是一个开源的、带宽友好的工具,它用于执行快速的增量文件传输,而且它也是一个免费工具。 +[Rsync][4] 是一个开源的、节约带宽的工具,它用于执行快速的增量文件传输,而且它也是一个免费工具。 + ``` $ rsync [OPTION...] SRC... [DEST] - ``` 想要了解更多示例和用法,可以参考《[10 个使用 Rsync 命令的实际例子][5]》。 @@ -31,36 +33,36 @@ $ rsync [OPTION...] SRC... [DEST] [Deluge][7] 是一个漂亮的跨平台 BT 客户端,旨在优化 μTorrent 体验,并向用户免费提供服务。 -使用以下命令在 Ubuntu 和 Debian 安装 `Deluge`。 +使用以下命令在 Ubuntu 和 Debian 安装 Deluge。 + ``` $ sudo add-apt-repository ppa:deluge-team/ppa $ sudo apt-get update $ sudo apt-get install deluge - ``` #### qBittorent [qBittorent][8] 是一个开源的 BT 客户端,旨在提供类似 μTorrent 的免费替代方案。 -使用以下命令在 Ubuntu 和 Debian 安装 `qBittorent`。 +使用以下命令在 Ubuntu 和 Debian 安装 qBittorent。 + ``` $ sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable $ sudo apt-get update $ sudo apt-get install qbittorrent - ``` #### Transmission [Transmission][9] 是一个强大的 BT 客户端,它主要关注速度和易用性,一般在很多 Linux 发行版上都有预装。 -使用以下命令在 Ubuntu 和 Debian 安装 `Transmission`。 +使用以下命令在 Ubuntu 和 Debian 安装 Transmission。 + ``` $ sudo add-apt-repository ppa:transmissionbt/ppa $ sudo apt-get update $ sudo apt-get install transmission-gtk transmission-cli transmission-common transmission-daemon - ``` ### 云存储 @@ -71,12 +73,12 @@ $ sudo apt-get install transmission-gtk transmission-cli transmission-common tra [Dropbox][10] 团队在今年早些时候给他们的云服务换了一个名字,也为客户提供了更好的性能和集成了更多应用程序。Dropbox 会向用户免费提供 2 GB 存储空间。 -使用以下命令在 Ubuntu 和 Debian 安装 `Dropbox`。 +使用以下命令在 Ubuntu 和 Debian 安装 Dropbox。 + ``` $ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | 
tar xzf - [On 32-Bit] $ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf - [On 64-Bit] $ ~/.dropbox-dist/dropboxd - ``` #### Google Drive @@ -99,36 +101,36 @@ $ ~/.dropbox-dist/dropboxd [Vim][15] 是 vi 文本编辑器的开源克隆版本,它的主要目的是可以高度定制化并能够处理任何类型的文本。 -使用以下命令在 Ubuntu 和 Debian 安装 `Vim`。 +使用以下命令在 Ubuntu 和 Debian 安装 Vim。 + ``` $ sudo add-apt-repository ppa:jonathonf/vim $ sudo apt update $ sudo apt install vim - ``` #### Emacs [Emacs][16] 是一个高度可配置的文本编辑器,最流行的一个分支 GNU Emacs 是用 Lisp 和 C 编写的,它的最大特点是可以自文档化、可扩展和可自定义。 -使用以下命令在 Ubuntu 和 Debian 安装 `Emacs`。 +使用以下命令在 Ubuntu 和 Debian 安装 Emacs。 + ``` $ sudo add-apt-repository ppa:kelleyk/emacs $ sudo apt update $ sudo apt install emacs25 - ``` #### Nano [Nano][17] 是一款功能丰富的命令行文本编辑器,比较适合高级用户。它可以通过多个终端进行不同功能的操作。 -使用以下命令在 Ubuntu 和 Debian 安装 `Nano`。 +使用以下命令在 Ubuntu 和 Debian 安装 Nano。 + ``` $ sudo add-apt-repository ppa:n-muench/programs-ppa $ sudo apt-get update $ sudo apt-get install nano - ``` ### 下载器 @@ -137,36 +139,36 @@ $ sudo apt-get install nano #### Aria2 -[Aria2][18] 是一个开源的、轻量级的、多软件源和多协议的命令行下载器,它支持 Metalinks、torrents、HTTP/HTTPS、SFTP 等多种协议。 +[Aria2][18] 是一个开源的、轻量级的、多软件源和多协议的命令行下载器,它支持 Metalink、torrent、HTTP/HTTPS、SFTP 等多种协议。 + +使用以下命令在 Ubuntu 和 Debian 安装 Aria2。 -使用以下命令在 Ubuntu 和 Debian 安装 `Aria2`。 ``` $ sudo apt-get install aria2 - ``` #### uGet [uGet][19] 已经成为 Linux 各种发行版中排名第一的开源下载器,它可以处理任何下载任务,包括多连接、队列、类目等。 -使用以下命令在 Ubuntu 和 Debian 安装 `uGet`。 +使用以下命令在 Ubuntu 和 Debian 安装 uGet。 + ``` $ sudo add-apt-repository ppa:plushuang-tw/uget-stable $ sudo apt update $ sudo apt install uget - ``` #### XDM [XDM][20](Xtreme Download Manager)是一个使用 Java 编写的开源下载软件。和其它下载器一样,它可以结合队列、种子、浏览器使用,而且还带有视频采集器和智能调度器。 -使用以下命令在 Ubuntu 和 Debian 安装 `XDM`。 +使用以下命令在 Ubuntu 和 Debian 安装 XDM。 + ``` $ sudo add-apt-repository ppa:noobslab/apps $ sudo apt-get update $ sudo apt-get install xdman - ``` ### 电子邮件客户端 @@ -177,36 +179,36 @@ $ sudo apt-get install xdman [Thunderbird][21] 是最受欢迎的电子邮件客户端之一。它的优点包括免费、开源、可定制、功能丰富,而且最重要的是安装过程也很简便。 -使用以下命令在 Ubuntu 
和 Debian 安装 `Thunderbird`。 +使用以下命令在 Ubuntu 和 Debian 安装 Thunderbird。 + ``` $ sudo add-apt-repository ppa:ubuntu-mozilla-security/ppa $ sudo apt-get update $ sudo apt-get install thunderbird - ``` #### Geary [Geary][22] 是一个基于 WebKitGTK+ 的开源电子邮件客户端。它是一个免费开源的功能丰富的软件,并被 GNOME 项目收录。 -使用以下命令在 Ubuntu 和 Debian 安装 `Geary`。 +使用以下命令在 Ubuntu 和 Debian 安装 Geary。 + ``` $ sudo add-apt-repository ppa:geary-team/releases $ sudo apt-get update $ sudo apt-get install geary - ``` #### Evolution [Evolution][23] 是一个免费开源的电子邮件客户端,可以用于电子邮件、会议日程、备忘录和联系人的管理。 -使用以下命令在 Ubuntu 和 Debian 安装 `Evolution`。 +使用以下命令在 Ubuntu 和 Debian 安装 Evolution。 + ``` $ sudo add-apt-repository ppa:gnome3-team/gnome3-staging $ sudo apt-get update $ sudo apt-get install evolution - ``` ### 财务软件 @@ -217,27 +219,27 @@ $ sudo apt-get install evolution [GnuCash][24] 是一款免费的跨平台开源软件,它适用于个人和中小型企业的财务任务。 -使用以下命令在 Ubuntu 和 Debian 安装 `GnuCash`。 +使用以下命令在 Ubuntu 和 Debian 安装 GnuCash。 + ``` $ sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu $(lsb_release -sc)-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list' $ sudo apt-get update $ sudo apt-get install gnucash - ``` #### KMyMoney [KMyMoney][25] 是一个财务管理软件,它可以提供商用或个人理财所需的大部分主要功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `KmyMoney`。 +使用以下命令在 Ubuntu 和 Debian 安装 KmyMoney。 + ``` $ sudo add-apt-repository ppa:claydoh/kmymoney2-kde4 $ sudo apt-get update $ sudo apt-get install kmymoney - ``` -### IDE 和编辑器 +### IDE ![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-IDE-Editors.png) @@ -257,35 +259,35 @@ $ sudo apt-get install kmymoney [Brackets][30] 是由 Adobe 开发的高级文本编辑器,它带有可视化工具,支持预处理程序,以及用于 web 开发的以设计为中心的用户流程。对于熟悉它的用户,它可以发挥 IDE 的作用。 -使用以下命令在 Ubuntu 和 Debian 安装 `Brackets`。 +使用以下命令在 Ubuntu 和 Debian 安装 Brackets。 + ``` $ sudo add-apt-repository ppa:webupd8team/brackets $ sudo apt-get update $ sudo apt-get install brackets - ``` #### Atom IDE [Atom IDE][31] 是一个加强版的 Atom 编辑器,它添加了大量扩展和库以提高性能和增加功能。总之,它是各方面都变得更强大了的 Atom 。 -使用以下命令在 Ubuntu 和 Debian 安装 `Atom`。 +使用以下命令在 Ubuntu 和 Debian 安装 Atom。 + ``` 
$ sudo apt-get install snapd $ sudo snap install atom --classic - ``` #### Light Table [Light Table][32] 号称下一代的 IDE,它提供了数据流量统计和协作编程等的强大功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `Light Table`。 +使用以下命令在 Ubuntu 和 Debian 安装 Light Table。 + ``` $ sudo add-apt-repository ppa:dr-akulavich/lighttable $ sudo apt-get update $ sudo apt-get install lighttable-installer - ``` #### Visual Studio Code @@ -302,33 +304,33 @@ $ sudo apt-get install lighttable-installer [Pidgin][35] 是一个开源的即时通信工具,它几乎支持所有聊天平台,还支持额外扩展功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `Pidgin`。 +使用以下命令在 Ubuntu 和 Debian 安装 Pidgin。 + ``` $ sudo add-apt-repository ppa:jonathonf/backports $ sudo apt-get update $ sudo apt-get install pidgin - ``` #### Skype [Skype][36] 也是一个广为人知的软件了,任何感兴趣的用户都可以在 Linux 上使用。 -使用以下命令在 Ubuntu 和 Debian 安装 `Skype`。 +使用以下命令在 Ubuntu 和 Debian 安装 Skype。 + ``` $ sudo apt install snapd $ sudo snap install skype --classic - ``` #### Empathy [Empathy][37] 是一个支持多协议语音、视频聊天、文本和文件传输的即时通信工具。它还允许用户添加多个服务的帐户,并用其与所有服务的帐户进行交互。 -使用以下命令在 Ubuntu 和 Debian 安装 `Empathy`。 +使用以下命令在 Ubuntu 和 Debian 安装 Empathy。 + ``` $ sudo apt-get install empathy - ``` ### Linux 防病毒工具 @@ -337,61 +339,61 @@ $ sudo apt-get install empathy [ClamAV][38] 是一个开源的跨平台命令行防病毒工具,用于检测木马、病毒和其他恶意代码。而 [ClamTk][39] 则是它的前端 GUI。 -使用以下命令在 Ubuntu 和 Debian 安装 `ClamAV` 和 `ClamTk`。 +使用以下命令在 Ubuntu 和 Debian 安装 ClamAV 和 ClamTk。 + ``` $ sudo apt-get install clamav $ sudo apt-get install clamtk - ``` ### Linux 桌面环境 #### Cinnamon -[Cinnamon][40] 是 GNOME 3 的免费开源衍生产品,它遵循传统的 桌面比拟desktop metaphor 约定。 +[Cinnamon][40] 是 GNOME 3 的自由开源衍生产品,它遵循传统的 桌面比拟desktop metaphor 约定。 + +使用以下命令在 Ubuntu 和 Debian 安装 Cinnamon。 -使用以下命令在 Ubuntu 和 Debian 安装 `Cinnamon`。 ``` $ sudo add-apt-repository ppa:embrosyn/cinnamon $ sudo apt update $ sudo apt install cinnamon-desktop-environment lightdm - ``` #### Mate [Mate][41] 桌面环境是 GNOME 2 的衍生和延续,目的是在 Linux 上通过使用传统的桌面比拟提供有一个吸引力的 UI。 -使用以下命令在 Ubuntu 和 Debian 安装 `Mate`。 +使用以下命令在 Ubuntu 和 Debian 安装 Mate。 + ``` $ sudo apt install tasksel $ sudo apt update $ sudo tasksel 
install ubuntu-mate-desktop - ``` #### GNOME [GNOME][42] 是由一些免费和开源应用程序组成的桌面环境,它可以运行在任何 Linux 发行版和大多数 BSD 衍生版本上。 -使用以下命令在 Ubuntu 和 Debian 安装 `Gnome`。 +使用以下命令在 Ubuntu 和 Debian 安装 Gnome。 + ``` $ sudo apt install tasksel $ sudo apt update $ sudo tasksel install ubuntu-desktop - ``` #### KDE [KDE][43] 由 KDE 社区开发,它为用户提供图形解决方案以控制操作系统并执行不同的计算任务。 -使用以下命令在 Ubuntu 和 Debian 安装 `KDE`。 +使用以下命令在 Ubuntu 和 Debian 安装 KDE。 + ``` $ sudo apt install tasksel $ sudo apt update $ sudo tasksel install kubuntu-desktop - ``` ### Linux 维护工具 @@ -400,22 +402,22 @@ $ sudo tasksel install kubuntu-desktop [GNOME Tweak Tool][44] 是用于自定义和调整 GNOME 3 和 GNOME Shell 设置的流行工具。 -使用以下命令在 Ubuntu 和 Debian 安装 `GNOME Tweak Tool`。 +使用以下命令在 Ubuntu 和 Debian 安装 GNOME Tweak Tool。 + ``` $ sudo apt install gnome-tweak-tool - ``` #### Stacer [Stacer][45] 是一款用于监控和优化 Linux 系统的免费开源应用程序。 -使用以下命令在 Ubuntu 和 Debian 安装 `Stacer`。 +使用以下命令在 Ubuntu 和 Debian 安装 Stacer。 + ``` $ sudo add-apt-repository ppa:oguzhaninan/stacer $ sudo apt-get update $ sudo apt-get install stacer - ``` #### BleachBit @@ -430,40 +432,40 @@ $ sudo apt-get install stacer [GNOME 终端][48] 是 GNOME 的默认终端模拟器。 -使用以下命令在 Ubuntu 和 Debian 安装 `Gnome Terminal`。 +使用以下命令在 Ubuntu 和 Debian 安装 Gnome 终端。 + ``` $ sudo apt-get install gnome-terminal - ``` #### Konsole [Konsole][49] 是 KDE 的一个终端模拟器。 -使用以下命令在 Ubuntu 和 Debian 安装 `Konsole`。 +使用以下命令在 Ubuntu 和 Debian 安装 Konsole。 + ``` $ sudo apt-get install konsole - ``` #### Terminator [Terminator][50] 是一个功能丰富的终端程序,它基于 GNOME 终端,并且专注于整理终端功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `Terminator`。 +使用以下命令在 Ubuntu 和 Debian 安装 Terminator。 + ``` $ sudo apt-get install terminator - ``` #### Guake [Guake][51] 是 GNOME 桌面环境下一个轻量级的可下拉式终端。 -使用以下命令在 Ubuntu 和 Debian 安装 `Guake`。 +使用以下命令在 Ubuntu 和 Debian 安装 Guake。 + ``` $ sudo apt-get install guake - ``` ### 多媒体编辑工具 @@ -472,48 +474,48 @@ $ sudo apt-get install guake [Ardour][52] 是一款漂亮的数字音频工作站Digital Audio Workstation,可以完成专业的录制、编辑和混音工作。 -使用以下命令在 Ubuntu 和 Debian 安装 `Ardour`。 +使用以下命令在 Ubuntu 和 Debian 安装 Ardour。 + 
``` $ sudo add-apt-repository ppa:dobey/audiotools $ sudo apt-get update $ sudo apt-get install ardour - ``` #### Audacity [Audacity][53] 是最著名的音频编辑软件之一,它是一款跨平台的开源多轨音频编辑器。 -使用以下命令在 Ubuntu 和 Debian 安装 `Audacity`。 +使用以下命令在 Ubuntu 和 Debian 安装 Audacity。 + ``` $ sudo add-apt-repository ppa:ubuntuhandbook1/audacity $ sudo apt-get update $ sudo apt-get install audacity - ``` #### GIMP [GIMP][54] 是 Photoshop 的开源替代品中最受欢迎的。这是因为它有多种可自定义的选项、第三方插件以及活跃的用户社区。 -使用以下命令在 Ubuntu 和 Debian 安装 `Gimp`。 +使用以下命令在 Ubuntu 和 Debian 安装 Gimp。 + ``` $ sudo add-apt-repository ppa:otto-kesselgulasch/gimp $ sudo apt update $ sudo apt install gimp - ``` #### Krita [Krita][55] 是一款开源的绘画程序,它具有美观的 UI 和可靠的性能,也可以用作图像处理工具。 -使用以下命令在 Ubuntu 和 Debian 安装 `Krita`。 +使用以下命令在 Ubuntu 和 Debian 安装 Krita。 + ``` $ sudo add-apt-repository ppa:kritalime/ppa $ sudo apt update $ sudo apt install krita - ``` #### Lightworks @@ -526,24 +528,24 @@ $ sudo apt install krita [OpenShot][58] 是一款屡获殊荣的免费开源视频编辑器,这主要得益于其出色的性能和强大的功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `Openshot`。 +使用以下命令在 Ubuntu 和 Debian 安装 Openshot。 + ``` $ sudo add-apt-repository ppa:openshot.developers/ppa $ sudo apt update $ sudo apt install openshot-qt - ``` #### PiTiV [Pitivi][59] 也是一个美观的视频编辑器,它有优美的代码库、优质的社区,还支持优秀的协作编辑功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `PiTiV`。 +使用以下命令在 Ubuntu 和 Debian 安装 PiTiV。 + ``` $ flatpak install --user https://flathub.org/repo/appstream/org.pitivi.Pitivi.flatpakref $ flatpak install --user http://flatpak.pitivi.org/pitivi.flatpakref $ flatpak run org.pitivi.Pitivi//stable - ``` ### 音乐播放器 @@ -552,31 +554,32 @@ $ flatpak run org.pitivi.Pitivi//stable [Rhythmbox][60] 支持海量种类的音乐,目前被认为是最可靠的音乐播放器,并由 Ubuntu 自带。 -使用以下命令在 Ubuntu 和 Debian 安装 `Rhythmbox`。 +使用以下命令在 Ubuntu 和 Debian 安装 Rhythmbox。 + ``` $ sudo add-apt-repository ppa:fossfreedom/rhythmbox $ sudo apt-get update $ sudo apt-get install rhythmbox - ``` #### Lollypop [Lollypop][61] 是一款较为年轻的开源音乐播放器,它有很多高级选项,包括网络电台,滑动播放和派对模式。尽管功能繁多,它仍然尽量做到简单易管理。 -使用以下命令在 Ubuntu 和 
Debian 安装 Lollypop。 + ``` $ sudo add-apt-repository ppa:gnumdk/lollypop $ sudo apt-get update $ sudo apt-get install lollypop - ``` #### Amarok [Amarok][62] 是一款功能强大的音乐播放器,它有一个直观的 UI 和大量的高级功能,而且允许用户根据自己的偏好去发现新音乐。 -使用以下命令在 Ubuntu 和 Debian 安装 `Amarok`。 +使用以下命令在 Ubuntu 和 Debian 安装 Amarok。 + ``` $ sudo apt-get update $ sudo apt-get install amarok @@ -587,48 +590,48 @@ $ sudo apt-get install amarok [Clementine][63] 是一款 Amarok 风格的音乐播放器,因此和 Amarok 相似,也有直观的用户界面、先进的控制模块,以及让用户搜索和发现新音乐的功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `Clementine`。 +使用以下命令在 Ubuntu 和 Debian 安装 Clementine。 + ``` $ sudo add-apt-repository ppa:me-davidsansome/clementine $ sudo apt-get update $ sudo apt-get install clementine - ``` #### Cmus [Cmus][64] 可以说是最高效的命令行界面音乐播放器了,它具有快速可靠的特点,也支持使用扩展。 -使用以下命令在 Ubuntu 和 Debian 安装 `Cmus`。 +使用以下命令在 Ubuntu 和 Debian 安装 Cmus。 + ``` $ sudo add-apt-repository ppa:jmuc/cmus $ sudo apt-get update $ sudo apt-get install cmus - ``` ### 办公软件 #### Calligra 套件 -Calligra 套件为用户提供了一套总共 8 个应用程序,涵盖办公、管理、图表等各个范畴。 +[Calligra 套件][65]为用户提供了一套总共 8 个应用程序,涵盖办公、管理、图表等各个范畴。 + +使用以下命令在 Ubuntu 和 Debian 安装 Calligra 套件。 -使用以下命令在 Ubuntu 和 Debian 安装 `Calligra` 套件。 ``` $ sudo apt-get install calligra - ``` #### LibreOffice [LibreOffice][66] 是开源社区中开发过程最活跃的办公套件,它以可靠性著称,也可以通过扩展来添加功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `LibreOffice`。 +使用以下命令在 Ubuntu 和 Debian 安装 LibreOffice。 + ``` $ sudo add-apt-repository ppa:libreoffice/ppa $ sudo apt update $ sudo apt install libreoffice - ``` #### WPS Office @@ -643,35 +646,35 @@ $ sudo apt install libreoffice [Shutter][69] 允许用户截取桌面的屏幕截图,然后使用一些效果进行编辑,还支持上传和在线共享。 -使用以下命令在 Ubuntu 和 Debian 安装 `Shutter`。 +使用以下命令在 Ubuntu 和 Debian 安装 Shutter。 + ``` $ sudo add-apt-repository -y ppa:shutter/ppa $ sudo apt update $ sudo apt install shutter - ``` #### Kazam [Kazam][70] 可以用于捕获屏幕截图,它的输出对于任何支持 VP8/WebM 和 PulseAudio 视频播放器都可用。 -使用以下命令在 Ubuntu 和 Debian 安装 `Kazam`。 +使用以下命令在 Ubuntu 和 Debian 安装 Kazam。 + ``` $ sudo add-apt-repository ppa:kazam-team/unstable-series $ sudo apt update $ sudo apt install 
kazam python3-cairo python3-xlib - ``` #### Gnome Screenshot [Gnome Screenshot][71] 过去曾经和 Gnome 一起捆绑,但现在已经独立出来。它以易于共享的格式进行截屏。 -使用以下命令在 Ubuntu 和 Debian 安装 `Gnome Screenshot`。 +使用以下命令在 Ubuntu 和 Debian 安装 Gnome Screenshot。 + ``` $ sudo apt-get update $ sudo apt-get install gnome-screenshot - ``` ### 录屏工具 @@ -680,69 +683,69 @@ $ sudo apt-get install gnome-screenshot [SimpleScreenRecorder][72] 面世时已经是录屏工具中的佼佼者,现在已成为 Linux 各个发行版中最有效、最易用的录屏工具之一。 -使用以下命令在 Ubuntu 和 Debian 安装 `SimpleScreenRecorder`。 +使用以下命令在 Ubuntu 和 Debian 安装 SimpleScreenRecorder。 + ``` $ sudo add-apt-repository ppa:maarten-baert/simplescreenrecorder $ sudo apt-get update $ sudo apt-get install simplescreenrecorder - ``` #### recordMyDesktop [recordMyDesktop][73] 是一个开源的会话记录器,它也能记录桌面会话的音频。 -使用以下命令在 Ubuntu 和 Debian 安装 `recordMyDesktop`。 +使用以下命令在 Ubuntu 和 Debian 安装 recordMyDesktop。 + ``` $ sudo apt-get update $ sudo apt-get install gtk-recordmydesktop - ``` -### Text Editors +### 文本编辑器 #### Atom [Atom][74] 是由 GitHub 开发和维护的可定制文本编辑器。它是开箱即用的,但也可以使用扩展和主题自定义 UI 来增强其功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `Atom`。 +使用以下命令在 Ubuntu 和 Debian 安装 Atom。 + ``` $ sudo apt-get install snapd $ sudo snap install atom --classic - ``` #### Sublime Text [Sublime Text][75] 已经成为目前最棒的文本编辑器。它可定制、轻量灵活(即使打开了大量数据文件和加入了大量扩展),最重要的是可以永久免费使用。 -使用以下命令在 Ubuntu 和 Debian 安装 `Sublime Text`。 +使用以下命令在 Ubuntu 和 Debian 安装 Sublime Text。 + ``` $ sudo apt-get install snapd $ sudo snap install sublime-text - ``` #### Geany [Geany][76] 是一个内存友好的文本编辑器,它具有基本的IDE功能,可以显示加载时间、扩展库函数等。 -使用以下命令在 Ubuntu 和 Debian 安装 `Geany`。 +使用以下命令在 Ubuntu 和 Debian 安装 Geany。 + ``` $ sudo apt-get update $ sudo apt-get install geany - ``` #### Gedit [Gedit][77] 以其简单著称,在很多 Linux 发行版都有预装,它具有文本编辑器都具有的优秀的功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `Gedit`。 +使用以下命令在 Ubuntu 和 Debian 安装 Gedit。 + ``` $ sudo apt-get update $ sudo apt-get install gedit - ``` ### 备忘录软件 @@ -763,11 +766,11 @@ Evernote 在 Linux 上没有官方提供的软件,但可以参考 [Linux 上 [Taskwarrior][81] 是一个用于管理个人任务的开源跨平台命令行应用,它的速度和无干扰的环境是它的两大特点。 -使用以下命令在 Ubuntu 和 
Debian 安装 `Taskwarrior`。 +使用以下命令在 Ubuntu 和 Debian 安装 Taskwarrior。 + ``` $ sudo apt-get update $ sudo apt-get install taskwarrior - ``` ### 视频播放器 @@ -776,49 +779,49 @@ $ sudo apt-get install taskwarrior [Banshee][82] 是一个开源的支持多格式的媒体播放器,于 2005 年开始开发并逐渐成长。 -使用以下命令在 Ubuntu 和 Debian 安装 `Banshee`。 +使用以下命令在 Ubuntu 和 Debian 安装 Banshee。 + ``` $ sudo add-apt-repository ppa:banshee-team/ppa $ sudo apt-get update $ sudo apt-get install banshee - ``` #### VLC [VLC][83] 是我最喜欢的视频播放器,它几乎可以播放任何格式的音频和视频,它还可以播放网络电台、录制桌面会话以及在线播放电影。 -使用以下命令在 Ubuntu 和 Debian 安装 `VLC`。 +使用以下命令在 Ubuntu 和 Debian 安装 VLC。 + ``` $ sudo add-apt-repository ppa:videolan/stable-daily $ sudo apt-get update $ sudo apt-get install vlc - ``` #### Kodi [Kodi][84] 是世界上最著名的媒体播放器之一,它有一个成熟的媒体中心,可以播放本地和远程的多媒体文件。 -使用以下命令在 Ubuntu 和 Debian 安装 `Kodi`。 +使用以下命令在 Ubuntu 和 Debian 安装 Kodi。 + ``` $ sudo apt-get install software-properties-common $ sudo add-apt-repository ppa:team-xbmc/ppa $ sudo apt-get update $ sudo apt-get install kodi - ``` #### SMPlayer [SMPlayer][85] 是 MPlayer 的 GUI 版本,所有流行的媒体格式它都能够处理,并且它还有支持 YouTube 和 Chromecast 以及下载字幕的功能。 -使用以下命令在 Ubuntu 和 Debian 安装 `SMPlayer`。 +使用以下命令在 Ubuntu 和 Debian 安装 SMPlayer。 + ``` $ sudo add-apt-repository ppa:rvm/smplayer $ sudo apt-get update $ sudo apt-get install smplayer - ``` ### 虚拟化工具 @@ -827,14 +830,14 @@ $ sudo apt-get install smplayer [VirtualBox][86] 是一个用于操作系统虚拟化的开源应用程序,在服务器、台式机和嵌入式系统上都可以运行。 -使用以下命令在 Ubuntu 和 Debian 安装 `VirtualBox`。 +使用以下命令在 Ubuntu 和 Debian 安装 VirtualBox。 + ``` $ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add - $ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add - $ sudo apt-get update $ sudo apt-get install virtualbox-5.2 $ virtualbox - ``` #### VMWare @@ -849,34 +852,33 @@ $ virtualbox [Google Chrome][89] 无疑是最受欢迎的浏览器。Chrome 以其速度、简洁、安全、美观而受人喜爱,它遵循了 Google 的界面设计风格,是 web 开发人员不可缺少的浏览器,同时它也是免费开源的。 -使用以下命令在 Ubuntu 和 Debian 安装 `Google Chrome`。 +使用以下命令在 Ubuntu 和 Debian 安装 Google Chrome。 + ``` $ 
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add - $ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' $ sudo apt-get update $ sudo apt-get install google-chrome-stable - ``` #### Firefox -[Firefox Quantum][90] 是一款漂亮、快速、完善并且可自定义的浏览器。它也是免费开源的,包含有开发人员所需要的工具,对于初学者也没有任何使用门槛。 +[Firefox Quantum][90] 是一款漂亮、快速、完善并且可自定义的浏览器。它也是自由开源的,包含有开发人员所需要的工具,对于初学者也没有任何使用门槛。 + +使用以下命令在 Ubuntu 和 Debian 安装 Firefox Quantum。 -使用以下命令在 Ubuntu 和 Debian 安装 `Firefox Quantum`。 ``` $ sudo add-apt-repository ppa:mozillateam/firefox-next $ sudo apt update && sudo apt upgrade $ sudo apt install firefox - ``` #### Vivaldi -[Vivaldi][91] 是一个基于 Chrome 的免费开源项目,旨在通过添加扩展来使 Chrome 的功能更加完善。色彩丰富的界面,性能良好、灵活性强是它的几大特点。 +[Vivaldi][91] 是一个基于 Chrome 的自由开源项目,旨在通过添加扩展来使 Chrome 的功能更加完善。色彩丰富的界面,性能良好、灵活性强是它的几大特点。 参考阅读:[在 Ubuntu 下载 Vivaldi][91] -That concludes our list for today. Did I skip a famous title? Tell me about it in the comments section below. 以上就是我的推荐,你还有更好的软件向大家分享吗?欢迎评论。 -------------------------------------------------------------------------------- @@ -886,7 +888,7 @@ via: https://www.fossmint.com/most-used-linux-applications/ 作者:[Martins D. 
Okoi][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d07a1a6f55610257b2af3ac8b5c25edd5baa17a7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 10 Oct 2018 13:40:27 +0800 Subject: [PATCH 321/736] PUB:20180724 75 Most Used Essential Linux Applications of 2018.md @HankChow https://linux.cn/article-10099-1.html --- .../20180724 75 Most Used Essential Linux Applications of 2018.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180724 75 Most Used Essential Linux Applications of 2018.md (100%) diff --git a/translated/tech/20180724 75 Most Used Essential Linux Applications of 2018.md b/published/20180724 75 Most Used Essential Linux Applications of 2018.md similarity index 100% rename from translated/tech/20180724 75 Most Used Essential Linux Applications of 2018.md rename to published/20180724 75 Most Used Essential Linux Applications of 2018.md From a0678c440439973f224d5b931ca570d3e10b5860 Mon Sep 17 00:00:00 2001 From: belitex Date: Wed, 10 Oct 2018 13:50:12 +0800 Subject: [PATCH 322/736] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90:?= =?UTF-8?q?=203=20open=20source=20distributed=20tracing=20tools?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...3 open source distributed tracing tools.md | 90 ------------------- ...3 open source distributed tracing tools.md | 88 ++++++++++++++++++ 2 files changed, 88 insertions(+), 90 deletions(-) delete mode 100644 sources/tech/20180926 3 open source distributed tracing tools.md create mode 100644 translated/tech/20180926 3 open source distributed tracing tools.md diff --git a/sources/tech/20180926 3 open source distributed tracing tools.md b/sources/tech/20180926 3 open source distributed tracing tools.md deleted file mode 
100644 index 9879302d38..0000000000 --- a/sources/tech/20180926 3 open source distributed tracing tools.md +++ /dev/null @@ -1,90 +0,0 @@ -translating by belitex - -3 open source distributed tracing tools -====== - -Find performance issues quickly with these tools, which provide a graphical view of what's happening across complex software systems. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8) - -Distributed tracing systems enable users to track a request through a software system that is distributed across multiple applications, services, and databases as well as intermediaries like proxies. This allows for a deeper understanding of what is happening within the software system. These systems produce graphical representations that show how much time the request took on each step and list each known step. - -A user reviewing this content can determine where the system is experiencing latencies or blockages. Instead of testing the system like a binary search tree when requests start failing, operators and developers can see exactly where the issues begin. This can also reveal where performance changes might be occurring from deployment to deployment. It’s always better to catch regressions automatically by alerting to the anomalous behavior than to have your customers tell you. - -How does this tracing thing work? Well, each request gets a special ID that’s usually injected into the headers. This ID uniquely identifies that transaction. This transaction is normally called a trace. The trace is the overall abstract idea of the entire transaction. Each trace is made up of spans. These spans are the actual work being performed, like a service call or a database request. Each span also has a unique ID. Spans can create subsequent spans called child spans, and child spans can have multiple parents. 
- -Once a transaction (or trace) has run its course, it can be searched in a presentation layer. There are several tools in this space that we’ll discuss later, but the picture below shows [Jaeger][1] from my [Istio walkthrough][2]. It shows multiple spans of a single trace. The power of this is immediately clear as you can better understand the transaction’s story at a glance. - -![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png) - -This demo uses Istio’s built-in OpenTracing implementation, so I can get tracing without even modifying my application. It also uses Jaeger, which is OpenTracing-compatible. - -So what is OpenTracing? Let’s find out. - -### OpenTracing API - -[OpenTracing][3] is a spec that grew out of [Zipkin][4] to provide cross-platform compatibility. It offers a vendor-neutral API for adding tracing to applications and delivering that data into distributed tracing systems. A library written for the OpenTracing spec can be used with any system that is OpenTracing-compliant. Zipkin, Jaeger, and Appdash are examples of open source tools that have adopted the open standard, but even proprietary tools like [Datadog][5] and [Instana][6] are adopting it. This is expected to continue as OpenTracing reaches ubiquitous status. - -### OpenCensus - -Okay, we have OpenTracing, but what is this [OpenCensus][7] thing that keeps popping up in my searches? Is it a competing standard, something completely different, or something complementary? - -The answer depends on who you ask. I will do my best to explain the difference (as I understand it): OpenCensus takes a more holistic or all-inclusive approach. OpenTracing is focused on establishing an open API and spec and not on open implementations for each language and tracing system. OpenCensus provides not only the specification but also the language implementations and wire protocol. 
It also goes beyond tracing by including additional metrics that are normally outside the scope of distributed tracing systems. - -OpenCensus allows viewing data on the host where the application is running, but it also has a pluggable exporter system for exporting data to central aggregators. The current exporters produced by the OpenCensus team include Zipkin, Prometheus, Jaeger, Stackdriver, Datadog, and SignalFx, but anyone can create an exporter. - -From my perspective, there’s a lot of overlap. One isn’t necessarily better than the other, but it’s important to know what each does and doesn’t do. OpenTracing is primarily a spec, with others doing the implementation and opinionation. OpenCensus provides a holistic approach for the local component with more opinionation but still requires other systems for remote aggregation. - -### Tool options - -#### Zipkin - -Zipkin was one of the first systems of this kind. It was developed by Twitter based on the [Google Dapper paper][8] about the internal system Google uses. Zipkin was written using Java, and it can use Cassandra or ElasticSearch as a scalable backend. Most companies should be satisfied with one of those options. The lowest supported Java version is Java 6. It also uses the [Thrift][9] binary communication protocol, which is popular in the Twitter stack and is hosted as an Apache project. - -The system consists of reporters (clients), collectors, a query service, and a web UI. Zipkin is meant to be safe in production by transmitting only a trace ID within the context of a transaction to inform receivers that a trace is in process. The data collected in each reporter is then transmitted asynchronously to the collectors. The collectors store these spans in the database, and the web UI presents this data to the end user in a consumable format. The delivery of data to the collectors can occur in three different methods: HTTP, Kafka, and Scribe. 
- -The [Zipkin community][10] has also created [Brave][11], a Java client implementation compatible with Zipkin. It has no dependencies, so it won’t drag your projects down or clutter them with libraries that are incompatible with your corporate standards. There are many other implementations, and Zipkin is compatible with the OpenTracing standard, so these implementations should also work with other distributed tracing systems. The popular Spring framework has a component called [Spring Cloud Sleuth][12] that is compatible with Zipkin. - -#### Jaeger - -[Jaeger][1] is a newer project from Uber Technologies that the [CNCF][13] has since adopted as an Incubating project. It is written in Golang, so you don’t have to worry about having dependencies installed on the host or any overhead of interpreters or language virtual machines. Similar to Zipkin, Jaeger also supports Cassandra and ElasticSearch as scalable storage backends. Jaeger is also fully compatible with the OpenTracing standard. - -Jaeger’s architecture is similar to Zipkin, with clients (reporters), collectors, a query service, and a web UI, but it also has an agent on each host that locally aggregates the data. The agent receives data over a UDP connection, which it batches and sends to a collector. The collector receives that data in the form of the [Thrift][14] protocol and stores that data in Cassandra or ElasticSearch. The query service can access the data store directly and provide that information to the web UI. - -By default, a user won’t get all the traces from the Jaeger clients. The system samples 0.1% (1 in 1,000) of traces that pass through each client. Keeping and transmitting all traces would be a bit overwhelming to most systems. However, this can be increased or decreased by configuring the agents, which the client consults with for its configuration. This sampling isn’t completely random, though, and it’s getting better. 
Jaeger uses probabilistic sampling, which tries to make an educated guess at whether a new trace should be sampled or not. [Adaptive sampling is on its roadmap][15], which will improve the sampling algorithm by adding additional context for making decisions. - -#### Appdash - -[Appdash][16] is a distributed tracing system written in Golang, like Jaeger. It was created by [Sourcegraph][17] based on Google’s Dapper and Twitter’s Zipkin. Similar to Jaeger and Zipkin, Appdash supports the OpenTracing standard; this was a later addition and requires a component that is different from the default component. This adds risk and complexity. - -At a high level, Appdash’s architecture consists mostly of three components: a client, a local collector, and a remote collector. There’s not a lot of documentation, so this description comes from testing the system and reviewing the code. The client in Appdash gets added to your code. Appdash provides Python, Golang, and Ruby implementations, but OpenTracing libraries can be used with Appdash’s OpenTracing implementation. The client collects the spans and sends them to the local collector. The local collector then sends the data to a centralized Appdash server running its own local collector, which is the remote collector for all other nodes in the system. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/distributed-tracing-tools - -作者:[Dan Barker][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/barkerd427 -[1]: https://www.jaegertracing.io/ -[2]: https://www.youtube.com/watch?v=T8BbeqZ0Rls -[3]: http://opentracing.io/ -[4]: https://zipkin.io/ -[5]: https://www.datadoghq.com/ -[6]: https://www.instana.com/ -[7]: https://opencensus.io/ -[8]: https://research.google.com/archive/papers/dapper-2010-1.pdf -[9]: https://thrift.apache.org/ -[10]: https://zipkin.io/pages/community.html -[11]: https://github.com/openzipkin/brave -[12]: https://cloud.spring.io/spring-cloud-sleuth/ -[13]: https://www.cncf.io/ -[14]: https://en.wikipedia.org/wiki/Apache_Thrift -[15]: https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling -[16]: https://github.com/sourcegraph/appdash -[17]: https://about.sourcegraph.com/ diff --git a/translated/tech/20180926 3 open source distributed tracing tools.md b/translated/tech/20180926 3 open source distributed tracing tools.md new file mode 100644 index 0000000000..773ff3f940 --- /dev/null +++ b/translated/tech/20180926 3 open source distributed tracing tools.md @@ -0,0 +1,88 @@ +三个开源的分布式追踪工具 +====== + +这几个工具对复杂软件系统中的实时事件做了可视化,能帮助你快速发现性能问题。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8) + +分布式追踪系统能够从头到尾地追踪分布式系统中的请求,跨越多个应用、服务、数据库以及像代理这样的中间件。它能帮助你更深入地理解系统中到底发生了什么。追踪系统以图形化的方式,展示出每个已知步骤以及某个请求在每个步骤上的耗时。 + +用户可以通过这些展示来判断系统的哪个环节有延迟或阻塞,当请求失败时,运维和开发人员可以看到准确的问题源头,而不需要去测试整个系统,比如用二叉查找树的方法去定位问题。在开发迭代的过程中,追踪系统还能够展示出可能引起性能变化的环节。通过异常行为的警告自动地感知到性能在退化,总是比客户告诉你要好。 + +追踪是怎么工作的呢?给每个请求分配一个特殊 ID,这个 ID 通常会插入到请求头部中。它唯一标识了对应的事务。一般把事务叫做 trace,trace 
是抽象整个事务的概念。每一个 trace 由 span 组成,span 代表着一次请求中真正执行的操作,比如一次服务调用,一次数据库请求等。每一个 span 也有自己唯一的 ID。span 之下也可以创建子 span,子 span 可以有多个父 span。 + +当一次事务(或者说 trace)运行过之后,就可以在追踪系统的表示层上搜索了。有几个工具可以用作表示层,我们下文会讨论,不过,我们先看下面的图,它是我在 [Istio walkthrough][2] 视频教程中提到的 [Jaeger][1] 界面,展示了单个 trace 中的多个 span。很明显,这个图能让你一目了然地对事务有更深的了解。 + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png) + +这个 demo 使用了 Istio 内置的 OpenTracing 实现,所以我甚至不需要修改自己的应用代码就可以获得追踪数据。我也用到了 Jaeger,它是兼容 OpenTracing 的。 + +那么 OpenTracing 到底是什么呢?我们来看看。 + +### OpenTracing API + +[OpenTracing][3] 是源自 [Zipkin][4] 的规范,以提供跨平台兼容性。它提供了对厂商中立的 API,用来向应用程序添加追踪功能并将追踪数据发送到分布式的追踪系统。按照 OpenTracing 规范编写的库,可以被任何兼容 OpenTracing 的系统使用。采用这个开放标准的开源工具有 Zipkin,Jaeger 和 Appdash 等。甚至像 [Datadog][5] 和 [Instana][6] 这种付费工具也在采用。因为现在 OpenTracing 已经无处不在,这样的趋势有望继续发展下去。 + +### OpenCensus + +OpenTracing 已经说过了,可 [OpenCensus][7] 又是什么呢?它在搜索结果中老是出现。它是一个和 OpenTracing 完全不同或者互补的竞争标准吗? + +这个问题的答案取决于你的提问对象。我先尽我所能地解释一下他们的不同(按照我的理解):OpenCensus 更加全面或者说它包罗万象。OpenTracing 专注于建立开放的 API 和规范,而不是为每一种开发语言和追踪系统都提供开放的实现。OpenCensus 不仅提供规范,还提供开发语言的实现,和连接协议,而且它不仅只做追踪,还引入了额外的度量指标,这些一般不在分布式追踪系统的职责范围。 + +使用 OpenCensus,我们能够在运行着应用程序的主机上查看追踪数据,但它也有个可插拔的导出器系统,用于导出数据到中心聚合器。目前 OpenCensus 团队提供的导出器包括 Zipkin,Prometheus,Jaeger,Stackdriver,Datadog 和 SignalFx,不过任何人都可以创建一个导出器。 + +依我看这两者有很多重叠的部分,没有哪个一定比另外一个好,但是重要的是,要知道它们做什么事情和不做什么事情。OpenTracing 主要是一个规范,具体的实现和独断的设计由其他人来做。OpenCensus 更加独断地为本地组件提供了全面的解决方案,但是仍然需要其他系统做远程的聚合。 + +### 可选工具 + +#### Zipkin + +Zipkin 是最早出现的这类工具之一。谷歌在 2010 年发表了介绍其内部追踪系统 Dapper 的[论文][8],Twitter 以此为基础开发了 Zipkin。Zipkin 的开发语言是 Java,用 Cassandra 或 ElasticSearch 作为可扩展的存储后端,这些选择能满足大部分公司的需求。Zipkin 支持的最低 Java 版本是 Java 6,它也使用了 [Thrift][9] 的二进制通信协议,Thrift 在 Twitter 的系统中很流行,现在作为 Apache 项目在托管。 + +这个系统包括上报器(客户端),数据收集器,查询服务和一个 web 界面。Zipkin 只传输一个带事务上下文的 trace ID 来告知接收者追踪的进行,所以说在生产环境中是安全的。每一个客户端收集到的数据,会异步地传输到数据收集器。收集器把这些 span 的数据存到数据库,web 界面负责用可消费的格式展示这些数据给用户。客户端传输数据到收集器有三种方式:HTTP,Kafka 和 Scribe。 + +[Zipkin 社区][10] 还提供了 [Brave][11],一个跟 Zipkin 兼容的 Java 客户端的实现。由于 
Brave 没有任何依赖,所以它不会拖累你的项目,也不会使用跟你们公司标准不兼容的库来搞乱你的项目。除 Brave 之外,还有很多其他的 Zipkin 客户端实现,因为 Zipkin 和 OpenTracing 标准是兼容的,所以这些实现也能用到其他的分布式追踪系统中。流行的 Spring 框架中一个叫 [Spring Cloud Sleuth][12] 的分布式追踪组件,它和 Zipkin 是兼容的。 + +#### Jaeger + +[Jaeger][1] 来自 Uber,是一个比较新的项目,[CNCF][13] (云原生计算基金会)已经把 Jaeger 托管为孵化项目。Jaeger 使用 Golang 开发,因此你不用担心在服务器上安装依赖的问题,也不用担心开发语言的解释器或虚拟机的开销。和 Zipkin 类似,Jaeger 也支持用 Cassandra 和 ElasticSearch 做可扩展的存储后端。Jaeger 也完全兼容 OpenTracing 标准。 + +Jaeger 的架构跟 Zipkin 很像,有客户端(上报器),数据收集器,查询服务和一个 web 界面,不过它还有一个在各个服务器上运行着的代理,负责在服务器本地做数据聚合。代理通过一个 UDP 连接接收数据,然后分批处理,发送到数据收集器。收集器接收到的数据是 [Thrift][14] 协议的格式,它把数据存到 Cassandra 或者 ElasticSearch 中。查询服务能直接访问数据库,并给 web 界面提供所需的信息。 + +默认情况下,Jaeger 客户端不会采集所有的追踪数据,只抽样了 0.1% 的( 1000 个采 1 个)追踪数据。对大多数系统来说,保留所有的追踪数据并传输的话就太多了。不过,通过配置代理可以调整这个值,客户端会从代理获取自己的配置。这个抽样并不是完全随机的,并且正在变得越来越好。Jaeger 使用概率抽样,试图对是否应该对新踪迹进行抽样进行有根据的猜测。 自适应采样已经在[路线图][15],它将通过添加额外的,能够帮助做决策的上下文,来改进采样算法。 + +#### Appdash + +[Appdash][16] 也是一个用 Golang 写的分布式追踪系统,和 Jaeger 一样。Appdash 是 [Sourcegraph][17] 公司基于谷歌的 Dapper 和 Twitter 的 Zipkin 开发的。同样的,它也支持 Opentracing 标准,不过这是后来添加的功能,依赖了一个与默认组件不同的组件,因此增加了风险和复杂度。 + +从高层次来看,Appdash 的架构主要有三个部分:客户端,本地收集器和远程收集器。因为没有很多文档,所以这个架构描述是基于对系统的测试以及查看源码。写代码时需要把 Appdash 的客户端添加进来。 Appdash 提供了 Python,Golang 和 Ruby 的实现,不过 OpenTracing 库可以与 Appdash 的 OpenTracing 实现一起使用。 客户端收集 span 数据,并将它们发送到本地收集器。然后,本地收集器将数据发送到中心的 Appdash 服务器,这个服务器上运行着自己的本地收集器,它的本地收集器是其他所有节点的远程收集器。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/distributed-tracing-tools + +作者:[Dan Barker][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[belitex](https://github.com/belitex) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/barkerd427 +[1]: https://www.jaegertracing.io/ +[2]: https://www.youtube.com/watch?v=T8BbeqZ0Rls +[3]: http://opentracing.io/ +[4]: https://zipkin.io/ +[5]: 
https://www.datadoghq.com/ +[6]: https://www.instana.com/ +[7]: https://opencensus.io/ +[8]: https://research.google.com/archive/papers/dapper-2010-1.pdf +[9]: https://thrift.apache.org/ +[10]: https://zipkin.io/pages/community.html +[11]: https://github.com/openzipkin/brave +[12]: https://cloud.spring.io/spring-cloud-sleuth/ +[13]: https://www.cncf.io/ +[14]: https://en.wikipedia.org/wiki/Apache_Thrift +[15]: https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling +[16]: https://github.com/sourcegraph/appdash +[17]: https://about.sourcegraph.com/ From 5357f94d587994ea6e5c76f02489dd389a74e575 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Wed, 10 Oct 2018 15:48:04 +0800 Subject: [PATCH 323/736] Translated by qhwdw --- ...0180824 What Stable Kernel Should I Use.md | 140 ------------------ ...0180824 What Stable Kernel Should I Use.md | 139 +++++++++++++++++ 2 files changed, 139 insertions(+), 140 deletions(-) delete mode 100644 sources/tech/20180824 What Stable Kernel Should I Use.md create mode 100644 translated/tech/20180824 What Stable Kernel Should I Use.md diff --git a/sources/tech/20180824 What Stable Kernel Should I Use.md b/sources/tech/20180824 What Stable Kernel Should I Use.md deleted file mode 100644 index 52b77498c5..0000000000 --- a/sources/tech/20180824 What Stable Kernel Should I Use.md +++ /dev/null @@ -1,140 +0,0 @@ -Translating by qhwdw -What Stable Kernel Should I Use? -====== -I get a lot of questions about people asking me about what stable kernel should they be using for their product/device/laptop/server/etc. all the time. Especially given the now-extended length of time that some kernels are being supported by me and others, this isn’t always a very obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use what ever kernel version you want, but here’s what I recommend. - -As always, the opinions written here are my own, I speak for no one but myself. 
-
-### What kernel to pick
-
-Here’s my short list of what kernel you should use, ranked from best to worst options. I’ll go into the details of all of these below, but if you just want the summary of all of this, here it is:
-
-Hierarchy of what kernel to use, from best solution to worst:
-
-  * Supported kernel from your favorite Linux distribution
-  * Latest stable release
-  * Latest LTS release
-  * Older LTS release that is still being maintained
-
-
-
-What kernel to never use:
-
-  * Unmaintained kernel release
-
-
-
-To give numbers to the above, today, as of August 24, 2018, the front page of kernel.org looks like this:
-
-![][1]
-
-So, based on the above list that would mean that:
-
-  * 4.18.5 is the latest stable release
-  * 4.14.67 is the latest LTS release
-  * 4.9.124, 4.4.152, and 3.16.57 are the older LTS releases that are still being maintained
-  * 4.17.19 and 3.18.119 are “End of Life” kernels that have had a release in the past 60 days, and as such stick around on the kernel.org site for those who still might want to use them.
-
-
-
-Quite easy, right?
-
-Ok, now for some justification for all of this:
-
-### Distribution kernels
-
-The best solution for almost all Linux users is to just use the kernel from your favorite Linux distribution. Personally, I prefer the community-based Linux distributions that constantly roll along with the latest updated kernel and are supported by that developer community. Distributions in this category are Fedora, openSUSE, Arch, Gentoo, CoreOS, and others.
-
-All of these distributions use the latest stable upstream kernel release and make sure that any needed bugfixes are applied on a regular basis. That is one of the most solid and best kernels that you can use when it comes to having the latest fixes ([remember all fixes are security fixes][2]) in it.
-
-There are some community distributions that take a bit longer to move to a new kernel release, but eventually get there and support the kernel they currently have quite well. Those are also great to use, and examples of these are Debian and Ubuntu.
-
-Just because I did not list your favorite distro here does not mean its kernel is not good. Look on the web site for the distro and make sure that the kernel package is constantly updated with the latest security patches, and all should be well.
-
-Lots of people seem to like the old, “traditional” model of a distribution and use RHEL, SLES, CentOS or the “LTS” Ubuntu release. Those distros pick a specific kernel version and then camp out on it for years, if not decades. They do loads of work backporting the latest bugfixes and sometimes new features to these kernels, all in a quixotic quest to keep the version number from ever changing, despite having many thousands of changes on top of that older kernel version. This work is a truly thankless job, and the developers assigned to these tasks do some wonderful work in order to achieve these goals. If you like never seeing your kernel version number change, then use these distributions. They usually cost some money to use, but the support you get from these companies is worth it when something goes wrong.
-
-So again, the best kernel you can use is one that someone else supports, and you can turn to for help. Use that support, usually you are already paying for it (for the enterprise distributions), and those companies know what they are doing.
-
-But, if you do not want to trust someone else to manage your kernel for you, or you have hardware that a distribution does not support, then you want to run the latest stable release:
-
-### Latest stable release
-
-This kernel is the latest one from the Linux kernel developer community that they declare as “stable”. About every three months, the community releases a new stable kernel that contains all of the newest hardware support, the latest performance improvements, as well as the latest bugfixes for all parts of the kernel. Over the next 3 months, bugfixes that go into the next kernel release to be made are backported into this stable release, so that any users of this kernel are sure to get them as soon as possible.
-
-This is usually the kernel that most community distributions use as well, so you can be sure it is tested and has a large audience of users. Also, the kernel community (all 4000+ developers) are willing to help support users of this release, as it is the latest one that they made.
-
-After 3 months, a new kernel is released and you should move to it to ensure that you stay up to date, as support for this kernel is usually dropped a few weeks after the newer release happens.
-
-If you have new hardware that is purchased after the last LTS release came out, you almost are guaranteed to have to run this kernel in order to have it supported. So for desktops or new servers, this is usually the recommended kernel to be running.
-
-### Latest LTS release
-
-If your hardware relies on a vendor's out-of-tree patch in order to make it work properly (like almost all embedded devices these days), then the next best kernel to be using is the latest LTS release. That release gets all of the latest kernel fixes that go into the stable releases where applicable, and lots of users test and use it.
-
-Note, no new features and almost no new hardware support is ever added to these kernels, so if you need to use a new device, it is better to use the latest stable release, not this release.
-
-Also this release is common for users that do not like to worry about “major” upgrades happening on them every 3 months. So they stick to this release and upgrade every year instead, which is a fine practice to follow.
-
-The downsides of using this release is that you do not get the performance improvements that happen in newer kernels, except when you update to the next LTS kernel, potentially a year in the future. That could be significant for some workloads, so be very aware of this.
-
-Also, if you have problems with this kernel release, the first thing that any developer whom you report the issue to is going to ask you to do is, “does the latest stable release have this problem?” So you will need to be aware that support might not be as easy to get as with the latest stable releases.
-
-Now if you are stuck with a large patchset and can not update to a new LTS kernel once a year, perhaps you want the older LTS releases:
-
-### Older LTS release
-
-These releases have traditionally been supported by the community for 2 years, sometimes longer for when a major distribution relies on this (like Debian or SLES). However in the past year, thanks to a lot of support and investment in testing and infrastructure from Google, Linaro, Linaro member companies, [kernelci.org][3], and others, these kernels are starting to be supported for much longer.
-
-Here’s the latest LTS releases and how long they will be supported for, as shown at [kernel.org/category/releases.html][4] on August 24, 2018:
-
-![][5]
-
-The reason that Google and other companies want to have these kernels live longer is due to the crazy (some will say broken) development model of almost all SoC chips these days. Those devices start their development lifecycle a few years before the chip is released, however that code is never merged upstream, resulting in a brand new chip being released based on a 2 year old kernel. These SoC trees usually have over 2 million lines added to them, making them something that I have started calling “Linux-like” kernels.
-
-If the LTS releases stop happening after 2 years, then support from the community instantly stops, and no one ends up doing bugfixes for them. This results in millions of very insecure devices floating around in the world, not something that is good for any ecosystem.
-
-Because of this dependency, these companies now require new devices to constantly update to the latest LTS releases as they happen for their specific release version (i.e. every 4.9.y release that happens). An example of this is the Android kernel requirements for new devices: the “O” and now “P” releases specified the minimum kernel version allowed, and Android security releases might start to require those “.y” releases to happen more frequently on devices.
-
-I will note that some manufacturers are already doing this today. Sony is one great example of this, updating to the latest 4.4.y release on many of their new phones for their quarterly security release. Another good example is the small company Essential which has been tracking the 4.4.y releases faster than anyone that I know of.
-
-There is one huge caveat when using a kernel like this. The number of security fixes that get backported are not as great as with the latest LTS release, because the traditional model of the devices that use these older LTS kernels is a much more reduced user model. These kernels are not to be used in any type of “general computing” model where you have untrusted users or virtual machines, as the ability to do some of the recent Spectre-type fixes for older releases is greatly reduced, if present at all in some branches.
-
-So again, only use older LTS releases in a device that you fully control, or lock down with a very strong security model (like Android enforces using SELinux and application isolation). Never use these releases on a server with untrusted users, programs, or virtual machines.
-
-Also, support from the community for these older LTS releases is greatly reduced even from the normal LTS releases, if available at all. If you use these kernels, you really are on your own, and need to be able to support the kernel yourself, or rely on your SoC vendor to provide that support for you (note that almost none of them do provide that support, so beware…)
-
-### Unmaintained kernel release
-
-Surprisingly, many companies do just grab a random kernel release, slap it into their product and proceed to ship it in hundreds of thousands of units without a second thought. One crazy example of this would be the Lego Mindstorm systems that shipped a random -rc release of a kernel in their device for some unknown reason. A -rc release is a development release that not even the Linux kernel developers feel is ready for everyone to use just yet, let alone millions of users.
-
-You are of course free to do this if you want, but note that you really are on your own here. The community can not support you as no one is watching all kernel versions for specific issues, so you will have to rely on in-house support for everything that could go wrong. Which for some companies and systems, could be just fine, but be aware of the “hidden” cost this might cause if you do not plan for this up front.
-
-### Summary
-
-So, here’s a short list of different types of devices, and what I would recommend for their kernels:
-
-  * Laptop / Desktop: Latest stable release
-  * Server: Latest stable release or latest LTS release
-  * Embedded device: Latest LTS release or older LTS release if the security model used is very strong and tight.
-
-
-
-And as for me, what do I run on my machines? My laptops run the latest development kernel (i.e. Linus’s development tree) plus whatever kernel changes I am currently working on and my servers run the latest stable release. So despite being in charge of the LTS releases, I don’t run them myself, except in testing systems. I rely on the development and latest stable releases to ensure that my machines are running the fastest and most secure releases that we know how to create at this point in time.
-
--------------------------------------------------------------------------------
-
-via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/
-
-作者:[Greg Kroah-Hartman][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://kroah.com
-[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png
-[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/
-[3]:https://kernelci.org/
-[4]:https://www.kernel.org/category/releases.html
-[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png
diff --git a/translated/tech/20180824 What Stable Kernel Should I Use.md b/translated/tech/20180824 What Stable Kernel Should I Use.md
new file mode 100644
index 0000000000..7a8d330a77
--- /dev/null
+++ b/translated/tech/20180824 What Stable Kernel Should I Use.md
@@ -0,0 +1,139 @@
+我应该使用哪些稳定版内核?
+======
+很多人都问我这样的问题,在他们的产品/设备/笔记本/服务器等上面应该使用什么样的稳定版内核。一直以来,尤其是那些现在已经延长支持时间的内核,都是由我和其他人提供支持,因此,给出这个问题的答案并不是件容易的事情。因此这篇文章我将尝试去给出我在这个问题上的看法。当然,你可以任意选用任何一个你想去使用的内核版本,这里只是我的建议。
+
+和以前一样,在这里给出的这些看法只代表我个人的意见。
+
+### 可选择的内核有哪些
+
+下面列出了我建议你应该去使用的内核的列表,从最好的到最差的都有。我在下面将详细介绍,但如果你只想要一个总结,那就是下面这个:
+
+建议你使用的内核的分级,从最佳的方案到最差的方案如下:
+
+  * 你最喜欢的 Linux 发行版支持的内核
+  * 最新的稳定版
+  * 最新的 LTS 发行版
+  * 仍然处于维护状态的老的 LTS 发行版
+
+
+
+绝对不要去使用的内核:
+
+  * 不再维护的内核发行版
+
+
+
+给上面的列表附上具体的数字,今天是 2018 年 8 月 24 日,kernel.org 的首页是这样的:
+
+![][1]
+
+因此,基于上面的列表,那它应该是:
+
+  * 4.18.5 是最新的稳定版
+  * 4.14.67 是最新的 LTS 发行版
+  * 4.9.124、4.4.152、以及 3.16.57 是仍然处于维护状态的老的 LTS 发行版
+  * 4.17.19 和 3.18.119 是过去 60 天内 “生命周期终止” 的内核版本,它们仍然保留在 kernel.org 站点上,是为了仍然想去使用它们的那些人。
+
+
+
+非常容易,对吗?
+
+Ok,现在我给出这样选择的一些理由:
+
+### Linux 发行版内核
+
+对于大多数 Linux 用户来说,最好的方案就是使用你喜欢的 Linux 发行版的内核。就我本人而言,我比较喜欢基于社区的、内核不断滚动升级的用最新内核的 Linux 发行版,并且它也是由开发者社区来支持的。这种类型的发行版有 Fedora、openSUSE、Arch、Gentoo、CoreOS、以及其它的。
+
+所有这些发行版都使用了上游的最新的稳定版内核,并且确保定期打了需要的 bug 修复补丁。当它拥有了最新的修复之后([记住所有的修复都是安全修复][2]),这就是你可以使用的最安全、最好的内核之一。
+
+有些社区的 Linux 发行版需要很长的时间才发行一个新内核的发行版,但是最终发行的版本和所支持的内核都是非常好的。这些也都非常好用,Debian 和 Ubuntu 就是这样的例子。
+
+我没有在这里列出你所喜欢的发行版,并不是意味着它们的内核不够好。查看这些发行版的网站,确保它们的内核包是不断应用最新的安全补丁进行升级过的,那么它就应该是很好的。
+
+许多人好像喜欢旧的、“传统” 模式的发行版,以及使用 RHEL、SLES、CentOS 或者 “LTS” Ubuntu 发行版。这些发行版挑选一个特定的内核版本,然后使用好几年,甚至几十年。他们移植了最新的 bug 修复,有时也有一些内核的新特性,所做的这一切只是为了堂吉诃德式地保持版本号不变而已,尽管他们已经在那个旧的内核版本上做了成千上万的变更。这其实是一个吃力不讨好的工作,分配到这些任务的开发者做了一些非常出色的工作才实现了这些目标。如果你希望永远不会看到内核版本号发生变化,那就使用这些发行版。使用它们通常需要付出一些成本,但是当发生错误时能够从这些公司得到支持,那就是值得的。
+
+所以,你能使用的最好的内核是你可以求助于别人,而别人可以为你提供支持的内核。使用那些支持,你通常都已经为它支付过费用了(对于企业发行版),而这些公司也知道他们的职责是什么。
+
+但是,如果你不希望去依赖别人,而是希望你自己管理你的内核,或者你有发行版不支持的硬件,那么你应该去使用最新的稳定版:
+
+### 最新的稳定版
+
+最新的稳定版内核是 Linux 内核开发者社区宣布为“稳定版”的最新的一个内核。大约每三个月,社区发行一个包含了对所有新硬件支持的、新的稳定版内核,最新版的内核不但改善内核性能,同时还包含内核各部分的 bug 修复。在接下来的三个月中,进入到下一个内核版本的 bug 修复将被移植进入这个稳定版内核中,因此,使用这个内核版本的用户将确保尽快得到这些修复。
+
+最新的稳定版内核通常也是主流社区发行版使用的内核,因此你可以确保它是经过测试和拥有大量用户使用的内核。另外,内核社区(全部开发者超过 4000 人)也将帮助这个发行版提供对用户的支持,因为这是他们做的最新的一个内核。
+
+三个月之后,将发行一个新的稳定版内核,你应该去更新到它以确保你的内核始终是最新的稳定版,因为当最新的稳定版内核发布之后,对你的当前稳定版内核的支持通常会在几周后停止。
+
+如果你在上一个 LTS 版本发布之后购买了最新的硬件,为了能够支持最新的硬件,你几乎是绝对需要去运行这个最新的稳定版内核。所以对于台式机或新的服务器,这通常是推荐运行的内核。
+
+### 最新的 LTS 发行版
+
+如果你的硬件为了保证正常运行,需要依赖供应商的源码树外的补丁(像现在几乎所有的嵌入式设备一样),那么对你来说,最好的内核版本是最新的 LTS 发行版。这个发行版拥有所有进入稳定版内核的最新 bug 修复,以及大量的用户测试和使用。
+
+请注意,这个最新的 LTS 发行版没有新特性,并且也几乎不会增加对新硬件的支持,因此,如果你需要使用一个新设备,那你的最佳选择就是最新的稳定版内核,而不是最新的 LTS 版内核。
+
+另外,对于这个 LTS 发行版内核的用户来说,他也不用担心每三个月一次的“重大”升级。因此,他们将一直坚持使用这个 LTS 内核发行版,并每年升级一次,这是一个很好的实践。
+
+使用这个 LTS 发行版的不利方面是,你没法得到在最新版本内核上实现的内核性能提升,除非在未来的一年中,你升级到下一个 LTS 版内核。这对某些工作负载来说可能影响很大,因此要特别注意这一点。
+
+另外,如果你使用的这个内核版本有问题,当你向内核开发者报告问题时,他们问你的第一件事将是:“最新的稳定版内核中是否也存在这个问题?”因此,你要意识到,对它的支持不会像使用最新的稳定版内核那样容易得到。
+
+现在,如果你坚持使用一个有大量补丁集的内核,并且不能每年升级一次到新的 LTS 内核版本,那么,或许你应该去使用老的 LTS 发行版内核:
+
+### 老的 LTS 发行版
+
+这些发行版传统上都由社区提供 2 年时间的支持,有时候当一个重要的 Linux 发行版(像 Debian 或 SLES 一样)依赖它时,这个支持时间会更长。然而在过去一年里,感谢 Google、Linaro、Linaro 成员公司、[kernelci.org][3]、以及其它公司在测试和基础设施上的大量投入,使得这些老的 LTS 发行版内核得到更长时间的支持。
+
+下面是最新的几个 LTS 发行版以及它们将被支持的时间,这是 2018 年 8 月 24 日显示在 [kernel.org/category/releases.html][4] 上的信息:
+
+![][5]
+
+Google 和其它公司希望这些内核使用的时间更长的原因是,现在几乎所有 SoC 芯片都采用疯狂(也有人说是打破常规)的开发模型。这些设备在芯片发行前几年就启动了它们的开发生命周期,而那些代码从来不会合并到上游,最终结果是新发布的芯片基于一个 2 年以前的老内核。这些 SoC 的代码树通常增加了超过 200 万行的代码,这使得它们成为我开始称之为“类 Linux 内核”的东西。
+
+如果这些 LTS 发行版在 2 年后停止支持,那么来自社区的支持将立即停止,并且没有人再对它们进行 bug 修复。这导致了在全球各地数以百万计的非常不安全的设备仍然在使用中,这对任何生态系统来说都不是什么好事情。
+
+由于这种依赖,这些公司现在要求新设备不断更新到为它们特定版本发布的最新的 LTS 发行版(即每一个发布的 4.9.y 版本)。其中一个这样的例子就是新 Android 设备对内核版本的要求,Android 的 “O” 版本和现在的 “P” 版本指定了最低允许使用的内核版本,并且 Android 安全发行版可能会开始要求设备更频繁地升级到这些 “.y” 版本。
+
+我注意到一些生产商现在已经在做这些事情。Sony 是其中一个非常好的例子,在他们的大多数新手机上,通过他们每季度的安全发行版,将设备更新到最新的 4.4.y 发行版上。另一个很好的例子是一家小型公司 Essential,据我所知,他们跟进 4.4.y 发行版的速度比其它任何公司都快。
+
+当使用这种很老的内核时有个重大警告。移植到这种内核中的安全修复比起最新版本的 LTS 内核来说数量少很多,因为使用这些老的 LTS 内核的传统设备型号,其用户模式要小得多。这些内核不能用于任何存在不可信用户或虚拟机的“通用计算”模式中,因为对于这些老的发行版,做一些像最近的 Spectre 之类的修复的能力大大降低,有些分支中甚至根本没有这类修复。
+
+因此,仅在你能够完全控制的设备中使用老的 LTS 发行版,或者是使用在有一个非常强大的安全模型(像 Android 一样强制使用 SELinux 和应用程序隔离)去限制的情况下。绝对不要在有不可信用户、程序、或虚拟机的服务器上使用这些老的 LTS 发行版内核。
+
+此外,相比正常的 LTS 内核发行版,社区对这些老的 LTS 内核发行版的支持也少得多,如果有的话。如果你使用这些内核,那么你只能是一个人在战斗,你需要有能力去独自支持这些内核,或者依赖你的 SoC 供应商为你提供支持(需要注意的是,几乎没有供应商会提供这种支持,因此,你要特别注意……)。
+
+### 不再维护的内核发行版
+
+更让人感到惊讶的事情是,许多公司只是随便选一个内核发行版,然后将它封装到它们的产品里,并将它毫不犹豫地装载到数十万的部件中。其中一个这样的糟糕例子是 Lego Mindstorm 系统,不知道是什么原因在它们的设备上随意选取了一个 `-rc` 的内核发行版。`-rc` 的发行版是开发中的版本,Linux 内核开发者认为它根本就不适合所有人使用,更不用说是数百万的用户了。
+
+当然,如果你愿意,你可以随意地使用它,但是需要注意的是,可能真的就只有你一个人在使用它。社区不会为你提供支持,因为他们不可能关注所有内核版本的特定问题,因此如果出现错误,你只能独自去解决它。对于一些公司和系统来说,这么做可能还行,但是如果没有提前为此做好规划,那么要当心因此而产生的“隐性”成本。
+
+### 总结
+
+基于以上原因,下面是一个针对不同类型设备的简短列表,以及我推荐它们使用的内核:
+
+  * 笔记本 / 台式机:最新的稳定版内核
+  * 服务器:最新的稳定版内核或最新的 LTS 版内核
+  * 嵌入式设备:最新的 LTS 版内核或老的 LTS 版内核(如果使用的安全模型非常强大和严格)
+
+
+
+至于我,在我的机器上运行什么样的内核?我的笔记本运行的是最新的开发版内核(即 Linus 的开发树)再加上我正在做修改的内核,我的服务器上运行的是最新的稳定版内核。因此,尽管我负责 LTS 发行版的支持工作,但我自己并不使用 LTS 版内核,除了在测试系统上。我依赖于开发版和最新的稳定版内核,以确保我的机器运行的是目前我们所知道的最快的也是最安全的内核版本。
+
+--------------------------------------------------------------------------------
+
+via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/
+
+作者:[Greg Kroah-Hartman][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://kroah.com
+[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png
+[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/
+[3]:https://kernelci.org/
+[4]:https://www.kernel.org/category/releases.html
+[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png

From 684f9a12b3f0f0d338239e1ad226ee7427072101 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E7=AB=A0=E5=86=9B?=
Date: Wed, 10 Oct 2018 20:28:22 +0800
Subject: [PATCH 324/736] remove dupfile

---
 ...ind And Delete Duplicate Files In Linux.md | 215 ------------------
 1 file changed, 215 deletions(-)
 delete mode 100644 sources/tech/How To Find And Delete Duplicate Files In Linux.md

diff --git a/sources/tech/How To Find And Delete Duplicate Files In Linux.md b/sources/tech/How To Find And Delete Duplicate Files In Linux.md
deleted file mode 100644
index 060dedc447..0000000000
--- a/sources/tech/How To Find And Delete Duplicate Files In Linux.md
+++ /dev/null
@@ -1,215 +0,0 @@
-如何在 Linux 中寻找和删除重复的文件
-=====
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/09/Find-And-Delete-Duplicate-Files-720x340.png)
-
-我经常备份配置文件,或者当硬盘中的旧文件需要修改时,我也会先备份它们。当我遇到错误时,就能够通过备份文件恢复被修改的文件。而当我忘记清除这些备份文件时,一段时间过后,硬盘总会被重复文件装满。可能是我比较懒,也可能是我害怕删掉一些重要的文件。如果你有和我一样的难题,在不同的文件夹下对同一个文件存有许多备份,那么在类 Unix 系统下,你能够用下面提到的工具来寻找和删除这些重复的文件。
-
-**警告**
-
-请谨慎删除重复文件。如果你鲁莽地删除重复文件,会酿成[**数据丢失事故**][1]。我建议你在使用这些工具时,要格外小心。
-
-### 在 Linux 中寻找和删除重复文件
-
-出于演示的目的,我将讨论下面三个实用工具:
-
-1. Rdfind
-2. Fdupes
-3. FSlint
-
-这三个实用工具是免费、开源的,能够运行在大多数类 Unix 操作系统上。
-
-#### 1. Rdfind
-
-**Rdfind** 意即 redundant data find(寻找冗余数据),是一个通过遍历文件夹和子文件夹来寻找重复文件的免费、开源的工具。它比较的是文件的内容,而不是文件的名字。Rdfind 使用 **ranking** 算法来区分原始文件和重复文件。如果你有两个或者多个完全相同的文件,Rdfind 会很智能地找到原始文件,并将其余文件认定为重复文件。一旦发现了重复的文件,Rdfind 会向你报告,你可以决定是删除这些重复文件,还是使用[**软链接**或者**硬链接**][2]来替换它们。
-
-**安装 Rdfind**
-
-Rdfind 可以在 [**AUR**][3] 中找到。在 Arch 系列系统中,你可以使用 AUR 助手程序来安装它,如下所示:
-
-```
-$ yay -S rdfind
-
-```
-
-在 Debian、Ubuntu、Linux Mint 上:
-
-```
-$ sudo apt-get install rdfind
-
-```
-
-在 Fedora 上:
-
-```
-$ sudo dnf install rdfind
-
-```
-
-在 RHEL、CentOS 上:
-
-```
-$ sudo yum install epel-release
-$ sudo yum install rdfind
-```
-
-**用法**
-
-安装完成后,只需带上目标文件夹运行 Rdfind 命令,即可开始搜索重复文件:
-
-```
-$ rdfind ~/Downloads
-```
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/09/rdfind-1.png)
-
-正如上面的截图所示,Rdfind 命令将扫描 ~/Downloads 文件夹,并将结果保存到当前工作目录下一个名为 **results.txt** 的文件中。你可以在 results.txt 文件中查看重复文件:
-
-```
-$ cat results.txt
-# Automatically generated
-# duptype id depth size device inode priority name
-DUPTYPE_FIRST_OCCURRENCE 1469 8 9 2050 15864884 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test5.regex
-DUPTYPE_WITHIN_SAME_TREE -1469 8 9 2050 15864886 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test6.regex
-[...]
-DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf
-DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf
-# end of file
-
-```
-
-通过查看 results.txt 文件,你能够很容易地发现那些重复文件,然后手动删除你想删除的。
-
-此外,你还可以对指定的文件夹执行 **-dryrun** 操作,找出所有的重复文件而不实际改变任何东西,并在终端中输出汇总信息:
-
-```
-$ rdfind -dryrun true ~/Downloads
-
-```
-
-一旦发现了重复文件,你能够用软链接或者硬链接来替换它们。
-
-用硬链接来替换重复文件:
-
-```
-$ rdfind -makehardlinks true ~/Downloads
-```
-
-用符号链接(软链接)来替换重复文件:
-
-```
-$ rdfind -makesymlinks true ~/Downloads
-```
-
-如果你的文件夹里面有空文件,而且你想忽略它们,可以用 **-ignoreempty** 选项:
-
-```
-$ rdfind -ignoreempty true ~/Downloads
-
-```
-
-如果你不再需要这些旧文件,可以直接删除重复文件,而不是用软链接或者硬链接去替换它们。
-
-删除所有的重复文件:
-
-```
-$ rdfind -deleteduplicates true ~/Downloads
-
-```
-
-如果你不想忽略空文件,并把它们和重复文件一同删除:
-
-```
-$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads
-
-```
-
-更多的 Rdfind 信息,参考帮助:
-
-```
-$ rdfind --help

-```
-
-```
-$ man rdfind
-
-```
-
-#### 2. Fdupes
-
-**Fdupes** 是另一个在指定文件夹及其子文件夹中识别和删除重复文件的命令行工具。它是一个用 C 语言编写的免费、开源工具。Fdupes 通过比较文件大小、部分 MD5 签名、完整 MD5 签名来识别重复文件,最后用逐个字节的比较来确认。
-
-和 Rdfind 工具类似,Fdupes 附带了不少选项,例如:
-
-  * 递归地在文件夹及其子文件夹下寻找重复文件
-  * 排除空文件和隐藏文件
-  * 显示重复文件的大小
-  * 发现重复文件后立即删除它们
-  * 将所有者/组或权限位不同的文件认定为不同的文件
-  * 还有更多
-
-**安装 Fdupes**
-
-Fdupes 存在于大多数 Linux 发行版的默认仓库中。
-
-在 Arch Linux 和它的衍生版本(例如 Antergos、Manjaro Linux)上,用 Pacman 来安装,如下所示:
-
-```
-$ sudo pacman -S fdupes
-
-```
-
-在 Debian、Ubuntu、Linux Mint 上:
-
-```
-$ sudo apt-get install fdupes
-
-```
-
-在 Fedora 上:
-
-```
-$ sudo dnf install fdupes
-
-```
-
-在 RHEL、CentOS 上:
-
-```
-$ sudo yum install fdupes
-
-```
-
-**用法**
-
-Fdupes 的用法十分简单,只要运行下面的命令,即可在指定的文件夹(如 ~/Downloads)下找出重复文件:
-
-```
-$ fdupes ~/Downloads
-
-```
-
-在我的系统上的样例输出:
-
-```
-/home/sk/Downloads/Hyperledger.pdf
-/home/sk/Downloads/Hyperledger(1).pdf
-
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[singledo][6]
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/
-[2]: https://www.ostechnix.com/explaining-soft-link-and-hard-link-in-linux-with-examples/
-[3]: https://aur.archlinux.org/packages/rdfind/
-[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
-[5]: https://aur.archlinux.org/packages/fslint/
-[6]: https://github.com/singledo/TranslateProject-1.git
\ No newline at end of file

From e2cd0369abab46bf96b2ce5b2baf857acea34bf7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E7=AB=A0=E5=86=9B?=
Date: Wed, 10 Oct 2018 20:34:12 +0800
Subject: [PATCH 325/736] apply for article - how to use ssh

---
 ...ow to use the SSH and SFTP protocols on your home network.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md b/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md
index 5c24e87901..a58aa55ffd 100644
--- a/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md
+++ b/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md
@@ -1,3 +1,5 @@
+translating by singledo
+
 How to use the SSH and SFTP protocols on your home network
 ======

From 1f8f861bf251a7283e1745baba8f557a1d132f2a Mon Sep 17 00:00:00 2001
From: Ryze-Borgia <42087725+Ryze-Borgia@users.noreply.github.com>
Date: Wed, 10 Oct 2018 21:25:19 +0800
Subject: [PATCH 326/736] Delete 20180915 Linux vs Mac- 7 Reasons Why Linux is
 a Better Choice than Mac.md

---
 ...s Why Linux is a Better Choice than Mac.md | 133 ------------------
 1 file changed, 133 deletions(-)
 delete mode 100644 sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md

diff --git a/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md b/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md
deleted file mode 100644
index 42f842a803..0000000000
--- a/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md
+++ /dev/null
@@ -1,133 +0,0 @@
-Translating by Ryze-Borgia
-
-Linux vs Mac: 7 Reasons Why Linux is a Better Choice than Mac
-======
-Recently, we highlighted a few points about [why Linux is better than Windows][1]. Unquestionably, Linux is a superior platform. But, like other operating systems, it has its drawbacks as well. For a very particular set of tasks (such as gaming), Windows might prove to be better. And, likewise, for another set of tasks (such as video editing), a Mac-powered system might come in handy. It all trickles down to your preference and what you would like to do with your system. So, in this article, we will highlight a number of reasons why Linux is better than Mac.
-
-If you’re already using a Mac or planning to get one, we recommend you thoroughly analyze the reasons and decide whether you want to switch to (or keep using) Linux, or continue using Mac.
-
-### 7 Reasons Why Linux is Better Than Mac
-
-![Linux vs Mac: Why Linux is a Better Choice][2]
-
-Both Linux and macOS are Unix-like OS and give access to Unix commands, BASH and other shells. Both of them have fewer applications and games than Windows. But the similarity ends here.
-
-Graphic designers and video editors swear by macOS whereas Linux is a favorite of developers, sysadmins and devops.
-
-So the question is: should you use Linux over Mac? If yes, why? Let me give you some practical and some ideological reasons why Linux is better than Mac.
-
-#### 1\. Price
-
-![Linux vs Mac: Why Linux is a Better Choice][3]
-
-Let’s suppose you use the system only to browse stuff, watch movies, download photos, write a document, create a spreadsheet, and other similar stuff. And, in addition to those activities, you want to have a secure operating system.
-
-In that case, you could choose to spend a couple of hundred bucks for a system to get things done. Or do you think spending more for a MacBook is a good idea? Well, you are the judge.
-
-So, it really depends on what you prefer. Whether you want to spend on a Mac-powered system or get a budget laptop/PC and install any Linux distro for free. Personally, I’ll be happy with a Linux system except for editing videos and music production. In that case, Final Cut Pro (for video editing) and Logic Pro X (for music production) will be my preference.
-
-#### 2\. Hardware Choices
-
-![Linux vs Mac: Why Linux is a Better Choice][4]
-
-Linux is free. You can install it on computers with any configuration. No matter how powerful/old your system is, Linux will work. [Even if you have an 8-year old PC laying around, you can have Linux installed and expect it to run smoothly by selecting the right distro][5].
-
-But Mac is an Apple exclusive. If you want to assemble a PC or get a budget laptop (with DOS) and expect to install Mac OS, it’s almost impossible. Mac comes baked in with the systems Apple manufactures.
-
-There are [ways to install macOS on non-Apple devices][6]. However, the kind of expertise and trouble it requires makes you question whether it’s worth the effort.
-
-You will have a wide range of hardware choices when you go with Linux but a minimal set of configurations when it comes to Mac OS.
-
-#### 3\. Security
-
-![Linux vs Mac: Why Linux is a Better Choice][7]
-
-A lot of people are all praises for iOS and Mac for being a secure platform. Well, yes, it is secure in a way (maybe more secure than Windows OS), but probably not as secure as Linux.
- -I am not bluffing. There are malware and adware targeting macOS and the [number is growing every day][8]. I have seen not-so-techie users struggling with their slow mac. A quick investigation revealed that a [browser hijacking malware][9] was the culprit. - -There are no 100% secure operating systems and Linux is not an exception. There are vulnerabilities in the Linux world as well but they are duly patched by the timely updates provided by Linux distributions. - -Thankfully, we don’t have auto-running viruses or browser hijacking malwares in Linux world so far. And that’s one more reason why you should use Linux instead of a Mac. - -#### 4\. Customization & Flexibility - -![Linux vs Mac: Why Linux is a Better Choice][10] - -You don’t like something? Customize it or remove it. End of the story. - -For example, if you do not like the [Gnome desktop environment][11] on Ubuntu 18.04.1, you might as well change it to [KDE Plasma][11]. You can also try some of the [Gnome extensions][12] to enhance your desktop experience. You won’t find this level of freedom and customization on Mac OS. - -Besides, you can even modify the source code of your OS to add/remove something (which requires necessary technical knowledge) and create your own custom OS. Can you do that on Mac OS? - -Moreover, you get an array of Linux distributions to choose from as per your needs. For instance, if you need to mimic the workflow on Mac OS, [Elementary OS][13] would help. Do you want to have a lightweight Linux distribution installed on your old PC? We’ve got you covered in our list of [lightweight Linux distros][5]. Mac OS lacks this kind of flexibility. - -#### 5\. Using Linux helps your professional career [For IT/Tech students] - -![Linux vs Mac: Why Linux is a Better Choice][14] - -This is kind of controversial and applicable to students and job seekers in the IT field. Using Linux doesn’t make you a super-intelligent being and could possibly get you any IT related job. 
- -However, as you start using Linux and exploring it, you gain experience. As a techie, sooner or later you dive into the terminal, learning your way to move around the file system, installing applications via command line. You won’t even realize that you have learned the skills that newcomers in IT companies get trained on. - -In addition to that, Linux has enormous scope in the job market. There are so many Linux related technologies (Cloud, Kubernetes, Sysadmin etc.) you can learn, earn certifications and get a nice paying job. And to learn these, you have to use Linux. - -#### 6\. Reliability - -![Linux vs Mac: Why Linux is a Better Choice][15] - -Ever wondered why Linux is the best OS to run on any server? Because it is more reliable! - -But, why is that? Why is Linux more reliable than Mac OS? - -The answer is simple – more control to the user while providing better security. Mac OS does not provide you with the full control of its platform. It does that to make things easier for you simultaneously enhancing your user experience. With Linux, you can do whatever you want – which may result in poor user experience (for some) – but it does make it more reliable. - -#### 7\. Open Source - -![Linux vs Mac: Why Linux is a Better Choice][16] - -Open Source is something not everyone cares about. But to me, the most important aspect of Linux being a superior choice is its Open Source nature. And, most of the points discussed below are the direct advantages of an Open Source software. - -To briefly explain, you get to see/modify the source code yourself if it is an open source software. But, for Mac, Apple gets an exclusive control. Even if you have the required technical knowledge, you will not be able to independently take a look at the source code of Mac OS. - -In other words, a Mac-powered system enables you to get a car for yourself but the downside is you cannot open up the hood to see what’s inside. That’s bad! 
- -If you want to dive in deeper to know about the benefits of an open source software, you should go through [Ben Balter’s article][17] on OpenSource.com. - -### Wrapping Up - -Now that you’ve known why Linux is better than Mac OS. What do you think about it? Are these reasons enough for you to choose Linux over Mac OS? If not, then what do you prefer and why? - -Let us know your thoughts in the comments below. - -Note: The artwork here is based on Club Penguins. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/linux-vs-mac/ - -作者:[Ankush Das][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[1]: https://itsfoss.com/linux-better-than-windows/ -[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png -[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg -[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg -[5]: https://itsfoss.com/lightweight-linux-beginners/ -[6]: https://hackintosh.com/ -[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg -[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html -[9]: https://www.imore.com/how-to-remove-browser-hijack -[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg -[11]: https://www.gnome.org/ -[12]: https://itsfoss.com/best-gnome-extensions/ -[13]: https://elementary.io/ -[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg -[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg -[16]: 
https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg -[17]: https://opensource.com/life/15/12/why-open-source From feee7a712db0ee824422c67124f1fc81468bbe02 Mon Sep 17 00:00:00 2001 From: Ryze-Borgia <42087725+Ryze-Borgia@users.noreply.github.com> Date: Wed, 10 Oct 2018 21:28:31 +0800 Subject: [PATCH 327/736] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...s Why Linux is a Better Choice than Mac.md | 131 ++++++++++++++++++ 1 file changed, 131 insertions(+) create mode 100644 sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md diff --git a/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md b/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md new file mode 100644 index 0000000000..a9ece78ef7 --- /dev/null +++ b/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md @@ -0,0 +1,131 @@ +Linux vs Mac: Linux 比 Mac 好的七个原因 +====== +最近我们谈论了一些[为什么 Linux 比 Windows 好][1]的原因。毫无疑问, Linux 是个非常优秀的平台。但是它和其他的操作系统一样也会有缺点。对于某些专门的领域,像是游戏, Windows 当然更好。 而对于视频编辑等任务, Mac 系统可能更为方便。这一切都取决于你的爱好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。 + +如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac 。 + +### Linux 比 Mac 好的 7 个原因 + +![Linux vs Mac: 为什么 Linux 更好][2] + +Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令行、 bash 和其他一些命令行工具,相比于 Windows ,他们所支持的应用和游戏比较少。但缺点也仅仅如此。 + +平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。 + +那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。 + +#### 1\. 
价格 + +![Linux vs Mac: 为什么 Linux 更好][3] + +假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。 + +那在这种情况下,你觉得花费几百块买个系统完成这项工作,或者花费更多直接买个 Macbook 划算吗?当然,最终的决定权还是在你。 + +买个装好 Mac 系统的电脑还是买个便宜的电脑,然后自己装上免费的 Linux 系统,这个要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 都非常好地用,而对于音视频方面,我更倾向于使用 Final Cut Pro (专业的视频编辑软件) 和 Logic Pro X (专业的音乐制作软件)(这两款软件都是苹果公司推出的)。 + +#### 2\. 硬件支持 + +![Linux vs Mac: 为什么 Linux 更好][4] + +Linux 支持多种平台. 无论你的电脑配置如何,你都可以在上面安装 Linux,无论性能好或者差,Linux 都可以运行。[即使你的电脑已经使用很久了, 你仍然可以通过选择安装合适的发行版让 Linux 在你的电脑上流畅的运行][5]. + +而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备连在一起的。 + +这是[在非苹果系统上安装 Mac OS 的教程][6]. 这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。 + +总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。 + +#### 3\. 安全性 + +![Linux vs Mac: 为什么 Linux 更好][7] + +很多人都说 ios 和 Mac 是非常安全的平台。的确,相比于 Windows ,它确实比较安全,可并不一定有 Linux 安全。 + +我不是在危言耸听。 Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增][8]。我认识一些不太懂技术的用户 使用着非常缓慢的 Mac 电脑并且为此苦苦挣扎。一项快速调查显示[浏览器恶意劫持软件][9]是罪魁祸首. + +从来没有绝对安全的操作系统,Linux 也不例外。 Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。 + +这可能也是一个你应该选择 Linux 而不是 Mac 的原因。 + +#### 4\. 可定制性与灵活性 + +![Linux vs Mac: 为什么 Linux 更好][10] + +如果你有不喜欢的东西,自己定制或者修改它都行。 + +举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境][11],你可以换成 [KDE Plasma][11]。 你也可以尝试一些 [Gnome 扩展][12]丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。 + +除此之外你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)以创造出适合你的系统。这个在 Mac OS 上可以做吗? + +另外你可以根据需要从一系列的 Linux 发行版进行选择。比如说,如果你想喜欢 Mac OS上的工作流, [Elementary OS][13] 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里是一个[轻量级 Linux 发行版列表][5]。相比较而言, Mac OS 缺乏这种灵活性。 + +#### 5\. 使用 Linux 有助于你的职业生涯 [针对 IT 行业和科学领域的学生] + +![Linux vs Mac: 为什么 Linux 更好][14] + +对于 IT 领域的学生和求职者而言,这是有争议的但是也是有一定的帮助的。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。 + +但是当你开始使用 Linux 并且开始探索如何使用的时候,你将会获得非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行实现文件系统管理以及应用程序安装。你可能不会知道这些都是一些 IT 公司的新职员需要培训的内容。 + +除此之外,Linux 在就业市场上还有很大的发展空间。 Linux 相关的技术有很多( Cloud 、 Kubernetes 、Sysadmin 等),您可以学习,获得证书并获得一份相关的高薪的工作。要学习这些,你必须使用 Linux 。 + +#### 6\. 
可靠 + +![Linux vs Mac: 为什么 Linux 更好][15] + +想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。 + +但是它为什么可靠呢,相比于 Mac OS ,它的可靠体现在什么方面呢? + +答案很简单——给用户更多的控制权,同时提供更好的安全性。在 Mac OS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux ,你可以做任何你想做的事情——这可能会导致(对某些人来说)糟糕的用户体验——但它确实使其更可靠。 + +#### 7\. 开源 + +![Linux vs Mac: 为什么 Linux 更好][16] + +开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。下面讨论的大多数观点都是开源软件的直接优势。 + +简单解释一下,如果是开源软件,你可以自己查看或者修改它。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 Mac OS 的源代码。 + +形象点说,Mac 驱动的系统可以让你得到一辆车,但缺点是你不能打开引擎盖看里面是什么。那可能非常糟糕! + +如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章][17]。 + +### 总结 + +现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢? + +在下方评论让我们知道你的想法。 + +Note: 这里的图片是以企鹅俱乐部为原型的。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/linux-vs-mac/ + +作者:[Ankush Das][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Ryze-Borgia](https://github.com/Ryze-Borgia) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[1]: https://itsfoss.com/linux-better-than-windows/ +[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png +[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg +[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg +[5]: https://itsfoss.com/lightweight-linux-beginners/ +[6]: https://hackintosh.com/ +[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg +[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html +[9]: https://www.imore.com/how-to-remove-browser-hijack +[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg +[11]: https://www.gnome.org/ +[12]: https://itsfoss.com/best-gnome-extensions/ +[13]: 
https://elementary.io/ +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg +[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg +[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg +[17]: https://opensource.com/life/15/12/why-open-source From 0efecacff3e5c8ae0a82fd2b00aa7e0a6b16f14f Mon Sep 17 00:00:00 2001 From: Ryze-Borgia <42087725+Ryze-Borgia@users.noreply.github.com> Date: Wed, 10 Oct 2018 21:37:48 +0800 Subject: [PATCH 328/736] Update 20181004 Functional programming in Python- Immutable data structures.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 申领翻译 --- ...unctional programming in Python- Immutable data structures.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20181004 Functional programming in Python- Immutable data structures.md b/sources/tech/20181004 Functional programming in Python- Immutable data structures.md index b831ff726f..e6050d52f9 100644 --- a/sources/tech/20181004 Functional programming in Python- Immutable data structures.md +++ b/sources/tech/20181004 Functional programming in Python- Immutable data structures.md @@ -1,3 +1,4 @@ +Translating by Ryze-Borgia Functional programming in Python: Immutable data structures ====== Immutability can help us better understand our code. Here's how to achieve it without sacrificing performance. From b39886f1f139a8687449bade783f041b1a645724 Mon Sep 17 00:00:00 2001 From: GraveAccent Date: Wed, 10 Oct 2018 21:49:14 +0800 Subject: [PATCH 329/736] =?UTF-8?q?GraveAccent=E7=BF=BB=E8=AF=91=E5=AE=8C?= =?UTF-8?q?=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... React.js Foundations A Beginners Guide.md | 292 ------------------ ... 
React.js Foundations A Beginners Guide.md | 281 +++++++++++++++++ 2 files changed, 281 insertions(+), 292 deletions(-) delete mode 100644 sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md create mode 100644 translated/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md diff --git a/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md b/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md deleted file mode 100644 index 98a1a8f392..0000000000 --- a/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md +++ /dev/null @@ -1,292 +0,0 @@ -GraveAccent翻译中 Rock Solid React.js Foundations: A Beginner’s Guide -============================================================ - ** 此处有Canvas,请手动处理 ** - -![](https://cdn-images-1.medium.com/max/1000/1*wj5ujzj5wPQIKb0mIWLgNQ.png) -React.js crash course - -I’ve been working with React and React-Native for the last couple of months. I have already released two apps in production, [Kiven Aa][1] (React) and [Pollen Chat][2] (React Native). When I started learning React, I was searching for something (a blog, a video, a course, whatever) that didn’t only teach me how to write apps in React. I also wanted it to prepare me for interviews. - -Most of the material I found, concentrated on one or the other. So, this post is aimed towards the audience who is looking for a perfect mix of theory and hands-on. I will give you a little bit of theory so that you understand what is happening under the hood and then I will show you how to write some React.js code. - -If you prefer video, I have this entire course up on YouTube as well. Please check that out. - - -Let’s dive in… - -> React.js is a JavaScript library for building user interfaces - -You can build all sorts of single page applications. For example, chat messengers and e-commerce portals where you want to show changes on the user interface in real-time. 
- -### Everything’s a component - -A React app is comprised of components,  _a lot of them_ , nested into one another.  _But what are components, you may ask?_ - -A component is a reusable piece of code, which defines how certain features should look and behave on the UI. For example, a button is a component. - -Let’s look at the following calculator, which you see on Google when you try to calculate something like 2+2 = 4 –1 = 3 (quick maths!) - - -![](https://cdn-images-1.medium.com/max/1000/1*NS9DykYDyYG7__UXJdysTA.png) -Red markers denote components - -As you can see in the image above, the calculator has many areas — like the  _result display window_  and the  _numpad_ . All of these can be separate components or one giant component. It depends on how comfortable one is in breaking down and abstracting away things in React - -You write code for all such components separately. Then combine those under one container, which in turn is a React component itself. This way you can create reusable components and your final app will be a collection of separate components working together. - -The following is one such way you can write the calculator, shown above, in React. - -``` - - - - - - . - . - . - - - - -``` - -Yes! It looks like HTML code, but it isn’t. We will explore more about it in the later sections. - -### Setting up our Playground - -This tutorial focuses on React’s fundamentals. It is not primarily geared towards React for Web or [React Native][3] (for building mobile apps). So, we will use an online editor so as to avoid web or native specific configurations before even learning what React can do. - -I’ve already set up an environment for you on [codepen.io][4]. Just follow the link and read all the comments in HTML and JavaScript (JS) tabs. - -### Controlling Components - -We’ve learned that a React app is a collection of various components, structured as a nested tree. Thus, we require some sort of mechanism to pass data from one component to other. 
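As a quick, hedged preview of that mechanism, the following plain-JavaScript sketch (again, not the real React API) shows the essence of passing data down a component tree: a parent function hands a single props object to a nested child function.

```javascript
// Sketch only: "components" as plain functions, with data flowing down
// through a single props object per call.
function Name(props) {
  return props.name.toUpperCase();
}

function Greeting(props) {
  // The parent forwards the data its child needs.
  return `Hello, ${Name({ name: props.name })}!`;
}

console.log(Greeting({ name: 'rajat' })); // Hello, RAJAT!
```

React adds re-rendering and reconciliation on top, but the one-way flow of a props object is the part to keep in mind for the next section.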
- -#### Enter “props” - -We can pass arbitrary data to our component using a `props` object. Every component in React gets this `props` object. - -Before learning how to use this `props` object, let’s learn about functional components. - -#### a) Functional component - -A functional component in React consumes arbitrary data that you pass to it using `props` object. It returns an object which describes what UI React should render. Functional components are also known as Stateless components. - -Let’s write our first functional component. - -``` -function Hello(props) { - return
<div>{props.name}</div>
-} -``` - -It’s that simple. We just passed `props` as an argument to a plain JavaScript function and returned,  _umm, well, what was that? That_ `<div>{props.name}</div>` _thing!_ It’s JSX (JavaScript Extended). We will learn more about it in a later section. - -The above function will render the following HTML in the browser. - -``` - -
<div> - rajat -</div>
-``` - - -> Read the section below about JSX, where I have explained how we got this HTML from our JSX code. - -How can you use this functional component in your React app? Glad you asked! It’s as simple as the following. - -``` -<Hello name="rajat" age={21} /> -``` - -The attribute `name` in the above code becomes `props.name` inside our `Hello` component. The attribute `age` becomes `props.age` and so on. - -> Remember! You can nest one React component inside other React components. - -Let’s use this `Hello` component in our codepen playground. Replace the `div` inside `ReactDOM.render()` with our `Hello` component, as follows, and see the changes in the bottom window. - -``` -function Hello(props) { - return
<div>{props.name}</div>
-} - -ReactDOM.render(<Hello name="rajat" />, document.getElementById('root')); -``` - - -> But what if your component has some internal state? For instance, like the following counter component, which has an internal count variable, which changes on + and - key presses. - -A React component with an internal state - -#### b) Class-based component - -The class-based component has an additional property `state` , which you can use to hold a component’s private data. We can rewrite our `Hello` component using class notation as follows. Since these components have a state, these are also known as Stateful components. - -``` -class Counter extends React.Component { - // this method should be present in your component - render() { - return ( -
<div> - {this.props.name} -</div>
- ); - } -} -``` - -We extend `React.Component` class of React library to make class-based components in React. Learn more about JavaScript classes [here][5]. - -The `render()` method must be present in your class as React looks for this method in order to know what UI it should render on screen. - -To use this sort of internal state, we first have to initialize the `state` object in the constructor of the component class, in the following way. - -``` -class Counter extends React.Component { - constructor() { - super(); - - // define the internal state of the component - this.state = {name: 'rajat'} - } - - render() { - return ( -
<div> - {this.state.name} -</div>
- ); - } -} - -// Usage: -// In your react app: -``` - -Similarly, the `props` can be accessed inside our class-based component using `this.props` object. - -To set the state, you use `React.Component`'s `setState()`. We will see an example of this, in the last part of this tutorial. - -> Tip: Never call `setState()` inside `render()` function, as `setState()` causes component to re-render and this will result in endless loop. - - -![](https://cdn-images-1.medium.com/max/1000/1*rPUhERO1Bnr5XdyzEwNOwg.png) -A class-based component has an optional property “state”. - - _Apart from _ `_state_` _, a class-based component has some life-cycle methods like _ `_componentWillMount()._` _ These you can use to do stuff, like initializing the _ `_state_` _and all but that is out of the scope of this post._ - -### JSX - -JSX is a short form of  _JavaScript Extended_  and it is a way to write `React`components. Using JSX, you get the full power of JavaScript inside XML like tags. - -You put JavaScript expressions inside `{}`. The following are some valid JSX examples. - - ``` - - - ; - -
- - ``` - -The way it works is you write JSX to describe what your UI should look like. A [transpiler][6] like `Babel` converts that code into a bunch of `React.createElement()` calls. The React library then uses those `React.createElement()` calls to construct a tree-like structure of DOM elements. In case of React for Web or Native views in case of React Native. It keeps it in the memory. - -React then calculates how it can effectively mimic this tree in the memory of the UI displayed to the user. This process is known as [reconciliation][7]. After that calculation is done, React makes the changes to the actual UI on the screen. - - ** 此处有Canvas,请手动处理 ** - -![](https://cdn-images-1.medium.com/max/1000/1*ighKXxBnnSdDlaOr5-ZOPg.png) -How React converts your JSX into a tree which describes your app’s UI - -You can use [Babel’s online REPL][8] to see what React actually outputs when you write some JSX. - - -![](https://cdn-images-1.medium.com/max/1000/1*NRuBKgzNh1nHwXn0JKHafg.png) -Use Babel REPL to transform JSX into plain JavaScript - -> Since JSX is just a syntactic sugar over plain `React.createElement()` calls, React can be used without JSX. - -Now we have every concept in place, so we are well positioned to write a `counter` component that we saw earlier as a GIF. - -The code is as follows and I hope that you already know how to render that in our playground. - -``` -class Counter extends React.Component { - constructor(props) { - super(props); - - this.state = {count: this.props.start || 0} - - // the following bindings are necessary to make `this` work in the callback - this.inc = this.inc.bind(this); - this.dec = this.dec.bind(this); - } - - inc() { - this.setState({ - count: this.state.count + 1 - }); - } - - dec() { - this.setState({ - count: this.state.count - 1 - }); - } - - render() { - return ( -
<div> -<button onClick={this.inc}>+</button> -<button onClick={this.dec}>-</button> -<div>{this.state.count}</div> -</div>
- ); - } -} -``` - -The following are some salient points about the above code. - -1. JSX uses `camelCasing` hence `button`'s attribute is `onClick`, not `onclick`, as we use in HTML. - -2. Binding is necessary for `this` to work on callbacks. See line #8 and 9 in the code above. - -The final interactive code is located [here][9]. - -With that, we’ve reached the conclusion of our React crash course. I hope I have shed some light on how React works and how you can use React to build bigger apps, using smaller and reusable components. - -* * * - -If you have any queries or doubts, hit me up on Twitter [@rajat1saxena][10] or write to me at [rajat@raynstudios.com][11]. - -* * * - -#### Please recommend this post, if you liked it and share it with your network. Follow me for more tech related posts and consider subscribing to my channel [Rayn Studios][12] on YouTube. Thanks a lot. - --------------------------------------------------------------------------------- - -via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners-guide-c45c93f5a923 - -作者:[Rajat Saxena ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://medium.freecodecamp.org/@rajat1saxena -[1]:https://kivenaa.com/ -[2]:https://play.google.com/store/apps/details?id=com.pollenchat.android -[3]:https://facebook.github.io/react-native/ -[4]:https://codepen.io/raynesax/pen/MrNmBM -[5]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes -[6]:https://en.wikipedia.org/wiki/Source-to-source_compiler -[7]:https://reactjs.org/docs/reconciliation.html -[8]:https://babeljs.io/repl -[9]:https://codepen.io/raynesax/pen/QaROqK -[10]:https://twitter.com/rajat1saxena -[11]:mailto:rajat@raynstudios.com -[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw diff --git a/translated/tech/20180201 Rock Solid React.js Foundations A Beginners 
Guide.md b/translated/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md new file mode 100644 index 0000000000..bdb2abca36 --- /dev/null +++ b/translated/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md @@ -0,0 +1,281 @@ +坚实的 React 基础:初学者指南 +============================================================ +![](https://cdn-images-1.medium.com/max/1000/1*wj5ujzj5wPQIKb0mIWLgNQ.png) +React.js crash course + +在过去的几个月里,我一直在使用 React 和 React-Native。我已经发布了两个作为产品的应用, [Kiven Aa][1](React)和 [Pollen Chat][2](React Native)。当我开始学习 React 时,我找了一些不仅仅是教我如何用 React 写应用的东西(一个博客,一个视频,一个课程,等等),我也想让它帮我做好面试准备。 + +我发现的大部分资料都集中在某一单一方面上。所以,这篇文章针对的是那些希望理论与实践完美结合的观众。我会告诉你一些理论,以便你了解幕后发生的事情,然后我会向你展示如何编写一些 React.js 代码。 + +如果你更喜欢视频形式,我在YouTube上传了整个课程,请去看看。 + + +让我们开始...... + +> React.js 是一个用于构建用户界面的 JavaScript 库 + +你可以构建各种单页应用程序。例如,你希望在用户界面上实时显示更改的聊天软件和电子商务门户。 + +### 一切都是组件 + +React 应用由组件组成,数量多且互相嵌套。你或许会问:”可什么是组件呢?“ + +组件是可重用的代码段,它定义了某些功能在 UI 上的外观和行为。 比如,按钮就是一个组件。 + +让我们看看下面的计算器,当你尝试计算2 + 2 = 4 -1 = 3(简单的数学题)时,你会在Google上看到这个计算器。 + +![](https://cdn-images-1.medium.com/max/1000/1*NS9DykYDyYG7__UXJdysTA.png) +红色标记表示组件 + + + +如上图所示,这个计算器有很多区域,比如展示窗口和数字键盘。所有这些都可以是许多单独的组件或一个巨大的组件。这取决于在 React 中分解和抽象出事物的程度。你为所有这些组件分别编写代码,然后合并这些组件到一个容器中,而这个容器又是一个 React 组件。这样你就可以创建可重用的组件,最终的应用将是一组协同工作的单独组件。 + + + +以下是一个你践行了以上原则并可以用 React 编写计算器的方法。 + +``` + + + + + + . + . + . + + + + +``` + +没错!它看起来像HTML代码,然而并不是。我们将在后面的部分中详细探讨它。 + +### 设置我们的 Playground + +这篇教程专注于 React 的基础部分。它没有偏向 Web 或 React Native(开发移动应用)。所以,我们会用一个在线编辑器,这样可以在学习 React 能做什么之前避免 web 或 native 的具体配置。 + +我已经为读者在 [codepen.io][4] 设置好了开发环境。只需点开这个链接并且阅读所有 HTML 和 JavaScript 注释。 + +### 控制组件 + +我们已经了解到 React 应用是各种组件的集合,结构为嵌套树。因此,我们需要某种机制来将数据从一个组件传递到另一个组件。 + +#### 进入 “props” + +我们可以使用 `props` 对象将任意数据传递给我们的组件。 React 中的每个组件都会获取 `props` 对象。在学习如何使用 `props` 之前,让我们学习函数式组件。 + +#### a) 函数式组件 + +在 React 中,一个函数式组件通过 `props` 对象使用你传递给它的任意数据。它返回一个对象,该对象描述了 React 应渲染的 UI。函数式组件也称为无状态组件。 + + + +让我们编写第一个函数式组件。 + +``` +function Hello(props) { + return
<div>{props.name}</div>
+} +``` + + +就这么简单。我们只是将 `props` 作为参数传递给了一个普通的 JavaScript 函数并且有返回值。嗯?返回了什么?那个 `<div>{props.name}</div>`。它是 JSX(JavaScript Extended)。我们将在后面的部分中详细了解它。 + +上面这个函数将在浏览器中渲染出以下 HTML。 + +``` + +
<div> + rajat +</div>
+``` + + +> 阅读以下有关 JSX 的部分,这一部分解释了如何从我们的 JSX 代码中得到这段 HTML 。 + +如何在 React 应用中使用这个函数式组件? 很高兴你问了! 它就像下面这么简单。 + +``` +<Hello name="rajat" age={21} /> +``` + +属性 `name` 在上面的代码中变成了 `Hello` 组件里的 `props.name` ,属性 `age` 变成了 `props.age` 。 + +> 记住! 你可以将一个 React 组件嵌套在其他 React 组件中。 + +让我们在 codepen playground 使用 `Hello` 组件。用我们的 `Hello` 组件替换 `ReactDOM.render()` 中的 `div`,并在底部窗口中查看更改。 + +``` +function Hello(props) { + return
<div>{props.name}</div>
+} + +ReactDOM.render(<Hello name="rajat" />, document.getElementById('root')); +``` + + +> 但是如果你的组件有一些内部状态怎么办?例如,像下面的计数器组件一样,它有一个内部计数变量,它在 + 和 - 键按下时发生变化。 + +具有内部状态的 React 组件 + +#### b) 基于类的组件 + +基于类的组件有一个额外属性 `state` ,你可以用它存放组件的私有数据。我们可以用 class 表示法重写我们的 `Hello` 。由于这些组件具有状态,因此这些组件也称为有状态组件。 + +``` +class Counter extends React.Component { + // this method should be present in your component + render() { + return ( +
<div> + {this.props.name} +</div>
+ ); + } +} +``` + +我们继承了 React 库的 React.Component 类以在 React 中创建基于类的组件。在[这里][5]了解更多有关 JavaScript 类的东西。 + +`render()` 方法必须存在于你的类中,因为 React 会查找此方法,用以了解它应在屏幕上渲染的 UI。 + +要使用这种内部状态,我们首先必须按以下方式初始化组件类的构造函数中的状态对象。 + +``` +class Counter extends React.Component { + constructor() { + super(); + + // define the internal state of the component + this.state = {name: 'rajat'} + } + + render() { + return ( +
<div> + {this.state.name} +</div>
+ ); + } +} + +// Usage: +// In your react app: <Counter /> +``` + +类似地,可以使用 this.props 对象在我们基于类的组件内访问 props。 + +要设置 state,请使用 `React.Component` 的 `setState()`。 在本教程的最后一部分中,我们将看到一个这样的例子。 + +> 提示:永远不要在 `render()` 函数中调用 `setState()`,因为 `setState` 会导致组件重新渲染,这将导致无限循环。 + +![](https://cdn-images-1.medium.com/max/1000/1*rPUhERO1Bnr5XdyzEwNOwg.png) +基于类的组件具有可选属性 “state”。 + +除了 `state` 以外,基于类的组件有一些生命周期方法比如 `componentWillMount()`。你可以利用这些去做初始化 `state` 这样的事,可是那将超出这篇文章的范畴。 + +### JSX + +JSX 是 JavaScript Extended 的一种简短形式,它是一种编写 React components 的方法。使用 JSX,你可以在类 XML 标签中获得 JavaScript 的全部力量。 + +你把 JavaScript 表达式放在`{}`里。下面是一些有效的 JSX 例子。 + + ``` + <a href="https://medium.com">Medium</a> + <h1>Hello World!</h1> + const greet = <div>{user.name}</div>; + ``` + +它的工作方式是你编写 JSX 来描述你的 UI 应该是什么样子。像 Babel 这样的转码器将这些代码转换为一堆 `React.createElement()` 调用。然后,React 库使用这些 `React.createElement()` 调用来构造 DOM 元素的树状结构。对于 React 的网页视图或 React Native 的 Native 视图,它将保存在内存中。 + +React 接着会计算如何高效地让展示给用户的 UI 与内存中的这棵树保持一致。此过程称为 [reconciliation][7]。完成计算后,React 会对屏幕上的真正 UI 进行更改。 + +![](https://cdn-images-1.medium.com/max/1000/1*ighKXxBnnSdDlaOr5-ZOPg.png) +React 如何将你的 JSX 转化为描述应用 UI 的树。 + +你可以使用 [Babel 的在线 REPL][8] 查看当你写一些 JSX 的时候,React 的真正输出。 + +![](https://cdn-images-1.medium.com/max/1000/1*NRuBKgzNh1nHwXn0JKHafg.png) +使用 Babel REPL 转换 JSX 为普通 JavaScript + +> 由于 JSX 只是 `React.createElement()` 调用的语法糖,因此可以在没有 JSX 的情况下使用 React。 + +现在我们了解了所有的概念,所以我们已经准备好编写我们之前看到的作为 GIF 图的计数器组件。 + +代码如下,我希望你已经知道了如何在我们的 playground 上渲染它。 + +``` +class Counter extends React.Component { + constructor(props) { + super(props); + + this.state = {count: this.props.start || 0} + + // the following bindings are necessary to make `this` work in the callback + this.inc = this.inc.bind(this); + this.dec = this.dec.bind(this); + } + + inc() { + this.setState({ + count: this.state.count + 1 + }); + } + + dec() { + this.setState({ + count: this.state.count - 1 + }); + } + + render() { + return ( +
<div> +<button onClick={this.inc}>+</button> +<button onClick={this.dec}>-</button> +<div>{this.state.count}</div> +</div>
+ ); + } +} +``` + +以下是关于上述代码的一些重点。 + +1. JSX 使用 `驼峰命名` ,所以 `button` 的 属性是 `onClick`,不是我们在HTML中用的 `onclick`。 + +2. 绑定 `this` 是必要的,以便在回调时工作。 请参阅上面代码中的第8行和第9行。 + +最终的交互式代码位于[此处][9]。 + +有了这个,我们已经到了 React 速成课程的结束。我希望我已经阐明了 React 如何工作以及如何使用 React 来构建更大的应用程序,使用更小和可重用的组件。 + +-------------------------------------------------------------------------------- + +via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners-guide-c45c93f5a923 + +作者:[Rajat Saxena ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://medium.freecodecamp.org/@rajat1saxena +[1]:https://kivenaa.com/ +[2]:https://play.google.com/store/apps/details?id=com.pollenchat.android +[3]:https://facebook.github.io/react-native/ +[4]:https://codepen.io/raynesax/pen/MrNmBM +[5]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes +[6]:https://en.wikipedia.org/wiki/Source-to-source_compiler +[7]:https://reactjs.org/docs/reconciliation.html +[8]:https://babeljs.io/repl +[9]:https://codepen.io/raynesax/pen/QaROqK +[10]:https://twitter.com/rajat1saxena +[11]:mailto:rajat@raynstudios.com +[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw From 3e42e35fcaafe6b8a88da234d195bc124de5c0ba Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 11 Oct 2018 08:53:25 +0800 Subject: [PATCH 330/736] translated --- ...80412 A Desktop GUI Application For NPM.md | 149 ------------------ ...80412 A Desktop GUI Application For NPM.md | 145 +++++++++++++++++ 2 files changed, 145 insertions(+), 149 deletions(-) delete mode 100644 sources/tech/20180412 A Desktop GUI Application For NPM.md create mode 100644 translated/tech/20180412 A Desktop GUI Application For NPM.md diff --git a/sources/tech/20180412 A Desktop GUI Application For NPM.md b/sources/tech/20180412 A Desktop GUI Application For NPM.md deleted file mode 100644 index 5c87aad3c0..0000000000 --- 
a/sources/tech/20180412 A Desktop GUI Application For NPM.md +++ /dev/null @@ -1,149 +0,0 @@ -translating---geekpi - -A Desktop GUI Application For NPM -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png) - -NPM, short for **N** ode **P** ackage **M** anager, is a command line package manager for installing NodeJS packages, or modules. We already have have published a guide that described how to [**manage NodeJS packages using NPM**][1]. As you may noticed, managing NodeJS packages or modules using Npm is not a big deal. However, if you’re not compatible with CLI-way, there is a desktop GUI application named **NDM** which can be used for managing NodeJS applications/modules. NDM, stands for **N** PM **D** esktop **M** anager, is a free, open source graphical front-end for NPM that allows us to install, update, remove NodeJS packages via a simple graphical window. - -In this brief tutorial, we are going to learn about Ndm in Linux. - -### Install NDM - -NDM is available in AUR, so you can install it using any AUR helpers on Arch Linux and its derivatives like Antergos and Manjaro Linux. - -Using [**Pacaur**][2]: -``` -$ pacaur -S ndm - -``` - -Using [**Packer**][3]: -``` -$ packer -S ndm - -``` - -Using [**Trizen**][4]: -``` -$ trizen -S ndm - -``` - -Using [**Yay**][5]: -``` -$ yay -S ndm - -``` - -Using [**Yaourt**][6]: -``` -$ yaourt -S ndm - -``` - -On RHEL based systems like CentOS, run the following command to install NDM. -``` -$ echo "[fury] name=ndm repository baseurl=https://repo.fury.io/720kb/ enabled=1 gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && - -``` - -On Debian, Ubuntu, Linux Mint: -``` -$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm - -``` - -NDM can also be installed using **Linuxbrew**. First, install Linuxbrew as described in the following link. 
- -After installing Linuxbrew, you can install NDM using the following commands: -``` -$ brew update - -$ brew install ndm - -``` - -On other Linux distributions, go to the [**NDM releases page**][7], download the latest version, compile and install it yourself. - -### NDM Usage - -Launch NDM wither from the Menu or using application launcher. This is how NDM’s default interface looks like. - -![][9] - -From here, you can install NodeJS packages/modules either locally or globally. - -**Install NodeJS packages locally** - -To install a package locally, first choose project directory by clicking on the **“Add projects”** button from the Home screen and select the directory where you want to keep your project files. For example, I have chosen a directory named **“demo”** as my project directory. - -Click on the project directory (i.e **demo** ) and then, click **Add packages** button. - -![][10] - -Type the package name you want to install and hit the **Install** button. - -![][11] - -Once installed, the packages will be listed under the project’s directory. Simply click on the directory to view the list of installed packages locally. - -![][12] - -Similarly, you can create separate project directories and install NodeJS modules in them. To view the list of installed modules on a project, click on the project directory, and you will the packages on the right side. - -**Install NodeJS packages globally** - -To install NodeJS packages globally, click on the **Globals** button on the left from the main interface. Then, click “Add packages” button, type the name of the package and hit “Install” button. - -**Manage packages** - -Click on any installed packages and you will see various options on the top, such as - - 1. Version (to view the installed version), - 2. Latest (to install latest available version), - 3. Update (to update the currently selected package), - 4. Uninstall (to remove the selected package) etc. 
- - - -![][13] - -NDM has two more options namely **“Update npm”** which is used to update the node package manager to latest available version, and **Doctor** that runs a set of checks to ensure that your npm installation has what it needs to manage your packages/modules. - -### Conclusion - -NDM makes the process of installing, updating, removing NodeJS packages easier! You don’t need to memorize the commands to perform those tasks. NDM lets us to do them all with a few mouse clicks via simple graphical window. For those who are lazy to type commands, NDM is perfect companion to manage NodeJS packages. - -Cheers! - -**Resource:** - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/ -[2]:https://www.ostechnix.com/install-pacaur-arch-linux/ -[3]:https://www.ostechnix.com/install-packer-arch-linux-2/ -[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/ -[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[6]:https://www.ostechnix.com/install-yaourt-arch-linux/ -[7]:https://github.com/720kb/ndm/releases -[8]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png -[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png -[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png -[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png -[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png diff --git a/translated/tech/20180412 A Desktop GUI 
Application For NPM.md b/translated/tech/20180412 A Desktop GUI Application For NPM.md new file mode 100644 index 0000000000..99928a08f2 --- /dev/null +++ b/translated/tech/20180412 A Desktop GUI Application For NPM.md @@ -0,0 +1,145 @@ +NPM 的桌面 GUI 程序 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png) + +NPM 是 **N** ode **P** ackage **M** anager (node 包管理器)的缩写,它是用于安装 NodeJS 软件包或模块的命令行软件包管理器。我们发布过一个指南描述了如何[**使用 NPM 管理 NodeJS 包**][1]。你可能已经注意到,使用 Npm 管理 NodeJS 包或模块并不是什么大问题。但是,如果你不习惯用 CLI 的方式,这有一个名为 **NDM** 的桌面 GUI 程序,它可用于管理 NodeJS 程序/模块。 NDM,代表 **N** PM **D** esktop **M** anager (npm 桌面管理器),是 NPM 的免费开源图形前端,它允许我们通过简单图形桌面安装、更新、删除 NodeJS 包。 + +在这个简短的教程中,我们将了解 Linux 中的 Ndm。 + +### 安装 NDM + +NDM 在 AUR 中可用,因此你可以在 Arch Linux 及其衍生版(如 Antergos 和 Manjaro Linux)上使用任何 AUR 助手程序安装。 + +使用 [**Pacaur**][2]: +``` +$ pacaur -S ndm + +``` + +使用 [**Packer**][3]: +``` +$ packer -S ndm + +``` + +使用 [**Trizen**][4]: +``` +$ trizen -S ndm + +``` + +使用 [**Yay**][5]: +``` +$ yay -S ndm + +``` + +使用 [**Yaourt**][6]: +``` +$ yaourt -S ndm + +``` + +在基于 RHEL 的系统(如 CentOS)上,运行以下命令以安装 NDM。 +``` +$ echo "[fury] name=ndm repository baseurl=https://repo.fury.io/720kb/ enabled=1 gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && + +``` + +在 Debian、Ubuntu、Linux Mint: +``` +$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm + +``` + +也可以使用 **Linuxbrew** 安装 NDM。首先,按照以下链接中的说明安装 Linuxbrew。 + +安装 Linuxbrew 后,可以使用以下命令安装 NDM: +``` +$ brew update + +$ brew install ndm + +``` + +在其他 Linux 发行版上,进入[**NDM 发布页面**][7],下载最新版本,自行编译和安装。 + +### NDM 使用 + +从菜单或使用应用启动器启动 NDM。这就是 NDM 的默认界面。 + +![][9] + +在这里你可以本地或全局安装 NodeJS 包/模块。 + +**本地安装 NodeJS 包** + +要在本地安装软件包,首先通过单击主屏幕上的 **“Add projects”** 按钮选择项目目录,然后选择要保留项目文件的目录。例如,我选择了一个名为 **“demo”** 的目录作为我的项目目录。 + +单击项目目录(即 **demo**),然后单击 **Add packages** 按钮。 + +![][10] + +输入要安装的软件包名称,然后单击 **Install** 按钮。 + +![][11] + 
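顺带补充一点背景:npm 会把项目的本地依赖记录在项目目录下的 package.json 文件中,NDM 界面里显示的本地包列表正是基于这类元数据(这是 npm 的通用约定,原文并未展开;下面的 package.json 内容纯属演示虚构,包名 express、lodash 仅作示例)。一个在命令行中查看这份依赖列表的简单草稿:

```shell
# 构造一个虚构的演示项目(真实项目中 package.json 由 npm 维护,
# 而不是像这样手写;此处仅用于说明 NDM 包列表的数据来源)。
mkdir -p demo
cat > demo/package.json <<'EOF'
{
  "name": "demo",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.16.0",
    "lodash": "^4.17.10"
  }
}
EOF

# 提取 dependencies 中的包名,大致相当于 NDM 为该项目显示的本地包列表
grep -o '"[a-z]*": "^' demo/package.json | cut -d'"' -f2
# 输出:
# express
# lodash
```

换句话说,无论是 NDM 的 “Add packages” 还是包列表,最终操作的都是这类 npm 元数据文件。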
+安装后,软件包将列在项目目录下。只需单击该目录即可在本地查看已安装软件包的列表。
+
+![][12]
+
+同样,你可以创建单独的项目目录并在其中安装 NodeJS 模块。要查看项目中已安装模块的列表,请单击项目目录,右侧将显示软件包。
+
+**全局安装 NodeJS 包**
+
+要全局安装 NodeJS 包,请单击主界面左侧的 **Globals** 按钮。然后,单击 “Add packages” 按钮,输入包的名称并单击 “Install” 按钮。
+
+**管理包**
+
+单击任何已安装的包,你将在顶部看到各种选项,例如:
+
+ 1. 版本(查看已安装的版本),
+ 2. 最新(安装最新版本),
+ 3. 更新(更新当前选定的包),
+ 4. 卸载(删除所选包)等。
+
+
+
+![][13]
+
+NDM 还有两个选项:**“Update npm”** 用于将 node 包管理器更新成最新可用版本;**Doctor** 会运行一组检查,以确保你的 npm 安装具备管理包/模块所需的功能。
+
+### 结论
+
+NDM 使安装、更新、删除 NodeJS 包的过程更加容易!你无需记住执行这些任务的命令。NDM 让我们在简单的图形界面中点击几下鼠标即可完成所有操作。对于那些懒得输入命令的人来说,NDM 是管理 NodeJS 包的完美伴侣。
+
+干杯!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/
+
+作者:[SK][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
+[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
+[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
+[7]:https://github.com/720kb/ndm/releases
+[8]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png
+[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png
+[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png
+[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png
+[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png
From e425c3397ed52ef203482c222d526ca865778f5d Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 11 Oct 2018 09:00:54 +0800 Subject: [PATCH 331/736] translating --- ... Tips for listing files with ls at the Linux command line.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md index d04e94a541..fda48f1622 100644 --- a/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md +++ b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md @@ -1,3 +1,5 @@ +translating---geekpi + Tips for listing files with ls at the Linux command line ====== Learn some of the Linux 'ls' command's most useful variations. From dcf339c11dfc12fff2b3ccd4c6ca552822dfdc95 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 11 Oct 2018 11:12:41 +0800 Subject: [PATCH 332/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Cloc=20=E2=80=93?= =?UTF-8?q?=20Count=20The=20Lines=20Of=20Source=20Code=20In=20Many=20Progr?= =?UTF-8?q?amming=20Languages?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...urce Code In Many Programming Languages.md | 197 ++++++++++++++++++ 1 file changed, 197 insertions(+) create mode 100644 sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md diff --git a/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md new file mode 100644 index 0000000000..1cefdaaa4f --- /dev/null +++ b/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -0,0 +1,197 @@ +Cloc – Count The Lines Of Source Code In Many Programming Languages +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png) + +As a developer, you may need to share the progress and statistics of your code to your boss or 
colleagues. Your boss might want to analyze the code and give additional inputs. In such cases, there are a few programs available, as far as I know, to analyze source code. One such program is [**Ohcount**][1]. Today, I came across yet another similar utility, namely **“Cloc”**. Using Cloc, you can easily count the lines of source code in several programming languages. It counts the blank lines, comment lines, and physical lines of source code and displays the result in a neat tabular-column format. Cloc is a free, open source and cross-platform utility written entirely in the **Perl** programming language.
+
+### Features
+
+Cloc ships with numerous advantages, including the following:
+
+ * Easy to install/use. Requires no dependencies.
+ * Portable
+ * It can produce results in a variety of formats, such as plain text, SQL, JSON, XML, YAML, and comma separated values.
+ * Can count your git commits.
+ * Counts the code in directories and sub-directories.
+ * Counts code within compressed archives like tar balls, Zip files, Java .ear files etc.
+ * Open source and cross platform.
+
+
+
+### Installing Cloc
+
+The Cloc utility is available in the default repositories of most Unix-like operating systems, so you can install it using the default package manager as shown below.
+
+On Arch Linux and its variants:
+
+```
+$ sudo pacman -S cloc
+
+```
+
+On Debian, Ubuntu:
+
+```
+$ sudo apt-get install cloc
+
+```
+
+On CentOS, Red Hat, Scientific Linux:
+
+```
+$ sudo yum install cloc
+
+```
+
+On Fedora:
+
+```
+$ sudo dnf install cloc
+
+```
+
+On FreeBSD:
+
+```
+$ sudo pkg install cloc
+
+```
+
+It can also be installed using a third-party package manager like [**NPM**][2].
+
+```
+$ npm install -g cloc
+
+```
+
+### Count The Lines Of Source Code In Many Programming Languages
+
+Let us start with a simple example. I have a “hello world” program written in C in my current working directory.
+
+```
+$ cat hello.c
+#include <stdio.h>
+int main()
+{
+    // printf() displays the string inside quotation
+    printf("Hello, World!");
+    return 0;
+}
+
+```
+
+To count the lines of code in the hello.c program, simply run:
+
+```
+$ cloc hello.c
+
+```
+
+Sample output:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Hello-World-Program.png)
+
+The first column specifies the **name of the programming language that the source code consists of**. As you can see in the above output, the source code of the “hello world” program is written in the **C** programming language.
+
+The second column displays the **number of files in each programming language**. So, our code contains **1 file** in total.
+
+The third column displays the **total number of blank lines**. We have zero blank lines in our code.
+
+The fourth column displays the **number of comment lines**.
+
+And the fifth and final column displays the **total physical lines of the given source code**.
+
+It is just a 6-line program, so counting the lines in the code is not a big deal. What about a big source code file? Have a look at the following example:
+
+```
+$ cloc file.tar.gz
+
+```
+
+Sample output:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-1.png)
+
+As per the above output, it is quite difficult to manually find the exact count of code. But Cloc displays the result in seconds, in a neat tabular-column format. You can view the gross total of each section at the end, which is quite handy when it comes to analyzing the source code of a program.
+
+Cloc not only counts individual source code files, but also files inside directories and sub-directories, archives, and even specific git commits.
+
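Those per-column counts can be approximated for simple files with standard shell tools. The following is only a naive sketch to illustrate what cloc is measuring — unlike cloc's real parser, it knows nothing about `/* ... */` block comments or comment markers inside string literals:

```shell
# Recreate the hello.c example, then approximate cloc's blank,
# comment, and code counts with a single awk pass. Naive sketch only:
# it does not handle /* ... */ block comments or "//" inside strings.
cat > hello.c <<'EOF'
#include <stdio.h>
int main()
{
    // printf() displays the string inside quotation
    printf("Hello, World!");
    return 0;
}
EOF

awk '
  /^[[:space:]]*$/    { blank++;   next }   # whole-line blanks
  /^[[:space:]]*\/\// { comment++; next }   # whole-line // comments
                      { code++ }            # everything else is code
  END { printf "blank=%d comment=%d code=%d\n", blank+0, comment+0, code+0 }
' hello.c
# prints: blank=0 comment=1 code=6
```

The 6 code lines and 1 comment line line up with the 6-line program discussed above.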
+ +**Count the lines of codes in a directory:** + +``` +$ cloc dir/ + +``` + +![][4] + +**Sub-directory:** + +``` +$ cloc dir/cloc/tests + +``` + +![][5] + +**Count the lines of codes in archive file:** + +``` +$ cloc archive.zip + +``` + +![][6] + +You can also count lines in a git repository, using a specific commit like below. + +``` +$ git clone https://github.com/AlDanial/cloc.git + +$ cd cloc + +$ cloc 157d706 + +``` + +![][7] + +Cloc can recognize several programming languages. To view the complete list of recognized languages, run: + +``` +$ cloc --show-lang + +``` + +For more details, refer the help section. + +``` +$ cloc --help + +``` + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-programming-languages/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/ +[2]: https://www.ostechnix.com/install-node-js-linux/ +[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-2-1.png +[5]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-4.png +[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-3.png +[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-5.png From 5f2de0f3434e8cbc83d91fb198bc040f62dbb7e5 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 11 Oct 2018 11:18:48 +0800 Subject: [PATCH 333/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20GCC:=20Optimizing?= =?UTF-8?q?=20Linux,=20the=20Internet,=20and=20Everything?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ing 
Linux, the Internet, and Everything.md | 82 +++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 sources/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md diff --git a/sources/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md b/sources/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md new file mode 100644 index 0000000000..e7ac8d6c39 --- /dev/null +++ b/sources/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md @@ -0,0 +1,82 @@ +GCC: Optimizing Linux, the Internet, and Everything +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gcc-paper.jpg?itok=QFNUZWsV) + +Software is useless if computers can't run it. Even the most talented developer is at the mercy of the compiler when it comes to run-time performance - if you don’t have a reliable compiler toolchain you can’t build anything serious. The GNU Compiler Collection (GCC) provides a robust, mature and high performance partner to help you get the most out of your software. With decades of development by thousands of people GCC is one of the most respected compilers in the world. If you are building applications and not using GCC, you are missing out on the best possible solution. + +GCC is the “de facto-standard open source compiler today” [1] according to LLVM.org and the foundation used to build complete systems - from the kernel upwards. GCC supports over 60 hardware platforms, including ARM, Intel, AMD, IBM POWER, SPARC, HP PA-RISC, and IBM Z, as well as a variety of operating environments, including GNU, Linux, Windows, macOS, FreeBSD, NetBSD, OpenBSD, DragonFly BSD, Solaris, AIX, HP-UX, and RTEMS. It offers highly compliant C/C++ compilers and support for popular C libraries, such as GNU C Library (glibc), Newlib, musl, and the C libraries included with various BSD operating systems, as well as front-ends for Fortran, Ada, and GO languages. 
GCC also functions as a cross compiler, creating executable code for a platform other than the one on which the compiler is running. GCC is the core component of the tightly integrated GNU toolchain, produced by the GNU Project, that includes glibc, Binutils, and the GNU Debugger (GDB). + +"My all-time favorite GNU tool is GCC, the GNU Compiler Collection. At a time when developer tools were expensive, GCC was the second GNU tool and the one that enabled a community to write and build all the others. This tool single-handedly changed the industry and led to the creation of the free software movement, since a good, free compiler is a prerequisite to a community creating software." —Dave Neary, Open Source and Standards team at Red Hat. [2] + +### Optimizing Linux + +As the default compiler for the Linux kernel source, GCC delivers trusted, stable performance along with the additional extensions needed to correctly build the kernel. GCC is a standard component of popular Linux distributions, such as Arch Linux, CentOS, Debian, Fedora, openSUSE, and Ubuntu, where it routinely compiles supporting system components. This includes the default libraries used by Linux (such as libc, libm, libintl, libssh, libssl, libcrypto, libexpat, libpthread, and ncurses) which depend on GCC to provide correctness and performance and are used by applications and system utilities to access Linux kernel features. Many of the application packages included with a distribution are also built with GCC, such as Python, Perl, Ruby, nginx, Apache HTTP Server, OpenStack, Docker, and OpenShift. This combination of kernel, libraries, and application software translates into a large volume of code built with GCC for each Linux distribution. For the openSUSE distribution nearly 100% of native code is built by GCC, including 6,135 source packages producing 5,705 shared libraries and 38,927 executables. This amounts to about 24,540 source packages compiled weekly. 
[3]
+
+The base version of GCC included in Linux distributions is used to create the kernel and libraries that define the system Application Binary Interface (ABI). User space developers have the option of downloading the latest stable version of GCC to gain access to advanced features, performance optimizations, and improvements in usability. Linux distributions offer installation instructions or prebuilt toolchains for deploying the latest version of GCC along with other GNU tools that help to enhance developer productivity and improve deployment time.
+
+### Optimizing the Internet
+
+GCC is one of the most widely adopted core compilers for embedded systems, enabling the development of software for the growing world of IoT devices. GCC offers a number of extensions that make it well suited for embedded systems software development, including fine-grained control using compiler built-ins, #pragmas, inline assembly, and application-focused command-line options. GCC supports a broad base of embedded architectures, including ARM, AMCC, AVR, Blackfin, MIPS, RISC-V, Renesas Electronics V850, and NXP and Freescale Power-based processors, producing efficient, high quality code. The cross-compilation capability offered by GCC is critical to this community, and prebuilt cross-compilation toolchains [4] are a major requirement. For example, the GNU ARM Embedded toolchains are integrated and validated packages featuring the Arm Embedded GCC compiler, libraries, and other tools necessary for bare-metal software development. These toolchains are available for cross-compilation on Windows, Linux and macOS host operating systems and target the popular ARM Cortex-R and Cortex-M processors, which have shipped in tens of billions of internet capable devices. [5]
+
+GCC empowers Cloud Computing, providing a reliable development platform for software that needs to directly manage computing resources, like database and web serving engines and backup and security software.
GCC is fully compliant with C++11 and C++14 and offers experimental support for C++17 and C++2a [6], creating performant object code with a solid debugging information. Some examples of applications that utilize GCC include: MySQL Database Management System, which requires GCC for Linux [7]; the Apache HTTP Server, which recommends using GCC [8]; and Bacula, an enterprise ready network backup tool which require GCC. [9] + +### Optimizing Everything + +For the research and development of the scientific codes used in High Performance Computing (HPC), GCC offers mature C, C++, and Fortran front ends as well as support for OpenMP and OpenACC APIs for directive-based parallel programming. Because GCC offers portability across computing environments, it enables code to be more easily targeted and tested across a variety of new and legacy client and server platforms. GCC offers full support for OpenMP 4.0 for C, C++ and Fortran compilers and full support for OpenMP 4.5 for C and C++ compilers. For OpenACC, GCC supports most of the 2.5 specification and performance optimizations and is the only non-commercial, nonacademic compiler to provide [OpenACC][1] support. + +Code performance is an important parameter to this community and GCC offers a solid performance base. A Nov. 2017 paper published by Colfax Research evaluates C++ compilers for the speed of compiled code parallelized with OpenMP 4.x directives and for the speed of compilation time. Figure 1 plots the relative performance of the computational kernels when compiled by the different compilers and run with a single thread. The performance values are normalized so that the performance of G++ is equal to 1.0. + +![performance][3] + +Figure 1. Relative performance of each kernel as compiled by the different compilers. (single-threaded, higher is better). + +[Used with permission][4] + +The paper summarizes “the GNU compiler also does very well in our tests. 
G++ produces the second fastest code in three out of six cases and is amongst the fastest compiler in terms of compile time.” [10] + +### Who Is Using GCC? + +In The State of Developer Ecosystem Survey in 2018 by JetBrains, out of 6,000 developers who took the survey GCC is regularly used by 66% of C++ programmers and 73% of C programmers. [11] Here is a quick summary of the benefits of GCC that make it so popular with the developer community. + + * For developers required to write code for a variety of new and legacy computing platforms and operating environments, GCC delivers support for the broadest range of hardware and operating environments. Compilers offered by hardware vendors focus mainly on support for their products and other open source compilers are much more limited in the hardware and operating systems supported. [12] + + * There is a wide variety of GCC-based prebuilt toolchains, which has particular appeal to embedded systems developers. This includes the GNU ARM Embedded toolchains and 138 pre-compiled cross compiler toolchains available on the Bootlin web site. [13] While other open source compilers, such as Clang/LLVM, can replace GCC in existing cross compiling toolchains, these would need to be completely rebuilt by the developer. [14] + + * GCC delivers to application developers trusted, stable performance from a mature compiler platform. The GCC 8/9 vs. LLVM Clang 6/7 Compiler Benchmarks On AMD EPYC article provides results of 49 benchmarks ran across the four tested compilers at three optimization levels. Coming in first 34% of the time was GCC 8.2 RC1 using "-O3 -march=native" level, while at the same optimization level LLVM Clang 6.0 came in second with wins 20% of the time. [15] + + * GCC delivers improved diagnostics for compile time debugging [16] and accurate and useful information for runtime debugging. 
GCC is tightly integrated with GDB, a mature and feature complete tool which offers ‘non-stop’ debugging that can stop a single thread at a breakpoint. + + * GCC is a well supported platform with an active, committed community that supports the current and two previous releases. With releases schedule yearly this provides two years of support for a version. + + + + +### GCC: Continuing to Optimize Linux, the Internet, and Everything + +GCC continues to move forward as a world-class compiler. The most current version of GCC is 8.2, which was released in July 2018 and added hardware support for upcoming Intel CPUs, more ARM CPUs and improved performance for AMD’s ZEN CPU. Initial C17 support has been added along with initial work towards C++2A. Diagnostics have continued to be enhanced including better emitted diagnostics, with improved locations, location ranges, and fix-it hints, particularly in the C++ front end. A blog written by David Malcolm of Red Hat in March 2018 provides an overview of usability improvements in GCC 8. [17] + +New hardware platforms continue to rely on the GCC toolchain for software development, such as RISC-V, a free and open ISA that is of interest to machine learning, Artificial Intelligence (AI), and IoT market segments. GCC continues to be a critical component in the continuing development of Linux systems. The Clear Linux Project for Intel Architecture, an emerging distribution built for cloud, client, and IoT use cases, provides a good example of how GCC compiler technology is being used and improved to boost the performance and security of a Linux-based system. GCC is also being used for application development for Microsoft's Azure Sphere, a Linux-based operating system for IoT applications that initially supports the ARM based MediaTek MT3620 processor. 
In terms of developing the next generation of programmers, GCC is also a core component of the Windows toolchain for Raspberry PI, the low-cost embedded board running Debian-based GNU/Linux that is used to promote the teaching of basic computer science in schools and developing countries. + +GCC was first released on March 22, 1987 by Richard Stallman, the founder of the GNU Project and was considered a significant breakthrough since it was the first portable ANSI C optimizing compiler released as free software. GCC is maintained by a community of programmers from all over the world under the direction of a steering committee that ensures broad, representative oversight of the project. GCC’s community approach is one of its strengths, resulting in a large and diverse community of developers and users that contribute to and provide support for the project. According to Open Hub, GCC “is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub.” [18] + +There has been a lot of discussion about the licensing of GCC, most of which confuses rather than enlightens. GCC is distributed under the GNU General Public License version 3 or later with the Runtime Library Exception. This is a copyleft license, which means that derivative work can only be distributed under the same license terms. GPLv3 is intended to protect GCC from being made proprietary and requires that changes to GCC code are made available freely and openly. To the ‘end user’ the compiler is just the same as any other; using GCC makes no difference to any licensing choices you might make for your own code. 
[19] + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/10/gcc-optimizing-linux-internet-and-everything + +作者:[Margaret Lewis][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/margaret-lewis +[b]: https://github.com/lujun9972 +[1]: https://www.openacc.org/tools +[2]: /files/images/gccjpg-0 +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gcc_0.jpg?itok=HbGnRqWX (performance) +[4]: https://www.linux.com/licenses/category/used-permission From deddc14809fce60ce98f34a3a16ac0346ad047fe Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 11 Oct 2018 11:21:12 +0800 Subject: [PATCH 334/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=206=20Commands=20To?= =?UTF-8?q?=20Shutdown=20And=20Reboot=20The=20Linux=20System=20From=20Term?= =?UTF-8?q?inal?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...d Reboot The Linux System From Terminal.md | 328 ++++++++++++++++++ 1 file changed, 328 insertions(+) create mode 100644 sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md diff --git a/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md b/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md new file mode 100644 index 0000000000..15230ecd0b --- /dev/null +++ b/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md @@ -0,0 +1,328 @@ +6 Commands To Shutdown And Reboot The Linux System From Terminal +====== +Linux administrator performing many tasks in their routine work. The system Shutdown and Reboot task also included in it. 
+
+It’s one of the riskier tasks for them, because sometimes the system won’t come back up, and they then need to spend extra time troubleshooting it.
+
+These tasks can be performed through the CLI in Linux. Most of the time, Linux administrators prefer to perform them from the CLI because they are familiar with it.
+
+A few commands are available in Linux to perform these tasks, and users need to choose the appropriate command based on their requirements.
+
+Each of these commands has its own features, and a Linux admin can pick whichever suits the task.
+
+**Suggested Read :**
+**(#)** [11 Methods To Find System/Server Uptime In Linux][1]
+**(#)** [Tuptime – A Tool To Report The Historical And Statistical Running Time Of Linux System][2]
+
+When a shutdown or reboot is initiated, all logged-in users and processes are notified. Also, no new logins are allowed if the time argument is used.
+
+I would suggest you double-check before you perform this action, because you need to follow a few prerequisites to make sure everything goes fine.
+
+Those steps are listed below.
+
+ * Make sure you have console access to troubleshoot further in case any issues arise: VMware access for VMs and IPMI/iLO/iDRAC access for physical servers.
+ * Create a ticket as per your company procedure, either an Incident or a Change ticket, and get approval.
+ * Back up the important configuration files and copy them to other servers for safety.
+ * Verify the log files (perform the pre-check).
+ * Communicate your activity to other dependent teams, like DBA, Application, etc.
+ * Ask them to bring down their database service or application service and get a confirmation from them.
+ * Validate the same from your end using the appropriate command to double-confirm.
+ * Finally, reboot the system.
+ * Verify the log files (perform the post-check). If everything is good, then move to the next step.
If you find something wrong, troubleshoot accordingly.
+ * If the system is back up and running, ask the dependent teams to bring up their applications.
+ * Monitor it for some time, and communicate back to them that everything is working as expected.
+
+
+
+These tasks can be performed using the following commands.
+
+ * **`shutdown Command:`** The shutdown command is used to halt, power off, or reboot the machine.
+ * **`halt Command:`** The halt command is used to halt, power off, or reboot the machine.
+ * **`poweroff Command:`** The poweroff command is used to halt, power off, or reboot the machine.
+ * **`reboot Command:`** The reboot command is used to halt, power off, or reboot the machine.
+ * **`init Command:`** init (short for initialization) is the first process started during booting of the computer system.
+ * **`systemctl Command:`** systemd is a system and service manager for Linux operating systems.
+
+
+
+### Method-1: How To Shutdown And Reboot The Linux System Using Shutdown Command
+
+The shutdown command is used to power off or reboot a remote Linux machine or the local host. It offers
+multiple options to perform this task effectively. If the time argument is used, the /run/nologin file is created 5 minutes before the system goes down to ensure that further logins are not allowed.
+
+The general syntax is
+
+```
+# shutdown [OPTION] [TIME] [MESSAGE]
+
+```
+
+Run the below command to shut down a Linux machine immediately. It will kill all the processes immediately and shut down the system.
+
+```
+# shutdown -h now
+
+```
+
+ * **`-h:`** Equivalent to --poweroff, unless --halt is specified.
+
+
+
+Alternatively, we can use the shutdown command with the `halt` option to bring down the machine immediately.
+
+```
+# shutdown --halt now
+or
+# shutdown -H now
+
+```
+
+ * **`-H, --halt:`** Halt the machine.
+
+
+
+Alternatively, we can use the shutdown command with the `poweroff` option to bring down the machine immediately.
+
+```
+# shutdown --poweroff now
+or
+# shutdown -P now
+
+```
+
+ * **`-P, --poweroff:`** Power off the machine (the default).
+
+
+
+If you run the below commands without the time parameter, shutdown will wait for a minute and then execute the given command.
+
+```
+# shutdown -h
+Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel.
+
+[email protected]#
+Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT):
+
+The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
+
+```
+
+All other logged-in users can see a broadcast message in their terminal, like below.
+
+```
+[[email protected] ~]$
+Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT):
+
+The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
+
+```
+
+For the halt option:
+
+```
+# shutdown -H
+Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel.
+
+[email protected]#
+Broadcast message from [email protected] (Mon 2018-10-08 06:36:53 EDT):
+
+The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT!
+
+```
+
+For the poweroff option:
+
+```
+# shutdown -P
+Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel.
+
+[email protected]#
+Broadcast message from [email protected] (Mon 2018-10-08 06:39:07 EDT):
+
+The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT!
+
+```
+
+A scheduled shutdown can be cancelled by running `shutdown -c` on your terminal.
+
+```
+# shutdown -c
+
+Broadcast message from [email protected] (Mon 2018-10-08 06:39:09 EDT):
+
+The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT!
+
+```
+
+All other logged-in users can see a broadcast message in their terminal, like below.
+
+```
+[[email protected] ~]$
+Broadcast message from [email protected] (Mon 2018-10-08 06:41:35 EDT):
+
+The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT!
+
+```
+
+Add a time parameter if you want to perform a shutdown or reboot in `N` minutes. Here you can also broadcast a custom message to logged-in users. In this example, we are rebooting the machine in another 5 minutes.
+
+```
+# shutdown -r +5 "To activate the latest Kernel"
+Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel.
+
+[[email protected] ~]#
+Broadcast message from [email protected] (Mon 2018-10-08 07:08:16 EDT):
+
+To activate the latest Kernel
+The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT!
+
+```
+
+Run the below command to reboot a Linux machine immediately. It will kill all the processes immediately and reboot the system.
+
+```
+# shutdown -r now
+
+```
+
+ * **`-r, --reboot:`** Reboot the machine.
+
+
+
+### Method-2: How To Shutdown And Reboot The Linux System Using reboot Command
+
+The reboot command is used to power off or reboot a remote Linux machine or the local host. It comes with two useful options.
+
+It performs a graceful shutdown and restart of the machine (similar to the restart option available in your system menu).
+
+Run the “reboot” command without any option to reboot the Linux machine.
+
+```
+# reboot
+
+```
+
+Run the “reboot” command with the `-p` option to power off or shut down the Linux machine.
+
+```
+# reboot -p
+
+```
+
+ * **`-p, --poweroff:`** Power off the machine; either the halt or poweroff command is invoked.
+
+
+
+Run the “reboot” command with the `-f` option to forcefully reboot the Linux machine (similar to pressing the power button on the machine).
+
+```
+# reboot -f
+
+```
+
+ * **`-f, --force:`** Force immediate halt, power-off, or reboot.
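The pre-reboot checklist from earlier can be partly scripted. Below is a minimal, hypothetical sketch (not part of the original article) that warns about other logged-in sessions and shows when a 5-minute delayed reboot would fire; `who` and GNU `date` are standard Linux tools, but the script itself is only illustrative.

```shell
#!/bin/sh
# Hypothetical pre-reboot check: warn about other sessions before
# running something like "shutdown -r +5".
sessions=$(who | wc -l)
echo "Logged-in sessions: $sessions"

# shutdown's time argument is in minutes; +5 fires five minutes from now.
# GNU date can show the wall-clock time at which the reboot would happen.
date -d '+5 minutes' '+Reboot would fire at %H:%M'
```

Running it just before scheduling the reboot gives you one last chance to notify anyone still logged in.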
+
+
+
+### Method-3: How To Shutdown And Reboot The Linux System Using init Command
+
+init (short for initialization) is the first process started during booting of the computer system.
+
+It checks the /etc/inittab file to decide the Linux run level, and it also allows users to shut down and reboot the Linux machine. There are seven runlevels, from zero to six.
+
+**Suggested Read :**
+**(#)** [How To Check All Running Services In Linux][3]
+
+Run the below init command to shut down the system.
+
+```
+# init 0
+
+```
+
+ * **`0:`** Halt – shuts down the system.
+
+
+
+Run the below init command to reboot the system.
+
+```
+# init 6
+
+```
+
+ * **`6:`** Reboot – reboots the system.
+
+
+
+### Method-4: How To Shutdown The Linux System Using halt Command
+
+The halt command is used to power off or shut down a remote Linux machine or the local host.
+It terminates all processes and shuts down the CPU.
+
+```
+# halt
+
+```
+
+### Method-5: How To Shutdown The Linux System Using poweroff Command
+
+The poweroff command is used to power off or shut down a remote Linux machine or the local host. poweroff is exactly like halt, but it also turns off the unit itself (lights and everything on a PC). It sends an ACPI command to the board, and then to the PSU, to cut the power.
+
+```
+# poweroff
+
+```
+
+### Method-6: How To Shutdown And Reboot The Linux System Using systemctl Command
+
+systemd is a new init system and system manager that has been adopted by all the major Linux distributions in place of the traditional SysV init system.
+
+systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for the sysvinit system. systemd is the first process started by the kernel and holds PID 1.
+
+**Suggested Read :**
+**(#)** [chkservice – A Tool For Managing Systemd Units From Linux Terminal][4]
+
+It is the parent process of everything, and Fedora 15 was the first distribution to adopt systemd instead of Upstart.
+
+systemctl is a command-line utility and the primary tool to manage systemd daemons/services: start, restart, stop, enable, disable, reload, and status.
+
+systemd uses .service files instead of the shell scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups, and you can see the system hierarchy by exploring /sys/fs/cgroup/systemd.
+
+```
+# systemctl halt
+# systemctl poweroff
+# systemctl reboot
+# systemctl suspend
+# systemctl hibernate
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/prakash/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
+[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/
+[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
+[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/
From d8498e780826288d7777bbae197df15c879ca123 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 11 Oct 2018 11:25:38 +0800
Subject: [PATCH 335/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Create?=
 =?UTF-8?q?=20And=20Maintain=20Your=20Own=20Man=20Pages?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...
Create And Maintain Your Own Man Pages.md | 198 ++++++++++++++++++
 1 file changed, 198 insertions(+)
 create mode 100644 sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md

diff --git a/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md
new file mode 100644
index 0000000000..6d78d132e2
--- /dev/null
+++ b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md
@@ -0,0 +1,198 @@
+How To Create And Maintain Your Own Man Pages
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png)
+
+We have already discussed a few [**good alternatives to Man pages**][1]. Those alternatives are mainly used for learning concise Linux command examples without having to go through the comprehensive man pages. If you’re looking for a quick and dirty way to easily and quickly learn a Linux command, those alternatives are worth trying. Now, you might be thinking – how can I create my own man-like help pages for a Linux command? This is where **“Um”** comes in handy. Um is a command-line utility used to easily create and maintain your own man pages that contain only what you’ve learned about a command so far.
+
+By creating your own alternative to man pages, you can avoid lots of unnecessary, comprehensive details in a man page and include only what is necessary to keep in mind. If you ever wanted to create your own set of man-like pages, Um will definitely help. In this brief tutorial, we will see how to install the “Um” command-line utility and how to create our own man pages.
+
+### Installing Um
+
+Um is available for Linux and Mac OS. At present, it can only be installed using the **Linuxbrew** package manager on Linux systems. Refer to the following link if you haven’t installed Linuxbrew yet.
+
+Once Linuxbrew is installed, run the following command to install the Um utility.
+
+```
+$ brew install sinclairtarget/wst/um
+
+```
+
+If you see output like the following, congratulations! Um has been installed and is ready to use.
+
+```
+[...]
+==> Installing sinclairtarget/wst/um
+==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz
+==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0
+-=#=# # #
+==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem
+######################################################################## 100.0%
+==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939
+==> Caveats
+Bash completion has been installed to:
+/home/linuxbrew/.linuxbrew/etc/bash_completion.d
+==> Summary
+🍺 /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds
+==> Caveats
+==> openssl
+A CA file has been bootstrapped using certificates from the SystemRoots
+keychain. To add additional certificates (e.g. the certificates added in
+the System keychain), place .pem files in
+/home/linuxbrew/.linuxbrew/etc/openssl/certs
+
+and run
+/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash
+==> ruby
+Emacs Lisp files have been installed to:
+/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby
+==> um
+Bash completion has been installed to:
+/home/linuxbrew/.linuxbrew/etc/bash_completion.d
+
+```
+
+Before using Um to make your man pages, you need to enable bash completion for it.
+
+To do so, open your **~/.bash_profile** file:
+
+```
+$ nano ~/.bash_profile
+
+```
+
+And, add the following lines in it:
+
+```
+if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then
+  . $(brew --prefix)/etc/bash_completion.d/um-completion.sh
+fi
+
+```
+
+Save and close the file. Run the following command to apply the changes.
+
+```
+$ source ~/.bash_profile
+
+```
+
+All done. Let us go ahead and create our first man page.
+
+### Create And Maintain Your Own Man Pages
+
+Let us say you want to create your own man page for the “dpkg” command. To do so, run:
+
+```
+$ um edit dpkg
+
+```
+
+The above command will open a markdown template in your default editor:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png)
+
+My default editor is Vi, so the above command opens it in the Vi editor. Now, start adding everything you want to remember about the “dpkg” command in this template.
+
+Here is a sample:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png)
+
+As you see in the above output, I have added a synopsis, a description, and two options for the dpkg command. You can add as many sections as you want in the man pages. Make sure you have given proper and easily understandable titles for each section. Once done, save and quit the file (if you use the Vi editor, press the **ESC** key and type **:wq**).
+
+Finally, view your newly created man page using the command:
+
+```
+$ um dpkg
+
+```
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png)
+
+As you can see, the dpkg man page looks exactly like the official man pages. If you want to edit and/or add more details to a man page, run the same command again and add the details.
+
+```
+$ um edit dpkg
+
+```
+
+To view the list of newly created man pages using Um, run:
+
+```
+$ um list
+
+```
+
+All man pages will be saved under a directory named **`.um`** in your home directory.
+
+Just in case you don’t want a particular page, simply delete it as shown below.
+
+```
+$ um rm dpkg
+
+```
+
+To view the help section and all available general options, run:
+
+```
+$ um --help
+usage: um
+       um [ARGS...]
+
+The first form is equivalent to `um read `.
+
+Subcommands:
+  um (l)ist       List the available pages for the current topic.
+  um (r)ead       Read the given page under the current topic.
+  um (e)dit       Create or edit the given page under the current topic.
+  um rm           Remove the given page.
+  um (t)opic [topic]       Get or set the current topic.
+  um topics       List all topics.
+  um (c)onfig [config key] Display configuration environment.
+  um (h)elp [sub-command]  Display this help message, or the help message for a sub-command.
+
+```
+
+### Configure Um
+
+To view the current configuration, run:
+
+```
+$ um config
+Options prefixed by '*' are set in /home/sk/.um/umconfig.
+editor = vi
+pager = less
+pages_directory = /home/sk/.um/pages
+default_topic = shell
+pages_ext = .md
+
+```
+
+In this file, you can edit and change the values for the **pager**, **editor**, **default_topic**, **pages_directory**, and **pages_ext** options as you wish. For example, if you want to save the newly created Um pages in your **[Dropbox][2]** folder, simply change the value of the **pages_directory** directive in the **~/.um/umconfig** file and point it to the Dropbox folder.
+
+```
+pages_directory = /Users/myusername/Dropbox/um
+
+```
+
+And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/ +[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ From d647efd3fdb4e78e47dbea6db21c00a6c88b8bec Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 11 Oct 2018 11:27:29 +0800 Subject: [PATCH 336/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Convert=20Screens?= =?UTF-8?q?hots=20of=20Equations=20into=20LaTeX=20Instantly=20With=20This?= =?UTF-8?q?=20Nifty=20Tool?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...to LaTeX Instantly With This Nifty Tool.md | 70 +++++++++++++++++++ 1 file changed, 70 insertions(+) create mode 100644 sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md diff --git a/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md b/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md new file mode 100644 index 0000000000..f2c17ff7c2 --- /dev/null +++ b/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md @@ -0,0 +1,70 @@ +Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool +====== +**Mathpix is a nifty little tool that allows you to take screenshots of complex mathematical equations and instantly converts it into LaTeX editable text.** + +![Mathpix converts math equations images into LaTeX][1] + +[LaTeX editors][2] are excellent when it comes to writing 
academic and scientific documentation.
+
+There is a steep learning curve involved, of course. And this learning curve becomes steeper if you have to write complex mathematical equations.
+
+[Mathpix][3] is a nifty little tool that helps you in this regard.
+
+Suppose you are reading a document that has mathematical equations. If you want to use those equations in your [LaTeX document][4], you need to use your ninja LaTeX skills and plenty of time.
+
+But Mathpix solves this problem for you. With Mathpix, you take a screenshot of the mathematical equations, and it instantly gives you the LaTeX code. You can then use this code in your [favorite LaTeX editor][2].
+
+See Mathpix in action in the video below:
+
+
+
+[Video credit][5]: Reddit User [kaitlinmcunningham][6]
+
+Isn’t it super-cool? I guess the hardest part of writing LaTeX documents is those complicated equations. For lazy bums like me, Mathpix is a godsend.
+
+### Getting Mathpix
+
+Mathpix is available for Linux, macOS, Windows and iOS. There is no Android app for the moment.
+
+Note: Mathpix is a free-to-use tool, but it’s not open source.
+
+On Linux, [Mathpix is available as a Snap package][7], which means [if you have Snap support enabled on your Linux distribution][8], you can install Mathpix with this simple command:
+
+```
+sudo snap install mathpix-snipping-tool
+
+```
+
+Using Mathpix is simple. Once installed, open the tool. You’ll find it in the top panel. You can start taking a screenshot with Mathpix using the keyboard shortcut Ctrl+Alt+M.
+
+It instantly translates the image of the equation into LaTeX code. The code is copied to the clipboard, and you can then paste it in a LaTeX editor.
+
+Mathpix’s optical character recognition technology is [being used][9] by a number of companies like [WolframAlpha][10], Microsoft, Google, etc. to improve their tools’ image recognition capability while dealing with math symbols.
+
+Altogether, it’s an awesome tool for students and academics.
It’s free to use and I so wish that it was an open source tool. We cannot get everything in life, can we? + +Do you use Mathpix or some other similar tool while dealing with mathematical symbols in LaTeX? What do you think of Mathpix? Share your views with us in the comment section. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/mathpix/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/mathpix-converts-equations-into-latex.jpeg +[2]: https://itsfoss.com/latex-editors-linux/ +[3]: https://mathpix.com/ +[4]: https://www.latex-project.org/ +[5]: https://g.redditmedia.com/b-GL1rQwNezQjGvdlov9U_6vDwb1A7kEwGHYcQ1Ogtg.gif?fm=mp4&mp4-fragmented=false&s=39fd1816b43e2b544986d629f75a7a8e +[6]: https://www.reddit.com/user/kaitlinmcunningham +[7]: https://snapcraft.io/mathpix-snipping-tool +[8]: https://itsfoss.com/install-snap-linux/ +[9]: https://mathpix.com/api.html +[10]: https://www.wolframalpha.com/ From fdf85635c99271c1add9cda885bce3ee34410aeb Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 11 Oct 2018 11:37:45 +0800 Subject: [PATCH 337/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20An=20introduction?= =?UTF-8?q?=20to=20using=20tcpdump=20at=20the=20Linux=20command=20line?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...using tcpdump at the Linux command line.md | 457 ++++++++++++++++++ 1 file changed, 457 insertions(+) create mode 100644 sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md diff --git a/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md b/sources/tech/20181010 An 
introduction to using tcpdump at the Linux command line.md
new file mode 100644
index 0000000000..6998661f23
--- /dev/null
+++ b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md
@@ -0,0 +1,457 @@
+An introduction to using tcpdump at the Linux command line
+======
+
+This flexible, powerful command-line tool helps ease the pain of troubleshooting network issues.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE)
+
+In my experience as a sysadmin, I have often found network connectivity issues challenging to troubleshoot. For those situations, tcpdump is a great ally.
+
+Tcpdump is a command-line utility that allows you to capture and analyze network traffic going through your system. It is often used to help troubleshoot network issues, as well as a security tool.
+
+A powerful and versatile tool that includes many options and filters, tcpdump can be used in a variety of cases. Since it's a command-line tool, it is ideal to run on remote servers or devices for which a GUI is not available, to collect data that can be analyzed later. It can also be launched in the background or as a scheduled job using tools like cron.
+
+In this article, we'll look at some of tcpdump's most common features.
+
+### 1\. Installation on Linux
+
+Tcpdump is included with several Linux distributions, so chances are, you already have it installed. Check if tcpdump is installed on your system with the following command:
+
+```
+$ which tcpdump
+/usr/sbin/tcpdump
+```
+
+If tcpdump is not installed, you can install it by using your distribution's package manager. For example, on CentOS or Red Hat Enterprise Linux, like this:
+
+```
+$ sudo yum install -y tcpdump
+```
+
+Tcpdump requires `libpcap`, which is a library for network packet capture. If it's not installed, it will be automatically added as a dependency.
+
+You're ready to start capturing some packets.
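As a quick sanity check before capturing, you can confirm the binary is present and print its version. This small guard is illustrative and not from the original article:

```shell
# Print tcpdump's version if it is installed; otherwise hint at the
# package manager. No root privileges are needed for this check.
if command -v tcpdump >/dev/null 2>&1; then
    tcpdump --version 2>&1 | head -n 1
else
    echo "tcpdump not found; install it (e.g. sudo yum install -y tcpdump)"
fi
```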
+ +### 2\. Capturing packets with tcpdump + +To capture packets for troubleshooting or analysis, tcpdump requires elevated permissions, so in the following examples most commands are prefixed with `sudo`. + +To begin, use the command `tcpdump -D` to see which interfaces are available for capture: + +``` +$ sudo tcpdump -D +1.eth0 +2.virbr0 +3.eth1 +4.any (Pseudo-device that captures on all interfaces) +5.lo [Loopback] +``` + +In the example above, you can see all the interfaces available in my machine. The special interface `any` allows capturing in any active interface. + +Let's use it to start capturing some packets. Capture all packets in any interface by running this command: + +``` +$ sudo tcpdump -i any +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +09:56:18.293641 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3770820720:3770820916, ack 3503648727, win 309, options [nop,nop,TS val 76577898 ecr 510770929], length 196 +09:56:18.293794 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 391, options [nop,nop,TS val 510771017 ecr 76577898], length 0 +09:56:18.295058 IP rhel75.59883 > gateway.domain: 2486+ PTR? 1.64.168.192.in-addr.arpa. (43) +09:56:18.310225 IP gateway.domain > rhel75.59883: 2486 NXDomain* 0/1/0 (102) +09:56:18.312482 IP rhel75.49685 > gateway.domain: 34242+ PTR? 28.64.168.192.in-addr.arpa. (44) +09:56:18.322425 IP gateway.domain > rhel75.49685: 34242 NXDomain* 0/1/0 (103) +09:56:18.323164 IP rhel75.56631 > gateway.domain: 29904+ PTR? 1.122.168.192.in-addr.arpa. 
(44) +09:56:18.323342 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 196:584, ack 1, win 309, options [nop,nop,TS val 76577928 ecr 510771017], length 388 +09:56:18.323563 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 584, win 411, options [nop,nop,TS val 510771047 ecr 76577928], length 0 +09:56:18.335569 IP gateway.domain > rhel75.56631: 29904 NXDomain* 0/1/0 (103) +09:56:18.336429 IP rhel75.44007 > gateway.domain: 61677+ PTR? 98.122.168.192.in-addr.arpa. (45) +09:56:18.336655 IP gateway.domain > rhel75.44007: 61677* 1/0/0 PTR rhel75. (65) +09:56:18.337177 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 584:1644, ack 1, win 309, options [nop,nop,TS val 76577942 ecr 510771047], length 1060 + +---- SKIPPING LONG OUTPUT ----- + +09:56:19.342939 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 1752016, win 1444, options [nop,nop,TS val 510772067 ecr 76578948], length 0 +^C +9003 packets captured +9010 packets received by filter +7 packets dropped by kernel +$ +``` + +Tcpdump continues to capture packets until it receives an interrupt signal. You can interrupt capturing by pressing `Ctrl+C`. As you can see in this example, `tcpdump` captured more than 9,000 packets. In this case, since I am connected to this server using `ssh`, tcpdump captured all these packages. 
To limit the number of packets captured and stop `tcpdump`, use the `-c` option: + +``` +$ sudo tcpdump -i any -c 5 +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +11:21:30.242740 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3772575680:3772575876, ack 3503651743, win 309, options [nop,nop,TS val 81689848 ecr 515883153], length 196 +11:21:30.242906 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 1443, options [nop,nop,TS val 515883235 ecr 81689848], length 0 +11:21:30.244442 IP rhel75.43634 > gateway.domain: 57680+ PTR? 1.64.168.192.in-addr.arpa. (43) +11:21:30.244829 IP gateway.domain > rhel75.43634: 57680 NXDomain 0/0/0 (43) +11:21:30.247048 IP rhel75.33696 > gateway.domain: 37429+ PTR? 28.64.168.192.in-addr.arpa. (44) +5 packets captured +12 packets received by filter +0 packets dropped by kernel +$ +``` + +In this case, `tcpdump` stopped capturing automatically after capturing five packets. This is useful in different scenarios—for instance, if you're troubleshooting connectivity and capturing a few initial packages is enough. This is even more useful when we apply filters to capture specific packets (shown below). + +By default, tcpdump resolves IP addresses and ports into names, as shown in the previous example. 
When troubleshooting network issues, it is often easier to use the IP addresses and port numbers; disable name resolution by using the option `-n` and port resolution with `-nn`: + +``` +$ sudo tcpdump -i any -c5 -nn +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +23:56:24.292206 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 166198580:166198776, ack 2414541257, win 309, options [nop,nop,TS val 615664 ecr 540031155], length 196 +23:56:24.292357 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 196, win 1377, options [nop,nop,TS val 540031229 ecr 615664], length 0 +23:56:24.292570 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 372 +23:56:24.292655 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 568, win 1400, options [nop,nop,TS val 540031229 ecr 615664], length 0 +23:56:24.292752 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 568:908, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 340 +5 packets captured +6 packets received by filter +0 packets dropped by kernel +``` + +As shown above, the capture output now displays the IP addresses and port numbers. This also prevents tcpdump from issuing DNS lookups, which helps to lower network traffic while troubleshooting network issues. + +Now that you're able to capture network packets, let's explore what this output means. + +### 3\. Understanding the output format + +Tcpdump is capable of capturing and decoding many different protocols, such as TCP, UDP, ICMP, and many more. While we can't cover all of them here, to help you get started, let's explore the TCP packet. You can find more details about the different protocol formats in tcpdump's [manual pages][1]. 
A typical TCP packet captured by tcpdump looks like this: + +``` +08:41:13.729687 IP 192.168.64.28.22 > 192.168.64.1.41916: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 117964079 ecr 816509256], length 372 +``` + +The fields may vary depending on the type of packet being sent, but this is the general format. + +The first field, `08:41:13.729687,` represents the timestamp of the received packet as per the local clock. + +Next, `IP` represents the network layer protocol—in this case, `IPv4`. For `IPv6` packets, the value is `IP6`. + +The next field, `192.168.64.28.22`, is the source IP address and port. This is followed by the destination IP address and port, represented by `192.168.64.1.41916`. + +After the source and destination, you can find the TCP Flags `Flags [P.]`. Typical values for this field include: + +| Value | Flag Type | Description | +|-------| --------- | ----------------- | +| S | SYN | Connection Start | +| F | FIN | Connection Finish | +| P | PUSH | Data push | +| R | RST | Connection reset | +| . | ACK | Acknowledgment | + +This field can also be a combination of these values, such as `[S.]` for a `SYN-ACK` packet. + +Next is the sequence number of the data contained in the packet. For the first packet captured, this is an absolute number. Subsequent packets use a relative number to make it easier to follow. In this example, the sequence is `seq 196:568,` which means this packet contains bytes 196 to 568 of this flow. + +This is followed by the Ack Number: `ack 1`. In this case, it is 1 since this is the side sending data. For the side receiving data, this field represents the next expected byte (data) on this flow. For example, the Ack number for the next packet in this flow would be 568. + +The next field is the window size `win 309`, which represents the number of bytes available in the receiving buffer, followed by TCP options such as the MSS (Maximum Segment Size) or Window Scale. 
For details about TCP protocol options, consult [Transmission Control Protocol (TCP) Parameters][2].
+
+Finally, we have the packet length, `length 372`, which represents the length, in bytes, of the payload data. The length is the difference between the last and first bytes in the sequence number.
+
+Now let's learn how to filter packets to narrow down results and make it easier to troubleshoot specific issues.
+
+### 4\. Filtering packets
+
+As mentioned above, tcpdump can capture too many packets, some of which are not even related to the issue you're troubleshooting. For example, if you're troubleshooting a connectivity issue with a web server, you're not interested in the SSH traffic, so removing the SSH packets from the output makes it easier to work on the real issue.
+
+One of tcpdump's most powerful features is its ability to filter the captured packets using a variety of parameters, such as source and destination IP addresses, ports, protocols, etc. Let's look at some of the most common ones.
+
+#### Protocol
+
+To filter packets based on protocol, specify the protocol on the command line. For example, capture ICMP packets only by using this command:
+
+```
+$ sudo tcpdump -i any -c5 icmp
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+```
+
+In a different terminal, try to ping another machine:
+
+```
+$ ping opensource.com
+PING opensource.com (54.204.39.132) 56(84) bytes of data.
+64 bytes from ec2-54-204-39-132.compute-1.amazonaws.com (54.204.39.132): icmp_seq=1 ttl=47 time=39.6 ms
+```
+
+Back in the tcpdump capture, notice that tcpdump captures and displays only the ICMP-related packets. 
In this case, tcpdump is not displaying name resolution packets that were generated when resolving the name `opensource.com`: + +``` +09:34:20.136766 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 1, length 64 +09:34:20.176402 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 1, length 64 +09:34:21.140230 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 2, length 64 +09:34:21.180020 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 2, length 64 +09:34:22.141777 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 3, length 64 +5 packets captured +5 packets received by filter +0 packets dropped by kernel +``` + +#### Host + +Limit capture to only packets related to a specific host by using the `host` filter: + +``` +$ sudo tcpdump -i any -c5 -nn host 54.204.39.132 +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +09:54:20.042023 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [S], seq 1375157070, win 29200, options [mss 1460,sackOK,TS val 122350391 ecr 0,nop,wscale 7], length 0 +09:54:20.088127 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [S.], seq 1935542841, ack 1375157071, win 28960, options [mss 1460,sackOK,TS val 522713542 ecr 122350391,nop,wscale 9], length 0 +09:54:20.088204 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122350437 ecr 522713542], length 0 +09:54:20.088734 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122350438 ecr 522713542], length 112: HTTP: GET / HTTP/1.1 +09:54:20.129733 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [.], ack 113, win 57, options [nop,nop,TS val 522713552 ecr 122350438], length 0 +5 packets captured +5 packets 
received by filter
+0 packets dropped by kernel
+```
+
+In this example, tcpdump captures and displays only packets to and from host `54.204.39.132`.
+
+#### Port
+
+To filter packets based on the desired service or port, use the `port` filter. For example, capture packets related to a web (HTTP) service by using this command:
+
+```
+$ sudo tcpdump -i any -c5 -nn port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+09:58:28.790548 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [S], seq 1745665159, win 29200, options [mss 1460,sackOK,TS val 122599140 ecr 0,nop,wscale 7], length 0
+09:58:28.834026 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [S.], seq 4063583040, ack 1745665160, win 28960, options [mss 1460,sackOK,TS val 522775728 ecr 122599140,nop,wscale 9], length 0
+09:58:28.834093 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122599183 ecr 522775728], length 0
+09:58:28.834588 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122599184 ecr 522775728], length 112: HTTP: GET / HTTP/1.1
+09:58:28.878445 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [.], ack 113, win 57, options [nop,nop,TS val 522775739 ecr 122599184], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+#### Source IP/hostname
+
+You can also filter packets based on the source or destination IP address or hostname. For example, to capture packets from host `192.168.122.98`:
+
+```
+$ sudo tcpdump -i any -c5 -nn src 192.168.122.98
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:02:15.220824 IP 192.168.122.98.39436 > 192.168.122.1.53: 59332+ A? opensource.com. 
(32)
+10:02:15.220862 IP 192.168.122.98.39436 > 192.168.122.1.53: 20749+ AAAA? opensource.com. (32)
+10:02:15.364062 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [S], seq 1108640533, win 29200, options [mss 1460,sackOK,TS val 122825713 ecr 0,nop,wscale 7], length 0
+10:02:15.409229 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [.], ack 669337581, win 229, options [nop,nop,TS val 122825758 ecr 522832372], length 0
+10:02:15.409667 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 122825759 ecr 522832372], length 112: HTTP: GET / HTTP/1.1
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+Notice that tcpdump captured packets with source IP address `192.168.122.98` for multiple services such as name resolution (port 53) and HTTP (port 80). The response packets are not displayed since their source IP is different.
+
+Conversely, you can use the `dst` filter to filter by destination IP/hostname:
+
+```
+$ sudo tcpdump -i any -c5 -nn dst 192.168.122.98
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:05:03.572931 IP 192.168.122.1.53 > 192.168.122.98.47049: 2248 1/0/0 A 54.204.39.132 (48)
+10:05:03.572944 IP 192.168.122.1.53 > 192.168.122.98.47049: 33770 0/0/0 (32)
+10:05:03.621833 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [S.], seq 3474204576, ack 3256851264, win 28960, options [mss 1460,sackOK,TS val 522874425 ecr 122993922,nop,wscale 9], length 0
+10:05:03.667767 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [.], ack 113, win 57, options [nop,nop,TS val 522874436 ecr 122993972], length 0
+10:05:03.672221 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 522874437 ecr 122993972], length 642: HTTP: HTTP/1.1 302 Found
+5 packets captured
+5 packets received by filter
+0 packets dropped by 
kernel
+```
+
+#### Complex expressions
+
+You can also combine filters by using the logical operators `and`, `or`, and `not` to create more complex expressions. For example, to filter packets from source IP address `192.168.122.98` and service HTTP only, use this command:
+
+```
+$ sudo tcpdump -i any -c5 -nn src 192.168.122.98 and port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:08:00.472696 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [S], seq 2712685325, win 29200, options [mss 1460,sackOK,TS val 123170822 ecr 0,nop,wscale 7], length 0
+10:08:00.516118 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 268723504, win 229, options [nop,nop,TS val 123170865 ecr 522918648], length 0
+10:08:00.516583 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 123170866 ecr 522918648], length 112: HTTP: GET / HTTP/1.1
+10:08:00.567044 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 123170916 ecr 522918661], length 0
+10:08:00.788153 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [F.], seq 112, ack 643, win 239, options [nop,nop,TS val 123171137 ecr 522918661], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+You can create more complex expressions by grouping filters with parentheses. 
In this case, enclose the entire filter expression with quotation marks to prevent the shell from confusing them with shell expressions: + +``` +$ sudo tcpdump -i any -c5 -nn "port 80 and (src 192.168.122.98 or src 54.204.39.132)" +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +10:10:37.602214 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [S], seq 871108679, win 29200, options [mss 1460,sackOK,TS val 123327951 ecr 0,nop,wscale 7], length 0 +10:10:37.650651 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [S.], seq 854753193, ack 871108680, win 28960, options [mss 1460,sackOK,TS val 522957932 ecr 123327951,nop,wscale 9], length 0 +10:10:37.650708 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 0 +10:10:37.651097 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 112: HTTP: GET / HTTP/1.1 +10:10:37.692900 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [.], ack 113, win 57, options [nop,nop,TS val 522957942 ecr 123328000], length 0 +5 packets captured +5 packets received by filter +0 packets dropped by kernel +``` + +In this example, we're filtering packets for HTTP service only (port 80) and source IP addresses `192.168.122.98` or `54.204.39.132`. This is a quick way of examining both sides of the same flow. + +### 5\. Checking packet content + +In the previous examples, we're checking only the packets' headers for information such as source, destinations, ports, etc. Sometimes this is all we need to troubleshoot network connectivity issues. Sometimes, however, we need to inspect the content of the packet to ensure that the message we're sending contains what we need or that we received the expected response. 
To see the packet content, tcpdump provides two additional flags: `-X` to print the content in hex and ASCII, or `-A` to print the content in ASCII only.
+
+For example, inspect the HTTP content of a web request like this:
+
+```
+$ sudo tcpdump -i any -c10 -nn -A port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+13:02:14.871803 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [S], seq 2546602048, win 29200, options [mss 1460,sackOK,TS val 133625221 ecr 0,nop,wscale 7], length 0
+E..<..@.@.....zb6.'....P...@......r............
+............................
+13:02:14.910734 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [S.], seq 1877348646, ack 2546602049, win 28960, options [mss 1460,sackOK,TS val 525532247 ecr 133625221,nop,wscale 9], length 0
+E..<..@./..a6.'...zb.P..o..&...A..q a..........
+.R.W.......     ................
+13:02:14.910832 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 133625260 ecr 525532247], length 0
+E..4..@.@.....zb6.'....P...Ao..'...........
+.....R.W................
+13:02:14.911808 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 133625261 ecr 525532247], length 112: HTTP: GET / HTTP/1.1
+E.....@.@..1..zb6.'....P...Ao..'...........
+.....R.WGET / HTTP/1.1
+User-Agent: Wget/1.14 (linux-gnu)
+Accept: */*
+Host: opensource.com
+Connection: Keep-Alive
+
+................
+13:02:14.951199 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [.], ack 113, win 57, options [nop,nop,TS val 525532257 ecr 133625261], length 0
+E..4.F@./.."6.'...zb.P..o..'.......9.2.....
+.R.a....................
+13:02:14.955030 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 525532258 ecr 133625261], length 642: HTTP: HTTP/1.1 302 Found
+E....G@./...6.'...zb.P..o..'.......9....... 
+.R.b....HTTP/1.1 302 Found
+Server: nginx
+Date: Sun, 23 Sep 2018 17:02:14 GMT
+Content-Type: text/html; charset=iso-8859-1
+Content-Length: 207
+X-Content-Type-Options: nosniff
+Location: https://opensource.com/
+Cache-Control: max-age=1209600
+Expires: Sun, 07 Oct 2018 17:02:14 GMT
+X-Request-ID: v-6baa3acc-bf52-11e8-9195-22000ab8cf2d
+X-Varnish: 632951979
+Age: 0
+Via: 1.1 varnish (Varnish/5.2)
+X-Cache: MISS
+Connection: keep-alive
+
+<html><head>
+<title>302 Found</title>
+</head><body>
+<h1>Found</h1>
+<p>The document has moved <a href="https://opensource.com/">here</a>.</p>
+</body></html>
+ +................ +13:02:14.955083 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 133625304 ecr 525532258], length 0 +E..4..@.@.....zb6.'....P....o.............. +.....R.b................ +13:02:15.195524 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [F.], seq 113, ack 643, win 239, options [nop,nop,TS val 133625545 ecr 525532258], length 0 +E..4..@.@.....zb6.'....P....o.............. +.....R.b................ +13:02:15.236592 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 525532329 ecr 133625545], length 0 +E..4.H@./.. 6.'...zb.P..o..........9.I..... +.R...................... +13:02:15.236656 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 644, win 239, options [nop,nop,TS val 133625586 ecr 525532329], length 0 +E..4..@.@.....zb6.'....P....o.............. +.....R.................. +10 packets captured +10 packets received by filter +0 packets dropped by kernel +``` + +This is helpful for troubleshooting issues with API calls, assuming the calls are using plain HTTP. For encrypted connections, this output is less useful. + +### 6\. Saving captures to a file + +Another useful feature provided by tcpdump is the ability to save the capture to a file so you can analyze the results later. This allows you to capture packets in batch mode overnight, for example, and verify the results in the morning. It also helps when there are too many packets to analyze since real-time capture can occur too fast. + +To save packets to a file instead of displaying them on screen, use the option `-w`: + +``` +$ sudo tcpdump -i any -c10 -nn -w webserver.pcap port 80 +[sudo] password for ricardo: +tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +10 packets captured +10 packets received by filter +0 packets dropped by kernel +``` + +This command saves the output in a file named `webserver.pcap`. 
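If a capture needs to run unattended for a long period, you can also ask tcpdump to rotate the capture file instead of writing one ever-growing file. As a sketch (the interface, filter, and intervals here are illustrative; adjust them to your environment), the `-G` option rotates the file on a time interval, the filename can contain `strftime` escapes that are expanded per file, and `-W` limits how many files get created before tcpdump exits:

```
$ sudo tcpdump -i any -nn -G 3600 -W 24 -w 'webserver-%Y%m%d-%H%M%S.pcap' port 80
```

With this command, tcpdump starts a new `.pcap` file every hour and stops after writing 24 files, covering one day of traffic.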
The `.pcap` extension stands for "packet capture" and is the convention for this file format. + +As shown in this example, nothing gets displayed on-screen, and the capture finishes after capturing 10 packets, as per the option `-c10`. If you want some feedback to ensure packets are being captured, use the option `-v`. + +Tcpdump creates a file in binary format so you cannot simply open it with a text editor. To read the contents of the file, execute tcpdump with the `-r` option: + +``` +$ tcpdump -nn -r webserver.pcap +reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked) +13:36:57.679494 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [S], seq 3709732619, win 29200, options [mss 1460,sackOK,TS val 135708029 ecr 0,nop,wscale 7], length 0 +13:36:57.718932 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [S.], seq 1999298316, ack 3709732620, win 28960, options [mss 1460,sackOK,TS val 526052949 ecr 135708029,nop,wscale 9], length 0 +13:36:57.719005 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 135708068 ecr 526052949], length 0 +13:36:57.719186 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 135708068 ecr 526052949], length 112: HTTP: GET / HTTP/1.1 +13:36:57.756979 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [.], ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 0 +13:36:57.760122 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 642: HTTP: HTTP/1.1 302 Found +13:36:57.760182 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 135708109 ecr 526052959], length 0 +13:36:57.977602 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [F.], seq 113, ack 643, win 239, options [nop,nop,TS val 135708327 ecr 526052959], length 0 +13:36:58.022089 IP 54.204.39.132.80 > 192.168.122.98.39378: 
Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 526053025 ecr 135708327], length 0 +13:36:58.022132 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 644, win 239, options [nop,nop,TS val 135708371 ecr 526053025], length 0 +$ +``` + +Since you're no longer capturing the packets directly from the network interface, `sudo` is not required to read the file. + +You can also use any of the filters we've discussed to filter the content from the file, just as you would with real-time data. For example, inspect the packets in the capture file from source IP address `54.204.39.132` by executing this command: + +``` +$ tcpdump -nn -r webserver.pcap src 54.204.39.132 +reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked) +13:36:57.718932 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [S.], seq 1999298316, ack 3709732620, win 28960, options [mss 1460,sackOK,TS val 526052949 ecr 135708029,nop,wscale 9], length 0 +13:36:57.756979 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [.], ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 0 +13:36:57.760122 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 642: HTTP: HTTP/1.1 302 Found +13:36:58.022089 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 526053025 ecr 135708327], length 0 +``` + +### What's next? + +These basic features of tcpdump will help you get started with this powerful and versatile tool. To learn more, consult the [tcpdump website][3] and [man pages][4]. + +The tcpdump command line interface provides great flexibility for capturing and analyzing network traffic. If you need a graphical tool to understand more complex flows, look at [Wireshark][5]. + +One benefit of Wireshark is that it can read `.pcap` files captured by tcpdump. 
You can use tcpdump to capture packets in a remote machine that does not have a GUI and analyze the result file with Wireshark, but that is a topic for another day. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/introduction-tcpdump + +作者:[Ricardo Gerardi][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rgerardi +[b]: https://github.com/lujun9972 +[1]: http://www.tcpdump.org/manpages/tcpdump.1.html#lbAG +[2]: https://www.iana.org/assignments/tcp-parameters/tcp-parameters.xhtml +[3]: http://www.tcpdump.org/# +[4]: http://www.tcpdump.org/manpages/tcpdump.1.html +[5]: https://www.wireshark.org/ From 6666852ee31238170ef295eeebf31d835b020ab0 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 11 Oct 2018 11:40:10 +0800 Subject: [PATCH 338/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=205=20alerting=20an?= =?UTF-8?q?d=20visualization=20tools=20for=20sysadmins?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...g and visualization tools for sysadmins.md | 163 ++++++++++++++++++ 1 file changed, 163 insertions(+) create mode 100644 sources/tech/20181010 5 alerting and visualization tools for sysadmins.md diff --git a/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md b/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md new file mode 100644 index 0000000000..f933449461 --- /dev/null +++ b/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md @@ -0,0 +1,163 @@ +5 alerting and visualization tools for sysadmins +====== +These open source tools help users understand system behavior and output, and provide alerts for potential problems. 
+ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-) + +You probably know (or can guess) what alerting and visualization tools are used for. Why would we discuss them as observability tools, especially since some systems include visualization as a feature? + +Observability comes from control theory and describes our ability to understand a system based on its inputs and outputs. This article focuses on the output component of observability. + +Alerting and visualization tools analyze the outputs of other systems and provide structured representations of these outputs. Alerts are basically a synthesized understanding of negative system outputs, and visualizations are disambiguated structured representations that facilitate user comprehension. + +### Common types of alerts and visualizations + +#### Alerts + +Let’s first cover what alerts are _not_. Alerts should not be sent if the human responder can’t do anything about the problem. This includes alerts that are sent to multiple individuals with only a few who can respond, or situations where every anomaly in the system triggers an alert. This leads to alert fatigue and receivers ignoring all alerts within a specific medium until the system escalates to a medium that isn’t already saturated. + +For example, if an operator receives hundreds of emails a day from the alerting system, that operator will soon ignore all emails from the alerting system. The operator will respond to a real incident only when he or she is experiencing the problem, emailed by a customer, or called by the boss. In this case, alerts have lost their meaning and usefulness. + +Alerts are not a constant stream of information or a status update. They are meant to convey a problem from which the system can’t automatically recover, and they are sent only to the individual most likely to be able to recover the system. 
Everything that falls outside this definition isn’t an alert and will only damage your employees and company culture. + +Everyone has a different set of alert types, so I won't discuss things like priority levels (P1-P5) or models that use words like "Informational," "Warning," and "Critical." Instead, I’ll describe the generic categories emergent in complex systems’ incident response. + +You might have noticed I mentioned an “Informational” alert type right after I wrote that alerts shouldn’t be informational. Well, not everyone agrees, but I don’t consider something an alert if it isn’t sent to anyone. It is a data point that many systems refer to as an alert. It represents some event that should be known but not responded to. It is generally part of the visualization system of the alerting tool and not an event that triggers actual notifications. Mike Julian covers this and other aspects of alerting in his book [Practical Monitoring][1]. It's a must read for work in this area. + +Non-informational alerts consist of types that can be responded to or require action. I group these into two categories: internal outage and external outage. (Most companies have more than two levels for prioritizing their response efforts.) Degraded system performance is considered an outage in this model, as the impact to each user is usually unknown. + +Internal outages are a lower priority than external outages, but they still need to be responded to quickly. They often include internal systems that company employees use or components of applications that are visible only to company employees. + +External outages consist of any system outage that would immediately impact a customer. These don’t include a system outage that prevents releasing updates to the system. They do include customer-facing application failures, database outages, and networking partitions that hurt availability or consistency if either can impact a user. 
They also include outages of tools that may not have a direct impact on users, as the application continues to run but this transparent dependency impacts performance. This is common when the system uses some external service or data source that isn’t necessary for full functionality but may cause delays as the application performs retries or handles errors from this external dependency. + +### Visualizations + +There are many visualization types, and I won’t cover them all here. It’s a fascinating area of research. On the data analytics side of my career, learning and applying that knowledge is a constant challenge. We need to provide simple representations of complex system outputs for the widest dissemination of information. [Google Charts][2] and [Tableau][3] have a wide selection of visualization types. We’ll cover the most common visualizations and some innovative solutions for quickly understanding systems. + +#### Line chart + +The line chart is probably the most common visualization. It does a pretty good job of producing an understanding of a system over time. A line chart in a metrics system would have a line for each unique metric or some aggregation of metrics. This can get confusing when there are a lot of metrics in the same dashboard (as shown below), but most systems can select specific metrics to view rather than having all of them visible. Also, anomalous behavior is easy to spot if it’s significant enough to escape the noise of normal operations. Below we can see purple, yellow, and light blue lines that might indicate anomalous behavior. + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart.png) + +Another feature of a line chart is that you can often stack them to show relationships. For example, you might want to look at requests on each server individually, but also in aggregate. This allows you to understand the overall system as well as each instance in the same graph. 
+ +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart_aggregate.png) + +#### Heatmaps + +Another common visualization is the heatmap. It is useful when looking at histograms. This type of visualization is similar to a bar chart but can show gradients within the bars representing the different percentiles of the overall metric. For example, suppose you’re looking at request latencies and you want to quickly understand the overall trend as well as the distribution of all requests. A heatmap is great for this, and it can use color to disambiguate the quantity of each section with a quick glance. + +The heatmap below shows the higher concentration around the centerline of the graph with an easy-to-understand visualization of the distribution vertically for each time bucket. We might want to review a couple of points in time where the distribution gets wide while the others are fairly tight like at 14:00. This distribution might be a negative performance indicator. + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_histogram.png) + +#### Gauges + +The last common visualization I’ll cover here is the gauge, which helps users understand a single metric quickly. Gauges can represent a single metric, like your speedometer represents your driving speed or your gas gauge represents the amount of gas in your car. Similar to the gas gauge, most monitoring gauges clearly indicate what is good and what isn’t. Often (as is shown below), good is represented by green, getting worse by orange, and “everything is breaking” by red. The middle row below shows traditional gauges. + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_gauges.png) + +This image shows more than just traditional gauges. The other gauges are single stat representations that are similar to the function of the classic gauge. They all use the same color scheme to quickly indicate system health with just a glance. 
Arguably, the bottom row is probably the best example of a gauge that allows you to glance at a dashboard and know that everything is healthy (or not). This type of visualization is usually what I put on a top-level dashboard. It offers a full, high-level understanding of system health in seconds. + +#### Flame graphs + +A less common visualization is the flame graph, introduced by [Netflix’s Brendan Gregg][4] in 2011. It’s not ideal for dashboarding or quickly observing high-level system concerns; it’s normally seen when trying to understand a specific application problem. This visualization focuses on CPU and memory and the associated frames. The X-axis lists the frames alphabetically, and the Y-axis shows stack depth. Each rectangle is a stack frame and includes the function being called. The wider the rectangle, the more it appears in the stack. This method is invaluable when trying to diagnose system performance at the application level and I urge everyone to give it a try. + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_flame_graph_0.png) + +### Tool options + +There are several commercial options for alerting, but since this is Opensource.com, I’ll cover only systems that are being used at scale by real companies that you can use at no cost. Hopefully, you’ll be able to contribute new and innovative features to make these systems even better. + +### Alerting tools + +#### Bosun + +If you’ve ever done anything with computers and gotten stuck, the help you received was probably thanks to a Stack Exchange system. Stack Exchange runs many different websites around a crowdsourced question-and-answer model. [Stack Overflow][5] is very popular with developers, and [Super User][6] is popular with operations. However, there are now hundreds of sites ranging from parenting to sci-fi and philosophy to bicycles. 
+ +Stack Exchange open-sourced its alert management system, [Bosun][7], around the same time Prometheus and its [AlertManager][8] system were released. There were many similarities between the two systems, and that’s a really good thing. Like Prometheus, Bosun is written in Golang. Bosun’s scope is more extensive than Prometheus’ as it can interact with systems beyond metrics aggregation. It can also ingest data from log and event aggregation systems. It supports Graphite, InfluxDB, OpenTSDB, and Elasticsearch. + +Bosun’s architecture consists of a single server binary, a backend like OpenTSDB, Redis, and [scollector agents][9]. The scollector agents automatically detect services on a host and report metrics for those processes and other system resources. This data is sent to a metrics backend. The Bosun server binary then queries the backends to determine if any alerts need to be fired. Bosun can also be used by tools like [Grafana][10] to query the underlying backends through one common interface. Redis is used to store state and metadata for Bosun. + +A really neat feature of Bosun is that it lets you test your alerts against historical data. This was something I missed in Prometheus several years ago, when I had data for an issue I wanted alerts on but no easy way to test it. To make sure my alerts were working, I had to create and insert dummy data. This system alleviates that very time-consuming process. + +Bosun also has the usual features like showing simple graphs and creating alerts. It has a powerful expression language for writing alerting rules. However, it only has email and HTTP notification configurations, which means connecting to Slack and other tools requires a bit more customization ([which its documentation covers][11]). Similar to Prometheus, Bosun can use templates for these notifications, which means they can look as awesome as you want them to. You can use all your HTML and CSS skills to create the baddest email alert anyone has ever seen.
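To make that expression language concrete, here is a rough sketch of what a Bosun alert definition looks like, loosely following the patterns in Bosun’s documentation. The metric name, time window, thresholds, and template contents are placeholders for illustration, not a tested real-world rule:

```
# Illustrative only: metric, window, and thresholds are invented.
template cpu.high {
    subject = {{.Last.Status}}: high CPU on {{.Group.host}}
}

alert cpu.high {
    template = cpu.high
    $cpu = avg(q("avg:rate:os.cpu{host=*}", "10m", ""))
    warn = $cpu > 60
    crit = $cpu > 90
}
```

Because the warn and crit expressions are evaluated by querying the configured backend, the same rule can be replayed over historical data, which is what makes Bosun’s alert testing possible.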
+ +#### Cabot + +[Cabot][12] was created by a company called [Arachnys][13]. You may not know who Arachnys is or what it does, but you have probably felt its impact: It built the leading cloud-based solution for fighting financial crimes. That sounds pretty cool, right? At a previous company, I was involved in similar functions around [“know your customer”][14] laws. Most companies would consider it a very bad thing to be linked to, for example, a terrorist group funneling money through their systems. These solutions also help defend against less-atrocious offenders like fraudsters who could also pose a risk to the institution. + +So why did Arachnys create Cabot? Well, it is kind of a Christmas present to everyone, as it was a Christmas project built because its developers couldn’t wrap their heads around [Nagios][15]. And really, who can blame them? Cabot was written with Django and Bootstrap, so it should be easy for most to contribute to the project. (Another interesting factoid: The name comes from the creator’s dog.) + +The Cabot architecture is similar to Bosun in that it doesn’t collect any data. Instead, it accesses data through the APIs of the tools it is alerting for. Therefore, Cabot uses a pull (rather than a push) model for alerting. It reaches out to each system’s API and retrieves the information it needs to make a decision based on a specific check. Cabot stores the alerting data in a Postgres database and also has a cache using Redis. + +Cabot natively supports [Graphite][16], but it also supports [Jenkins][17], which is rare in this area. [Arachnys][13] uses Jenkins like a centralized cron, but I like this idea of treating build failures like outages. Obviously, a build failure isn’t as critical as a production outage, but it could still alert the team and escalate if the failure isn’t resolved. Who actually checks Jenkins every time an email comes in about a build failure? Yeah, me too!
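That pull model is easy to picture in code. Below is a minimal, hypothetical sketch of a Graphite-backed check in Python. The render-API URL shape follows Graphite’s documented JSON output, but the base URL, target, and threshold are invented, and real Cabot checks are configured through its web UI rather than written this way:

```python
import json
from urllib.request import urlopen

def latest_value(datapoints):
    """Return the most recent non-null value from Graphite-style
    [[value, timestamp], ...] datapoints, or None if all are null."""
    for value, _ts in reversed(datapoints):
        if value is not None:
            return value
    return None

def check_passes(datapoints, high_threshold):
    """A check fails when the latest reading exceeds the threshold."""
    value = latest_value(datapoints)
    return value is not None and value <= high_threshold

def run_graphite_check(base_url, target, high_threshold):
    # Graphite's render API returns JSON like:
    #   [{"target": "...", "datapoints": [[value, ts], ...]}]
    url = "%s/render?target=%s&from=-10min&format=json" % (base_url, target)
    series = json.load(urlopen(url))
    return all(check_passes(s["datapoints"], high_threshold) for s in series)

# Offline example with canned datapoints (no network needed);
# the latest non-null reading is 0.9, above the 0.8 threshold:
points = [[0.4, 1539200000], [0.9, 1539200060], [None, 1539200120]]
print(check_passes(points, high_threshold=0.8))
```

A scheduler would call run_graphite_check on an interval for each configured service and raise an alert after some number of consecutive failures.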
+ +Another interesting feature is that Cabot can integrate with Google Calendar for on-call rotations. Cabot calls this feature Rota, which is a British term for a roster or rotation. This makes a lot of sense, and I wish other systems would take this idea further. Cabot doesn’t support anything more complex than primary and backup personnel, but there is certainly room for additional features. The docs say if you want something more advanced, you should look at a commercial option. + +#### StatsAgg + +[StatsAgg][18]? How did that make the list? Well, it’s not every day you come across a publishing company that has created an alerting platform. I think that deserves recognition. Of course, [Pearson][19] isn’t just a publishing company anymore; it has several web presences and a joint venture with [O’Reilly Media][20]. However, I still think of it as the company that published my schoolbooks and tests. + +StatsAgg isn’t just an alerting platform; it’s also a metrics aggregation platform. And it’s kind of like a proxy for other systems. It supports Graphite, StatsD, InfluxDB, and OpenTSDB as inputs, but it can also forward those metrics to their respective platforms. This is an interesting concept, but potentially risky as loads increase on a central service. However, if the StatsAgg infrastructure is robust enough, it can still produce alerts even when a backend storage platform has an outage. + +StatsAgg is written in Java and consists only of the main server and UI, which keeps complexity to a minimum. It can send alerts based on regular expression matching and is focused on alerting by service rather than host or instance. Its goal is to fill a void in the open source observability stack, and I think it does that quite well. + +### Visualization tools + +#### Grafana + +Almost everyone knows about [Grafana][10], and many have used it. I have used it for years whenever I need a simple dashboard. 
The tool I used before was deprecated, and I was fairly distraught about that until Grafana made it okay. Grafana was gifted to us by Torkel Ödegaard. Like Cabot, Grafana was also created around Christmastime, and released in January 2014. It has come a long way in just a few years. It started life as a Kibana dashboarding system, and Torkel forked it into what became Grafana. + +Grafana’s sole focus is presenting monitoring data in a more usable and pleasing way. It can natively gather data from Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. There’s an Enterprise version that uses plugins for more data sources, but there’s no reason those other data source plugins couldn’t be created as open source, as the Grafana plugin ecosystem already offers many other data sources. + +What does Grafana do for me? It provides a central location for understanding my system. It is web-based, so anyone can access the information, although it can be restricted using different authentication methods. Grafana can provide knowledge at a glance using many different types of visualizations. However, it has started integrating alerting and other features that aren’t traditionally combined with visualizations. + +Now you can set alerts visually. That means you can look at a graph, maybe even one showing where an alert should have triggered due to some degradation of the system, click on the graph where you want the alert to trigger, and then tell Grafana where to send the alert. That’s a pretty powerful addition that won’t necessarily replace an alerting platform, but it can certainly help augment it by providing a different perspective on alerting criteria. + +Grafana has also introduced more collaboration features. 
Users have been able to share dashboards for a long time, meaning you don’t have to create your own dashboard for your [Kubernetes][21] cluster because there are several already available—with some maintained by Kubernetes developers and others by Grafana developers. + +The most significant addition around collaboration is annotations. Annotations allow a user to add context to part of a graph. Other users can then use this context to understand the system better. This is an invaluable tool when a team is in the middle of an incident and communication and common understanding are critical. Having all the information right where you’re already looking makes it much more likely that knowledge will be shared across the team quickly. It’s also a nice feature to use during blameless postmortems when the team is trying to understand how the failure occurred and learn more about their system. + +#### Vizceral + +Netflix created [Vizceral][22] to understand its traffic patterns better when performing a traffic failover. Unlike Grafana, which is a more general tool, Vizceral serves a very specific use case. Netflix no longer uses this tool internally and says it is no longer actively maintained, but it still updates the tool periodically. I highlight it here primarily to point out an interesting visualization mechanism and how it can help solve a problem. It’s worth running it in a demo environment just to better grasp the concepts and witness what’s possible with these systems. 
+ +### What to read next + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/alerting-and-visualization-tools-sysadmins + +作者:[Dan Barker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/barkerd427 +[b]: https://github.com/lujun9972 +[1]: https://www.practicalmonitoring.com/ +[2]: https://developers.google.com/chart/interactive/docs/gallery +[3]: https://libguides.libraries.claremont.edu/c.php?g=474417&p=3286401 +[4]: http://www.brendangregg.com/flamegraphs.html +[5]: https://stackoverflow.com/ +[6]: https://superuser.com/ +[7]: http://bosun.org/ +[8]: https://prometheus.io/docs/alerting/alertmanager/ +[9]: https://bosun.org/scollector/ +[10]: https://grafana.com/ +[11]: https://bosun.org/notifications +[12]: https://cabotapp.com/ +[13]: https://www.arachnys.com/ +[14]: https://en.wikipedia.org/wiki/Know_your_customer +[15]: https://www.nagios.org/ +[16]: https://graphiteapp.org/ +[17]: https://jenkins.io/ +[18]: https://github.com/PearsonEducation/StatsAgg +[19]: https://www.pearson.com/us/ +[20]: https://www.oreilly.com/ +[21]: https://opensource.com/resources/what-is-kubernetes +[22]: https://github.com/Netflix/vizceral From 2d5bbbc57da94ae44a229c7fb412cbdcfa4d7008 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 11 Oct 2018 11:43:43 +0800 Subject: [PATCH 339/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=203=20areas=20to=20?= =?UTF-8?q?drive=20DevOps=20change?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20181008 3 areas to drive DevOps change.md | 108 ++++++++++++++++++ 1 file changed, 108 insertions(+) create mode 100644 sources/talk/20181008 3 areas to drive DevOps change.md diff --git a/sources/talk/20181008 3 areas to drive DevOps change.md 
b/sources/talk/20181008 3 areas to drive DevOps change.md new file mode 100644 index 0000000000..733158a81b --- /dev/null +++ b/sources/talk/20181008 3 areas to drive DevOps change.md @@ -0,0 +1,108 @@ +3 areas to drive DevOps change +====== +Driving large-scale organizational change is painful, but when it comes to DevOps, the payoff is worth the pain. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ) + +Pain avoidance is a powerful motivator. Some studies hint that even [plants experience a type of pain][1] and take steps to defend themselves. Yet we have plenty of examples of humans enduring pain on purpose—exercise often hurts, but we still do it. When we believe the payoff is worth the pain, we'll endure almost anything. + +The truth is that driving large-scale organizational change is painful. It hurts for those having to change their values and behaviors, it hurts for leadership, and it hurts for the people just trying to do their jobs. In the case of DevOps, though, I can tell you the pain is worth it. + +I've seen firsthand how teams learn they must spend time improving their technical processes, take ownership of their automation pipelines, and become masters of their fate. They gain the tools they need to be successful. + +![Improvements after DevOps transformation][3] + +Image by Lee Eason. CC BY-SA 4.0 + +This chart shows the value of that change. At one company where I directed a DevOps transformation, its 60+ teams submitted more than 900 requests per month to release management. If you add up the time those tickets stayed open, it came to more than 350 days per month. What could your company do with an extra 350 person-days per month? In addition to the improvements seen above, they went from 100 to 9,000 deployments per month and saw a 24% decrease in high-severity bugs, happier engineers, and improved net promoter scores (NPS).
The biggest NPS improvements link to the teams furthest along on their DevOps journey, as the [Puppet State of DevOps][4] report predicted. The bottom line is that investments into technical process improvement translate into better business outcomes. + +DevOps leaders must focus on three main areas to drive this change: executives, culture, and team health. + +### Executives + +The larger your organization, the greater the distance (and opportunities for misunderstanding) between business leadership and the individuals delivering services to your customers. To make things worse, the landscape of tools and practices in technology is changing at an accelerating rate. This makes it practically impossible for business leaders to understand on their own how transformations like DevOps or agile work. + +DevOps leaders must help executives come along for the ride. Educating leaders gives them options when they're making decisions and makes it more likely they'll choose paths that help your company. + +For example, let's say your executives believe DevOps is going to improve how you deploy your products into production, but they don't understand how. You've been working with a software team to help automate their deployment. When an executive hears about a deploy failure (and there will be failures), they will want to understand how it occurred.
When they learn the software team did the deployment rather than the release management team, they may try to protect the business by decreeing all production releases must go through traditional change controls. You will lose credibility, and teams will be far less likely to trust you and accept further changes. + +It takes longer to rebuild trust with executives and get their support after an incident than it would have taken to educate them in the first place. Put the time in upfront to build alignment, and it will pay off as you implement tactical changes. + +Two pieces of advice when building that alignment: + + * First, **don't ignore any constraints** they raise. If they have worries about contracts or security, make the heads of legal and security your new best friends. By partnering with them, you'll build their trust and avoid making costly mistakes. + * Second, **use metrics to build a bridge** between what your delivery teams are doing and your executives' concerns. If the business has a goal to reduce customer churn, and you know from research that many customers leave because of unplanned downtime, reinforce that your teams are committed to tracking and improving Mean Time To Detection and Resolution (MTTD and MTTR). You can use those key metrics to show meaningful progress that teams and executives understand and get behind. + + + +### Culture + +DevOps is a culture of continuous improvement focused on code, build, deploy, and operational processes. Culture describes the organization's values and behaviors. Essentially, we're talking about changing how people behave, which is never easy. + +I recommend reading [The Wolf in CIO's Clothing][5]. Spend time thinking about psychology and motivation. Read [Drive][6] or at least watch Daniel Pink's excellent [TED Talk][7]. Read [The Hero with a Thousand Faces][8] and learn to identify the different journeys everyone is on. 
If none of these things sound interesting, you are not the right person to drive change in your company. Otherwise, read on! + +Most rational people behave according to their values. Most organizations don't have explicit values everyone understands and lives by. Therefore, you'll need to identify the organization's values that have led to the behaviors that have led to the current state. You also need to make sure you can tell the story about how those values came to be and how they led to where you are. When you tell that story, be careful not to demonize those values—they aren't immoral or evil. People did the best they could at the time, given what they knew and what resources they had. + +Explain that the company and its organizational goals are changing, and the team must alter its values. It's helpful to express this in terms of contrast. For example, your company may have historically valued cost savings above all else. That value is there for a reason—the company was cash-strapped. To get new products out, the infrastructure group had to tightly couple services by sharing database clusters or servers. Over time, those practices created a real mess that became hard to maintain. Simple changes started breaking things in unexpected ways.
This led to tight change-control processes that were painful for delivery teams, so they stopped changing things. + +Play that movie for five years, and you end up with little to no innovation, legacy technology, attraction and retention problems, and poor-quality products. You've grown the company, but you've hit a ceiling, and you can't continue to grow with those same values and behaviors. Now you must put engineering efficiency above cost saving. If one option will help teams maintain their service more easily, but the other option is cheaper in the short term, you go with the first option. + +You must tell this story again and again. Then you must celebrate any time a team expresses the new value through their behavior—even if they make a mistake. When a team has a deploy failure, congratulate them for taking the risk and encourage them to keep learning. Explain how their behavior is leading to the right outcome and support them. Over time, teams will see the message is real, and they'll feel safe altering their behavior. + +### Team health + +Have you ever been in a planning meeting and heard something like this: "We can't really estimate that story until John gets back from vacation. He's the only one who knows that area of the code well enough." Or: "We can't get this task done because it's got a cross-team dependency on network engineering, and the guy that set up the firewall is out sick." Or: "John knows that system best; if he estimated the story at a 3, then let's just go with that." When the team works on that story, who will most likely do the work? That's right, John will, and the cycle will continue. + +For a long time, we've accepted that this is just the nature of software development. If we don't solve for it, we perpetuate the cycle. + +Entropy will always drive teams naturally towards disorder and bad health. Our job as team members and leaders is to intentionally manage against that entropy and keep our teams healthy.
Transformations like DevOps, agile, moving to the cloud, or refactoring a legacy application all amplify and accelerate that entropy. That's because transformations add new skills and expertise needed for the team to take on that new type of work. + +Let's look at an example of a product team refactoring its legacy monolith. As usual, they build those new services in AWS. The legacy monolith was deployed to the data center, monitored, and backed up by IT. IT made sure the application's infosec requirements were met at the infrastructure layer. They conducted disaster recovery tests, patched the servers, and installed and configured required intrusion detection and antivirus agents. And they kept change control records, required for the annual audit process, of everything that was done to the application's infrastructure. + +I often see product teams make the fatal mistake of thinking IT is all cost and bottleneck. They're hungry to shed the skin of IT and use the public cloud, but they never stop to appreciate the critical services IT provides. Moving to the cloud means you implement these things differently; they don't go away. AWS is still a data center, and any team utilizing it accepts the related responsibilities. + +In practice, this means product teams must learn how to do those IT services when they move to the cloud. So, when our fictional product team starts refactoring its legacy application and putting new services in the cloud, it will need a vastly expanded skillset to be successful. Those skills don't magically appear—they're learned or hired—and team leaders and managers must actively manage the process. + +I built [Tekata.io][9] because I couldn't find any tools to support me as I helped my teams evolve. Tekata is free and easy to use, but the tool is not as important as the people and process. Make sure you build continuous learning into your cadence and keep track of your team's weak spots.
Those weak spots affect your ability to deliver, and filling them usually involves learning new things, so there's a wonderful synergy here. In fact, 76% of millennials think professional development opportunities are [one of the most important elements][10] of company culture. + +### Proof is in the payoff + +DevOps transformations involve altering the behavior, and therefore the culture, of your teams. That must be done with executive support and understanding. At the same time, those behavior changes mean learning new skills, and that process must also be managed carefully. But the payoff for pulling this off is more productive teams, happier and more engaged team members, higher quality products, and happier customers. + +Lee Eason will present [Tales From A DevOps Transformation][11] at [All Things Open][12], October 21-23 in Raleigh, N.C. + +Disclaimer: All opinions and statements in this article are exclusively those of Lee Eason and are not representative of Ipreo or IHS Markit.
+ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/tales-devops-transformation + +作者:[Lee Eason][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/leeeason +[b]: https://github.com/lujun9972 +[1]: https://link.springer.com/article/10.1007%2Fs00442-014-2995-6 +[2]: /file/411061 +[3]: https://opensource.com/sites/default/files/uploads/devops-delays.png (Improvements after DevOps transformation) +[4]: https://puppet.com/resources/whitepaper/state-of-devops-report +[5]: https://www.gartner.com/en/publications/wolf-cio +[6]: https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_About_What_Motivates_Us +[7]: https://www.ted.com/talks/dan_pink_on_motivation?language=en#t-2094 +[8]: https://en.wikipedia.org/wiki/The_Hero_with_a_Thousand_Faces +[9]: https://tekata.io/ +[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook +[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/ +[12]: https://allthingsopen.org/ From 72c7b4675e163cd18da8effde4a4bd6ff3782c65 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 11 Oct 2018 11:46:13 +0800 Subject: [PATCH 340/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=204=20best=20practi?= =?UTF-8?q?ces=20for=20giving=20open=20source=20code=20feedback?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...es for giving open source code feedback.md | 47 +++++++++++++++++++ 1 file changed, 47 insertions(+) create mode 100644 sources/talk/20181009 4 best practices for giving open source code feedback.md diff --git a/sources/talk/20181009 4 best practices for giving open source code feedback.md b/sources/talk/20181009 4 best practices for giving open source code feedback.md new file mode 100644 index 
0000000000..4cfb806525 --- /dev/null +++ b/sources/talk/20181009 4 best practices for giving open source code feedback.md @@ -0,0 +1,47 @@ +4 best practices for giving open source code feedback +====== +A few simple guidelines can help you provide better feedback. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6) + +In the previous article I gave you tips for [how to receive feedback][1], especially in the context of your first free and open source project contribution. Now it's time to talk about the other side of that same coin: providing feedback. + +If I tell you that something you did in your contribution is "stupid" or "naive," how would you feel? You'd probably be angry, hurt, or both, and rightfully so. These are mean-spirited words that, when directed at people, can cut like knives. Words matter, and they matter a great deal. Therefore, put as much thought into the words you use when leaving feedback for a contribution as you do into any other form of contribution you give to the project. As you compose your feedback, think to yourself, "How would I feel if someone said this to me? Is there some way someone might take this another way, a less helpful way?" If the answer to that last question has even the chance of being a yes, backtrack and rewrite your feedback. It's better to spend a little time rewriting now than to spend a lot of time apologizing later. + +When someone does make a mistake that seems like it should have been obvious, remember that we all have different experiences and knowledge. What's obvious to you may not be to someone else. And, if you recall, there once was a time when that thing was not obvious to you. We all make mistakes. We all typo. We all forget commas, semicolons, and closing brackets. Save yourself a lot of time and effort: Point out the mistake, but leave out the judgement. Stick to the facts.
After all, if the mistake is that obvious, then no critique will be necessary, right? + + 1. **Avoid ad hominem comments.** Remember to review only the contribution and not the person who contributed it. That is to say, point out, "the contribution could be more efficient here in this way…" rather than, "you did this inefficiently." The latter is ad hominem feedback. Ad hominem is a Latin phrase meaning "to the person," which is where your feedback is being directed: to the person who contributed it rather than to the contribution itself. By providing feedback on the person you make that feedback personal, and the contributor is justified in taking it personally. Be careful when crafting your feedback to make sure you're addressing only the contents of the contribution and not accidentally criticizing the person who submitted it for review. + + 2. **Include positive comments.** Not all of your feedback has to (or should) be critical. As you review the contribution and you see something that you like, provide feedback on that as well. Several academic studies—including an important one by [Baumeister, Braslavsky, Finkenauer, and Vohs][2]—show that humans focus more on negative feedback than positive. When your feedback is solely negative, it can be very disheartening for contributors. Including positive reinforcement and feedback is motivating to people and helps them feel good about their contribution and the time they spent on it, which all adds up to them feeling more inclined to provide another contribution in the future. It doesn't have to be some gushing paragraph of flowery praise, but a quick, "Huh, that's a really smart way to handle that. It makes everything flow really well," can go a long way toward encouraging someone to keep contributing. + + 3. **Questions are feedback, too.** Praise is one less common but valuable type of review feedback. Questions are another. 
If you're looking at a contribution and can't tell why the submitter did things the way they did, or if the contribution just doesn't make a lot of sense to you, asking for more information acts as feedback. It tells the submitter that something they contributed isn't as clear as they thought and that it may need some work to make the approach more obvious, or if it's a code contribution, a comment to explain what's going on and why. A simple, "I don't understand this part here. Could you please tell me what it's doing and why you chose that way?" can start a dialogue that leads to a contribution that's much easier for future contributors to understand and maintain. + + 4. **Expect a negotiation.** Using questions as a form of feedback implies that there will be answers to those questions, or perhaps other questions in response. Whether your feedback is in question or statement format, you should expect to generate some sort of dialogue throughout the process. An alternative is to see your feedback as incontrovertible, your word as law. Although this is definitely one approach you can take, it's rarely a good one. When providing feedback on a contribution, it's best to collaborate rather than dictate. As these dialogues arise, embracing them as opportunities for conversation and learning on both sides is important. Be willing to discuss their approach and your feedback, and to take the time to understand their perspective. + +The bottom line is: Don't be a jerk. If you're not sure whether the feedback you're planning to leave makes you sound like a jerk, pause to have someone else review it before you click Send. Have empathy for the person at the receiving end of that feedback. While the maxim is thousands of years old, it still rings true today that you should try to do unto others as you would have them do unto you.
Put yourself in their shoes and aim to be helpful and supportive rather than simply being right. + +_Adapted from[Forge Your Future with Open Source][3] by VM (Vicky) Brasseur, Copyright © 2018 The Pragmatic Programmers LLC. Reproduced with the permission of the publisher._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/best-practices-giving-open-source-code-feedback + +作者:[VM(Vicky) Brasseur][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/vmbrasseur +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/18/10/6-tips-receiving-feedback +[2]: https://www.msudenver.edu/media/content/sri-taskforce/documents/Baumeister-2001.pdf +[3]: http://www.pragprog.com/titles/vbopens From 301612dad4bac42e61a8938f0ba10c3a89d21831 Mon Sep 17 00:00:00 2001 From: belitex Date: Thu, 11 Oct 2018 13:09:23 +0800 Subject: [PATCH 341/736] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E7=94=B3=E9=A2=86?= =?UTF-8?q?=EF=BC=9AHow=20Writing=20Can=20Expand=20Your=20Skills=20and=20G?= =?UTF-8?q?row=20Your=20Career?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...19 How Writing Can Expand Your Skills and Grow Your Career.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md b/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md index a862920028..324d3c8700 100644 --- a/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md +++ b/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md @@ -1,3 +1,4 @@ +translating by belitex How Writing Can Expand Your Skills and Grow Your Career ====== From dd68b591936dbf9a6a7eca3a19672f84152a3779 Mon 
Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 11 Oct 2018 14:49:06 +0800 Subject: [PATCH 342/736] PRF:20180921 Clinews - Read News And Latest Headlines From Commandline.md @geekpi --- ...s And Latest Headlines From Commandline.md | 28 +++++++------------ 1 file changed, 10 insertions(+), 18 deletions(-) diff --git a/translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md index 892d6ca1c4..3dc74ff355 100644 --- a/translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md +++ b/translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md @@ -1,10 +1,9 @@ -Clinews - 从命令行阅读新闻和最新头条 +Clinews:从命令行阅读新闻和最新头条 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-720x340.jpeg) -不久前,我们写了一个名为 [**InstantNews**][1] 的命令行新闻客户端,它可以帮助你立即在命令行阅读新闻和最新头条新闻。今天,我偶然发现了一个名为 **Clinews** 的类似,它的其功能与此相同 - 在终端阅读来自热门网站的新闻和最新头条,还有博客。你无需安装 GUI 应用或移动应用。你可以直接从终端阅读世界上正在发生的事情。它是使用 **NodeJS** 编写的免费开源程序。 - +不久前,我们写了一个名为 [InstantNews][1] 的命令行新闻客户端,它可以帮助你立即在命令行阅读新闻和最新头条新闻。今天,我偶然发现了一个名为 **Clinews** 的类似,它的其功能与此相同 —— 在终端阅读来自热门网站的新闻和最新头条,还有博客。你无需安装 GUI 应用或移动应用。你可以直接从终端阅读世界上正在发生的事情。它是使用 **NodeJS** 编写的自由开源程序。 ### 安装 Clinews @@ -30,22 +29,20 @@ $ npm -i yarn ### 配置 News API -Clinews 从 [**News API**][2] 中检索所有新闻标题。News API 是一个简单易用的API,它返回当前在一系列新闻源和博客上发布的头条的 JSON 元数据。它目前提供来自 70 个热门源的实时头条,包括 Ars Technica、BBC、Blooberg、CNN、每日邮报、Engadget、ESPN、金融时报、谷歌新闻、hacker News,IGN、Mashable、国家地理、Reddit r/all、路透社、 Speigel Online、Techcrunch、The Guardian、The Hindu、赫芬顿邮报、纽约时报、The Next Web、华尔街日报,今日美国和[**等等**][3]。 +Clinews 从 [News API][2] 中检索所有新闻标题。News API 是一个简单易用的 API,它返回当前在一系列新闻源和博客上发布的头条的 JSON 元数据。它目前提供来自 70 个热门源的实时头条,包括 Ars Technica、BBC、Blooberg、CNN、每日邮报、Engadget、ESPN、金融时报、谷歌新闻、hacker News,IGN、Mashable、国家地理、Reddit r/all、路透社、 Speigel Online、Techcrunch、The Guardian、The Hindu、赫芬顿邮报、纽约时报、The Next Web、华尔街日报,今日美国和[等等][3]。 -首先,你需要 News API 的 API 密钥。进入 
[**https://newsapi.org/register**][4] 并注册一个免费帐户来获取 API 密钥。 +首先,你需要 News API 的 API 密钥。进入 [https://newsapi.org/register][4] 并注册一个免费帐户来获取 API 密钥。 -从 News API 获得 API 密钥后,编辑 **.bashrc**: +从 News API 获得 API 密钥后,编辑 `.bashrc`: ``` $ vi ~/.bashrc - ``` 在最后添加 newsapi API 密钥,如下所示: ``` export IN_API_KEY="Paste-API-key-here" - ``` 请注意,你需要将密钥粘贴在双引号内。保存并关闭文件。 @@ -54,7 +51,6 @@ export IN_API_KEY="Paste-API-key-here" ``` $ source ~/.bashrc - ``` 完成。现在继续并从新闻源获取最新的头条新闻。 @@ -65,10 +61,9 @@ $ source ~/.bashrc ``` $ news fetch the-hindu - ``` -这里,**“the-hindu”** 是新闻源的源id(获取 id)。 +这里,`the-hindu` 是新闻源的源id(获取 id)。 上述命令将从 The Hindu 新闻站获取最新的 10 个头条,并将其显示在终端中。此外,它还显示新闻的简要描述、发布的日期和时间以及到源的实际链接。 @@ -82,7 +77,6 @@ $ news fetch the-hindu ``` $ news sources - ``` **示例输出:** @@ -91,22 +85,20 @@ $ news sources 正如你在上面的截图中看到的,Clinews 列出了所有新闻源,包括新闻源的名称、获取 ID、网站描述、网站 URL 以及它所在的国家/地区。在撰写本指南时,Clinews 目前支持 70 多个新闻源。 -Clinews 还可以搜索符合搜索条件/术语的所有源的新闻报道。例如,要列出包含单词 **“Tamilnadu”** 的所有新闻报道,请使用以下命令: +Clinews 还可以搜索符合搜索条件/术语的所有源的新闻报道。例如,要列出包含单词 “Tamilnadu” 的所有新闻报道,请使用以下命令: ``` $ news search "Tamilnadu" ``` -此命令将会筛选所有新闻源中含有 **Tamilnadu** 的报道。 +此命令将会筛选所有新闻源中含有 “Tamilnadu” 的报道。 -Clinews有一些额外的标志可以帮助你 +Clinews 有一些其它选项可以帮助你 * 限制你想看的新闻报道的数量,   * 排序新闻报道(热门、最新),   * 智能显示新闻报道分类(例如商业、娱乐、游戏、大众、音乐、政治、科学和自然、体育、技术) - - 更多详细信息,请参阅帮助部分: ``` @@ -126,7 +118,7 @@ via: https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-comma 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 735681fda04c76ffcdc67c8645f5eb3a47fd6aee Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 11 Oct 2018 14:51:14 +0800 Subject: [PATCH 343/736] PUB:20180921 Clinews - Read News And Latest Headlines From Commandline.md @geekpi https://linux.cn/article-10100-1.html --- ...1 Clinews - Read News And Latest Headlines From Commandline.md | 0 
1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180921 Clinews - Read News And Latest Headlines From Commandline.md (100%) diff --git a/translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/published/20180921 Clinews - Read News And Latest Headlines From Commandline.md similarity index 100% rename from translated/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md rename to published/20180921 Clinews - Read News And Latest Headlines From Commandline.md From a663fa06ab0d49169e8888a9e51d9c41a1ba870a Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 11 Oct 2018 15:07:14 +0800 Subject: [PATCH 344/736] PRF:20140607 Five things that make Go fast.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @houbaron 恭喜您,完成了第一篇翻译贡献! --- .../20140607 Five things that make Go fast.md | 127 +++++++++--------- 1 file changed, 61 insertions(+), 66 deletions(-) diff --git a/translated/tech/20140607 Five things that make Go fast.md b/translated/tech/20140607 Five things that make Go fast.md index 6adee59e52..63c0e0d18a 100644 --- a/translated/tech/20140607 Five things that make Go fast.md +++ b/translated/tech/20140607 Five things that make Go fast.md @@ -1,11 +1,10 @@ 五种加速 Go 的特性 -============================================================ +======== - _Anthony Starks 使用他出色的 Deck 演示工具重构了我原来的基于 Google Slides 的幻灯片。你可以在他的博客上查看他重构后的幻灯片, [mindchunk.blogspot.com.au/2014/06/remixing-with-deck][5]._ +_Anthony Starks 使用他出色的 Deck 演示工具重构了我原来的基于 Google Slides 的幻灯片。你可以在他的博客上查看他重构后的幻灯片, +[mindchunk.blogspot.com.au/2014/06/remixing-with-deck][5]。_ -* * * - -我最近被邀请在 Gocon 发表演讲,这是一个每半年在日本东京举行的精彩 Go 的大会。[Gocon 2014][6] 是一个完全由社区驱动的为期一天的活动,由培训和一整个下午的围绕着 生产环境中的 Go 这个主题的演讲组成. 
+我最近被邀请在 Gocon 发表演讲,这是一个每半年在日本东京举行的 Go 的精彩大会。[Gocon 2014][6] 是一个完全由社区驱动的为期一天的活动,由培训和一整个下午的围绕着生产环境中的 Go 这个主题的演讲组成.(LCTT 译注:本文发表于 2014 年) 以下是我的讲义。原文的结构能让我缓慢而清晰的演讲,因此我已经编辑了它使其更可读。 @@ -19,14 +18,16 @@ 我很高兴今天能来到 Gocon。我想参加这个会议已经两年了,我很感谢主办方能提供给我向你们演讲的机会。 - [![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg)][9] +[![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg)][9] + 我想以一个问题开始我的演讲。 为什么选择 Go? 当大家讨论学习或在生产环境中使用 Go 的原因时,答案不一而足,但因为以下三个原因的最多。 - [![Gocon 2014 ](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-2.jpg)][10] +[![Gocon 2014 ](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-2.jpg)][10] + 这就是 TOP3 的原因。 第一,并发。 @@ -37,29 +38,29 @@ Go 的 并发原语Concurrency Primitives 对于来自 Nod 我们今天从经验丰富的 Gophers 那里听说过,他们非常欣赏部署 Go 应用的简单性。 - [![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg)][11] +[![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg)][11] 然后是性能。 我相信人们选择 Go 的一个重要原因是它 _快_。 - [![Gocon 2014 (4)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg)][12] +[![Gocon 2014 (4)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg)][12] 在今天的演讲中,我想讨论五个有助于提高 Go 性能的特性。 我还将与大家分享 Go 如何实现这些特性的细节。 - [![Gocon 2014 (5)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg)][13] +[![Gocon 2014 (5)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg)][13] 我要谈的第一个特性是 Go 对于值的高效处理和存储。 - [![Gocon 2014 (6)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg)][14] +[![Gocon 2014 (6)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg)][14] 这是 Go 中一个值的例子。编译时,`gocon` 正好消耗四个字节的内存。 让我们将 Go 与其他一些语言进行比较 - [![Gocon 2014 (7)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg)][15] +[![Gocon 2014 (7)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg)][15] 由于 Python 表示变量的方式的开销,使用 Python 存储相同的值会消耗六倍的内存。 @@ -67,19 +68,19 @@ Python 
使用额外的内存来跟踪类型信息,进行 引用计数引用计数引用计数内联Inlining。 - [![Gocon 2014 (17)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg)][25] +[![Gocon 2014 (17)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg)][25] Go 编译器通过将函数体视为调用者的一部分来内联函数。 @@ -143,13 +144,13 @@ Go 编译器通过将函数体视为调用者的一部分来内联函数。 复杂的函数通常不受调用它们的开销所支配,因此不会内联。 - [![Gocon 2014 (18)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg)][26] +[![Gocon 2014 (18)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg)][26] 这个例子显示函数 `Double` 调用 `util.Max`。 为了减少调用 `util.Max` 的开销,编译器可以将 `util.Max` 内联到 `Double` 中,就象这样 - [![Gocon 2014 (19)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg)][27] +[![Gocon 2014 (19)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg)][27] 内联后不再调用 `util.Max`,但是 `Double` 的行为没有改变。 @@ -159,7 +160,7 @@ Go 实现非常简单。编译包时,会标记任何适合内联的小函数 然后函数的源代码和编译后版本都会被存储。 - [![Gocon 2014 (20)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg)][28] +[![Gocon 2014 (20)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg)][28] 此幻灯片显示了 `util.a` 的内容。源代码已经过一些转换,以便编译器更容易快速处理。 @@ -169,13 +170,13 @@ Go 实现非常简单。编译包时,会标记任何适合内联的小函数 拥有该函数的源代码可以实现其他优化。 - [![Gocon 2014 (21)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg)][29] +[![Gocon 2014 (21)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg)][29] 在这个例子中,尽管函数 `Test` 总是返回 `false`,但 `Expensive` 在不执行它的情况下无法知道结果。 -当 `Test` 被内联时,我们得到这样的东西 +当 `Test` 被内联时,我们得到这样的东西。 - [![Gocon 2014 (22)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg)][30] +[![Gocon 2014 (22)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg)][30] 编译器现在知道 `Expensive` 的代码无法访问。 @@ -183,7 +184,7 @@ Go 实现非常简单。编译包时,会标记任何适合内联的小函数 Go 编译器可以跨文件甚至跨包自动内联函数。还包括从标准库调用的可内联函数的代码。 - [![Gocon 2014 (23)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg)][31] +[![Gocon 2014 
(23)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg)][31] 强制垃圾回收Mandatory Garbage Collection 使 Go 成为一种更简单,更安全的语言。 @@ -191,13 +192,13 @@ Go 编译器可以跨文件甚至跨包自动内联函数。还包括从标准 这意味着在堆上分配的内存是有代价的。每次 GC 运行时都会花费 CPU 时间,直到释放内存为止。 - [![Gocon 2014 (24)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg)][32] +[![Gocon 2014 (24)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg)][32] 然而,有另一个地方分配内存,那就是栈。 与 C 不同,它强制您选择是否将值通过 `malloc` 将其存储在堆上,还是通过在函数范围内声明将其储存在栈上;Go 实现了一个名为 逃逸分析Escape Analysis 的优化。 - [![Gocon 2014 (25)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg)][33] +[![Gocon 2014 (25)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg)][33] 逃逸分析决定了对一个值的任何引用是否会从被声明的函数中逃逸。 @@ -207,7 +208,7 @@ Go 编译器可以跨文件甚至跨包自动内联函数。还包括从标准 让我们看一些例子 - [![Gocon 2014 (26)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg)][34] +[![Gocon 2014 (26)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg)][34] `Sum` 返回 1 到 100 的整数的和。这是一种相当不寻常的做法,但它说明了逃逸分析的工作原理。 @@ -215,7 +216,7 @@ Go 编译器可以跨文件甚至跨包自动内联函数。还包括从标准 没有必要回收 `numbers`,它会在 `Sum` 返回时自动释放。 - [![Gocon 2014 (27)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg)][35] +[![Gocon 2014 (27)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg)][35] 第二个例子也有点尬。在 `CenterCursor` 中,我们创建一个新的 `Cursor` 对象并在 `c` 中存储指向它的指针。 @@ -225,7 +226,7 @@ Go 编译器可以跨文件甚至跨包自动内联函数。还包括从标准 即使 `c` 被 `new` 函数分配了空间,它也不会存储在堆上,因为没有引用 `c` 的变量逃逸 `CenterCursor` 函数。 - [![Gocon 2014 (28)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg)][36] +[![Gocon 2014 (28)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg)][36] 默认情况下,Go 的优化始终处于启用状态。可以使用 `-gcflags = -m` 开关查看编译器的逃逸分析和内联决策。 @@ -233,11 +234,11 @@ Go 编译器可以跨文件甚至跨包自动内联函数。还包括从标准 我将在本演讲的其余部分详细讨论栈。 - [![Gocon 2014 (30)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg)][37] +[![Gocon 2014 
(30)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg)][37] -Go 有 goroutines。 这是 Go 并发的基石。 +Go 有 goroutine。 这是 Go 并发的基石。 -我想退一步,探索 goroutines 的历史。 +我想退一步,探索 goroutine 的历史。 最初,计算机一次运行一个进程。在 60 年代,多进程或 分时Time Sharing 的想法变得流行起来。 @@ -245,7 +246,7 @@ Go 有 goroutines。 这是 Go 并发的基石。 这称为 _进程切换_。 - [![Gocon 2014 (29)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg)][38] +[![Gocon 2014 (29)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg)][38] 进程切换有三个主要开销。 @@ -255,44 +256,41 @@ Go 有 goroutines。 这是 Go 并发的基石。 最后是操作系统 上下文切换Context Switch 的成本,以及 调度函数Scheduler Function 选择占用 CPU 的下一个进程的开销。 - [![Gocon 2014 (31)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg)][39] +[![Gocon 2014 (31)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg)][39] 现代处理器中有数量惊人的寄存器。我很难在一张幻灯片上排开它们,这可以让你知道保护和恢复它们需要多少时间。 由于进程切换可以在进程执行的任何时刻发生,因此操作系统需要存储所有寄存器的内容,因为它不知道当前正在使用哪些寄存器。 - [![Gocon 2014 (32)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg)][40] +[![Gocon 2014 (32)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg)][40] 这导致了线程的出生,这些线程在概念上与进程相同,但共享相同的内存空间。 由于线程共享地址空间,因此它们比进程更轻,因此创建速度更快,切换速度更快。 - [![Gocon 2014 (33)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg)][41] +[![Gocon 2014 (33)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg)][41] -Goroutines 升华了线程的思想。 +Goroutine 升华了线程的思想。 -Goroutines 是 协作式调度Cooperative Scheduled +Goroutine 是 协作式调度Cooperative Scheduled 的,而不是依靠内核来调度。 当对 Go 运行时调度器Runtime Scheduler 进行显式调用时,goroutine 之间的切换仅发生在明确定义的点上。 编译器知道正在使用的寄存器并自动保存它们。 - [![Gocon 2014 (34)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg)][42] +[![Gocon 2014 (34)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg)][42] 虽然 goroutine 是协作式调度的,但运行时会为你处理。 -Goroutines 可能会给禅让给其他协程时刻是: +Goroutine 可能会给禅让给其他协程时刻是: * 阻塞式通道发送和接收。 - * Go 声明,虽然不能保证会立即调度新的 goroutine。 - * 文件和网络操作式的阻塞式系统调用。 - * 
在被垃圾回收循环停止后。 - [![Gocon 2014 (35)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg)][43] +[![Gocon 2014 (35)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg)][43] 这个例子说明了上一张幻灯片中描述的一些调度点。 @@ -304,7 +302,7 @@ Goroutines 可能会给禅让给其他协程时刻是: 最后,当 `Read` 操作完成并且数据可用时,线程切换回左侧。 - [![Gocon 2014 (36)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg)][44] +[![Gocon 2014 (36)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg)][44] 这张幻灯片显示了低级语言描述的 `runtime.Syscall` 函数,它是 `os` 包中所有函数的基础。 @@ -316,13 +314,13 @@ Goroutines 可能会给禅让给其他协程时刻是: 这导致每 Go 进程的操作系统线程相对较少,Go 运行时负责将可运行的 Goroutine 分配给空闲的操作系统线程。 - [![Gocon 2014 (37)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg)][45] +[![Gocon 2014 (37)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg)][45] 在上一节中,我讨论了 goroutine 如何减少管理许多(有时是数十万个并发执行线程)的开销。 Goroutine故事还有另一面,那就是栈管理,它引导我进入我的最后一个话题。 - [![Gocon 2014 (39)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg)][46] +[![Gocon 2014 (39)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg)][46] 这是一个进程的内存布局图。我们感兴趣的关键是堆和栈的位置。 @@ -330,13 +328,13 @@ Goroutine故事还有另一面,那就是栈管理,它引导我进入我的 栈位于虚拟地址空间的顶部,并向下增长。 - [![Gocon 2014 (40)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg)][47] +[![Gocon 2014 (40)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg)][47] 因为堆和栈相互覆盖的结果会是灾难性的,操作系统通常会安排在栈和堆之间放置一个不可写内存区域,以确保如果它们发生碰撞,程序将中止。 这称为保护页,有效地限制了进程的栈大小,通常大约为几兆字节。 - [![Gocon 2014 (41)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg)][48] +[![Gocon 2014 (41)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg)][48] 我们已经讨论过线程共享相同的地址空间,因此对于每个线程,它必须有自己的栈。 @@ -346,7 +344,7 @@ Goroutine故事还有另一面,那就是栈管理,它引导我进入我的 缺点是随着程序中线程数的增加,可用地址空间的数量会减少。 - [![Gocon 2014 (42)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg)][49] +[![Gocon 2014 
(42)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg)][49] 我们已经看到 Go 运行时将大量的 goroutine 调度到少量线程上,但那些 goroutines 的栈需求呢? @@ -354,13 +352,13 @@ Go 编译器不使用保护页,而是在每个函数调用时插入一个检 由于这种检查,goroutines 初始栈可以做得更小,这反过来允许 Go 程序员将 goroutines 视为廉价资源。 - [![Gocon 2014 (43)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg)][50] +[![Gocon 2014 (43)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg)][50] 这是一张显示了 Go 1.2 如何管理栈的幻灯片。 当 `G` 调用 `H` 时,没有足够的空间让 `H` 运行,所以运行时从堆中分配一个新的栈帧,然后在新的栈段上运行 `H`。当 `H` 返回时,栈区域返回到堆,然后返回到 `G`。 - [![Gocon 2014 (44)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg)][51] +[![Gocon 2014 (44)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg)][51] 这种管理栈的方法通常很好用,但对于某些类型的代码,通常是递归代码,它可能导致程序的内部循环跨越这些栈边界之一。 @@ -368,7 +366,7 @@ Go 编译器不使用保护页,而是在每个函数调用时插入一个检 每次都会导致栈拆分。 这被称为 热分裂Hot Split 问题。 - [![Gocon 2014 (45)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg)][52] +[![Gocon 2014 (45)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg)][52] 为了解决热分裂问题,Go 1.3 采用了一种新的栈管理方法。 @@ -380,7 +378,7 @@ Go 编译器不使用保护页,而是在每个函数调用时插入一个检 这解决了热分裂问题。 - [![Gocon 2014 (46)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg)][53] +[![Gocon 2014 (46)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg)][53] 值,内联,逃逸分析,Goroutines 和分段/复制栈。 @@ -398,7 +396,7 @@ Go 编译器不使用保护页,而是在每个函数调用时插入一个检 如果没有可增长的栈,逃逸分析可能会对栈施加太大的压力。 - [![Gocon 2014 (47)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg)][54] +[![Gocon 2014 (47)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg)][54] * 感谢 Gocon 主办方允许我今天发言 * twitter / web / email details @@ -407,11 +405,8 @@ Go 编译器不使用保护页,而是在每个函数调用时插入一个检 ### 相关文章: 1. [听我在 OSCON 上关于 Go 性能的演讲][1] - 2. [为什么 Goroutine 的栈是无限大的?][2] - 3. [Go 的运行时环境变量的旋风之旅][3] - 4. 
[没有事件循环的性能][4] -------------------------------------------------------------------------------- @@ -431,9 +426,9 @@ David 是来自澳大利亚悉尼的程序员和作者。 via: https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast -作者:[Dave Cheney ][a] +作者:[Dave Cheney][a] 译者:[houbaron](https://github.com/houbaron) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 27e9d023aff33cccf7e68599d50769ac5e169498 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 11 Oct 2018 15:08:46 +0800 Subject: [PATCH 345/736] PUB: 20140607 Five things that make Go fast.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @houbaron 本文首发地址: https://linux.cn/article-10101-1.html 您的 LCTT 主页: https://linux.cn/lctt/houbaron --- .../tech => published}/20140607 Five things that make Go fast.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20140607 Five things that make Go fast.md (100%) diff --git a/translated/tech/20140607 Five things that make Go fast.md b/published/20140607 Five things that make Go fast.md similarity index 100% rename from translated/tech/20140607 Five things that make Go fast.md rename to published/20140607 Five things that make Go fast.md From 9441264dcdf6168a930a0699467d128b6771e3b6 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 11 Oct 2018 15:18:30 +0800 Subject: [PATCH 346/736] PRF:20180928 A Free And Secure Online PDF Conversion Suite.md @zhousiyu325 --- ... 
And Secure Online PDF Conversion Suite.md | 66 +++++++------------ 1 file changed, 24 insertions(+), 42 deletions(-) diff --git a/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md b/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md index 46cc5067f2..c9e219b7b7 100644 --- a/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md +++ b/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md @@ -1,40 +1,40 @@ -一款免费且安全的在线PDF转换软件 +一款免费且安全的在线 PDF 转换软件 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-720x340.jpg) -我们总在寻找一个更好用且更高效的解决方案,来我们的生活理加方便。 比方说,在处理PDF文档时,你会迫切地想拥有一款工具,它能够在任何情形下都显得快速可靠。在这,我们想向你推荐**EasyPDF**——一款可以胜任所有场合的在线PDF软件。通过大量的测试,我们可以保证:这款工具能够让你的PDF文档管理更加容易。 +我们总在寻找一个更好用且更高效的解决方案,来我们的生活理加方便。 比方说,在处理 PDF 文档时,你肯定会想拥有一款工具,它能够在任何情形下都显得快速可靠。在这,我们想向你推荐 **EasyPDF** —— 一款可以胜任所有场合的在线 PDF 软件。通过大量的测试,我们可以保证:这款工具能够让你的 PDF 文档管理更加容易。 -不过,关于EasyPDF有一些十分重要的事情,你必须知道。 +不过,关于 EasyPDF 有一些十分重要的事情,你必须知道。 -* EasyPDF是免费的、匿名的在线PDF转换软件。 -* 能够将PDF文档转换成Word、Excel、PowerPoint、AutoCAD、JPG, GIF和Text等格式格式的文档。 -* 能够从ord、Excel、PowerPoint等其他格式的文件创建PDF文件。 -* 能够进行PDF文档的合并、分割和压缩。 -* 能够识别扫描的PDF和图片中的内容。 -* 可以从你的设备或者云存储(Google Drive 和 DropBox)中上传文档。 -* 可以在Windows、Linux、Mac和智能手机上通过浏览器来操作。 +* EasyPDF 是免费的、匿名的在线 PDF 转换软件。 +* 能够将 PDF 文档转换成 Word、Excel、PowerPoint、AutoCAD、JPG、GIF 和文本等格式格式的文档。 +* 能够从 Word、Excel、PowerPoint 等其他格式的文件创建 PDF 文件。 +* 能够进行 PDF 文档的合并、分割和压缩。 +* 能够识别扫描的 PDF 和图片中的内容。 +* 可以从你的设备或者云存储(Google Drive 和 DropBox)中上传文档。 +* 可以在 Windows、Linux、Mac 和智能手机上通过浏览器来操作。 * 支持多种语言。 ### EasyPDF的用户界面 ![](http://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-interface.png) -EasyPDF最吸引你眼球的就是平滑的用户界面,营造一种整洁的环境,这会让使用者感觉更加舒服。由于网站完全没有一点广告,EasyPDF的整体使用体验相比以前会好很多。 +EasyPDF 最吸引你眼球的就是平滑的用户界面,营造一种整洁的环境,这会让使用者感觉更加舒服。由于网站完全没有一点广告,EasyPDF 的整体使用体验相比以前会好很多。 每种不同类型的转换都有它们专门的菜单,只需要简单地向其中添加文件,你并不需要知道太多知识来进行操作。 -许多类似网站没有做好相关的优化,使得在手机上的使用体验并不太友好。然而,EasyPDF突破了这一个瓶颈。在智能手机上,EasyPDF几乎可以秒开,并且可以顺畅的操作。你也通过Chrome app的**three dots 
menu**把EasyPDF添加到手机的主屏幕上。 +许多类似网站没有做好相关的优化,使得在手机上的使用体验并不太友好。然而,EasyPDF 突破了这一个瓶颈。在智能手机上,EasyPDF 几乎可以秒开,并且可以顺畅的操作。你也通过 Chrome 的“三点菜单”把 EasyPDF 添加到手机的主屏幕上。 ![](http://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-fs8.png) ### 特性 -除了好看的界面,EasyPDF还非常易于使用。为了使用它,你 **不需要注册一个账号** 或者**留下一个邮箱**,它是完全匿名的。另外,EasyPDF也不会对要转换的文件进行数量或者大小的限制,完全不需要安装!酷极了,不是吗? +除了好看的界面,EasyPDF 还非常易于使用。为了使用它,你 **不需要注册一个账号** 或者**留下一个邮箱**,它是完全匿名的。另外, EasyPDF 也不会对要转换的文件进行数量或者大小的限制,完全不需要安装!酷极了,不是吗? -首先,你需要选择一种想要进行的格式转换,比如,将PDF转换成Word。然后,选择你想要转换的PDF文件。你可以通过两种方式来上传文件:直接拖拉或者从设备上的文件夹进行选择。还可以选择从[**Google Drive**][1] 或 [**Dropbox**][2]来上传文件。 +首先,你需要选择一种想要进行的格式转换,比如,将 PDF 转换成 Word。然后,选择你想要转换的 PDF 文件。你可以通过两种方式来上传文件:直接拖拉或者从设备上的文件夹进行选择。还可以选择从[Google Drive][1] 或 [Dropbox][2]来上传文件。 -选择要进行格式转换的文件后,点击Convert按钮开始转换过程。转换过程会在一分钟内完成,你并不需要等待太长时间。如果你还有对其他文件进行格式转换,在接着转换前,不要忘了将前面已经转换完成的文件下载保存。不然的话,你将会丢失前面的文件。 +选择要进行格式转换的文件后,点击 Convert 按钮开始转换过程。转换过程会在一分钟内完成,你并不需要等待太长时间。如果你还有对其他文件进行格式转换,在接着转换前,不要忘了将前面已经转换完成的文件下载保存。不然的话,你将会丢失前面的文件。 ![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF1.png) @@ -42,60 +42,42 @@ EasyPDF最吸引你眼球的就是平滑的用户界面,营造一种整洁的 目前支持的几种格式转换类型如下: -* **PDF to Word** – 将 PDF 文档 转换成 Word 文档 - + * **PDF to Word** – 将 PDF 文档 转换成 Word 文档 * **PDF 转换成 PowerPoint** – 将 PDF 文档 转换成 PowerPoint 演示讲稿 - * **PDF 转换成 Excel** – 将 PDF 文档 转换成 Excel 文档 - - * **PDF 创建** – 从一些其他类型的文件(如, text, doc, odt)来创建PDF文档 - + * **PDF 创建** – 从一些其他类型的文件(如,文本、doc、odt)来创建PDF文档 * **Word 转换成 PDF** – 将 Word 文档 转换成 PDF 文档 - * **JPG 转换成 PDF** – 将 JPG images 转换成 PDF 文档 - - * **PDF 转换成 Au转换成CAD** – 将 PDF 文档 转换成 .dwg 格式 (DWG 是 CAD 文件的原生的格式) - + * **PDF 转换成 AutoCAD** – 将 PDF 文档 转换成 .dwg 格式(DWG 是 CAD 文件的原生的格式) * **PDF 转换成 Text** – 将 PDF 文档 转换成 Text 文档 - * **PDF 分割** – 把 PDF 文件分割成多个部分 - - * **PDF 合并** – 把多个PDF文件合并成一个文件 - + * **PDF 合并** – 把多个 PDF 文件合并成一个文件 * **PDF 压缩** – 将 PDF 文档进行压缩 - * **PDF 转换成 JPG** – 将 PDF 文档 转换成 JPG 图片 - * **PDF 转换成 PNG** – 将 PDF 文档 转换成 PNG 图片 - * **PDF 转换成 GIF** – 将 PDF 文档 转换成 GIF 文件 + * **在线文字内容识别** – 
将扫描的纸质文档转换成能够进行编辑的文件(如,Word、Excel、文本) - * **在线文字内容识别** – 将扫描的纸质文档转换成能够进行编辑的文件(如,Word,Excel,Text) - - 想试一试吗?好极了!点击下面的链接,然后开始格式转换吧! +想试一试吗?好极了!点击下面的链接,然后开始格式转换吧! [![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-online-pdf.png)][https://easypdf.com/] ### 总结 -EasyPDF 名符其实,能够让PDF 管理更加容易。就我测试过的 EasyPDF 服务而言,它提供了**完全免费**的简单易用的转换功能。它十分快速、安全和可靠。你会对它的服务质量感到非常满意,因为它不用支付任何费用,也不用留下像邮箱这样的个人信息。值得一试,也许你会找到你自己更喜欢的 PDF 工具。 +EasyPDF 名符其实,能够让 PDF 管理更加容易。就我测试过的 EasyPDF 服务而言,它提供了**完全免费**的简单易用的转换功能。它十分快速、安全和可靠。你会对它的服务质量感到非常满意,因为它不用支付任何费用,也不用留下像邮箱这样的个人信息。值得一试,也许你会找到你自己更喜欢的 PDF 工具。 好吧,我就说这些。更多的好东西还在后后面,请继续关注! 加油! - - - - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/ 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/zhousiyu325) -校对:[校对者ID](https://github.com/校对者ID) +译者:[zhousiyu325](https://github.com/zhousiyu325) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8cb95dca518904808806ca63cfbc0b1e053a3c05 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 11 Oct 2018 15:18:55 +0800 Subject: [PATCH 347/736] PUB:20180928 A Free And Secure Online PDF Conversion Suite.md @zhousiyu325 https://linux.cn/article-10102-1.html --- .../20180928 A Free And Secure Online PDF Conversion Suite.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180928 A Free And Secure Online PDF Conversion Suite.md (100%) diff --git a/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md b/published/20180928 A Free And Secure Online PDF Conversion Suite.md similarity index 100% rename from translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md rename to published/20180928 A Free And Secure Online PDF Conversion Suite.md From fe84ddf525274a1788a7e2576174d07962c02145 Mon Sep 17 00:00:00 
2001 From: sd886393 Date: Thu, 11 Oct 2018 17:12:59 +0800 Subject: [PATCH 348/736] =?UTF-8?q?=E3=80=90=E5=AE=8C=E6=88=90=E7=BF=BB?= =?UTF-8?q?=E8=AF=91=E3=80=9120180928=20What=20containers=20can=20teach=20?= =?UTF-8?q?us=20about=20DevOps.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...at containers can teach us about DevOps.md | 101 ----------------- ...at containers can teach us about DevOps.md | 105 ++++++++++++++++++ 2 files changed, 105 insertions(+), 101 deletions(-) delete mode 100644 sources/tech/20180928 What containers can teach us about DevOps.md create mode 100644 translated/tech/20180928 What containers can teach us about DevOps.md diff --git a/sources/tech/20180928 What containers can teach us about DevOps.md b/sources/tech/20180928 What containers can teach us about DevOps.md deleted file mode 100644 index a0f3f5ef64..0000000000 --- a/sources/tech/20180928 What containers can teach us about DevOps.md +++ /dev/null @@ -1,101 +0,0 @@ -容器技术对指导我们 DevOps 的一些启发 -====== - -容器技术的使用支撑了目前 DevOps 三大主要实践:流水线,及时反馈,持续实验与学习以改进。 - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) - -容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 的设计理念愈发先进,容器生态系统在设计上与组件选择上也有相应发展。同时,由于容器技术在生产环境中的使用,反过来也促进了 DevOps 三大主要实践:[DevOps 三个实践][1]. - - -### 流水线的原则 - -**容器流** - -每个容器都可以看成一个独立的封闭仓库,当你置身其中,不需要管外部的系统环境、集群环境、以及其他基础设施。而在容器内部, -A container can be seen as a silo, and from inside, it is easy to forget the rest of the system: the host node, the cluster, the underlying infrastructure. Inside the container, it might appear that everything is functioning in an acceptable manner. From the outside perspective, though, the application inside the container is a part of a larger ecosystem of applications that make up a service: the web API, the web app user interface, the database, the workers, and caching services and garbage collectors. 
Teams put constraints on the container to limit performance impact on infrastructure, and much has been done to provide metrics for measuring container performance because overloaded or slow container workloads have downstream impact on other services or customers. - -**Real-world flow** - -This lesson can be applied to teams functioning in a silo as well. Every process (be it code release, infrastructure creation or even, say, manufacturing of [Spacely’s Sprockets][2]), follows a linear path from conception to realization. In technology, this progress flows from development to testing to operations and release. If a team working alone becomes a bottleneck or introduces a problem, the impact is felt all along the entire pipeline. A defect passed down the line destroys productivity downstream. While the broken process within the scope of the team itself may seem perfectly correct, it has a negative impact on the environment as a whole. - -**DevOps and flow** - -The first way of DevOps, principles of flow, is about approaching the process as a whole, striving to comprehend how the system works together and understanding the impact of issues on the entire process. To increase the efficiency of the process, pain points and waste are identified and removed. This is an ongoing process; teams must continually strive to increase visibility into the process and find and fix trouble spots and waste. - -> “The outcomes of putting the First Way into practice include never passing a known defect to downstream work centers, never allowing local optimization to create global degradation, always seeking to increase flow, and always seeking to achieve a profound understanding of the system (as per Deming).” - -–Gene Kim, [The Three Ways: The Principles Underpinning DevOps][3], IT Revolution, 25 Apr. 
2017 - -### Principles of feedback - -**Container feedback** - -In addition to limiting containers to prevent impact elsewhere, many products have been created to monitor and trend container metrics in an effort to understand what they are doing and notify when they are misbehaving. [Prometheus][4], for example, is [all the rage][5] for collecting metrics from containers and clusters. Containers are excellent at separating applications and providing a way to ship an environment together with the code, sometimes at the cost of opacity, so much is done to try to provide rapid feedback so issues can be addressed promptly within the silo. - -**Real-world feedback** - -The same is necessary for the flow of the system. From inception to realization, an efficient process quickly provides relevant feedback to identify when there is an issue. The key words here are “quick” and “relevant.” Burying teams in thousands of irrelevant notifications make it difficult or even impossible to notice important events that need immediate action, and receiving even relevant information too late may allow small, easily solved issues to move downstream and become bigger problems. Imagine [if Lucy and Ethel][6] had provided immediate feedback that the conveyor belt was too fast—there would have been no problem with the chocolate production (though that would not have been nearly as funny). - -**DevOps and feedback** - -The Second Way of DevOps, principles of feedback, is all about getting relevant information quickly. With immediate, useful feedback, problems can be identified as they happen and addressed before impact is felt elsewhere in the development process. DevOps teams strive to “optimize for downstream” and immediately move to fix problems that might impact other teams that come after them. As with flow, feedback is a continual process to identify ways to quickly get important data and act on problems as they occur. 
- -> “Creating fast feedback is critical to achieving quality, reliability, and safety in the technology value stream.” - -–Gene Kim, et al., The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations, IT Revolution Press, 2016 - -### Principles of continual experimentation and learning - -**Container continual experimentation and learning** - -It is a bit more challenging applying operational learning to the Third Way of DevOps:continual experimentation and learning. Trying to salvage what we can grasp of the very edges of the metaphor, containers make development easy, allowing developers and operations teams to test new code or configurations locally and safely outside of production and incorporate discovered benefits into production in a way that was difficult in the past. Changes can be radical and still version-controlled, documented, and shared quickly and easily. - -**Real-world continual experimentation and learning** - -For example, consider this anecdote from my own experience: Years ago, as a young, inexperienced sysadmin (just three weeks into the job), I was asked to make changes to an Apache virtual host running the website of the central IT department for a university. Without an easy-to-use test environment, I made a configuration change to the production site that I thought would accomplish the task and pushed it out. Within a few minutes, I overheard coworkers in the next cube: - -“Wait, is the website down?” - -“Hrm, yeah, it looks like it. What the heck?” - -There was much eye-rolling involved. - -Mortified (the shame is real, folks), I sunk down as far as I could into my seat and furiously tried to back out the changes I’d introduced. Later that same afternoon, the director of the department—the boss of my boss’s boss—appeared in my cube to talk about what had happened. “Don’t worry,” she told me. “We’re not mad at you. 
It was a mistake and now you have learned.” - -In the world of containers, this could have been easily changed and tested on my own laptop and the broken configuration identified by more skilled team members long before it ever made it into production. - -**DevOps continual experimentation and learning** - -A real culture of experimentation promotes the individual’s ability to find where a change in the process may be beneficial, and to test that assumption without the fear of retaliation if they fail. For DevOps teams, failure becomes an educational tool that adds to the knowledge of the individual and organization, rather than something to be feared or punished. Individuals in the DevOps team dedicate themselves to continuous learning, which in turn benefits the team and wider organization as that knowledge is shared. - -As the metaphor completely falls apart, focus needs to be given to a specific point: The other two principles may appear at first glance to focus entirely on process, but continual learning is a human task—important for the future of the project, the person, the team, and the organization. It has an impact on the process, but it also has an impact on the individual and other people. - -> “Experimentation and risk-taking are what enable us to relentlessly improve our system of work, which often requires us to do things very differently than how we’ve done it for decades.” - -–Gene Kim, et al., [The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win][7], IT Revolution Press, 2013 - -### Containers can teach us DevOps - -Learning to work effectively with containers can help teach DevOps and the Three Ways: principles of flow, principles of feedback, and principles of continuous experimentation and learning. 
Looking holistically at the application and infrastructure rather than putting on blinders to everything outside the container teaches us to take all parts of the system and understand their upstream and downstream impacts, break out of silos, and work as a team to increase global performance and deep understanding of the entire system. Working to provide timely and accurate feedback teaches us to create effective feedback patterns within our organizations to identify problems before their impact grows. Finally, providing a safe environment to try new ideas and learn from them teaches us to create a culture where failure represents a positive addition to our knowledge and the ability to take big chances with educated guesses can result in new, elegant solutions to complex problems. - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/containers-can-teach-us-devops - -作者:[Chris Hermansen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/sd886393) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/ -[2]: https://en.wikipedia.org/wiki/The_Jetsons -[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops -[4]: https://prometheus.io/ -[5]: https://opensource.com/article/18/9/prometheus-operational-advantage -[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI -[7]: https://itrevolution.com/book/the-phoenix-project/ diff --git a/translated/tech/20180928 What containers can teach us about DevOps.md b/translated/tech/20180928 What containers can teach us about DevOps.md new file mode 100644 index 0000000000..d514d8ba0b --- /dev/null +++ b/translated/tech/20180928 What containers can teach us about DevOps.md @@ -0,0 +1,105 @@ +容器技术对指导我们 DevOps 的一些启发 
+====== + +容器技术的使用支撑了目前 DevOps 三大主要实践:流水线,及时反馈,持续实验与学习以改进。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) + +容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 的设计理念愈发先进,容器生态系统在设计上与组件选择上也有相应发展。同时,由于容器技术在生产环境中的使用,反过来也促进了 DevOps 三大主要实践:[支撑DevOps的三个实践][1]. + + +### 工作流 + +**容器中的工作流** + +每个容器都可以看成一个独立的封闭仓库,当你置身其中,不需要管外部的系统环境、集群环境、以及其他基础设施,不管你在里面如何折腾,只要对外提供正常的功能就好。一般来说,容器内运行的应用,一般作为整个应用系统架构的一部分:比如 web API,数据库,任务执行,缓存系统,垃圾回收器等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。 + +**现实中的工作流** + +那些跟“容器”一样独立工作的团队,也可以借鉴这种限制容器占用资源的策略。因为无论是在现实生活中的工作流(代码发布、构建基础设施,甚至制造[Spacely’s Sprockets][2]等),还是技术中的工作流(开发、测试、试运行、发布)都使用了这样的线性工作流,一旦某个独立的环节或者工作团队出现了问题,那么整个下游都会受到影响,虽然使用我们这种线性的工作流有效降低了工作耦合性。 + +**DevOps 中的工作流** + +DevOps 中的第一条原则,就是掌控整个执行链路的情况,努力理解系统如何协同工作,并理解其中出现的问题如何对整个过程产生影响。为了提高流程的效率,团队需要持续不断的找到系统中可能存在的性能浪费以及忽视的点,并最终修复它们。 + + +> “践行这样的工作流后,可以避免传递一个已知的缺陷到工作流的下游,避免产生一个可能会导致全局性能退化的局部优化,持续优化工作流的性能,持续加深对于系统的理解” + +–Gene Kim, [支撑DevOps的三个实践][3], IT 革命, 2017.4.25 + +### 反馈 + +**容器中的反馈** + +除了限制容器的资源,很多产品还提供了监控和通知容器性能指标的功能,从而了解当容器工作不正常时,容器内部处于什么样的工作状态。比如 目前[流行的][5][Prometheus][4],可以用来从容器和容器集群中收集相应的性能指标数据。容器本身特别适用于分隔应用系统,以及打包代码和其运行环境,但也同时带来不透明的特性,这时从中快速的收集信息,从而解决发生在其内部出现的问题,就显得尤为重要了。 + +**现实中的反馈** + +在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速的定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队处理大量不相关的事件时,那些真正需要快速反馈的重要信息,很容易就被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快的意识到:传送带太快了,那么制作出的巧克力可能就没什么问题了(尽管这样就不太有趣了)。 + +**DevOps and feedback** + +DevOps 中的第二条原则,就是快速收集所有的相关有用信息,这样在出现的问题影响到其他开发进程之前,就可以被识别出。DevOps 团队应该努力去“优化下游“,以及快速解决那些可能会影响到之后团队的问题。同工作流一样,反馈也是一个持续的过程,目标是快速的获得重要的信息以及当问题出现后能够及时的响应。 + +> "快速的反馈对于提高技术的质量、可用性、安全性至关重要。" + +–Gene Kim, et al., DevOps 手册:如何在技​​术组织中创造世界级的敏捷性,可靠性和安全性, IT 革命, 2016 + +### 持续实验与学习 + +**容器中的持续实验与学习** + +如何让”持续的实验与学习“更具操作性是一个不小的挑战。容器让我们的开发工程师和运营团队,在不需要掌握太多边缘或难以理解的东西情况下,依然可以安全地进行本地和生产环境的测试,这在之前是难以做到的。即便是一些激进的实验,容器技术仍然让我们轻松地进行版本控制、记录、分享。 + +**现实中的持续实验与学习** + 
+举个我自己的例子:多年前,作为一个年轻、初出茅庐的系统管理员(仅仅工作三周),我被要求对一个运行某个大学核心IT部门网站的Apache虚拟主机进行更改。由于没有易于使用的测试环境,我直接在生产的站点上进行了配置修改,当时觉得配置没问题就发布了,几分钟后,我隔壁无意中听到了同事说: + +”等会,网站挂了?“ + +“没错,怎么回事?” + +很多人蒙圈了…… + +在被嘲讽之后(真实的嘲讽),我一头扎在工作台上,赶紧撤销我之前的更改。当天下午晚些时候,部门主管 - 我老板的老板的老板来到我的工位上,问发生了什么事。 +“别担心,”她告诉我。“我们不会生你的气,这是一个错误,现在你已经学会了。“ + +而在容器中,这种情形很容易的进行测试,并且也很容易在部署生产环境之前,被那些经验老道的团队成员发现。 + +**DevOps 中的持续实验与学习** + +做实验的初衷是我们每个人都希望通过一些改变从而能够提高一些东西,并勇敢地通过实验来验证我们的想法。对于 DevOps 团队来说,失败无论对团队还是个人来说都是经验,所要不要担心失败。团队中的每个成员不断学习、共享,也会不断提升其所在团队与组织的水平。 + +随着系统变得越来越琐碎,我们更需要将注意力发在特殊的点上:上面提到的两条原则主要关注的是流程的目前全貌,而持续的学习则是关注的则是整个项目、人员、团队、组织的未来。它不仅对流程产生了影响,还对流程中的每个人产生影响。 + +> "无风险的实验让我们能够不懈的改进我们的工作,但也要求我们使用之前没有用过的工作方式" + +–Gene Kim, et al., [凤凰计划:让你了解 IT、DevOps以及如何取得商业成功][7], IT 革命, 2013 + +### 容器技术给我们 DevOps 上的启迪 + +学习如何有效地使用容器可以学习DevOps的三条原则:工作流,反馈以及持续实验和学习。从整体上看应用程序和基础设施,而不是对容器外的东西置若罔闻,教会我们考虑到系统的所有部分,了解其上游和下游影响,打破孤岛,并作为一个团队工作,以提高全局性能和深度 +了解整个系统。通过努力提供及时准确的反馈,我们可以在组织内部创建有效的反馈模式,以便在问题发生影响之前发现问题。 +最后,提供一个安全的环境来尝试新的想法并从中学习,教会我们创造一种文化,在这种文化中,失败一方面促进了我们知识的增长,另一方面通过有根据的猜测,可以为复杂的问题带来新的、优雅的解决方案。 + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/containers-can-teach-us-devops + +作者:[Chris Hermansen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/littleji) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/ +[2]: https://en.wikipedia.org/wiki/The_Jetsons +[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops +[4]: https://prometheus.io/ +[5]: https://opensource.com/article/18/9/prometheus-operational-advantage +[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI +[7]: https://itrevolution.com/book/the-phoenix-project/ From ec2bade58ce7c0b321232f32575a5cdd9f3f7a83 Mon Sep 17 00:00:00 2001 From: 
Ryze-Borgia <42087725+Ryze-Borgia@users.noreply.github.com> Date: Thu, 11 Oct 2018 18:36:50 +0800 Subject: [PATCH 349/736] Delete 20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md --- ...s Why Linux is a Better Choice than Mac.md | 131 ------------------ 1 file changed, 131 deletions(-) delete mode 100644 sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md diff --git a/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md b/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md deleted file mode 100644 index a9ece78ef7..0000000000 --- a/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md +++ /dev/null @@ -1,131 +0,0 @@ -Linux vs Mac: Linux 比 Mac 好的七个原因 -====== -最近我们谈论了一些[为什么 Linux 比 Windows 好][1]的原因。毫无疑问, Linux 是个非常优秀的平台。但是它和其他的操作系统一样也会有缺点。对于某些专门的领域,像是游戏, Windows 当然更好。 而对于视频编辑等任务, Mac 系统可能更为方便。这一切都取决于你的爱好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。 - -如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac 。 - -### Linux 比 Mac 好的 7 个原因 - -![Linux vs Mac: 为什么 Linux 更好][2] - -Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令行、 bash 和其他一些命令行工具,相比于 Windows ,他们所支持的应用和游戏比较少。但缺点也仅仅如此。 - -平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。 - -那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。 - -#### 1\. 价格 - -![Linux vs Mac: 为什么 Linux 更好][3] - -假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。 - -那在这种情况下,你觉得花费几百块买个系统完成这项工作,或者花费更多直接买个 Macbook 划算吗?当然,最终的决定权还是在你。 - -买个装好 Mac 系统的电脑还是买个便宜的电脑,然后自己装上免费的 Linux 系统,这个要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 都非常好地用,而对于音视频方面,我更倾向于使用 Final Cut Pro (专业的视频编辑软件) 和 Logic Pro X (专业的音乐制作软件)(这两款软件都是苹果公司推出的)。 - -#### 2\. 硬件支持 - -![Linux vs Mac: 为什么 Linux 更好][4] - -Linux 支持多种平台. 无论你的电脑配置如何,你都可以在上面安装 Linux,无论性能好或者差,Linux 都可以运行。[即使你的电脑已经使用很久了, 你仍然可以通过选择安装合适的发行版让 Linux 在你的电脑上流畅的运行][5]. 
- -而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备连在一起的。 - -这是[在非苹果系统上安装 Mac OS 的教程][6]. 这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。 - -总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。 - -#### 3\. 安全性 - -![Linux vs Mac: 为什么 Linux 更好][7] - -很多人都说 ios 和 Mac 是非常安全的平台。的确,相比于 Windows ,它确实比较安全,可并不一定有 Linux 安全。 - -我不是在危言耸听。 Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增][8]。我认识一些不太懂技术的用户 使用着非常缓慢的 Mac 电脑并且为此苦苦挣扎。一项快速调查显示[浏览器恶意劫持软件][9]是罪魁祸首. - -从来没有绝对安全的操作系统,Linux 也不例外。 Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。 - -这可能也是一个你应该选择 Linux 而不是 Mac 的原因。 - -#### 4\. 可定制性与灵活性 - -![Linux vs Mac: 为什么 Linux 更好][10] - -如果你有不喜欢的东西,自己定制或者修改它都行。 - -举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境][11],你可以换成 [KDE Plasma][11]。 你也可以尝试一些 [Gnome 扩展][12]丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。 - -除此之外你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)以创造出适合你的系统。这个在 Mac OS 上可以做吗? - -另外你可以根据需要从一系列的 Linux 发行版进行选择。比如说,如果你想喜欢 Mac OS上的工作流, [Elementary OS][13] 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里是一个[轻量级 Linux 发行版列表][5]。相比较而言, Mac OS 缺乏这种灵活性。 - -#### 5\. 使用 Linux 有助于你的职业生涯 [针对 IT 行业和科学领域的学生] - -![Linux vs Mac: 为什么 Linux 更好][14] - -对于 IT 领域的学生和求职者而言,这是有争议的但是也是有一定的帮助的。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。 - -但是当你开始使用 Linux 并且开始探索如何使用的时候,你将会获得非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行实现文件系统管理以及应用程序安装。你可能不会知道这些都是一些 IT 公司的新职员需要培训的内容。 - -除此之外,Linux 在就业市场上还有很大的发展空间。 Linux 相关的技术有很多( Cloud 、 Kubernetes 、Sysadmin 等),您可以学习,获得证书并获得一份相关的高薪的工作。要学习这些,你必须使用 Linux 。 - -#### 6\. 可靠 - -![Linux vs Mac: 为什么 Linux 更好][15] - -想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。 - -但是它为什么可靠呢,相比于 Mac OS ,它的可靠体现在什么方面呢? - -答案很简单——给用户更多的控制权,同时提供更好的安全性。在 Mac OS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux ,你可以做任何你想做的事情——这可能会导致(对某些人来说)糟糕的用户体验——但它确实使其更可靠。 - -#### 7\. 开源 - -![Linux vs Mac: 为什么 Linux 更好][16] - -开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。下面讨论的大多数观点都是开源软件的直接优势。 - -简单解释一下,如果是开源软件,你可以自己查看或者修改它。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 Mac OS 的源代码。 - -形象点说,Mac 驱动的系统可以让你得到一辆车,但缺点是你不能打开引擎盖看里面是什么。那可能非常糟糕! 
- -如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章][17]。 - -### 总结 - -现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢? - -在下方评论让我们知道你的想法。 - -Note: 这里的图片是以企鹅俱乐部为原型的。 - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/linux-vs-mac/ - -作者:[Ankush Das][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[Ryze-Borgia](https://github.com/Ryze-Borgia) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[1]: https://itsfoss.com/linux-better-than-windows/ -[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png -[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg -[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg -[5]: https://itsfoss.com/lightweight-linux-beginners/ -[6]: https://hackintosh.com/ -[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg -[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html -[9]: https://www.imore.com/how-to-remove-browser-hijack -[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg -[11]: https://www.gnome.org/ -[12]: https://itsfoss.com/best-gnome-extensions/ -[13]: https://elementary.io/ -[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg -[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg -[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg -[17]: https://opensource.com/life/15/12/why-open-source From ffea8012ba20077e2860083eea74ec13c0c6c5fc Mon Sep 17 00:00:00 2001 From: Ryze-Borgia 
<42087725+Ryze-Borgia@users.noreply.github.com> Date: Thu, 11 Oct 2018 05:44:53 -0500 Subject: [PATCH 350/736] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...s Why Linux is a Better Choice than Mac.md | 131 ++++++++++++++++++ 1 file changed, 131 insertions(+) create mode 100644 translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md diff --git a/translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md b/translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md new file mode 100644 index 0000000000..a9ece78ef7 --- /dev/null +++ b/translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md @@ -0,0 +1,131 @@ +Linux vs Mac: Linux 比 Mac 好的七个原因 +====== +最近我们谈论了一些[为什么 Linux 比 Windows 好][1]的原因。毫无疑问, Linux 是个非常优秀的平台。但是它和其他的操作系统一样也会有缺点。对于某些专门的领域,像是游戏, Windows 当然更好。 而对于视频编辑等任务, Mac 系统可能更为方便。这一切都取决于你的爱好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。 + +如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac 。 + +### Linux 比 Mac 好的 7 个原因 + +![Linux vs Mac: 为什么 Linux 更好][2] + +Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令行、 bash 和其他一些命令行工具,相比于 Windows ,他们所支持的应用和游戏比较少。但缺点也仅仅如此。 + +平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。 + +那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。 + +#### 1\. 价格 + +![Linux vs Mac: 为什么 Linux 更好][3] + +假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。 + +那在这种情况下,你觉得花费几百块买个系统完成这项工作,或者花费更多直接买个 Macbook 划算吗?当然,最终的决定权还是在你。 + +买个装好 Mac 系统的电脑还是买个便宜的电脑,然后自己装上免费的 Linux 系统,这个要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 都非常好地用,而对于音视频方面,我更倾向于使用 Final Cut Pro (专业的视频编辑软件) 和 Logic Pro X (专业的音乐制作软件)(这两款软件都是苹果公司推出的)。 + +#### 2\. 硬件支持 + +![Linux vs Mac: 为什么 Linux 更好][4] + +Linux 支持多种平台. 无论你的电脑配置如何,你都可以在上面安装 Linux,无论性能好或者差,Linux 都可以运行。[即使你的电脑已经使用很久了, 你仍然可以通过选择安装合适的发行版让 Linux 在你的电脑上流畅的运行][5]. 
+ +而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备连在一起的。 + +这是[在非苹果系统上安装 Mac OS 的教程][6]. 这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。 + +总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。 + +#### 3\. 安全性 + +![Linux vs Mac: 为什么 Linux 更好][7] + +很多人都说 ios 和 Mac 是非常安全的平台。的确,相比于 Windows ,它确实比较安全,可并不一定有 Linux 安全。 + +我不是在危言耸听。 Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增][8]。我认识一些不太懂技术的用户 使用着非常缓慢的 Mac 电脑并且为此苦苦挣扎。一项快速调查显示[浏览器恶意劫持软件][9]是罪魁祸首. + +从来没有绝对安全的操作系统,Linux 也不例外。 Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。 + +这可能也是一个你应该选择 Linux 而不是 Mac 的原因。 + +#### 4\. 可定制性与灵活性 + +![Linux vs Mac: 为什么 Linux 更好][10] + +如果你有不喜欢的东西,自己定制或者修改它都行。 + +举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境][11],你可以换成 [KDE Plasma][11]。 你也可以尝试一些 [Gnome 扩展][12]丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。 + +除此之外你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)以创造出适合你的系统。这个在 Mac OS 上可以做吗? + +另外你可以根据需要从一系列的 Linux 发行版进行选择。比如说,如果你想喜欢 Mac OS上的工作流, [Elementary OS][13] 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里是一个[轻量级 Linux 发行版列表][5]。相比较而言, Mac OS 缺乏这种灵活性。 + +#### 5\. 使用 Linux 有助于你的职业生涯 [针对 IT 行业和科学领域的学生] + +![Linux vs Mac: 为什么 Linux 更好][14] + +对于 IT 领域的学生和求职者而言,这是有争议的但是也是有一定的帮助的。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。 + +但是当你开始使用 Linux 并且开始探索如何使用的时候,你将会获得非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行实现文件系统管理以及应用程序安装。你可能不会知道这些都是一些 IT 公司的新职员需要培训的内容。 + +除此之外,Linux 在就业市场上还有很大的发展空间。 Linux 相关的技术有很多( Cloud 、 Kubernetes 、Sysadmin 等),您可以学习,获得证书并获得一份相关的高薪的工作。要学习这些,你必须使用 Linux 。 + +#### 6\. 可靠 + +![Linux vs Mac: 为什么 Linux 更好][15] + +想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。 + +但是它为什么可靠呢,相比于 Mac OS ,它的可靠体现在什么方面呢? + +答案很简单——给用户更多的控制权,同时提供更好的安全性。在 Mac OS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux ,你可以做任何你想做的事情——这可能会导致(对某些人来说)糟糕的用户体验——但它确实使其更可靠。 + +#### 7\. 开源 + +![Linux vs Mac: 为什么 Linux 更好][16] + +开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。下面讨论的大多数观点都是开源软件的直接优势。 + +简单解释一下,如果是开源软件,你可以自己查看或者修改它。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 Mac OS 的源代码。 + +形象点说,Mac 驱动的系统可以让你得到一辆车,但缺点是你不能打开引擎盖看里面是什么。那可能非常糟糕! 
+ +如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章][17]。 + +### 总结 + +现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢? + +在下方评论让我们知道你的想法。 + +Note: 这里的图片是以企鹅俱乐部为原型的。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/linux-vs-mac/ + +作者:[Ankush Das][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Ryze-Borgia](https://github.com/Ryze-Borgia) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[1]: https://itsfoss.com/linux-better-than-windows/ +[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png +[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg +[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg +[5]: https://itsfoss.com/lightweight-linux-beginners/ +[6]: https://hackintosh.com/ +[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg +[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html +[9]: https://www.imore.com/how-to-remove-browser-hijack +[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg +[11]: https://www.gnome.org/ +[12]: https://itsfoss.com/best-gnome-extensions/ +[13]: https://elementary.io/ +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg +[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg +[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg +[17]: https://opensource.com/life/15/12/why-open-source From 02a4c97957a0a935dfb524c04fa19a9f6c5995b6 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 11 Oct 2018 20:55:07 +0800 
Subject: [PATCH 351/736] PRF:20180824 What Stable Kernel Should I Use.md @qhwdw --- ...0180824 What Stable Kernel Should I Use.md | 80 +++++++++---------- 1 file changed, 37 insertions(+), 43 deletions(-) diff --git a/translated/tech/20180824 What Stable Kernel Should I Use.md b/translated/tech/20180824 What Stable Kernel Should I Use.md index 7a8d330a77..a993ddcf20 100644 --- a/translated/tech/20180824 What Stable Kernel Should I Use.md +++ b/translated/tech/20180824 What Stable Kernel Should I Use.md @@ -1,6 +1,8 @@ 我应该使用哪些稳定版内核? ====== -很多人都问我这样的问题,在他们的产品/设备/笔记本/服务器等上面应该使用什么样的稳定版内核。一直以来,尤其是那些现在已经延长支持时间的内核,都是由我和其他人提供支持,因此,给出这个问题的答案并不是件容易的事情。因此这篇文章我将尝试去给出我在这个问题上的看法。当然,你可以任意选用任何一个你想去使用的内核版本,这里只是我的建议。 +> 本文作者 Greg Kroah-Hartman 是 Linux 稳定版内核的维护负责人。 + +很多人都问我这样的问题,在他们的产品/设备/笔记本/服务器等上面应该使用什么样的稳定版内核。一直以来,尤其是那些现在已经延长支持时间的内核,都是由我和其他人提供支持,因此,给出这个问题的答案并不是件容易的事情。在这篇文章我将尝试去给出我在这个问题上的看法。当然,你可以任意选用任何一个你想去使用的内核版本,这里只是我的建议。 和以前一样,在这里给出的这些看法只代表我个人的意见。 @@ -12,16 +14,12 @@ * 你最喜欢的 Linux 发行版支持的内核 * 最新的稳定版 - * 最新的 LTS 发行版 - * 仍然处于维护状态的老的 LTS 发行版 - - + * 最新的 LTS (长期支持)版本 + * 仍然处于维护状态的老的 LTS 版本 绝对不要去使用的内核: - * 不再维护的内核发行版 - - + * 不再维护的内核版本 给上面的列表给出具体的数字,今天是 2018 年 8 月 24 日,kernel.org 页面上可以看到是这样: @@ -30,11 +28,9 @@ 因此,基于上面的列表,那它应该是: * 4.18.5 是最新的稳定版 - * 4.14.67 是最新的 LTS 发行版 - * 4.9.124、4.4.152、以及 3.16.57 是仍然处于维护状态的老的 LTS 发行版 - * 4.17.19 和 3.18.119 是过去 60 天内 “生命周期终止” 的内核版本,它们仍然保留在 kernel.org 站点上,是为了仍然想去使用它们的那些人。 - - + * 4.14.67 是最新的 LTS 版本 + * 4.9.124、4.4.152、以及 3.16.57 是仍然处于维护状态的老的 LTS 版本 + * 4.17.19 和 3.18.119 是过去 60 天内有过发布的 “生命周期终止” 的内核版本,它们仍然保留在 kernel.org 站点上,是为了仍然想去使用它们的那些人。 非常容易,对吗? 
@@ -42,15 +38,15 @@ Ok,现在我给出这样选择的一些理由: ### Linux 发行版内核 -对于大多数 Linux 用户来说,最好的方案就是使用你喜欢的 Linux 发行版的内核。就我本人而言,我比较喜欢基于社区的、内核不断滚动升级的用最新内核的 Linux 发行版,并且它也是由开发者社区来支持的。这种类型的发行版有 Fedora、openSUSE、Arch、Gentoo、CoreOS、以及其它的。 +对于大多数 Linux 用户来说,最好的方案就是使用你喜欢的 Linux 发行版的内核。就我本人而言,我比较喜欢基于社区的、内核不断滚动升级的用最新内核的 Linux 发行版,并且它也是由开发者社区来支持的。这种类型的发行版有 Fedora、openSUSE、Arch、Gentoo、CoreOS,以及其它的。 所有这些发行版都使用了上游的最新的稳定版内核,并且确保定期打了需要的 bug 修复补丁。当它拥有了最新的修复之后([记住所有的修复都是安全修复][2]),这就是你可以使用的最安全、最好的内核之一。 -有些社区的 Linux 发行版需要很长的时间才发行一个新内核的发行版,但是最终发行的版本和所支持的内核都是非常好的。这些也都非常好用,Debian 和 Ubuntu 就是这样的例子。 +有些社区的 Linux 发行版需要很长的时间才发行一个新内核版本,但是最终发行的版本和所支持的内核都是非常好的。这些也都非常好用,Debian 和 Ubuntu 就是这样的例子。 -我没有在这里列出你所喜欢的发行版,并不是意味着它们的内核不够好。查看这些发行版的网站,确保它们的内核包是不断应用最新的安全补丁进行升级过的,那么它就应该是很好的。 +如果我没有在这里列出你所喜欢的发行版,并不是意味着它们的内核不够好。查看这些发行版的网站,确保它们的内核包是不断应用最新的安全补丁进行升级过的,那么它就应该是很好的。 -许多人好像喜欢旧的、“传统” 模式的发行版,以及使用 RHEL、SLES、CentOS 或者 “LTS” Ubuntu 发行版。这些发行版挑选一个特定的内核版本,然后使用好几年,而不是几十年。他们移植了最新的 bug 修复,有时也有一些内核的新特性,所有的只是追求堂吉诃德式的保持版本号不变而已,尽管他们已经在那个旧的内核版本上做了成千上万的变更。这其实是一个吃力不讨好的工作,开发者分配去做这些任务,看上去做的很不错,其实就是为了实现这些目标。如果你从来没有看到你的内核版本号发生过变化,而仍然在使用这些发行版。他们通常会为使用而付出一些成本,当发生错误时能够从这些公司得到一些支持,那就是值得的。 +许多人好像喜欢旧式、“传统” 模式的发行版,使用 RHEL、SLES、CentOS 或者 “LTS” Ubuntu 发行版。这些发行版挑选一个特定的内核版本,然后使用好几年,甚至几十年。他们反向移植了最新的 bug 修复,有时也有一些内核的新特性,所有的只是追求堂吉诃德式的保持版本号不变而已,尽管他们已经在那个旧的内核版本上做了成千上万的变更。这项工作是一项真正吃力不讨好的工作,分配到这些任务的开发人员做了一些精彩的工作才能实现这些目标。所以如果你希望永远不看到你的内核版本号发生过变化,那么就使用这些发行版。他们通常会为使用而付出一些钱,当发生错误时能够从这些公司得到一些支持,那就是值得的。 所以,你能使用的最好的内核是你可以求助于别人,而别人可以为你提供支持的内核。使用那些支持,你通常都已经为它支付过费用了(对于企业发行版),而这些公司也知道他们职责是什么。 @@ -58,55 +54,55 @@ Ok,现在我给出这样选择的一些理由: ### 最新的稳定版 -最新的稳定版内核是 Linux 内核开发者社区宣布为“稳定版”的最新的一个内核。大约每三个月,社区发行一个包含了对所有新硬件支持的、新的稳定版内核,最新版的内核不但改善内核性能,同时还包含内核各部分的 bug 修复。再经过三个月之后,进入到下一个内核版本的 bug 修复将被移植进入这个稳定版内核中,因此,使用这个内核版本的用户将确保立即得到这些修复。 +最新的稳定版内核是 Linux 内核开发者社区宣布为“稳定版”的最新的一个内核。大约每三个月,社区发行一个包含了对所有新硬件支持的、新的稳定版内核,最新版的内核不但改善内核性能,同时还包含内核各部分的 bug 修复。接下来的三个月之后,进入到下一个内核版本的 bug 修复将被反向移植进入这个稳定版内核中,因此,使用这个内核版本的用户将确保立即得到这些修复。 -最新的稳定版内核通常也是主流社区发行版使用的较好的内核,因此你可以确保它是经过测试和拥有大量用户使用的内核。另外,内核社区(全部开发者超过 4000 
人)也将帮助这个发行版提供对用户的支持,因为这是他们做的最新的一个内核。 +最新的稳定版内核通常也是主流社区发行版所使用的内核,因此你可以确保它是经过测试和拥有大量用户使用的内核。另外,内核社区(全部开发者超过 4000 人)也将帮助这个发行版提供对用户的支持,因为这是他们做的最新的一个内核。 -三个月之后,将发行一个新的稳定版内核,你应该去更新到它以确保你的内核始终是最新的稳定版,当最新的稳定版内核发布之后,对你的当前稳定版内核的支持通常会落后几周时间。 +三个月之后,将发行一个新的稳定版内核,你应该去更新到它以确保你的内核始终是最新的稳定版,因为当最新的稳定版内核发布之后,对你的当前稳定版内核的支持通常会落后几周时间。 -如果你在上一个 LTS 版本发布之后购买了最新的硬件,为了能够支持最新的硬件,你几乎是绝对需要去运行这个最新的稳定版内核。对于台式机或新的服务器,它们通常需要运行在它们推荐的内核版本上。 +如果你在上一个 LTS (长期支持)版本发布之后购买了最新的硬件,为了能够支持最新的硬件,你几乎是绝对需要去运行这个最新的稳定版内核。对于台式机或新的服务器,最新的稳定版内核通常是推荐运行的内核。 -### 最新的 LTS 发行版 +### 最新的 LTS 版本 -如果你的硬件为了保证正常运行(像大多数的嵌入式设备),需要依赖供应商的源码树外的补丁,那么对你来说,最好的内核版本是最新的 LTS 发行版。这个发行版拥有所有进入稳定版内核的最新 bug 修复,以及大量的用户测试和使用。 +如果你的硬件为了保证正常运行(像大多数的嵌入式设备),需要依赖供应商的源码树外out-of-tree的补丁,那么对你来说,最好的内核版本是最新的 LTS 版本。这个版本拥有所有进入稳定版内核的最新 bug 修复,以及大量的用户测试和使用。 -请注意,这个最新的 LTS 发行版没有新特性,并且也几乎不会增加对新硬件的支持,因此,如果你需要使用一个新设备,那你的最佳选择就是最新的稳定版内核,而不是最新的 LTS 版内核。 +请注意,这个最新的 LTS 版本没有新特性,并且也几乎不会增加对新硬件的支持,因此,如果你需要使用一个新设备,那你的最佳选择就是最新的稳定版内核,而不是最新的 LTS 版内核。 -另外,对于这个 LTS 发行版内核的用户来说,他也不用担心每三个月一次的“重大”升级。因此,他们将一直坚持使用这个 LTS 内核发行版,并每年升级一次,这是一个很好的实践。 +另外,对于这个 LTS 版本的用户来说,他也不用担心每三个月一次的“重大”升级。因此,他们将一直坚持使用这个 LTS 版本,并每年升级一次,这是一个很好的实践。 -使用这个 LTS 发行版的不利方面是,你没法得到在最新版本内核上实现的内核性能提升,除非在未来的一年中,你升级到下一个 LTS 版内核。 +使用这个 LTS 版本的不利方面是,你没法得到在最新版本内核上实现的内核性能提升,除非在未来的一年中,你升级到下一个 LTS 版内核。 -另外,如果你使用的这个内核版本有问题,你所做的第一件事情就是向任意一位内核开发者报告发生的问题,并向他们询问,“最新的稳定版内核中是否也存在这个问题?”并且,你将意识到,对它的支持不会像使用最新的稳定版内核那样容易得到。 +另外,如果你使用的这个内核版本有问题,你所做的第一件事情就是向任意一位内核开发者报告发生的问题,并向他们询问,“最新的稳定版内核中是否也存在这个问题?”并且,你需要意识到,对它的支持不会像使用最新的稳定版内核那样容易得到。 -现在,如果你坚持使用一个有大量的补丁集的内核,并且不希望升级到每年一次的新 LTS 内核版本上,那么,或许你应该去使用老的 LTS 发行版内核: +现在,如果你坚持使用一个有大量的补丁集的内核,并且不希望升级到每年一次的新 LTS 版内核上,那么,或许你应该去使用老的 LTS 版内核: -### 老的 LTS 发行版 +### 老的 LTS 版本 -这些发行版传统上都由社区提供 2 年时间的支持,有时候当一个重要的 Linux 发行版(像 Debian 或 SLES 一样)依赖它时,这个支持时间会更长。然而在过去一年里,感谢 Google、Linaro、Linaro 成员公司、[kernelci.org][3]、以及其它公司在测试和基础设施上的大量投入,使得这些老的 LTS 发行版内核得到更长时间的支持。 +传统上,这些版本都由社区提供 2 年时间的支持,有时候当一个重要的 Linux 发行版(像 Debian 或 SLES)依赖它时,这个支持时间会更长。然而在过去一年里,感谢 Google、Linaro、Linaro 
成员公司、[kernelci.org][3]、以及其它公司在测试和基础设施上的大量投入,使得这些老的 LTS 版内核得到更长时间的支持。 -这是最新的 LTS 发行版,它们将被支持多长时间,这是 2018 年 8 月 24 日显示在 [kernel.org/category/releases.html][4] 上的信息: +最新的 LTS 版本以及它们将被支持多长时间,这是 2018 年 8 月 24 日显示在 [kernel.org/category/releases.html][4] 上的信息: ![][5] -Google 和其它公司希望这些内核使用的时间更长的原因是,由于现在几乎所有的 SoC 芯片的疯狂(也有人说是打破常规)的开发模型。这些设备在芯片发行前几年就启动了他们的开发生命周期,而那些代码从来不会合并到上游,最终结果是始终在一个分支中,新的芯片基于一个 2 年以前的老内核发布。这些 SoC 的代码树通常增加了超过 200 万行的代码,这使得它们成为我们前面称之为“类 Linux 内核“的东西。 +Google 和其它公司希望这些内核使用的时间更长的原因是,由于现在几乎所有的 SoC 芯片的疯狂的(也有人说是打破常规)开发模型。这些设备在芯片发行前几年就启动了他们的开发周期,而那些代码从来不会合并到上游,最终结果是新打造的芯片是基于一个 2 年以前的老内核发布的。这些 SoC 的代码树通常增加了超过 200 万行的代码,这使得它们成为我们前面称之为“类 Linux 内核“的东西。 -如果在 2 年后,这个 LTS 发行版停止支持,那么来自社区的支持将立即停止,并且没有人对它再进行 bug 修复。这导致了在全球各地数以百万计的不安全设备仍然在使用中,这对任何生态系统来说都不是什么好事情。 +如果在 2 年后,这个 LTS 版本停止支持,那么来自社区的支持将立即停止,并且没有人对它再进行 bug 修复。这导致了在全球各地数以百万计的非常不安全的设备仍然在使用中,这对任何生态系统来说都不是什么好事情。 -由于这种依赖,这些公司现在要求新设备不断更新到最新的 LTS 发行版,而这些特定的发行版(即每个 4.9.y 发行版)就是为它们发行的。其中一个这样的例子就是新 Android 设备对内核版本的要求,这些新设备的 “O” 版本和现在的 “P” 版本指定了最低允许使用的内核版本,并且在设备上越来越频繁升级的、安全的 Android 发行版开始要求使用这些 “.y” 发行版。 +由于这种依赖,这些公司现在要求新设备不断更新到最新的 LTS 版本——这些为它们特定发布的版本(例如现在的每个 4.9.y 版本)。其中一个这样的例子就是新 Android 设备对内核版本的要求,这些新设备所带的 “Andrid O” 版本(和现在的 “Android P” 版本)指定了最低允许使用的内核版本,并且 Andoird 安全更新版本也开始越来越频繁在设备上要求使用这些 “.y” 版本。 -我注意到一些生产商现在已经在做这些事情。Sony 是其中一个非常好的例子,在他们的大多数新手机上,通过他们每季度的安全发行版,将设备更新到最新的 4.4.y 发行版上。另一个很好的例子是一家小型公司 Essential,他们持续跟踪 4.4.y 发行版,据我所知,他们发布新版本的速度比其它公司都快。 +我注意到一些生产商现在已经在做这些事情。Sony 是其中一个非常好的例子,在他们的大多数新手机上,通过他们每季度的安全更新版本,将设备更新到最新的 4.4.y 发行版上。另一个很好的例子是一家小型公司 Essential,据我所知,他们持续跟踪 4.4.y 版本的速度比其它公司都快。 -当使用这种很老的内核时有个重大警告。移植到这种内核中的 bug 修复比起最新版本的 LTS 内核来说数量少很多,因为这些使用很老的 LTS 内核的传统设备型号要远少于现在的用户使用的型号。如果你打算将它们用在有不可信的用户或虚拟机的地方,那么这些内核将不再被用于任何”通用计算“的模型中,因为对于这些内核不会去做像最近的 Spectre 这样的修复,如果在一些分支中存在这样的 bug,那么将极大地降低安全性。 +当使用这种老的内核时有个重大警告。反向移植到这种内核中的安全修复不如最新版本的 LTS 内核多,因为这些使用老的 LTS 内核的设备的传统模式是一个更加简化的用户模式。这些内核不能用于任何“通用计算”模式中,在这里用的是不可信用户untrusted user或虚拟机,极大地削弱了对老的内核做像最近的 Spectre 这样的修复的能力,如果在一些分支中存在这样的 bug 的话。 -因此,仅当在你能够完全控制的设备中使用老的 LTS 
发行版,或者是使用在有一个非常强大的安全模型(像 Android 一样强制使用 SELinux 和应用程序隔离)去限制的情况下。绝对不要在有不可信用户、程序、或虚拟机的服务器上使用这些老的 LTS 发行版内核。 +因此,仅在你能够完全控制的设备,或者限定在一个非常强大的安全模型(像 Android 一样强制使用 SELinux 和应用程序隔离)时使用老的 LTS 版本。绝对不要在有不可信用户/程序,或虚拟机的服务器上使用这些老的 LTS 版内核。 -此外,如果社区对它有支持的话,社区对这些老的 LTS 内核发行版相比正常的 LTS 内核发行版的支持要少的多。如果你使用这些内核,那么你只能是一个人在战斗,你需要有能力去独自支持这些内核,或者依赖你的 SoC 供应商为你提供支持(需要注意的是,对于大部分供应商来说是不会为你提供支持的,因此,你要特别注意 …)。 +此外,如果社区对它有支持的话,社区对这些老的 LTS 版内核相比正常的 LTS 版内核的支持要少的多。如果你使用这些内核,那么你只能是一个人在战斗,你需要有能力去独自支持这些内核,或者依赖你的 SoC 供应商为你提供支持(需要注意的是,几乎没有供应商会为你提供支持,因此,你要特别注意 ……)。 ### 不再维护的内核发行版 -更让人感到惊讶的事情是,许多公司只是随便选一个内核发行版,然后将它封装到它们的产品里,并将它毫不犹豫地承载到数十万的部件中。其中一个这样的糟糕例子是 Lego Mindstorm 系统,不知道是什么原因在它们的设备上随意选取了一个 `-rc` 的内核发行版。`-rc` 的发行版是开发中的版本,Linux 内核开发者认为它根本就不适合任何人使用,更不用说是数百万的用户了。 +更让人感到惊讶的事情是,许多公司只是随便选一个内核发行版,然后将它封装到它们的产品里,并将它毫不犹豫地承载到数十万的部件中。其中一个这样的糟糕例子是 Lego Mindstorm 系统,不知道是什么原因在它们的设备上随意选取了一个 -rc 的内核发行版。-rc 的发行版是开发中的版本,根本没有 Linux 内核开发者认为它适合任何人使用,更不用说是数百万的用户了。 -当然,如果你愿意,你可以随意地使用它,但是需要注意的是,可能真的就只有你一个人在使用它。社区不会为你提供支持,因为他们不可能关注所有内核版本的特定问题,因此如果出现错误,你只能独自去解决它。对于一些公司和系统来说,这么做可能还行,但是如果没有为此有所规划,那么要当心因此而产生的”隐性“成本。 +当然,如果你愿意,你可以随意地使用它,但是需要注意的是,可能真的就只有你一个人在使用它。社区不会为你提供支持,因为他们不可能关注所有内核版本的特定问题,因此如果出现错误,你只能独自去解决它。对于一些公司和系统来说,这么做可能还行,但是如果没有为此有所规划,那么要当心因此而产生的“隐性”成本。 ### 总结 @@ -116,8 +112,6 @@ Google 和其它公司希望这些内核使用的时间更长的原因是,由 * 服务器:最新的稳定版内核或最新的 LTS 版内核 * 嵌入式设备:最新的 LTS 版内核或老的 LTS 版内核(如果使用的安全模型非常强大和严格) - - 至于我,在我的机器上运行什么样的内核?我的笔记本运行的是最新的开发版内核(即 Linus 的开发树)再加上我正在做修改的内核,我的服务器上运行的是最新的稳定版内核。因此,尽管我负责 LTS 发行版的支持工作,但我自己并不使用 LTS 版内核,除了在测试系统上。我依赖于开发版和最新的稳定版内核,以确保我的机器运行的是目前我们所知道的最快的也是最安全的内核版本。 -------------------------------------------------------------------------------- @@ -127,7 +121,7 @@ via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/ 作者:[Greg Kroah-Hartman][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 
a97bbde8971f5522dd1bb2e628c61e551569467c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 11 Oct 2018 20:55:26 +0800 Subject: [PATCH 352/736] PUB:20180824 What Stable Kernel Should I Use.md @qhwdw https://linux.cn/article-10103-1.html --- .../20180824 What Stable Kernel Should I Use.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180824 What Stable Kernel Should I Use.md (100%) diff --git a/translated/tech/20180824 What Stable Kernel Should I Use.md b/published/20180824 What Stable Kernel Should I Use.md similarity index 100% rename from translated/tech/20180824 What Stable Kernel Should I Use.md rename to published/20180824 What Stable Kernel Should I Use.md From eda6cdd3dfb3b69328e6a8927cf544529b8ad4cf Mon Sep 17 00:00:00 2001 From: lctt-bot Date: Thu, 11 Oct 2018 17:00:28 +0000 Subject: [PATCH 353/736] Revert "imquanquan Translating" This reverts commit 8ed24fe79507514e70d3d27eade4449d02d4acdd. --- sources/tech/20180130 Trying Other Go Versions.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/sources/tech/20180130 Trying Other Go Versions.md b/sources/tech/20180130 Trying Other Go Versions.md index 1ab1b4f948..731747d19a 100644 --- a/sources/tech/20180130 Trying Other Go Versions.md +++ b/sources/tech/20180130 Trying Other Go Versions.md @@ -1,4 +1,3 @@ -imquanquan Translating Trying Other Go Versions ============================================================ @@ -110,4 +109,4 @@ via: https://pocketgophers.com/trying-other-versions/ [8]:https://pocketgophers.com/trying-other-versions/#trying-a-specific-release [9]:https://pocketgophers.com/guide-to-json/ [10]:https://pocketgophers.com/trying-other-versions/#trying-any-release -[11]:https://pocketgophers.com/trying-other-versions/#trying-a-source-build-e-g-tip +[11]:https://pocketgophers.com/trying-other-versions/#trying-a-source-build-e-g-tip \ No newline at end of file From 649e48e2eeec0839a7d3e675fcb2fc914331fd1d Mon Sep 17 00:00:00 
2001 From: lctt-bot Date: Thu, 11 Oct 2018 17:00:45 +0000 Subject: [PATCH 354/736] Revert "zafiry is translating 20180205 Writing eBPF tracing tools in Rust.md" This reverts commit 204d905c0c27d50e4dc18f58145699d1f519277d. --- sources/tech/20180205 Writing eBPF tracing tools in Rust.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/tech/20180205 Writing eBPF tracing tools in Rust.md b/sources/tech/20180205 Writing eBPF tracing tools in Rust.md index 093d3de215..18b8eb5742 100644 --- a/sources/tech/20180205 Writing eBPF tracing tools in Rust.md +++ b/sources/tech/20180205 Writing eBPF tracing tools in Rust.md @@ -1,4 +1,3 @@ -Zafiry translating... Writing eBPF tracing tools in Rust ============================================================ From 177cf50f0092c60dfe446bba8a1aaee25930cd41 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 01:48:33 +0800 Subject: [PATCH 355/736] PRF:20180724 Building a network attached storage device with a Raspberry Pi.md @jrglinux --- ...ched storage device with a Raspberry Pi.md | 129 +++--------------- 1 file changed, 21 insertions(+), 108 deletions(-) diff --git a/translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md b/translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md index 21c0c20cd5..8cf0a1802e 100644 --- a/translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md +++ b/translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md @@ -1,11 +1,12 @@ -树莓派自建 NAS 云盘之-树莓派搭建网络存储盘 +树莓派自建 NAS 云盘之——树莓派搭建网络存储盘 ====== +> 跟随这些逐步指导构建你自己的基于树莓派的 NAS 系统。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl) -我将在接下来的三篇文章中讲述如何搭建一个简便、实用的 NAS 云盘系统。我在这个中心化的存储系统中存储数据,并且让它每晚都会自动的备份增量数据。本系列文章将利用 NFS 文件系统将磁盘挂载到同一网络下的不同设备上,使用 [Nextcloud][1] 来离线访问数据、分享数据。 +我将在接下来的这三篇文章中讲述如何搭建一个简便、实用的 NAS 
云盘系统。我在这个中心化的存储系统中存储数据,并且让它每晚都会自动的备份增量数据。本系列文章将利用 NFS 文件系统将磁盘挂载到同一网络下的不同设备上,使用 [Nextcloud][1] 来离线访问数据、分享数据。 -本文主要讲述将数据盘挂载到远程设备上的软硬件步骤。本系列第二篇文章将讨论数据备份策略、如何添加定时备份数据任务。最后一篇文章中我们将会安装 Nextcloud 软件,用户通过Nextcloud 提供的 web 接口可以方便的离线或在线访问数据。本系列教程最终搭建的 NAS 云盘支持多用户操作、文件共享等功能,所以你可以通过它方便的分享数据,比如说你可以发送一个加密链接,跟朋友分享你的照片等等。 +本文主要讲述将数据盘挂载到远程设备上的软硬件步骤。本系列第二篇文章将讨论数据备份策略、如何添加定时备份数据任务。最后一篇文章中我们将会安装 Nextcloud 软件,用户通过 Nextcloud 提供的 web 界面可以方便的离线或在线访问数据。本系列教程最终搭建的 NAS 云盘支持多用户操作、文件共享等功能,所以你可以通过它方便的分享数据,比如说你可以发送一个加密链接,跟朋友分享你的照片等等。 最终的系统架构如下图所示: @@ -16,11 +17,11 @@ 首先需要准备硬件。本文所列方案只是其中一种示例,你也可以按不同的硬件方案进行采购。 -最主要的就是[树莓派3][2],它带有四核 CPU,1G RAM,以及(有些)快速的网络接口。数据将存储在两个 USB 磁盘驱动器上(这里使用 1TB 磁盘);其中一个磁盘用于每天数据存储,另一个用于数据备份。请务必使用有源 USB 磁盘驱动器或者带附加电源的 USB 集线器,因为树莓派无法为两个 USB 磁盘驱动器供电。 +最主要的就是[树莓派 3][2],它带有四核 CPU、1G RAM,以及(比较)快速的网络接口。数据将存储在两个 USB 磁盘驱动器上(这里使用 1TB 磁盘);其中一个磁盘用于每天数据存储,另一个用于数据备份。请务必使用有源 USB 磁盘驱动器或者带附加电源的 USB 集线器,因为树莓派无法为两个 USB 磁盘驱动器供电。 ### 软件 -社区中最活跃的操作系统当属 [Raspbian][3],便于定制个性化项目。已经有很多 [操作指南][4] 讲述如何在树莓派中安装 Raspbian 系统,所以这里不再赘述。在撰写本文时,最新的官方支持版本是 [Raspbian Stretch][5],它对我来说很好使用。 +在该社区中最活跃的操作系统当属 [Raspbian][3],便于定制个性化项目。已经有很多 [操作指南][4] 讲述如何在树莓派中安装 Raspbian 系统,所以这里不再赘述。在撰写本文时,最新的官方支持版本是 [Raspbian Stretch][5],它对我来说很好使用。 到此,我将假设你已经配置好了基本的 Raspbian 系统并且可以通过 `ssh` 访问到你的树莓派。 @@ -31,51 +32,28 @@ ``` pi@raspberrypi:~ $ sudo fdisk -l - - <...> - - Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - Disklabel type: dos - Disk identifier: 0xe8900690 - - -Device     Boot Start        End    Sectors   Size Id Type - -/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux - - - +Device Boot Start End Sectors Size Id Type +/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size 
(minimum/optimal): 512 bytes / 512 bytes - Disklabel type: dos - Disk identifier: 0x6aa4f598 - - -Device     Boot Start        End    Sectors   Size Id Type - -/dev/sdb1  *     2048 1953521663 1953519616 931.5G  83 Linux +Device Boot Start End Sectors Size Id Type +/dev/sdb1 * 2048 1953521663 1953519616 931.5G 83 Linux ``` @@ -86,163 +64,101 @@ Device     Boot Start        End    Sectors   Size Id Type ``` pi@raspberrypi:~ $ sudo fdisk /dev/sda - - Welcome to fdisk (util-linux 2.29.2). - Changes will remain in memory only, until you decide to write them. - Be careful before using the write command. - - - Command (m for help): o - Created a new DOS disklabel with disk identifier 0x9c310964. - - Command (m for help): n - Partition type - -   p   primary (0 primary, 0 extended, 4 free) - -   e   extended (container for logical partitions) - + p primary (0 primary, 0 extended, 4 free) + e extended (container for logical partitions) Select (default p): p - Partition number (1-4, default 1): - First sector (2048-1953525167, default 2048): - Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167): - - Created a new partition 1 of type 'Linux' and of size 931.5 GiB. - - Command (m for help): p - - Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors - Units: sectors of 1 * 512 = 512 bytes - Sector size (logical/physical): 512 bytes / 512 bytes - I/O size (minimum/optimal): 512 bytes / 512 bytes - Disklabel type: dos - Disk identifier: 0x9c310964 - - -Device     Boot Start        End    Sectors   Size Id Type - -/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux - - +Device Boot Start End Sectors Size Id Type +/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux Command (m for help): w - The partition table has been altered. - Syncing disks. 
- ``` 现在,我们将用 ext4 文件系统格式化新创建的分区 `/dev/sda1`: ``` pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1 - mke2fs 1.43.4 (31-Jan-2017) - Discarding device blocks: done - - <...> - - Allocating group tables: done - Writing inode tables: done - Creating journal (1024 blocks): done - Writing superblocks and filesystem accounting information: done - ``` 重复以上步骤后,让我们根据用途来对它们建立标签: ``` pi@raspberrypi:~ $ sudo e2label /dev/sda1 data - pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup - ``` -现在,让我们安装这些磁盘并存储一些数据。以我运营该系统超过一年的经验来看,当树莓派启动时(例如在断电后),USB 磁盘驱动器并不是总被安装,因此我建议使用 autofs 在需要的时候进行安装。 +现在,让我们安装这些磁盘并存储一些数据。以我运营该系统超过一年的经验来看,当树莓派启动时(例如在断电后),USB 磁盘驱动器并不是总被挂载,因此我建议使用 autofs 在需要的时候进行挂载。 首先,安装 autofs 并创建挂载点: ``` pi@raspberrypi:~ $ sudo apt install autofs - pi@raspberrypi:~ $ sudo mkdir /nas - ``` -然后添加下面这行来挂载设备 -`/etc/auto.master`: +然后添加下面这行来挂载设备 `/etc/auto.master`: + ``` /nas    /etc/auto.usb - ``` 如果不存在以下内容,则创建 `/etc/auto.usb`,然后重新启动 autofs 服务: ``` data -fstype=ext4,rw :/dev/disk/by-label/data - backup -fstype=ext4,rw :/dev/disk/by-label/backup - pi@raspberrypi3:~ $ sudo service autofs restart - ``` 现在你应该可以分别访问 `/nas/data` 以及 `/nas/backup` 磁盘了。显然,到此还不会令人太兴奋,因为你只是擦除了磁盘中的数据。不过,你可以执行以下命令来确认设备是否已经挂载成功: ``` pi@raspberrypi3:~ $ cd /nas/data - pi@raspberrypi3:/nas/data $ cd /nas/backup - pi@raspberrypi3:/nas/backup $ mount - <...> - /etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect) - <...> - /dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered) - /dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered) - ``` -首先进入对应目录以确保 autofs 能够挂载设备。Autofs 会跟踪文件系统的访问记录,并随时挂载所需要的设备。然后 `mount` 命令会显示这两个 USB 磁盘驱动器已经挂载到我们想要的位置了。 +首先进入对应目录以确保 autofs 能够挂载设备。autofs 会跟踪文件系统的访问记录,并随时挂载所需要的设备。然后 `mount` 命令会显示这两个 USB 磁盘驱动器已经挂载到我们想要的位置了。 设置 autofs 的过程容易出错,如果第一次尝试失败,请不要沮丧。你可以上网搜索有关教程。 @@ -252,25 +168,21 @@ pi@raspberrypi3:/nas/backup $ mount ``` pi@raspberrypi:~ $ sudo apt install nfs-kernel-server - ``` 然后,需要告诉 NFS 服务器公开 `/nas/data` 
目录,这是从树莓派外部可以访问的唯一设备(另一个用于备份)。编辑 `/etc/exports` 添加如下内容以允许所有可以访问 NAS 云盘的设备挂载存储: ``` /nas/data *(rw,sync,no_subtree_check) - ``` -更多有关限制挂载到单个设备的详细信息,请参阅 `man exports`。经过上面的配置,任何人都可以访问数据,只要他们可以访问 NFS 所需的端口:`111`和`2049`。我通过上面的配置,只允许通过路由器防火墙访问到我的家庭网络的 22 和 443 端口。这样,只有在家庭网络中的设备才能访问 NFS 服务器。 +更多有关限制挂载到单个设备的详细信息,请参阅 `man exports`。经过上面的配置,任何人都可以访问数据,只要他们可以访问 NFS 所需的端口:`111` 和 `2049`。我通过上面的配置,只允许通过路由器防火墙访问到我的家庭网络的 22 和 443 端口。这样,只有在家庭网络中的设备才能访问 NFS 服务器。 如果要在 Linux 计算机挂载存储,运行以下命令: ``` you@desktop:~ $ sudo mkdir /nas/data - you@desktop:~ $ sudo mount -t nfs :/nas/data /nas/data - ``` 同样,我建议使用 autofs 来挂载该网络设备。如果需要其他帮助,请参看 [如何使用 Autofs 来挂载 NFS 共享][6]。 @@ -284,7 +196,7 @@ via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi 作者:[Manuel Dewald][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[jrg](https://github.com/jrglinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -296,3 +208,4 @@ via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi [5]: https://www.raspberrypi.org/blog/raspbian-stretch/ [6]: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares + From f75394f2590e72dc30043d639f00981cabd0033f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 01:48:57 +0800 Subject: [PATCH 356/736] PUB:20180724 Building a network attached storage device with a Raspberry Pi.md @jrglinux https://linux.cn/article-10104-1.html --- ...lding a network attached storage device with a Raspberry Pi.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180724 Building a network attached storage device with a Raspberry Pi.md (100%) diff --git a/translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md b/published/20180724 Building a network attached storage device with a Raspberry Pi.md similarity index 100% rename from 
translated/tech/20180724 Building a network attached storage device with a Raspberry Pi.md rename to published/20180724 Building a network attached storage device with a Raspberry Pi.md From a760f60ee4d349aabfdb4d0cd51fe3d60a44868c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 01:59:15 +0800 Subject: [PATCH 357/736] PRF:20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md @qhwdw --- ...Box On Ubuntu 18.04 LTS Headless Server.md | 100 ++++++++---------- 1 file changed, 47 insertions(+), 53 deletions(-) diff --git a/translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md index 3a9f28e2c3..84c805506a 100644 --- a/translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md +++ b/translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md @@ -3,92 +3,90 @@ ![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Oracle-VirtualBox-On-Ubuntu-18.04-720x340.png) -本教程将指导你在 Ubuntu 18.04 LTS 无头服务器上,一步一步地安装 **Oracle VirtualBox**。同时,本教程也将介绍如何使用 **phpVirtualBox** 去管理安装在无头服务器上的 **VirtualBox** 实例。**phpVirtualBox** 是 VirtualBox 的一个基于 Web 的后端工具。这个教程也可以工作在 Debian 和其它 Ubuntu 衍生版本上,如 Linux Mint。现在,我们开始。 +本教程将指导你在 Ubuntu 18.04 LTS 无头服务器上,一步一步地安装 **Oracle VirtualBox**。同时,本教程也将介绍如何使用 **phpVirtualBox** 去管理安装在无头服务器上的 **VirtualBox** 实例。**phpVirtualBox** 是 VirtualBox 的一个基于 Web 的前端工具。这个教程也可以工作在 Debian 和其它 Ubuntu 衍生版本上,如 Linux Mint。现在,我们开始。 ### 前提条件 在安装 Oracle VirtualBox 之前,我们的 Ubuntu 18.04 LTS 服务器上需要满足如下的前提条件。 首先,逐个运行如下的命令来更新 Ubuntu 服务器。 + ``` $ sudo apt update - $ sudo apt upgrade - $ sudo apt dist-upgrade - ``` 接下来,安装如下的必需的包: + ``` $ sudo apt install build-essential dkms unzip wget - ``` 安装完成所有的更新和必需的包之后,重启动 Ubuntu 服务器。 + ``` $ sudo reboot - ``` ### 在 Ubuntu 18.04 LTS 服务器上安装 VirtualBox -添加 Oracle VirtualBox 官方仓库。为此你需要去编辑 **/etc/apt/sources.list** 文件: +添加 
Oracle VirtualBox 官方仓库。为此你需要去编辑 `/etc/apt/sources.list` 文件: + ``` $ sudo nano /etc/apt/sources.list - ``` 添加下列的行。 在这里,我将使用 Ubuntu 18.04 LTS,因此我添加下列的仓库。 + ``` deb http://download.virtualbox.org/virtualbox/debian bionic contrib - ``` ![][2] -用你的 Ubuntu 发行版的代码名字替换关键字 **‘bionic’**,比如,**‘xenial’、‘vivid’、‘utopic’、‘trusty’、‘raring’、‘quantal’、‘precise’、‘lucid’、‘jessie’、‘wheezy’、或 ‘squeeze‘**。 +用你的 Ubuntu 发行版的代码名字替换关键字 ‘bionic’,比如,‘xenial’、‘vivid’、‘utopic’、‘trusty’、‘raring’、‘quantal’、‘precise’、‘lucid’、‘jessie’、‘wheezy’、或 ‘squeeze‘。 然后,运行下列的命令去添加 Oracle 公钥: + ``` $ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add - - ``` 对于 VirtualBox 的老版本,添加如下的公钥: + ``` $ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add - - ``` 接下来,使用如下的命令去更新软件源: + ``` $ sudo apt update - ``` 最后,使用如下的命令去安装最新版本的 Oracle VirtualBox: + ``` $ sudo apt install virtualbox-5.2 - ``` ### 添加用户到 VirtualBox 组 -我们需要去创建并添加我们的系统用户到 **vboxusers** 组中。你也可以单独创建用户,然后将它分配到 **vboxusers** 组中,也可以使用已有的用户。我不想去创建新用户,因此,我添加已存在的用户到这个组中。请注意,如果你为 virtualbox 使用一个单独的用户,那么你必须注销当前用户,并使用那个特定的用户去登入,来完成剩余的步骤。 +我们需要去创建并添加我们的系统用户到 `vboxusers` 组中。你也可以单独创建用户,然后将它分配到 `vboxusers` 组中,也可以使用已有的用户。我不想去创建新用户,因此,我添加已存在的用户到这个组中。请注意,如果你为 virtualbox 使用一个单独的用户,那么你必须注销当前用户,并使用那个特定的用户去登入,来完成剩余的步骤。 + +我使用的是我的用户名 `sk`,因此,我运行如下的命令将它添加到 `vboxusers` 组中。 -我使用的是我的用户名 **sk**,因此,我运行如下的命令将它添加到 **vboxusers** 组中。 ``` $ sudo usermod -aG vboxusers sk - ``` 现在,运行如下的命令去检查 virtualbox 内核模块是否已加载。 + ``` $ sudo systemctl status vboxdrv - ``` ![][3] @@ -96,15 +94,15 @@ $ sudo systemctl status vboxdrv 正如你在上面的截屏中所看到的,vboxdrv 模块已加载,并且是已运行的状态! 
对于老的 Ubuntu 版本,运行: + ``` $ sudo /etc/init.d/vboxdrv status - ``` 如果 virtualbox 模块没有启动,运行如下的命令去启动它。 + ``` $ sudo /etc/init.d/vboxdrv setup - ``` 很好!我们已经成功安装了 VirtualBox 并启动了 virtualbox 模块。现在,我们继续来安装 Oracle VirtualBox 的扩展包。 @@ -119,21 +117,19 @@ VirtualBox 扩展包为 VirtualBox 访客系统提供了如下的功能。 * Intel PXE 引导 ROM * 对 Linux 宿主机上的 PCI 直通提供支持 +从[这里][4]为 VirtualBox 5.2.x 下载最新版的扩展包。 - -从[**这里**][4]为 VirtualBox 5.2.x 下载最新版的扩展包。 ``` $ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack - ``` 使用如下的命令去安装扩展包: + ``` $ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack - ``` -恭喜!我们已经成功地在 Ubuntu 18.04 LTS 服务器上安装了 Oracle VirtualBox 的扩展包。现在已经可以去部署虚拟机了。参考 [**virtualbox 官方指南**][5],在命令行中开始创建和管理虚拟机。 +恭喜!我们已经成功地在 Ubuntu 18.04 LTS 服务器上安装了 Oracle VirtualBox 的扩展包。现在已经可以去部署虚拟机了。参考 [virtualbox 官方指南][5],在命令行中开始创建和管理虚拟机。 然而,并不是每个人都擅长使用命令行。有些人可能希望在图形界面中去创建和使用虚拟机。不用担心!下面我们为你带来非常好用的 **phpVirtualBox** 工具! @@ -146,84 +142,82 @@ $ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbo 由于它是基于 web 的工具,我们需要安装 Apache web 服务器、PHP 和一些 php 模块。 为此,运行如下命令: + ``` $ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml - ``` -然后,从 [**下载页面**][6] 上下载 phpVirtualBox 5.2.x 版。请注意,由于我们已经安装了 VirtualBox 5.2 版,因此,同样的我们必须去安装 phpVirtualBox 的 5.2 版本。 +然后,从 [下载页面][6] 上下载 phpVirtualBox 5.2.x 版。请注意,由于我们已经安装了 VirtualBox 5.2 版,因此,同样的我们必须去安装 phpVirtualBox 的 5.2 版本。 运行如下的命令去下载它: + ``` $ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip - ``` 使用如下命令解压下载的安装包: + ``` $ unzip 5.2-0.zip - ``` -这个命令将解压 5.2.0.zip 文件的内容到一个命名为 “phpvirtualbox-5.2-0” 的文件夹中。现在,复制或移动这个文件夹的内容到你的 apache web 服务器的根文件夹中。 +这个命令将解压 5.2.0.zip 文件的内容到一个名为 `phpvirtualbox-5.2-0` 的文件夹中。现在,复制或移动这个文件夹的内容到你的 apache web 服务器的根文件夹中。 + ``` $ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox - ``` 给 phpvirtualbox 文件夹分配适当的权限。 + ``` $ sudo chmod 777 /var/www/html/phpvirtualbox/ - ``` 接下来,我们开始配置 phpVirtualBox。 像下面这样复制示例配置文件。 + 
``` $ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php - ``` -编辑 phpVirtualBox 的 **config.php** 文件: +编辑 phpVirtualBox 的 `config.php` 文件: + ``` $ sudo nano /var/www/html/phpvirtualbox/config.php - ``` 找到下列行,并且用你的系统用户名和密码去替换它(就是前面的“添加用户到 VirtualBox 组中”节中使用的用户名)。 -在我的案例中,我的 Ubuntu 系统用户名是 **sk** ,它的密码是 **ubuntu**。 +在我的案例中,我的 Ubuntu 系统用户名是 `sk` ,它的密码是 `ubuntu`。 + ``` var $username = 'sk'; var $password = 'ubuntu'; - ``` ![][7] 保存并关闭这个文件。 -接下来,创建一个名为 **/etc/default/virtualbox** 的新文件: +接下来,创建一个名为 `/etc/default/virtualbox` 的新文件: + ``` $ sudo nano /etc/default/virtualbox - ``` -添加下列行。用你自己的系统用户替换 ‘sk’。 +添加下列行。用你自己的系统用户替换 `sk`。 + ``` VBOXWEB_USER=sk - ``` 最后,重引导你的系统或重启下列服务去完成整个配置工作。 + ``` $ sudo systemctl restart vboxweb-service - $ sudo systemctl restart vboxdrv - $ sudo systemctl restart apache2 - ``` ### 调整防火墙允许连接 Apache web 服务器 @@ -231,6 +225,7 @@ $ sudo systemctl restart apache2 如果你在 Ubuntu 18.04 LTS 上启用了 UFW,那么在默认情况下,apache web 服务器是不能被任何远程系统访问的。你必须通过下列的步骤让 http 和 https 流量允许通过 UFW。 首先,我们使用如下的命令来查看在策略中已经安装了哪些应用: + ``` $ sudo ufw app list Available applications: @@ -238,12 +233,12 @@ Apache Apache Full Apache Secure OpenSSH - ``` 正如你所见,Apache 和 OpenSSH 应该已经在 UFW 的策略文件中安装了。 -如果你在策略中看到的是 **“Apache Full”**,说明它允许流量到达 **80** 和 **443** 端口: +如果你在策略中看到的是 `Apache Full`,说明它允许流量到达 80 和 443 端口: + ``` $ sudo ufw app info "Apache Full" Profile: Apache Full @@ -253,34 +248,33 @@ server. 
Ports: 80,443/tcp - ``` 现在,运行如下的命令去启用这个策略中的 HTTP 和 HTTPS 的入站流量: + ``` $ sudo ufw allow in "Apache Full" Rules updated Rules updated (v6) - ``` 如果你希望允许 https 流量,但是仅是 http (80) 的流量,运行如下的命令: + ``` $ sudo ufw app info "Apache" - ``` ### 访问 phpVirtualBox 的 Web 控制台 现在,用任意一台远程系统的 web 浏览器来访问。 -在地址栏中,输入:****。 +在地址栏中,输入:`http://IP-address-of-virtualbox-headless-server/phpvirtualbox`。 -在我的案例中,我导航到这个链接 – **** +在我的案例中,我导航到这个链接 – `http://192.168.225.22/phpvirtualbox`。 你将看到如下的屏幕输出。输入 phpVirtualBox 管理员用户凭据。 -phpVirtualBox 的默认管理员用户名和密码是 **admin** / **admin**。 +phpVirtualBox 的默认管理员用户名和密码是 `admin` / `admin`。 ![][8] @@ -303,7 +297,7 @@ via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-s 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 19f969f0aa4445ddddbd5c985c0091f91183080f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 01:59:34 +0800 Subject: [PATCH 358/736] PUB:20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md @qhwdw https://linux.cn/article-10105-1.html --- ...eadless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md (100%) diff --git a/translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/published/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md similarity index 100% rename from translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md rename to published/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md From 95a1f9a881a374dd27c4a10fc92fc54a220d8e89 Mon Sep 17 00:00:00 2001 From: pityonline 
Date: Fri, 5 Oct 2018 09:39:08 +0800 Subject: [PATCH 359/736] =?UTF-8?q?PRF:=20#10487=20#=2010594=20=E5=AE=8C?= =?UTF-8?q?=E6=88=90=E6=A0=A1=E5=AF=B9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0170926 Managing users on Linux systems.md | 141 +++++++++--------- 1 file changed, 71 insertions(+), 70 deletions(-) diff --git a/published/20170926 Managing users on Linux systems.md b/published/20170926 Managing users on Linux systems.md index aeb6e2904b..bc39e44fed 100644 --- a/published/20170926 Managing users on Linux systems.md +++ b/published/20170926 Managing users on Linux systems.md @@ -1,14 +1,15 @@ -管理 Linux 系统中的用户 +管理 Linux 系统中的用户 ====== + ![](https://images.idgesg.net/images/article/2017/09/charging-bull-100735753-large.jpg) -也许你的 Lniux 用户并不是愤怒的公牛,但是当涉及管理他们的账户的时候,能让他们一直开心也是一种挑战。监控他们当前正在访问的东西,追踪他们他们遇到问题时的解决方案,并且保证能把他们在使用系统时出现的重要变动记录下来。这里有一些方法和工具可以使这份工作轻松一点。 +也许你的 Linux 用户并不是愤怒的公牛,但是当涉及管理他们的账户的时候,能让他们一直满意也是一种挑战。你需要监控他们的访问权限,跟进他们遇到问题时的解决方案,并且把他们在使用系统时出现的重要变动记录下来。这里有一些方法和工具可以让这个工作轻松一点。 -### 配置账户 +### 配置账户 -添加和移除账户是管理用户中最简单的一项,但是这里面仍然有很多需要考虑的选项。无论你是用桌面工具或是命令行选项,这都是一个非常自动化的过程。你可以使用命令添加一个新用户,像是 `adduser jdoe`,这同时会触发一系列的事情。使用下一个可用的 UID 可以创建 John 的账户,或许还会被许多用以配置账户的文件所填充。当你运行 `adduser` 命令加一个新的用户名的时候,它将会提示一些额外的信息,同时解释这是在干什么。 +添加和删除账户是管理用户中比较简单的一项,但是这里面仍然有很多需要考虑的方面。无论你是用桌面工具或是命令行选项,这都是一个非常自动化的过程。你可以使用 `adduser jdoe` 命令添加一个新用户,同时会触发一系列的反应。在创建 John 这个账户时会自动使用下一个可用的 UID,并有很多自动生成的文件来完成这个工作。当你运行 `adduser` 后跟一个参数时(要创建的用户名),它会提示一些额外的信息,同时解释这是在干什么。 -``` +``` $ sudo adduser jdoe Adding user 'jdoe' ... Adding new group `jdoe' (1001) ... @@ -20,21 +21,21 @@ Retype new UNIX password: passwd: password updated successfully Changing the user information for jdoe Enter the new value, or press ENTER for the default - Full Name []: John Doe - Room Number []: - Work Phone []: - Home Phone []: - Other []: + Full Name []: John Doe + Room Number []: + Work Phone []: + Home Phone []: + Other []: Is the information correct? 
[Y/n] Y -``` +``` -像你看到的那样,`adduser` 将添加用户的信息(到 `/etc/passwd` 和 `/etc/shadow` 文件中),创建新的家目录,并用 `/etc/skel` 里设置的文件填充家目录,提示你分配初始密码和认定信息,然后确认这些信息都是正确的,如果你在最后的提示 “Is the information correct” 处的答案是 “n”,它将回溯你之前所有的回答,允许修改任何你想要修改的地方。 +如你所见,`adduser` 会添加用户的信息(到 `/etc/passwd` 和 `/etc/shadow` 文件中),创建新的家目录home directory,并用 `/etc/skel` 里设置的文件填充家目录,提示你分配初始密码和认证信息,然后确认这些信息都是正确的,如果你在最后的提示 “Is the information correct?” 处的回答是 “n”,它会回溯你之前所有的回答,允许修改任何你想要修改的地方。 -创建好一个用户后,你可能会想要确认一下它是否是你期望的样子,更好的方法是确保在添加第一个帐户**之前**,“自动”选择与您想要查看的内容相匹配。默认有默认的好处,它对于你想知道他们定义在哪里有所用处,以防你想作出一些变动 —— 例如,你不想家目录在 `/home` 里,你不想用户 UID 从 1000 开始,或是你不想家目录下的文件被系统上的**每个人**都可读。 +创建好一个用户后,你可能会想要确认一下它是否是你期望的样子,更好的方法是确保在添加第一个帐户**之前**,“自动”选择与你想要查看的内容是否匹配。默认有默认的好处,它对于你想知道他们定义在哪里很有用,以便你想做出一些变动 —— 例如,你不想让用户的家目录在 `/home` 里,你不想让用户 UID 从 1000 开始,或是你不想让家目录下的文件被系统中的**每个人**都可读。 -`adduser` 如何工作的一些细节设置在 `/etc/adduser.conf` 文件里。这个文件包含的一些设置决定了一个新的账户如何配置,以及它之后的样子。注意,注释和空白行将会在输出中被忽略,因此我们可以更加集中注意在设置上面。 +`adduser` 的一些配置细节设置在 `/etc/adduser.conf` 文件里。这个文件包含的一些配置项决定了一个新的账户如何配置,以及它之后的样子。注意,注释和空白行将会在输出中被忽略,因此我们更关注配置项。 -``` +``` $ cat /etc/adduser.conf | grep -v "^#" | grep -v "^$" DSHELL=/bin/bash DHOME=/home @@ -55,45 +56,45 @@ DIR_MODE=0755 SETGID_HOME=no QUOTAUSER="" SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)" -``` +``` -可以看到,我们有了一个默认的 shell(`DSHELL`),UID(`FIRST_UID`)的开始数值,家目录(`DHOME`)的位置,以及启动文件(`SKEL`)的来源位置。这个文件也会指定分配给家目录(`DIR_HOME`)的权限。 +可以看到,我们有了一个默认的 shell(`DSHELL`),UID(`FIRST_UID`)的起始值,家目录(`DHOME`)的位置,以及启动文件(`SKEL`)的来源位置。这个文件也会指定分配给家目录(`DIR_HOME`)的权限。 -其中 `DIR_HOME` 是最重要的设置,它决定了每个家目录被使用的权限。这个设置分配给用户创建的目录权限是 `755`,家目录的权限将会设置为 `rwxr-xr-x`。用户可以读其他用户的文件,但是不能修改和移除他们。如果你想要更多的限制,你可以更改这个设置为 `750`(用户组外的任何人都不可访问)甚至是 `700`(除用户自己外的人都不可访问)。 +其中 `DIR_HOME` 是最重要的设置,它决定了每个家目录被使用的权限。这个设置分配给用户创建的目录权限是 755,家目录的权限将会设置为 `rwxr-xr-x`。用户可以读其他用户的文件,但是不能修改和移除它们。如果你想要更多的限制,你可以更改这个设置为 750(用户组外的任何人都不可访问)甚至是 700(除用户自己外的人都不可访问)。 -任何用户账号在创建之前都可以进行手动修改。例如,你可以编辑 `/etc/passwd` 或者修改家目录的权限,开始在新服务器上添加用户之前配置 `/etc/adduser.conf` 可以确保一定的一致性,从长远来看可以节省时间和避免一些麻烦。 
+任何用户账号在创建之前都可以进行手动修改。例如,你可以编辑 `/etc/passwd` 或者修改家目录的权限,开始在新服务器上添加用户之前配置 `/etc/adduser.conf` 可以确保一定的一致性,从长远来看可以节省时间和避免一些麻烦。 -`/etc/adduser.conf` 的修改将会在之后创建的用户上生效。如果你想以不同的方式设置某个特定账户,除了用户名之外,你还可以选择使用 `adduser` 命令提供账户配置选项。或许你想为某些账户分配不同的 shell,请求特殊的 UID,完全禁用登录。`adduser` 的帮助页将会为你显示一些配置个人账户的选择。 +`/etc/adduser.conf` 的修改将会在之后创建的用户上生效。如果你想以不同的方式设置某个特定账户,除了用户名之外,你还可以选择使用 `adduser` 命令提供账户配置选项。或许你想为某些账户分配不同的 shell,分配特殊的 UID,或完全禁用该账户登录。`adduser` 的帮助页将会为你显示一些配置个人账户的选择。 ``` adduser [options] [--home DIR] [--shell SHELL] [--no-create-home] [--uid ID] [--firstuid ID] [--lastuid ID] [--ingroup GROUP | --gid ID] [--disabled-password] [--disabled-login] [--gecos GECOS] [--add_extra_groups] [--encrypt-home] user -``` +``` -每个 Linux 系统现在都会默认把每个用户放入对应的组中。作为一个管理员,你可能会选择以不同的方式去做事。你也许会发现把用户放在一个共享组中可以让你的站点工作的更好,这时,选择使用 `adduser` 的 `--gid` 选项去选择一个特定的组。当然,用户总是许多组的成员,因此也有一些选项去管理主要和次要的组。 +每个 Linux 系统现在都会默认把每个用户放入对应的组中。作为一个管理员,你可能会选择以不同的方式。你也许会发现把用户放在一个共享组中更适合你的站点,你就可以选择使用 `adduser` 的 `--gid` 选项指定一个特定的组。当然,用户总是许多组的成员,因此也有一些选项来管理主要和次要的组。 -### 处理用户密码 +### 处理用户密码 -一直以来,知道其他人的密码都是一个不好的念头,在设置账户时,管理员通常使用一个临时的密码,然后在用户第一次登录时会运行一条命令强制他修改密码。这里是一个例子: +一直以来,知道其他人的密码都不是一件好事,在设置账户时,管理员通常使用一个临时密码,然后在用户第一次登录时运行一条命令强制他修改密码。这里是一个例子: ``` $ sudo chage -d 0 jdoe ``` -当用户第一次登录的时候,会看到像这样的事情: +当用户第一次登录时,会看到类似下面的提示: ``` WARNING: Your password has expired. You must change your password now and login again! Changing password for jdoe. 
(current) UNIX password: -``` +``` -### 添加用户到副组 +### 添加用户到副组 -添加用户到副组中,你可能会用如下所示的 `usermod` 命令 —— 添加用户到组中并确认已经做出变动。 +添加用户到副组中,你可能会用如下所示的 `usermod` 命令添加用户到组中并确认已经做出变动。 ``` $ sudo usermod -a -G sudo jdoe @@ -101,54 +102,54 @@ $ sudo grep sudo /etc/group sudo:x:27:shs,jdoe ``` -记住在一些组,像是 `sudo` 或者 `wheel` 组中,意味着包含特权,一定要特别注意这一点。 +记住在一些组意味着特别的权限,如 sudo 或者 wheel 组,一定要特别注意这一点。 -### 移除用户,添加组等 +### 移除用户,添加组等 + +Linux 系统也提供了移除账户,添加新的组,移除组等一些命令。例如,`deluser` 命令,将会从 `/etc/passwd` 和 `/etc/shadow` 中移除用户记录,但是会完整保留其家目录,除非你添加了 `--remove-home` 或者 `--remove-all-files` 选项。`addgroup` 命令会添加一个组,默认按目前组的次序分配下一个 id(在用户组范围内),除非你使用 `--gid` 选项指定 id。 -Linux 系统也提供了命令去移除账户、添加新的组、移除组等。例如,`deluser` 命令,将会从 `/etc/passwd` 和 `/etc/shadow` 中移除用户登录入口,但是会完整保留他的家目录,除非你添加了 `--remove-home` 或者 `--remove-all-files` 选项。`addgroup` 命令会添加一个组,按目前组的次序给他下一个 ID(在用户组范围内),除非你使用 `--gid` 选项指定 ID。 - ``` $ sudo addgroup testgroup --gid=131 Adding group `testgroup' (GID 131) ... Done. -``` +``` -### 管理特权账户 +### 管理特权账户 -一些 Linux 系统中有一个 wheel 组,它给组中成员赋予了像 root 一样运行命令的能力。在这种情况下,`/etc/sudoers` 将会引用该组。在 Debian 系统中,这个组被叫做 `sudo`,但是以相同的方式工作,你在 `/etc/sudoers` 中可以看到像这样的引用: +一些 Linux 系统中有一个 wheel 组,它给组中成员赋予了像 root 一样运行命令的权限。在这种情况下,`/etc/sudoers` 将会引用该组。在 Debian 系统中,这个组被叫做 sudo,但是原理是相同的,你在 `/etc/sudoers` 中可以看到像这样的信息: ``` -%sudo ALL=(ALL:ALL) ALL -``` +%sudo ALL=(ALL:ALL) ALL +``` -这个基础的设定意味着,任何在 wheel 或者 sudo 组中的成员,只要在他们运行的命令之前添加 `sudo`,就可以以 root 的权限去运行命令。 +这行基本的配置意味着任何在 wheel 或者 sudo 组中的成员只要在他们运行的命令之前添加 `sudo`,就可以以 root 的权限去运行命令。 -你可以向 `sudoers` 文件中添加更多有限的特权 —— 也许给特定用户运行一两个 root 的命令。如果这样做,您还应定期查看 `/etc/sudoers` 文件以评估用户拥有的权限,以及仍然需要提供的权限。 +你可以向 sudoers 文件中添加更多有限的权限 —— 也许给特定用户几个能以 root 运行的命令。如果你是这样做的,你应该定期查看 `/etc/sudoers` 文件以评估用户拥有的权限,以及仍然需要提供的权限。 -在下面显示的命令中,我们看到在 `/etc/sudoers` 中匹配到的行。在这个文件中最有趣的行是,包含能使用 `sudo` 运行命令的路径设置,以及两个允许通过 `sudo` 运行命令的组。像刚才提到的那样,单个用户可以通过包含在 `sudoers` 文件中来获得权限,但是更有实际意义的方法是通过组成员来定义各自的权限。 +在下面显示的命令中,我们过滤了 `/etc/sudoers` 中有效的配置行。其中最有意思的是,它包含了能使用 `sudo` 运行命令的路径设置,以及两个允许通过 `sudo` 运行命令的组。像刚才提到的那样,单个用户可以通过包含在 sudoers 
文件中来获得权限,但是更有实际意义的方法是通过组成员来定义各自的权限。 ``` # cat /etc/sudoers | grep -v "^#" | grep -v "^$" Defaults env_reset Defaults mail_badpass Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin" -root ALL=(ALL:ALL) ALL -%admin ALL=(ALL) ALL <== admin group -%sudo ALL=(ALL:ALL) ALL <== sudo group -``` +root ALL=(ALL:ALL) ALL +%admin ALL=(ALL) ALL <== admin group +%sudo ALL=(ALL:ALL) ALL <== sudo group +``` -### 登录检查 +### 登录检查 -你可以通过以下命令查看用户的上一次登录: +你可以通过以下命令查看用户的上一次登录: ``` # last jdoe jdoe pts/18 192.168.0.11 Thu Sep 14 08:44 - 11:48 (00:04) jdoe pts/18 192.168.0.11 Thu Sep 14 13:43 - 18:44 (00:00) jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:00) -``` +``` -如果你想查看每一个用户上一次的登录情况,你可以通过一个像这样的循环来运行 `last` 命令: +如果你想查看每一个用户上一次的登录情况,你可以通过一个像这样的循环来运行 `last` 命令: ``` $ for user in `ls /home`; do last $user | head -1; done @@ -157,21 +158,21 @@ jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:03) rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 (00:00) shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged in -``` - -此命令仅显示自当前 `wtmp` 文件变为活跃状态以来已登录的用户。空白行表示用户自那以后从未登录过,但没有将其调出。一个更好的命令是过滤掉在这期间从未登录过的用户的显示: - ``` -$ for user in `ls /home`; do echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}'; done + +此命令仅显示自当前 wtmp 文件登录过的用户。空白行表示用户自那以后从未登录过,但没有将他们显示出来。一个更好的命令可以明确地显示这期间从未登录过的用户: + +``` +$ for user in `ls /home`; do echo -n "$user"; last $user | head -1 | awk '{print substr($0,40)}'; done dhayes jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 peanut pts/19 192.168.0.29 Mon Sep 11 09:15 - 17:11 rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged tsmith -``` +``` -这个命令会打印很多,但是可以通过一个脚本使它更加清晰易用。 +这个命令要打很多字,但是可以通过一个脚本使它更加清晰易用。 ``` #!/bin/bash @@ -180,13 +181,13 @@ for user in `ls /home` do echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}' done -``` +``` -有时,此类信息可以提醒您用户角色的变动,表明他们可能不再需要相关帐户。 +有时这些信息可以提醒你用户角色的变动,表明他们可能不再需要相关帐户了。 -### 与用户沟通 +### 与用户沟通 -Linux 
提供了许多方法和用户沟通。你可以向 `/etc/motd` 文件中添加信息,当用户从终端登录到服务器时,将会显示这些信息。你也可以通过例如 `write`(通知单个用户)或者 `wall`(`write` 给所有已登录的用户)命令发送通知。 +Linux 提供了许多和用户沟通的方法。你可以向 `/etc/motd` 文件中添加信息,当用户从终端登录到服务器时,将会显示这些信息。你也可以通过例如 `write`(通知单个用户)或者 `wall`(write 给所有已登录的用户)命令发送通知。 ``` $ wall System will go down in one hour @@ -194,30 +195,30 @@ $ wall System will go down in one hour Broadcast message from shs@stinkbug (pts/17) (Thu Sep 14 14:04:16 2017): System will go down in one hour -``` +``` -重要的通知应该通过多个管道传递,因为很难预测用户实际会注意到什么。mesage-of-the-day(motd),`wall` 和 email 通知可以吸引用户大部分的注意力。 +重要的通知应该通过多个渠道传达,因为很难预测用户实际会注意到什么。mesage-of-the-day(motd),`wall` 和 email 通知可以吸引用户大部分的注意力。 -### 注意日志文件 +### 注意日志文件 -更多地注意日志文件上也可以帮你理解用户活动。事实上,`/var/log/auth.log` 文件将会为你显示用户的登录和注销活动,组的创建等。`/var/log/message` 或者 `/var/log/syslog` 文件将会告诉你更多有关系统活动的事情。 +多注意日志文件也可以帮你理解用户的活动情况。尤其 `/var/log/auth.log` 文件将会显示用户的登录和注销活动,组的创建记录等。`/var/log/message` 或者 `/var/log/syslog` 文件将会告诉你更多有关系统活动的日志。 -### 追踪问题和请求 +### 追踪问题和需求 -无论你是否在 Linux 系统上安装了票务系统,跟踪用户遇到的问题以及他们提出的请求都非常重要。如果请求的一部分久久不见回应,用户必然不会高兴。即使是纸质日志也可能是有用的,或者更好的是,有一个电子表格,可以让你注意到哪些问题仍然悬而未决,以及问题的根本原因是什么。确保解决问题和请求非常重要,日志还可以帮助您记住你必须采取的措施来解决几个月甚至几年后重新出现的问题。 +无论你是否在 Linux 系统上安装了事件跟踪系统,跟踪用户遇到的问题以及他们提出的需求都非常重要。如果需求的一部分久久不见回应,用户必然不会高兴。即使是记录在纸上也是有用的,或者最好有个电子表格,这可以让你注意到哪些问题仍然悬而未决,以及问题的根本原因是什么。确认问题和需求非常重要,记录还可以帮助你记住你必须采取的措施来解决几个月甚至几年后重新出现的问题。 -### 总结 +### 总结 -在繁忙的服务器上管理用户帐户部分取决于从配置良好的默认值开始,部分取决于监控用户活动和遇到的问题。如果用户觉得你对他们的顾虑有所回应并且知道在需要系统升级时会发生什么,他们可能会很高兴。 +在繁忙的服务器上管理用户帐号,部分取决于配置良好的默认值,部分取决于监控用户活动和遇到的问题。如果用户觉得你对他们的顾虑有所回应并且知道在需要系统升级时会发生什么,他们可能会很高兴。 ------------ +-------------------------------------------------------------------------------- via: https://www.networkworld.com/article/3225109/linux/managing-users-on-linux-systems.html 作者:[Sandra Henry-Stocker][a] 译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy)、[pityonline](https://github.com/pityonline) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 
-[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ From e6b5af53a1e45821c6bdebe6547528e6cc1133ed Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 12 Oct 2018 09:30:45 +0800 Subject: [PATCH 360/736] translated --- ...cue (Single User mode) - Emergency Mode.md | 90 ------------------- ...cue (Single User mode) - Emergency Mode.md | 88 ++++++++++++++++++ 2 files changed, 88 insertions(+), 90 deletions(-) delete mode 100644 sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md create mode 100644 translated/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md diff --git a/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md b/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md deleted file mode 100644 index 7a3702a124..0000000000 --- a/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md +++ /dev/null @@ -1,90 +0,0 @@ -translating---geekpi - -How to Boot Ubuntu 18.04 / Debian 9 Server in Rescue (Single User mode) / Emergency Mode -====== -Booting a Linux Server into a single user mode or **rescue mode** is one of the important troubleshooting that a Linux admin usually follow while recovering the server from critical conditions. In Ubuntu 18.04 and Debian 9, single user mode is known as a rescue mode. - -Apart from the rescue mode, Linux servers can be booted in **emergency mode** , the main difference between them is that, emergency mode loads a minimal environment with read only root file system file system, also it does not enable any network or other services. But rescue mode try to mount all the local file systems & try to start some important services including network. 
- -In this article we will discuss how we can boot our Ubuntu 18.04 LTS / Debian 9 Server in rescue mode and emergency mode. - -#### Booting Ubuntu 18.04 LTS Server in Single User / Rescue Mode: - -Reboot your server and go to boot loader (Grub) screen and Select “ **Ubuntu** “, bootloader screen would look like below, - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Bootloader-Screen-Ubuntu18-04-Server.jpg) - -Press “ **e** ” and then go the end of line which starts with word “ **linux** ” and append “ **systemd.unit=rescue.target** “. Remove the word “ **$vt_handoff** ” if it exists. - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-target-ubuntu18-04.jpg) - -Now Press Ctrl-x or F10 to boot, - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-mode-ubuntu18-04.jpg) - -Now press enter and then you will get the shell where all file systems will be mounted in read-write mode and do the troubleshooting. Once you are done with troubleshooting, you can reboot your server using “ **reboot** ” command. - -#### Booting Ubuntu 18.04 LTS Server in emergency mode - -Reboot the server and go the boot loader screen and select “ **Ubuntu** ” and then press “ **e** ” and go to the end of line which starts with word linux, and append “ **systemd.unit=emergency.target** ” - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergecny-target-ubuntu18-04-server.jpg) - -Now Press Ctlr-x or F10 to boot in emergency mode, you will get a shell and do the troubleshooting from there. As we had already discussed that in emergency mode, file systems will be mounted in read-only mode and also there will be no networking in this mode, - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg) - -Use below command to mount the root file system in read-write mode, - -``` -# mount -o remount,rw / - -``` - -Similarly, you can remount rest of file systems in read-write mode . 
- -#### Booting Debian 9 into Rescue & Emergency Mode - -Reboot your Debian 9.x server and go to grub screen and select “ **Debian GNU/Linux** ” - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Debian9-Grub-Screen.jpg) - -Press “ **e** ” and go to end of line which starts with word linux and append “ **systemd.unit=rescue.target** ” to boot the system in rescue mode and to boot in emergency mode then append “ **systemd.unit=emergency.target** ” - -#### Rescue mode : - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-mode-Debian9.jpg) - -Now press Ctrl-x or F10 to boot in rescue mode - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-Mode-Shell-Debian9.jpg) - -Press Enter to get the shell and from there you can start troubleshooting. - -#### Emergency Mode: - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-target-grub-debian9.jpg) - -Now press ctrl-x or F10 to boot your system in emergency mode - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg) - -Press enter to get the shell and use “ **mount -o remount,rw /** ” command to mount the root file system in read-write mode. - -**Note:** In case root password is already set in Ubuntu 18.04 and Debian 9 Server then you must enter root password to get shell in rescue and emergency mode - -That’s all from this article, please do share your feedback and comments in case you like this article. 
- - --------------------------------------------------------------------------------- - -via: https://www.linuxtechi.com/boot-ubuntu-18-04-debian-9-rescue-emergency-mode/ - -作者:[Pradeep Kumar][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.linuxtechi.com/author/pradeep/ diff --git a/translated/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md b/translated/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md new file mode 100644 index 0000000000..b1e566f1a9 --- /dev/null +++ b/translated/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md @@ -0,0 +1,88 @@ +如何在救援(单用户模式)/紧急模式下启动 Ubuntu 18.04/Debian 9 服务器 +====== +将 Linux 服务器引导到单用户模式或**救援模式**是 Linux 管理员在关键时刻恢复服务器时通常使用的重要故障排除方法之一。在 Ubuntu 18.04 和 Debian 9 中,单用户模式被称为救援模式。 + +除了救援模式外,Linux 服务器可以在**紧急模式**下启动,它们之间的主要区别在于,紧急模式加载了带有只读根文件系统文件系统的最小环境,也没有启用任何网络或其他服务。但救援模式尝试挂载所有本地文件系统并尝试启动一些重要的服务,包括网络。 + +在本文中,我们将讨论如何在救援模式和紧急模式下启动 Ubuntu 18.04 LTS/Debian 9 服务器。 + +#### 在单用户/救援模式下启动 Ubuntu 18.04 LTS 服务器: + +重启服务器并进入启动加载程序 (Grub) 屏幕并选择 “**Ubuntu**”,启动加载器页面如下所示, + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Bootloader-Screen-Ubuntu18-04-Server.jpg) + +按下 “**e**”,然后移动到以 “**linux**” 开头的行尾,并添加 “**systemd.unit=rescue.target**”。如果存在单词 “**$vt_handoff**” 就删除它。 + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-target-ubuntu18-04.jpg) + +现在按 Ctrl-x 或 F10 启动, + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-mode-ubuntu18-04.jpg) + +现在按回车键,然后你将得到所有文件系统都以读写模式挂载的 shell 并进行故障排除。完成故障排除后,可以使用 “**reboot**” 命令重新启动服务器。 + +#### 在紧急模式下启动 Ubuntu 18.04 LTS 服务器 + +重启服务器并进入启动加载程序页面并选择 “**Ubuntu**”,然后按 “**e**” 并移动到以 linux 开头的行尾,并添加 
“**systemd.unit=emergency.target**“。 + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergecny-target-ubuntu18-04-server.jpg) + +现在按 Ctlr-x 或 F10 以紧急模式启动,你将获得一个 shell 并从那里进行故障排除。正如我们已经讨论过的那样,在紧急模式下,文件系统将以只读模式挂载,并且在这种模式下也不会有网络, + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg) + +使用以下命令将根文件系统挂载到读写模式, + +``` +# mount -o remount,rw / + +``` + +同样,你可以在读写模式下重新挂载其余文件系统。 + +#### 将 Debian 9 引导到救援和紧急模式 + +重启 Debian 9.x 服务器并进入 grub页面选择 “**Debian GNU/Linux**”。 + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Debian9-Grub-Screen.jpg) + +按下 “**e**” 并移动到 linux 开头的行尾并添加 “**systemd.unit=rescue.target**” 以在救援模式下启动系统, 要在紧急模式下启动,那就添加 “**systemd.unit=emergency.target**“ + +#### 救援模式: + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-mode-Debian9.jpg) + +现在按 Ctrl-x 或 F10 以救援模式启动 + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-Mode-Shell-Debian9.jpg) + +按下回车键以获取 shell,然后从这里开始故障排除。 + +#### 紧急模式: + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-target-grub-debian9.jpg) + +现在按下 ctrl-x 或 F10 以紧急模式启动系统 + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg) + +按下回车获取 shell 并使用 “**mount -o remount,rw /**” 命令以读写模式挂载根文件系统。 + +**注意:**如果已经在 Ubuntu 18.04 和 Debian 9 Server 中设置了 root 密码,那么你必须输入 root 密码才能在救援和紧急模式下获得 shell + +就是这些了,如果您喜欢这篇文章,请分享你的反馈和评论。 + + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/boot-ubuntu-18-04-debian-9-rescue-emergency-mode/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linuxtechi.com/author/pradeep/ From 384670bc9b055a9abe14385022c6b0772de0eb17 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 12 Oct 2018 09:42:39 +0800 Subject: 
[PATCH 361/736] translating --- ...ts of Equations into LaTeX Instantly With This Nifty Tool.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md b/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md index f2c17ff7c2..8e9abf4b52 100644 --- a/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md +++ b/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md @@ -1,3 +1,5 @@ +translating---geekpi + Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool ====== **Mathpix is a nifty little tool that allows you to take screenshots of complex mathematical equations and instantly converts it into LaTeX editable text.** From d67a25df227da164ac183dd6557a11c4c474fd1b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 09:54:47 +0800 Subject: [PATCH 362/736] Revert "PUB:20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md" This reverts commit 19f969f0aa4445ddddbd5c985c0091f91183080f. 
--- ...eadless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {published => translated/tech}/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md (100%) diff --git a/published/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md similarity index 100% rename from published/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md rename to translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md From 5e40840a28e355fec4b30f7877f3ed5436a01efe Mon Sep 17 00:00:00 2001 From: sd886393 Date: Fri, 12 Oct 2018 09:55:20 +0800 Subject: [PATCH 363/736] =?UTF-8?q?=E3=80=90=E8=AE=A4=E9=A2=86=E3=80=91201?= =?UTF-8?q?81010=20Cloc=20-=20Count=20The=20Lines=20Of=20Source=20Code=20I?= =?UTF-8?q?n=20Many=20Programming=20Languages?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...unt The Lines Of Source Code In Many Programming Languages.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md index 1cefdaaa4f..44acf43298 100644 --- a/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md +++ b/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -1,3 +1,4 @@ +translating by littleji Cloc – Count The Lines Of Source Code In Many Programming Languages ====== From b9182f512811f17f745741d40ab9399c1a2e93f6 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 09:55:36 +0800 Subject: [PATCH 364/736] PUB:20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md @qhwdw 
https://linux.cn/article-10105-1.htm --- ...stall Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md (100%) diff --git a/translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/published/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md similarity index 100% rename from translated/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md rename to published/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md From 6662cd36dd388a6eda2c0257889c7743569d1814 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 10:34:44 +0800 Subject: [PATCH 365/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Design=20faster?= =?UTF-8?q?=20web=20pages,=20part=201:=20Image=20compression?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...er web pages, part 1- Image compression.md | 183 ++++++++++++++++++ 1 file changed, 183 insertions(+) create mode 100644 sources/tech/20181010 Design faster web pages, part 1- Image compression.md diff --git a/sources/tech/20181010 Design faster web pages, part 1- Image compression.md b/sources/tech/20181010 Design faster web pages, part 1- Image compression.md new file mode 100644 index 0000000000..f78912cb81 --- /dev/null +++ b/sources/tech/20181010 Design faster web pages, part 1- Image compression.md @@ -0,0 +1,183 @@ +Design faster web pages, part 1: Image compression +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/02/fasterwebsites1-816x345.jpg) + +Lots of web developers want to achieve fast loading web pages. As more page views come from mobile devices, making websites look better on smaller screens using responsive design is just one side of the coin. 
Browser Calories can make the difference in loading times, which satisfies not just the user but search engines that rank on loading speed. This article series covers how to slim down your web pages with tools Fedora offers. + +### Preparation + +Before you start to slim down your web pages, you need to identify the core issues. For this, you can use [Browserdiet][1]. It’s a browser add-on available for Firefox, Opera, Chrome, and other browsers. It analyzes the performance values of the actual open web page, so you know where to start slimming down. + +Next you’ll need some pages to work on. The example screenshot shows a test of [getfedora.org][2]. At first it looks very simple and responsive. + +![Browser Diet - values of getfedora.org][3] + +However, BrowserDiet’s page analysis shows there are 1.8 MB in files downloaded. Therefore, there’s some work to do! + +### Web optimization + +There are over 281 KB of JavaScript files, 203 KB more in CSS files, and 1.2 MB in images. Start with the biggest issue — the images. The tool set you need for this is GIMP, ImageMagick, and optipng. You can easily install them using the following command: + +``` +sudo dnf install gimp imagemagick optipng + +``` + +For example, take the [following file][4] which is 6.4 KB: + +![][4] + +First, use the file command to get some basic information about this image: + +``` +$ file cinnamon.png +cinnamon.png: PNG image data, 60 x 60, 8-bit/color RGBA, non-interlaced + +``` + +The image — which is only in grey and white — is saved in 8-bit/color RGBA mode. That’s not as efficient as it could be. + +Start GIMP so you can set a more appropriate color mode. Open cinnamon.png in GIMP. Then go to Image > Mode and set it to greyscale. Export the image as PNG with compression factor 9. All other settings in the export dialog should be the default.
+ +``` +$ file cinnamon.png +cinnamon.png: PNG image data, 60 x 60, 8-bit gray+alpha, non-interlaced + +``` + +The output shows the file’s now in 8-bit gray+alpha mode. The file size has shrunk from 6.4 KB to 2.8 KB. That’s already only 43.75% of the original size. But there’s more you can do! + +You can also use the ImageMagick tool identify to provide more information about the image. + +``` +$ identify cinnamon.png +cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000 + +``` + +This tells you the file is 2831 bytes. Jump back into GIMP, and export the file. In the export dialog disable the storing of the time stamp and the alpha channel color values to reduce this a little more. Now the file output shows: + +``` +$ identify cinnamon.png +cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2798B 0.000u 0:00.000 + +``` + +Next, use optipng to losslessly optimize your PNG images. There are other tools that do similar things, including **advdef** (which is part of advancecomp), **pngquant** and **pngcrush**. + +Run optipng on your file. Note that this will replace your original: + +``` +$ optipng -o7 cinnamon.png +** Processing: cinnamon.png +60x60 pixels, 2x8 bits/pixel, grayscale+alpha +Reducing image to 8 bits/pixel, grayscale +Input IDAT size = 2720 bytes +Input file size = 2812 bytes + +Trying: + zc = 9 zm = 8 zs = 0 f = 0 IDAT size = 1922 + zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920 + +Selecting parameters: + zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920 + +Output IDAT size = 1920 bytes (800 bytes decrease) +Output file size = 2012 bytes (800 bytes = 28.45% decrease) + +``` + +The option -o7 is the slowest to process, but provides the best end results. You’ve knocked 800 more bytes off the file size, which is now 2012 bytes. + +To optimize all of the PNGs in a directory, use this command: + +``` +$ optipng -o7 -dir=<output-directory> *.png + +``` + +The option -dir lets you give a target directory for the output.
If this option is not used, optipng would overwrite the original images. + +### Choosing the right file format + +When it comes to pictures for use on the internet, you have the choice between: + + ++ [JPG or JPEG][9] ++ [GIF][10] ++ [PNG][11] ++ [aPNG][12] ++ [JPG-LS][13] ++ [JPG 2000 or JP2][14] ++ [SVG][15] + + +JPG-LS and JPG 2000 are not widely used. Only a few digital cameras support these formats, so they can be ignored. aPNG is an animated PNG, and not widely used either. + +You could save a few more bytes through changing the compression rate or choosing another file format. The first option you can’t do in GIMP, as it’s already using the highest compression rate. As there are no [alpha channels][5] in the picture, you can choose JPG as the file format instead. For now use the default value of 90% quality — you could change it down to 85%, but then alias effects become visible. This saves a few bytes more: + +``` +$ identify cinnamon.jpg +cinnamon.jpg JPEG 60x60 60x60+0+0 8-bit sRGB 2676B 0.000u 0:00.000 + +``` + +This conversion to the right color space, plus choosing JPG as the file format, alone brought the file size down from 23 KB to 12.3 KB, a reduction of nearly 50%. + + +#### PNG vs. JPG: quality and compression rate + +So what about the rest of the images? This method would work for all the other pictures, except the Fedora “flavor” logos and the logos for the four foundations. Those are presented on a white background. + +One of the main differences between PNG and JPG is that JPG has no alpha channel. Therefore it can’t handle transparency. If you rework these images by using a JPG on a white background, you can reduce the file size from 40.7 KB to 28.3 KB. + +Now there are four more images you can rework: the backgrounds. For the grey background, set the mode to greyscale again. With this bigger picture, the savings are also bigger. It shrinks from 216.2 KB to 51.0 KB — it’s now barely 25% of its original size.
All in all, you’ve shrunk 481.1 KB down to 191.5 KB — only 39.8% of the starting size. + +#### Quality vs. Quantity + +Another difference between PNG and JPG is the quality. PNG is a lossless compressed raster graphics format. But JPG loses size through compression, and thus affects quality. That doesn’t mean you shouldn’t use JPG, though. But you have to find a balance between file size and quality. + +### Achievement + +This is the end of Part 1. After following the techniques described above, here are the results: + +![][6] + +You brought image size down to 488.9 KB versus 1.2MB at the start. That’s only about a third of the size, just through optimizing with optipng. This page can probably be made to load faster still. On the scale from snail to hypersonic, it’s not reached racing car speed yet! + +Finally you can check the results in [Google Insights][7], for example: + +![][8] + +In the Mobile area the page gathered 10 points on scoring, but is still in the Medium sector. It looks totally different for the Desktop, which has gone from 62/100 to 91/100 and went up to Good. As mentioned before, this test isn’t the be all and end all. Consider scores such as these to help you go in the right direction. Keep in mind you’re optimizing for the user experience, and not for a search engine. 
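+ +To apply this whole workflow to a folder of images in one go, the steps above can be wrapped in a small shell function. This is only a sketch under a few assumptions: the function name is made up for illustration, optipng and ImageMagick (identify, convert) must be installed, and the alpha-channel check relies on ImageMagick 6 printing True/False for the %A format escape:

```shell
# optimize_images: recompress every PNG in a directory in place with optipng,
# and additionally write a ~90% quality JPG copy of each image that has no
# alpha channel. Illustrative sketch only -- not a tool from this article.
optimize_images() {
    dir="$1"
    if [ -z "$dir" ] || [ ! -d "$dir" ]; then
        echo "usage: optimize_images <image-directory>" >&2
        return 1
    fi
    for png in "$dir"/*.png; do
        [ -e "$png" ] || continue        # the directory had no PNGs
        optipng -o7 -quiet "$png"        # lossless recompression, in place
        # ImageMagick 6 prints True/False for %A; IM7 output differs slightly.
        if [ "$(identify -format '%A' "$png")" = "False" ]; then
            convert "$png" -quality 90 "${png%.png}.jpg"
        fi
    done
}
```

Calling optimize_images on your image directory then squeezes every PNG and leaves a JPG copy next to each opaque one, so you can keep whichever file turned out smaller.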
+ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression/ + +作者:[Sirko Kemter][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/gnokii/ +[b]: https://github.com/lujun9972 +[1]: https://browserdiet.com/calories/ +[2]: http://getfedora.org +[3]: https://fedoramagazine.org/wp-content/uploads/2018/02/ff-addon-diet.jpg +[4]: https://getfedora.org/static/images/cinnamon.png +[5]: https://www.webopedia.com/TERM/A/alpha_channel.html +[6]: https://fedoramagazine.org/wp-content/uploads/2018/02/ff-addon-diet-i.jpg +[7]: https://developers.google.com/speed/pagespeed/insights/?url=getfedora.org&tab=mobile +[8]: https://fedoramagazine.org/wp-content/uploads/2018/02/PageSpeed_Insights.png +[9]: https://en.wikipedia.org/wiki/JPEG +[10]: https://en.wikipedia.org/wiki/GIF +[11]: https://en.wikipedia.org/wiki/Portable_Network_Graphics +[12]: https://en.wikipedia.org/wiki/APNG +[13]: https://en.wikipedia.org/wiki/JPEG_2000 +[14]: https://en.wikipedia.org/wiki/JPEG_2000 +[15]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics From 67581225c61e3783738ff53638da8c05c2c408b2 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 10:35:49 +0800 Subject: [PATCH 366/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20The=20First=20Bet?= =?UTF-8?q?a=20of=20Haiku=20is=20Released=20After=2016=20Years=20of=20Deve?= =?UTF-8?q?lopment?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Released After 16 Years of Development.md | 87 +++++++++++++++++++ 1 file changed, 87 insertions(+) create mode 100644 sources/tech/20181011 The First Beta of Haiku is Released After 16 Years of Development.md diff --git a/sources/tech/20181011 The First Beta of Haiku is Released After 16 Years of Development.md b/sources/tech/20181011 The First Beta of Haiku is Released After 16 Years of Development.md new file mode 100644 index 0000000000..b6daaef053 --- /dev/null +++ b/sources/tech/20181011 The First Beta of Haiku is Released After 16 Years of Development.md @@ -0,0 +1,87 @@ +The First Beta of Haiku is Released After 16 Years of Development +====== +There are a number of small operating systems out there that are designed to replicate the past. Haiku is one of those. We will look to see where Haiku came from and what the new release has to offer. + +![Haiku OS desktop screenshot][1]Haiku desktop + +### What is Haiku? + +Haiku’s history begins with the now defunct [Be Inc][2]. Be Inc was founded by former Apple executive [Jean-Louis Gassée][3] after he was ousted by CEO [John Sculley][4]. Gassée wanted to create a new operating system from the ground up. BeOS was created with digital media work in mind and was designed to take advantage of the most modern hardware of the time. Originally, Be Inc attempted to create their own platform encompassing both hardware and software. The result was called the [BeBox][5]. After BeBox failed to sell well, Be turned their attention to BeOS. + +In the 1990s, Apple was looking for a new operating system to replace the aging Classic Mac OS. The two contenders were Gassée’s BeOS and Steve Jobs’ NeXTSTEP. In the end, Apple went with NeXTSTEP. Be tried to license BeOS to hardware makers, but [in at least one case][6] Microsoft threatened to revoke a manufacturer’s Windows license if they sold BeOS machines. Eventually, Be Inc was sold to Palm in 2001 for $11 million. BeOS was subsequently discontinued. 
+ +Following the news of Palm’s purchase, a number of loyal fans decided they wanted to keep the operating system alive. The original name of the project was OpenBeOS, but it was changed to Haiku to avoid infringing on Palm’s trademarks. The name is a reference to the [haikus][7] used as error messages by many of the applications. Haiku is completely written from scratch and is compatible with BeOS. + +### Why Haiku? + +According to the project’s website, [Haiku][8] “is a fast, efficient, simple to use, easy to learn, and yet very powerful system for computer users of all levels”. Haiku comes with a kernel that has been customized for performance. Like FreeBSD, there is a “single team writing everything from the kernel, drivers, userland services, toolkit, and graphics stack to the included desktop applications and preflets”. + +### New Features in Haiku Beta Release + +A number of new features have been introduced since the release of Alpha 4.1. (Please note that Haiku is a passion project and all the devs are part-time, so they can’t spend as much time working on Haiku as they would like.) + +![Haiku OS software][9] +HaikuDepot, Haiku’s package manager + +One of the biggest features is the inclusion of a complete package management system. HaikuDepot allows you to sort through many applications. Many are built specifically for Haiku, but a number have been ported to the platform, such as [LibreOffice][10], [Otter Browser][11], and [Calligra][12]. Interestingly, each Haiku package is [“a special type of compressed filesystem image, which is ‘mounted’ upon installation”][13]. There is also a command line interface for package management named `pkgman`. + +Another big feature is an upgraded browser. Haiku was able to hire a developer to work full-time for a year to improve the performance of WebPositive, the built-in browser. This included an update to a newer version of WebKit. WebPositive will now play YouTube videos properly.
+ +![Haiku OS WebPositive browser][14] +WebPositive, Haiku’s built-in browser + +Other features include: + + * A completely rewritten network preflet + * User interface cleanup + * Media subsystem improvements, including better streaming support, HDA driver improvements, and FFmpeg decoder plugin improvements + * Native RemoteDesktop improved + * Add EFI bootloader and GPT support + * Updated Ethernet & WiFi drivers + * Updated filesystem drivers + * General system stabilization + * Experimental Bluetooth stack + + + +### Thoughts on Haiku OS + +I have been following Haiku for many years. I’ve installed and played with the nightly builds a dozen times over the last couple of years. I even took some time to start learning one of its programming languages, so that I could write apps. But I got busy with other things. + +I’m very conflicted about it. I like Haiku because it is a neat non-Linux project, but it is only just getting features that everyone else takes for granted, like a package manager. + +If you’ve got a couple of minutes, download the [ISO][15] and install it on the virtual machine of your choice. You just might like it. + +Have you ever used Haiku or BeOS? If so, what are your favorite features? Let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][16]. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/haiku-os-release/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/haiku.jpg +[2]: https://en.wikipedia.org/wiki/Be_Inc. 
+[3]: https://en.wikipedia.org/wiki/Jean-Louis_Gass%C3%A9e +[4]: https://en.wikipedia.org/wiki/John_Sculley +[5]: https://en.wikipedia.org/wiki/BeBox +[6]: https://birdhouse.org/beos/byte/30-bootloader/ +[7]: https://en.wikipedia.org/wiki/Haiku +[8]: https://www.haiku-os.org/about/ +[9]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/haiku-depot.png +[10]: https://www.libreoffice.org/ +[11]: https://itsfoss.com/otter-browser-review/ +[12]: https://www.calligra.org/ +[13]: https://www.haiku-os.org/get-haiku/release-notes/ +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/webpositive.jpg +[15]: https://www.haiku-os.org/get-haiku +[16]: http://reddit.com/r/linuxusersgroup From 4e4735c84c4abd4f9a4b5114db3699089222934f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 10:37:09 +0800 Subject: [PATCH 367/736] PRF:20170810 How we built our first full-stack JavaScript web app in three weeks.md @BriFuture --- ...stack JavaScript web app in three weeks.md | 56 +++++++------------ 1 file changed, 21 insertions(+), 35 deletions(-) diff --git a/translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md b/translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md index 90448211c3..e74aa85bda 100644 --- a/translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md +++ b/translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md @@ -1,14 +1,15 @@ 三周内构建 JavaScript 全栈 web 应用 -============================================================ +=========== ![](https://cdn-images-1.medium.com/max/2000/1*PgKBpQHRUgqpXcxtyehPZg.png) -应用 Align 中,用户主页的控制面板 + +*应用 Align 中,用户主页的控制面板* ### 从构思到部署应用程序的简单分步指南 -我在 Grace Hopper Program 为期三个月的编码训练营即将结束,实际上这篇文章的标题有些纰漏 —— 现在我已经构建了 _三个_ 全栈应用:[从零开始的电子商店(an e-commerce store from scratch)][3]、我个人的 [私人黑客马拉松项目(personal hackathon 
project)][4],还有这个“三周的结业项目”。这个项目是迄今为止强度最大的 —— 我和另外两名队友共同花费三周的时光 —— 而它也是我在训练营中最引以为豪的成就。这是我目前所构建和涉及的第一款稳定且复杂的应用。 +我在 Grace Hopper Program 为期三个月的编码训练营即将结束,实际上这篇文章的标题有些纰漏 —— 现在我已经构建了 _三个_ 全栈应用:[从零开始的电子商店][3]、我个人的 [私人黑客马拉松项目][4],还有这个“三周的结业项目”。这个项目是迄今为止强度最大的 —— 我和另外两名队友共同花费三周的时光 —— 而它也是我在训练营中最引以为豪的成就。这是我目前所构建和涉及的第一款稳定且复杂的应用。 -如大多数开发者所知,即使你“知道怎么编写代码”,但真正要制作第一款全栈的应用却是非常困难的。JavaScript 生态系统出奇的大:有包管理器,模块,构建工具,转译器,数据库,库文件,还要对上述所有东西进行选择,难怪如此多的编程新手除了 Codecademy 的教程外,做不了任何东西。这就是为什么我想让你体验这个决策的分布教程,跟着我们队伍的脚印,构建可用的应用。 +如大多数开发者所知,即使你“知道怎么编写代码”,但真正要制作第一款全栈的应用却是非常困难的。JavaScript 生态系统出奇的大:有包管理器、模块、构建工具、转译器、数据库、库文件,还要对上述所有东西进行选择,难怪如此多的编程新手除了 Codecademy 的教程外,做不了任何东西。这就是为什么我想让你体验这个决策的分布教程,跟着我们队伍的脚印,构建可用的应用。 * * * @@ -38,12 +39,8 @@ ![](https://cdn-images-1.medium.com/max/400/1*r5FBoa8JsYOoJihDgrpzhg.jpeg) - - ![](https://cdn-images-1.medium.com/max/400/1*0O8ZWiyUgWm0b8wEiHhuPw.jpeg) - - ![](https://cdn-images-1.medium.com/max/400/1*y9Q5v-sF0PWmkhthcW338g.jpeg) 这些骨架确保我们意见统一,提供了可预见的蓝图,让我们向着计划的方向努力。 @@ -53,35 +50,32 @@ 到了设计数据结构的时候。基于我们的示意图和用户故事,我们在 Google doc 中制作了一个清单,它包含我们将会需要的模型和每个模型应该包含的属性。我们知道需要 “目标(goal)” 模型、“用户(user)”模型、“里程碑(milestone)”模型、“记录(checkin)”模型还有最后的“资源(resource)”模型和“上传(upload)”模型, ![](https://cdn-images-1.medium.com/max/800/1*oA3mzyixVzsvnN_egw1xwg.png) -最初的数据模型结构 + +*最初的数据模型结构* 在正式确定好这些模型后,我们需要选择某种 _类型_ 的数据库:“关系型的”还是“非关系型的”(也就是“SQL”还是“NoSQL”)。由于基于表的 SQL 数据库需要预定义的格式,而基于文档的 NoSQL 数据库却可以用动态格式描述非结构化数据。 对于我们这个情况,用 SQL 型还是 No-SQL 型的数据库没多大影响,由于下列原因,我们最终选择了 Google 的 NoSQL 云数据库 Firebase: 1. 它能够把用户上传的图片保存在云端并存储起来 - 2. 它包含 WebSocket 功能,能够实时更新 - 3. 
它能够处理用户验证,并且提供简单的 OAuth 功能。 -我们确定了数据库后,就要理解数据模型之间的关系了。由于 Firebase 是 NoSQL 类型,我们无法创建联合表或者设置像 _"记录 (Checkins)属于目标(Goals)"_ 的从属关系。因此我们需要弄清楚 JSON 树是什么样的,对象是怎样嵌套的(或者不是嵌套的关系)。最终,我们构建了像这样的模型: - +我们确定了数据库后,就要理解数据模型之间的关系了。由于 Firebase 是 NoSQL 类型,我们无法创建联合表或者设置像 _“记录 (Checkins)属于目标(Goals)”_ 的从属关系。因此我们需要弄清楚 JSON 树是什么样的,对象是怎样嵌套的(或者不是嵌套的关系)。最终,我们构建了像这样的模型: ![](https://cdn-images-1.medium.com/max/800/1*py0hQy-XHZWmwff3PM6F1g.png) -我们最终为目标(Goal)对象确定的 Firebase 数据格式。注意里程碑(Milestones)和记录(Checkins)对象嵌套在 Goals 中。 -_(注意: 出于性能考虑,Firebase 更倾向于简单、常规的数据结构, 但对于我们这种情况,需要在数据中进行嵌套,因为我们不会从数据库中获取目标(Goal)却不获取相应的子对象里程碑(Milestones)和记录(Checkins)。)_ +*我们最终为目标(Goal)对象确定的 Firebase 数据格式。注意里程碑(Milestones)和记录(Checkins)对象嵌套在 Goals 中。* + +_(注意: 出于性能考虑,Firebase 更倾向于简单、常规的数据结构, 但对于我们这种情况,需要在数据中进行嵌套,因为我们不会从数据库中获取目标(Goal)却不获取相应的子对象里程碑(Milestones)和记录(Checkins)。)_ ### 第 4 步:设置好 Github 和敏捷开发工作流 我们知道,从一开始就保持井然有序、执行敏捷开发对我们有极大好处。我们设置好 Github 上的仓库,我们无法直接将代码合并到主(master)分支,这迫使我们互相审阅代码。 - ![](https://cdn-images-1.medium.com/max/800/1*5kDNcvJpr2GyZ0YqLauCoQ.png) -我们还在 [Waffle.io][5] 网站上创建了敏捷开发的面板,它是免费的,很容易集成到 Github。我们在 Waffle 面板上罗列出所有用户故事以及需要我们去修复的 bugs。之后当我们开始编码时,我们每个人会为自己正在研究的每一个用户故事创建一个 git 分支,在完成工作后合并这一条条的分支。 - +我们还在 [Waffle.io][5] 网站上创建了敏捷开发的面板,它是免费的,很容易集成到 Github。我们在 Waffle 面板上罗列出所有用户故事以及需要我们去修复的 bug。之后当我们开始编码时,我们每个人会为自己正在研究的每一个用户故事创建一个 git 分支,在完成工作后合并这一条条的分支。 ![](https://cdn-images-1.medium.com/max/800/1*gnWqGwQsdGtpt3WOwe0s_A.gif) @@ -103,9 +97,9 @@ _(注意: 出于性能考虑,Firebase 更倾向于简单、常规的数据结 接下来是为应用创建 “概念证明”,也可以说是实现起来最复杂的基本功能的原型,证明我们的应用 _可以_ 实现。对我们而言,这意味着要找个前端库来实现时间线的渲染,成功连接到 Firebase,显示数据库中的一些种子数据。 - ![](https://cdn-images-1.medium.com/max/800/1*d5Wu3fOlX8Xdqix1RPZWSA.png) -Victory.JS 绘制的简单时间线 + +*Victory.JS 绘制的简单时间线* 我们找到了基于 D3 构建的响应式库 Victory.JS,花了一天时间阅读文档,用 _VictoryLine_ 和 _VictoryScatter_ 组件实现了非常基础的示例,能够可视化地显示数据库中的数据。实际上,这很有用!我们可以开始构建了。 @@ -113,26 +107,16 @@ Victory.JS 绘制的简单时间线 最后,是时候构建出应用中那些令人期待的功能了。取决于你要构建的应用,这一重要步骤会有些明显差异。我们根据所用的框架,编码出不同的用户故事并保存在 Waffle 上。常常需要同时接触前端和后端代码(比如,创建一个前端表格同时要连接到数据库)。我们实现了包含以下这些大大小小的功能: 
-* 能够创建新目标(goals)、里程碑(milestones)和记录(checkins) - +* 能够创建新目标、里程碑和记录 * 能够删除目标,里程碑和记录 - * 能够更改时间线的名称,颜色和详细内容 - * 能够缩放时间线 - * 能够为资源添加链接 - * 能够上传视频 - * 在达到相关目标的里程碑和记录时弹出资源和视频 - * 集成富文本编辑器 - * 用户注册、验证、OAuth 验证 - * 弹出查看时间线选项 - * 加载画面 有各种原因,这一步花了我们很多时间 —— 这一阶段是产生最多优质代码的阶段,每当我们实现了一个功能,就会有更多的事情要完善。 @@ -142,7 +126,8 @@ Victory.JS 绘制的简单时间线 当我们使用 MVP 架构实现了想要的功能,就可以开始清理,对它进行美化了。像表单,菜单和登陆栏等组件,我的团队用的是 Material-UI,不需要很多深层次的设计知识,它也能确保每个组件看上去都很圆润光滑。 ![](https://cdn-images-1.medium.com/max/800/1*PCRFAbsPBNPYhz6cBgWRCw.gif) -这是我制作的最喜爱功能之一了。它美得令人心旷神怡。 + +*这是我制作的最喜爱功能之一了。它美得令人心旷神怡。* 我们花了一点时间来选择颜色方案和编写 CSS ,这让我们在编程中休息了一段美妙的时间。期间我们还设计了 logo 图标,还上传了网站图标。 @@ -169,15 +154,16 @@ Victory.JS 绘制的简单时间线 但是,现在我们感到非常开心,不仅是因为成品,还因为我们从这个过程中获得了难以估量的知识和理解。点击 [这里][7] 查看 Align 应用! ![](https://cdn-images-1.medium.com/max/800/1*KbqmSW-PMjgfWYWS_vGIqg.jpeg) -Align 团队:Sara Kladky (左), Melanie Mohn (中), 还有我自己. + +*Align 团队:Sara Kladky(左),Melanie Mohn(中),还有我自己。* -------------------------------------------------------------------------------- via: https://medium.com/ladies-storm-hackathons/how-we-built-our-first-full-stack-javascript-web-app-in-three-weeks-8a4668dbd67c?imm_mid=0f581a&cmp=em-web-na-na-newsltr_20170816 -作者:[Sophia Ciocca ][a] +作者:[Sophia Ciocca][a] 译者:[BriFuture](https://github.com/BriFuture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 6dede941dd660465df797888b0fd37e8f4bbf28d Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 10:38:34 +0800 Subject: [PATCH 368/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20List?= =?UTF-8?q?=20The=20Enabled/Active=20Repositories=20In=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...he Enabled-Active Repositories In Linux.md | 289 ++++++++++++++++++ 1 file changed, 289 insertions(+) create mode 100644 sources/tech/20181010 How To List The Enabled-Active Repositories In 
Linux.md

diff --git a/sources/tech/20181010 How To List The Enabled-Active Repositories In Linux.md b/sources/tech/20181010 How To List The Enabled-Active Repositories In Linux.md
new file mode 100644
index 0000000000..b4ff872202
--- /dev/null
+++ b/sources/tech/20181010 How To List The Enabled-Active Repositories In Linux.md
@@ -0,0 +1,289 @@
+How To List The Enabled/Active Repositories In Linux
+======
+There are many ways to list the enabled repositories in Linux.
+
+Here we are going to show you a few easy methods to list the active repositories.
+
+It will help you to know which repositories are enabled on your system.
+
+Once you have this information handy, you can add any repository you want if it’s not already enabled.
+
+Say, for example, you would like to enable the `epel` repository; then you first need to check whether the epel repository is already enabled or not. In that case, this tutorial will help you.
+
+### What Is A Repository?
+
+A software repository is a central place that stores the software packages for a particular distribution.
+
+All the Linux distributions maintain their own repositories, and they allow users to retrieve and install packages on their machines.
+
+Each vendor offers a unique package management tool to manage its repositories, with operations such as search, install, update, upgrade and remove.
+
+Most Linux distributions come free of charge, except RHEL and SUSE. To access their repositories you need to buy a subscription. 
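A check like “is `epel` already enabled?” can also be scripted. The snippet below is a minimal, hypothetical sketch: the `repo_enabled` helper and the hard-coded sample listing are inventions for illustration, and on a real system you would pipe in the output of `yum repolist enabled` or `dnf repolist enabled` instead.

```shell
# Sketch: succeed when a repo id appears in a captured repolist-style
# listing. The sample listing is hard-coded here for illustration; on a
# real system you would pipe in "yum repolist enabled" output instead.
repo_list='base/7/x86_64
epel/x86_64
extras/7/x86_64
updates/7/x86_64'

repo_enabled() {
    # Match the repo id at the start of a line, ignoring the arch suffix.
    printf '%s\n' "$repo_list" | grep -q "^$1"
}

if repo_enabled epel; then
    echo "epel is enabled"
else
    echo "epel is not enabled"
fi
```

With a real repository listing piped in instead of the sample text, the same pattern tells you whether you still need to add a repository.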
+
+**Suggested Read :**
+**(#)** [How To Add, Enable And Disable A Repository By Using The DNF/YUM Config Manager Command On Linux][1]
+**(#)** [How To List Installed Packages By Size (Largest) On Linux][2]
+**(#)** [How To View/List The Available Packages Updates In Linux][3]
+**(#)** [How To View A Particular Package Installed/Updated/Upgraded/Removed/Erased Date On Linux][4]
+**(#)** [How To View Detailed Information About A Package In Linux][5]
+**(#)** [How To Search If A Package Is Available On Your Linux Distribution Or Not][6]
+**(#)** [How To List An Available Package Groups In Linux][7]
+**(#)** [Newbies corner – A Graphical frontend tool for Linux Package Manager][8]
+**(#)** [Linux Expert should knows, list of Command line Package Manager & Usage][9]
+
+### How To List The Enabled Repositories on RHEL/CentOS
+
+RHEL & CentOS systems use RPM packages, hence we can use the `Yum Package Manager` to get this information.
+
+YUM stands for Yellowdog Updater, Modified; it is an open-source, command-line, front-end package-management utility for RPM based systems such as Red Hat Enterprise Linux (RHEL) and CentOS.
+
+Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories.
+
+**Suggested Read :** [YUM Command To Manage Packages on RHEL/CentOS Systems][10]
+
+RHEL based systems mainly offer the below three major repositories. These repositories are enabled by default.
+
+ * **`base:`** It contains all the core and base packages.
+ * **`extras:`** It provides additional functionality to CentOS without breaking upstream compatibility or updating base components. It is an upstream repository that also carries additional CentOS packages.
+ * **`updates:`** It offers bug-fix, security and enhancement packages. 
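For scripting, the decorated `yum repolist` output can be reduced to bare repo ids with a short pipeline. This is a hedged sketch: the embedded sample text stands in for a real `yum repolist enabled` run, and the `repo_ids` helper name is an assumption made for the example.

```shell
# Reduce captured "yum repolist" output to bare repo ids: start after
# the header row, drop the summary line, then strip the leading status
# marker (! or *) and the /arch suffix from the first column.
repolist_output='Loaded plugins: fastestmirror
repo id repo name status
!base/7/x86_64 CentOS-7 - Base 9,911
!epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 12,687
!extras/7/x86_64 CentOS-7 - Extras 403
!updates/7/x86_64 CentOS-7 - Updates 1,348
repolist: 24,349'

repo_ids() {
    printf '%s\n' "$repolist_output" |
        awk '/^repo id/ { seen = 1; next }
             /^repolist:/ { next }
             seen { sub(/^[!*]/, ""); sub(/\/.*/, ""); print $1 }'
}

repo_ids   # prints base, epel, extras and updates, one per line
```

The same pipeline works unchanged when you replace the sample variable with live command output.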
+
+
+
+```
+# yum repolist
+or
+# yum repolist enabled
+
+Loaded plugins: fastestmirror
+Determining fastest mirrors
+ * epel: ewr.edge.kernel.org
+repo id repo name status
+!base/7/x86_64 CentOS-7 - Base 9,911
+!epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 12,687
+!extras/7/x86_64 CentOS-7 - Extras 403
+!updates/7/x86_64 CentOS-7 - Updates 1,348
+repolist: 24,349
+
+```
+
+### How To List The Enabled Repositories on Fedora
+
+DNF stands for Dandified yum. It is the next generation of the yum package manager (a fork of Yum) that uses the hawkey/libsolv library for its backend. Aleš Kozumplík started working on DNF in Fedora 18, and it was finally implemented/launched in Fedora 22.
+
+The dnf command is used to install, update, search & remove packages on Fedora 22 and later systems. It automatically resolves dependencies and makes package installation smooth, without any trouble.
+
+Yum was replaced by DNF because of several long-standing problems in Yum that were never solved. Why didn’t he just patch the Yum issues? Aleš Kozumplík explains that patching was technically hard, that the YUM team would not accept the changes immediately, and, most critically, that YUM is 56K lines of code while DNF is only 29K lines. So there was no option for further development, except to fork.
+
+**Suggested Read :** [DNF (Fork of YUM) Command To Manage Packages on Fedora System][11]
+
+Fedora systems mainly offer the below two major repositories. These repositories are enabled by default.
+
+ * **`fedora:`** It contains all the core and base packages.
+ * **`updates:`** It offers bug-fix, security and enhancement packages from the stable release branch.
+
+
+
+```
+# dnf repolist
+or
+# dnf repolist enabled
+
+Last metadata expiration check: 0:02:56 ago on Wed 10 Oct 2018 06:12:22 PM IST. 
+repo id repo name status
+docker-ce-stable Docker CE Stable - x86_64 6
+*fedora Fedora 26 - x86_64 53,912
+home_mhogomchungu mhogomchungu's Home Project (Fedora_25) 19
+home_moritzmolch_gencfsm Gnome Encfs Manager (Fedora_25) 5
+mystro256-gnome-redshift Copr repo for gnome-redshift owned by mystro256 6
+nodesource Node.js Packages for Fedora Linux 26 - x86_64 83
+rabiny-albert Copr repo for albert owned by rabiny 3
+*rpmfusion-free RPM Fusion for Fedora 26 - Free 536
+*rpmfusion-free-updates RPM Fusion for Fedora 26 - Free - Updates 278
+*rpmfusion-nonfree RPM Fusion for Fedora 26 - Nonfree 202
+*rpmfusion-nonfree-updates RPM Fusion for Fedora 26 - Nonfree - Updates 95
+*updates Fedora 26 - x86_64 - Updates 14,595
+
+```
+
+### How To List The Enabled Repositories on Debian/Ubuntu
+
+Debian based systems use the APT/APT-GET package manager, hence we can use the `APT/APT-GET Package Manager` to get this information.
+
+APT stands for Advanced Package Tool and is a replacement for apt-get, much like DNF came into the picture to replace YUM. It is a feature-rich command-line tool that includes the functionality of several tools in one command (APT), such as apt-cache, apt-search, dpkg, apt-cdrom, apt-config and apt-key, along with several other unique features. For example, we can easily install .deb packages through APT, but we cannot do so through apt-get; more such features are included in the APT command. apt-get was replaced by APT due to a lack of features in apt-get that were never addressed.
+
+apt-get is a powerful command-line tool which is used to automatically download and install new software packages, upgrade existing software packages, update the package list index, and upgrade an entire Debian based system. 
+ +``` +# apt-cache policy +Package files: + 100 /var/lib/dpkg/status + release a=now + 500 http://ppa.launchpad.net/peek-developers/stable/ubuntu artful/main amd64 Packages + release v=17.10,o=LP-PPA-peek-developers-stable,a=artful,n=artful,l=Peek stable releases,c=main,b=amd64 + origin ppa.launchpad.net + 500 http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages + release v=17.10,o=LP-PPA-notepadqq-team-notepadqq,a=artful,n=artful,l=Notepadqq,c=main,b=amd64 + origin ppa.launchpad.net + 500 http://dl.google.com/linux/chrome/deb stable/main amd64 Packages + release v=1.0,o=Google, Inc.,a=stable,n=stable,l=Google,c=main,b=amd64 + origin dl.google.com + 500 https://download.docker.com/linux/ubuntu artful/stable amd64 Packages + release o=Docker,a=artful,l=Docker CE,c=stable,b=amd64 + origin download.docker.com + 500 http://security.ubuntu.com/ubuntu artful-security/multiverse amd64 Packages + release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=multiverse,b=amd64 + origin security.ubuntu.com + 500 http://security.ubuntu.com/ubuntu artful-security/universe amd64 Packages + release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=universe,b=amd64 + origin security.ubuntu.com + 500 http://security.ubuntu.com/ubuntu artful-security/restricted i386 Packages + release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=restricted,b=i386 + origin security.ubuntu.com +. +. 
+ origin in.archive.ubuntu.com
+ 500 http://in.archive.ubuntu.com/ubuntu artful/restricted amd64 Packages
+ release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=restricted,b=amd64
+ origin in.archive.ubuntu.com
+ 500 http://in.archive.ubuntu.com/ubuntu artful/main i386 Packages
+ release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=main,b=i386
+ origin in.archive.ubuntu.com
+ 500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages
+ release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=main,b=amd64
+ origin in.archive.ubuntu.com
+Pinned packages:
+
+```
+
+### How To List The Enabled Repositories on openSUSE
+
+openSUSE systems use the zypper package manager, hence we can use zypper to get this information.
+
+Zypper is a command-line package manager for SUSE & openSUSE distributions. It is used to install, update, search & remove packages, manage repositories, perform various queries, and more. Zypper is the command-line interface to the ZYpp system management library (libzypp).
+
+**Suggested Read :** [Zypper Command To Manage Packages On openSUSE & suse Systems][12]
+
+```
+# zypper repos
+
+# | Alias | Name | Enabled | GPG Check | Refresh
+--+-----------------------+-----------------------------------------------------+---------+-----------+--------
+1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes
+2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes
+3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No
+4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes
+5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes
+6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes
+7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes
+
+```
+
+List Repositories with URI. 
+ +``` +# zypper lr -u + +# | Alias | Name | Enabled | GPG Check | Refresh | URI +--+-----------------------+-----------------------------------------------------+---------+-----------+---------+--------------------------------------------------------------------------------- +1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes | http://ftp.gwdg.de/pub/linux/packman/suse/openSUSE_Leap_42.1/ +2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes | http://dl.google.com/linux/chrome/rpm/stable/x86_64 +3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No | http://download.opensuse.org/repositories/home:/lazka0:/ql-stable/openSUSE_42.1/ +4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/distribution/leap/42.1/repo/non-oss/ +5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/distribution/leap/42.1/repo/oss/ +6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes | http://download.opensuse.org/update/leap/42.1/oss/ +7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/update/leap/42.1/non-oss/ + +``` + +List Repositories by priority. 
+
+```
+# zypper lr -p
+
+# | Alias | Name | Enabled | GPG Check | Refresh | Priority
+--+-----------------------+-----------------------------------------------------+---------+-----------+---------+---------
+1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes | 99
+2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes | 99
+3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No | 99
+4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes | 99
+5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes | 99
+6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes | 99
+7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes | 99
+
+```
+
+### How To List The Enabled Repositories on ArchLinux
+
+Arch Linux based systems use the pacman package manager, hence we can use pacman to get this information.
+
+pacman stands for package manager utility (pacman). It is a command-line utility to install, build, remove and manage Arch Linux packages. pacman uses libalpm (the Arch Linux Package Management (ALPM) library) as a back-end to perform all these actions.
+
+**Suggested Read :** [Pacman Command To Manage Packages On Arch Linux Based Systems][13]
+
+```
+# pacman -Syy
+:: Synchronizing package databases...
+ core 132.6 KiB 1524K/s 00:00 [############################################] 100%
+ extra 1859.0 KiB 750K/s 00:02 [############################################] 100%
+ community 3.5 MiB 149K/s 00:24 [############################################] 100%
+ multilib 182.7 KiB 1363K/s 00:00 [############################################] 100%
+
+```
+
+### How To List The Enabled Repositories on Linux using INXI Utility
+
+inxi is a nifty tool to check hardware information on Linux; it offers a wide range of options to get all the hardware information on a Linux system, something I have never found in any other utility available for Linux. 
It was forked from the ancient and mindbendingly perverse yet ingenious infobash, by locsmif.
+
+inxi is a script that quickly shows system hardware, CPU, drivers, Xorg, Desktop, Kernel, GCC version(s), Processes, RAM usage, and a wide variety of other useful information, and it is also used as a debugging tool for forum technical support.
+
+Additionally, this utility displays the repository data of many distributions, such as RHEL, CentOS, Fedora, Debian, Ubuntu, Linux Mint, Arch Linux, openSUSE, Manjaro, etc.
+
+**Suggested Read :** [inxi – A Great Tool to Check Hardware Information on Linux][14]
+
+```
+# inxi -r
+Repos: Active apt sources in file: /etc/apt/sources.list
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety main restricted
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates main restricted
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety universe
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates universe
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety multiverse
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates multiverse
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-backports main restricted universe multiverse
+ deb http://security.ubuntu.com/ubuntu yakkety-security main restricted
+ deb http://security.ubuntu.com/ubuntu yakkety-security universe
+ deb http://security.ubuntu.com/ubuntu yakkety-security multiverse
+ Active apt sources in file: /etc/apt/sources.list.d/arc-theme.list
+ deb http://download.opensuse.org/repositories/home:/Horst3180/xUbuntu_16.04/ /
+ Active apt sources in file: /etc/apt/sources.list.d/snwh-ubuntu-pulp-yakkety.list
+ deb http://ppa.launchpad.net/snwh/pulp/ubuntu yakkety main
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-list-the-enabled-active-repositories-in-linux/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/how-to-add-enable-disable-a-repository-dnf-yum-config-manager-on-linux/ +[2]: https://www.2daygeek.com/how-to-list-installed-packages-by-size-largest-on-linux/ +[3]: https://www.2daygeek.com/how-to-view-list-the-available-packages-updates-in-linux/ +[4]: https://www.2daygeek.com/how-to-view-a-particular-package-installed-updated-upgraded-removed-erased-date-on-linux/ +[5]: https://www.2daygeek.com/how-to-view-detailed-information-about-a-package-in-linux/ +[6]: https://www.2daygeek.com/how-to-search-if-a-package-is-available-on-your-linux-distribution-or-not/ +[7]: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ +[8]: https://www.2daygeek.com/list-of-graphical-frontend-tool-for-linux-package-manager/ +[9]: https://www.2daygeek.com/list-of-command-line-package-manager-for-linux/ +[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[12]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[13]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[14]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/ From 4910feff90e06e7365af572637bd4f397e419d62 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 10:40:46 +0800 Subject: [PATCH 369/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20Front-end=20F?= =?UTF-8?q?or=20Popular=20Package=20Managers?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Front-end For Popular Package Managers.md | 186 ++++++++++++++++++
 1 file changed, 186 insertions(+)
 create mode 100644 sources/tech/20181011 A Front-end For Popular Package Managers.md

diff --git a/sources/tech/20181011 A Front-end For Popular Package Managers.md b/sources/tech/20181011 A Front-end For Popular Package Managers.md
new file mode 100644
index 0000000000..6cdef8bd98
--- /dev/null
+++ b/sources/tech/20181011 A Front-end For Popular Package Managers.md
@@ -0,0 +1,186 @@
+A Front-end For Popular Package Managers
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-720x340.png)
+
+Are you a distro-hopper who likes to try a new Linux OS every few days? If so, I have something for you. Say hello to **Sysget**, a front-end for popular package managers in Unix-like operating systems. You don’t need to learn every package manager to do basic tasks like installing, updating, upgrading and removing packages. You just need to remember one syntax for every package manager on every Unix-like operating system. Sysget is a wrapper script for package managers and it is written in C++. The source code is freely available on GitHub.
+
+Using Sysget, you can do all sorts of basic package management operations, including the following:
+
+ * Install packages,
+ * Update packages,
+ * Upgrade packages,
+ * Search for packages,
+ * Remove packages,
+ * Remove orphan packages,
+ * Update database,
+ * Upgrade system,
+ * Clear package manager cache.
+
+
+
+**An Important note to Linux learners:**
+
+Sysget is not going to replace package managers and is definitely not suitable for everyone. If you’re a newbie who frequently switches to a new Linux OS, Sysget may help. It is just a wrapper script that helps distro hoppers (and new Linux users) who become frustrated when they have to learn new commands to install, update, upgrade, search and remove packages while using different package managers in different Linux distributions. 
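The wrapper-script idea can be sketched in a few lines of shell. This is a hypothetical, simplified illustration of the pattern, not Sysget's actual implementation (which is written in C++); the `pkg` function name and the `PKG_BACKEND` variable are assumptions made purely for the example.

```shell
# A toy wrapper in the spirit of Sysget: translate generic verbs into
# whatever the configured native package manager expects. PKG_BACKEND
# would normally be apt-get, dnf, pacman, and so on; it defaults to
# "echo" here so the sketch runs harmlessly anywhere.
PKG_BACKEND="${PKG_BACKEND:-echo}"

pkg() {
    verb=$1
    shift
    case $verb in
        install) "$PKG_BACKEND" install "$@" ;;
        remove)  "$PKG_BACKEND" remove "$@" ;;
        search)  "$PKG_BACKEND" search "$@" ;;
        update)  "$PKG_BACKEND" update ;;
        *)       echo "pkg: unknown verb: $verb" >&2; return 2 ;;
    esac
}

pkg install emacs   # with the echo backend, this prints: install emacs
```

Swapping the backend changes every verb at once, which is exactly why a single front-end syntax can cover many package managers.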
+
+If you’re a Linux administrator or enthusiast who wants to learn the internals of Linux, you should stick with your distribution’s package manager and learn to use it well.
+
+### Installing Sysget
+
+Installing sysget is trivial. Go to the [**releases page**][1], download the latest Sysget binary and install it as shown below. As of writing this guide, the latest version was 1.2.
+
+```
+$ sudo wget -O /usr/local/bin/sysget https://github.com/emilengler/sysget/releases/download/v1.2/sysget
+
+$ sudo mkdir -p /usr/local/share/sysget
+
+$ sudo chmod a+x /usr/local/bin/sysget
+
+```
+
+### Usage
+
+Sysget commands are mostly the same as the APT package manager’s, so it should be easy for newbies to use.
+
+When you run Sysget for the first time, you will be asked to choose the package manager you want to use. Since I am on Ubuntu, I chose **apt-get**.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-1.png)
+
+You must choose the right package manager depending upon the distribution you’re running. For instance, if you’re on Arch Linux, choose **pacman**. For CentOS, choose **yum**. For FreeBSD, choose **pkg**. The currently supported package managers are:
+
+ 1. apt-get (Debian)
+ 2. xbps (Void)
+ 3. dnf (Fedora)
+ 4. yum (Enterprise Linux/Legacy Fedora)
+ 5. zypper (OpenSUSE)
+ 6. eopkg (Solus)
+ 7. pacman (Arch)
+ 8. emerge (Gentoo)
+ 9. pkg (FreeBSD)
+ 10. chromebrew (ChromeOS)
+ 11. homebrew (Mac OS)
+ 12. nix (Nix OS)
+ 13. snap (Independent)
+ 14. npm (Javascript, Global)
+
+
+
+Just in case you assigned the wrong package manager, you can set a new one using the following command:
+
+```
+$ sudo sysget set yum
+Package manager changed to yum
+
+```
+
+Just make sure you have chosen your native package manager.
+
+Now, you can perform package management operations the same way you would with your native package manager. 
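Choosing the right backend can itself be automated by probing PATH for the well-known binaries and picking the first match. The helper below is a hedged sketch: the `first_available` and `detect_pkg_manager` names are my own, not part of Sysget, and the candidate list is partial and ordered arbitrarily for illustration.

```shell
# Return the first command from the argument list that exists in PATH.
first_available() {
    for cmd in "$@"; do
        if command -v "$cmd" >/dev/null 2>&1; then
            echo "$cmd"
            return 0
        fi
    done
    return 1
}

# Probe some of the usual suspects; the list here is incomplete and
# purely illustrative.
detect_pkg_manager() {
    first_available apt-get dnf yum zypper eopkg pacman emerge pkg
}

detect_pkg_manager || echo "no known package manager found"
```

A detection step like this could feed the wrapper's configuration automatically on first run instead of prompting the user.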
+
+To install a package, for example Emacs, simply run:
+
+```
+$ sudo sysget install emacs
+
+```
+
+The above command will invoke the native package manager (in my case it is “apt-get”) and install the given package.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Install-package-using-Sysget.png)
+
+Similarly, to remove a package, simply run:
+
+```
+$ sudo sysget remove emacs
+
+```
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Remove-package-using-Sysget.png)
+
+Update software repository (database):
+
+```
+$ sudo sysget update
+
+```
+
+**Search for a specific package:**
+
+```
+$ sudo sysget search emacs
+
+```
+
+**Upgrade a single package:**
+
+```
+$ sudo sysget upgrade emacs
+
+```
+
+**Upgrade all packages:**
+
+```
+$ sudo sysget upgrade
+
+```
+
+**Remove all orphaned packages:**
+
+```
+$ sudo sysget autoremove
+
+```
+
+**Clear the package manager cache:**
+
+```
+$ sudo sysget clean
+
+```
+
+For more details, refer to the help section:
+
+```
+$ sysget help
+Help of sysget
+sysget [OPTION] [ARGUMENT]
+
+search [query] search for a package in the repositories
+install [package] install a package from the repos
+remove [package] removes a package
+autoremove removes not needed packages (orphans)
+update update the database
+upgrade do a system upgrade
+upgrade [package] upgrade a specific package
+clean clean the download cache
+set [NEW MANAGER] set a new package manager
+
+```
+
+Please remember that the sysget syntax is the same for all package managers in different Linux distributions. You don’t need to memorize the commands for each package manager.
+
+Again, I must tell you Sysget isn’t a replacement for a package manager. It is just a wrapper for popular package managers in Unix-like systems, and it performs only basic package management operations.
+
+Sysget might be somewhat useful for newbies and distro-hoppers who are lazy to learn new commands for different package managers. 
Give it a try if you’re interested and see if it helps.
+
+And, that’s all for now. More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/sysget-a-front-end-for-popular-package-managers/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/emilengler/sysget/releases

From a575818964cdc570037f8577238f701b56615db9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 12 Oct 2018 10:46:02 +0800
Subject: =?UTF-8?q?=E9=80=89=E9=A2=98:=20Talk=20over=20tex?=
 =?UTF-8?q?t:=20Conversational=20interface=20design=20and=20usability?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...sational interface design and usability.md | 105 ++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 sources/talk/20181010 Talk over text- Conversational interface design and usability.md

diff --git a/sources/talk/20181010 Talk over text- Conversational interface design and usability.md b/sources/talk/20181010 Talk over text- Conversational interface design and usability.md
new file mode 100644
index 0000000000..e9d76f9ef4
--- /dev/null
+++ b/sources/talk/20181010 Talk over text- Conversational interface design and usability.md
@@ -0,0 +1,105 @@
+Talk over text: Conversational interface design and usability
+======
+To make conversational interfaces more human-centered, we must free our thinking from the trappings of web and mobile design. 
+ +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q) + +Conversational interfaces are unique among the screen-based and physically manipulated user interfaces that characterize the range of digital experiences we encounter on a daily basis. As [Conversational Design][1] author Erika Hall eloquently writes, "Conversation is not a new interface. It's the oldest interface." And the conversation, the most human interaction of all, lies at the nexus of the aural and verbal rather than the visual and physical. This makes it particularly challenging for machines to meet the high expectations we tend to have when it comes to typical human conversations. + +How do we design for conversational interfaces, which run the gamut from omnichannel chatbots on our websites and mobile apps to mono-channel voice assistants on physical devices such as the Amazon Echo and Google Home? What recommendations do other experts on conversational design and usability have when it comes to crafting the most robust chatbot or voice interface possible? In this overview, we focus on three areas: information architecture, design, and usability testing. + +### Information architecture: Trees, not sitemaps + +Consider the websites we visit and the visual interfaces we use regularly. Each has a navigational tool, whether it is a list of links or a series of buttons, that helps us gain some understanding of the interface. In a web-optimized information architecture, we can see the entire hierarchy of a website and its contents in the form of such navigation bars and sitemaps. + +On the other hand, in a conversational information architecture—whether articulated in a chatbot or a voice assistant—the structure of our interactions must be provided to us in a simple and straightforward way. 
For instance, in lieu of a navigation bar that has links to pages like About, Menu, Order, and Locations with further links underneath, we can create a conversational means of describing how to navigate the options we wish to pursue.
+
+Consider the differences between the two examples of navigation below.
+
+| **Web-based navigation:** | **Conversational navigation:** |
+|-------------------------------------------|-----------------------------------------------------------------|
+| Present all options in the navigation bar | Present only certain top-level options to access deeper options |
+| • Floss's Pizza | • "Welcome to Floss's Pizza!" |
+| • About | • "To learn more about us, say About" |
+| ◦ Team | • "To hear our menu, say Menu" |
+| ◦ Our story | • "To place an order, say Order" |
+| • Menu | • "To find out where we are, say Where" |
+| ◦ Pizzas | |
+| ◦ Pastas | |
+| ◦ Platters | |
+| • Order | |
+| ◦ Pickup | |
+| ◦ Delivery | |
+| • Where we are | |
+| ◦ Area map | |
+
+In a conversational context, an appropriate information architecture that focuses on decision trees is of paramount importance, because one of the biggest issues many conversational interfaces face is excessive verbosity. By avoiding information overload, prizing structural simplicity, and prescribing one-word directions, your users can traverse conversational interfaces without any additional visual aid.
+
+### Design: Finessing flows and language
+
+![Well-designed language example][3]
+
+An example of well-designed language that encapsulates Hall's conversational key moments.
+
+In her book Conversational Design, Hall emphasizes the need for all conversational interfaces to adhere to conversational maxims outlined by Paul Grice and advanced by Robin Lakoff. 
These conversational maxims highlight the characteristics every conversational interface should have to succeed: quantity (just enough information but not too much), quality (truthfulness), relation (relevance), manner (concision, orderliness, and lack of ambiguity), and politeness (Lakoff's addition). + +In the process, Hall spotlights four key moments that build trust with users of conversational interfaces and give them all of the information they need to interact successfully with the conversational experience, whether it is a chatbot or a voice assistant. + + * **Introduction:** Invite the user's interest and encourage trust with a friendly but brief greeting that welcomes them to an unfamiliar interface. + + * **Orientation:** Offer system options, such as how to exit out of certain interactions, and provide a list of options that help the user achieve their goal. + + * **Action:** After each response from the user, offer a new set of tasks and corresponding controls for the user to proceed with further interaction. + + * **Guidance:** Provide feedback to the user after every response and give clear instructions. + + + + +Taken as a whole, these key moments indicate that good conversational design obligates us to consider how we write machine utterances to be both inviting and informative and to structure our decision flows in such a way that they flow naturally to the user. In other words, rather than visual design chops or an eye for style, conversational design requires us to be good writers and thoughtful architects of decision trees. + +![Decision flow example ][5] + +An example decision flow that adheres to Hall's key moments. + +One metaphor I use on a regular basis to conceive of each point in a conversational interface that presents a choice to the user is the dichotomous key. In tree science, dichotomous keys are used to identify trees in their natural habitat through certain salient characteristics. 
What makes dichotomous keys special, however, is the fact that each card in a dichotomous key only offers two choices (hence the moniker "dichotomous") with a clearly defined characteristic that cannot be mistaken for another. Eventually, after enough dichotomous choices have been made, we can winnow down the available options to the correct genus of tree. + +We should design conversational interfaces in the same way, with particular attention given to disambiguation and decision-making that never verges on too much complexity. Because conversational interfaces require deeply nested hierarchical structures to reach certain outcomes, we can never be too helpful in the instructions and options we offer our users. + +### Usability testing: Dialogues, not dialogs + +Conversational usability is a relatively unexplored and less-understood area because it is frequently based on verbal and aural interactions rather than visual or physical ones. Whereas chatbots can be evaluated for their usability using traditional means such as think-aloud, voice assistants and other voice-driven interfaces have no such luxury. + +For voice interfaces, we are unable to pursue approaches involving eye-tracking or think-aloud, since these interfaces are purely aural and users' utterances outside of responses to interface prompts can introduce bad data. For this reason, when our Acquia Labs team built [Ask GeorgiaGov][6], the first Alexa skill for residents of the state of Georgia, we chose retrospective probing (RP) for our usability tests. + +In retrospective probing, the conversational interaction proceeds until the completion of the task, at which point the user is asked about their impressions of the interface. Retrospective probing is well-positioned for voice interfaces because it allows the conversation to proceed unimpeded by interruptions such as think-aloud feedback. 
Nonetheless, it does come with the disadvantage of suffering from our notoriously unreliable memories, as it forces us to recollect past interactions rather than ones we completed immediately before recollection. + +### Challenges and opportunities + +Conversational interfaces are here to stay in our rapidly expanding spectrum of digital experiences. Though they enrich the range of ways we have to engage users, they also present unprecedented challenges when it comes to information architecture, design, and usability testing. With the help of previous work such as Grice's conversational maxims and Hall's key moments, we can design and build effective conversational interfaces by focusing on strong writing and well-considered decision flows. + +The fact that conversation is the oldest and most human of interfaces is also edifying when we approach other user interfaces that lack visual or physical manipulation. As Hall writes, "The ideal interface is an interface that's not noticeable at all." Whether or not we will eventually reach the utopian outcome of conversational interfaces that feel completely natural to the human ear, we can make conversational interfaces more human-centered by freeing our thinking from the trappings of web and mobile. + +Preston So will present [Talk Over Text: Conversational Interface Design and Usability][7] at [All Things Open][8], October 21-23 in Raleigh, North Carolina. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/conversational-interface-design-and-usability + +作者:[Preston So][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/prestonso +[b]: https://github.com/lujun9972 +[1]: https://abookapart.com/products/conversational-design +[2]: /file/411001 +[3]: https://opensource.com/sites/default/files/uploads/conversational-interfaces_1.png (Well-designed language example) +[4]: /file/411006 +[5]: https://opensource.com/sites/default/files/uploads/conversational-interfaces_2.png (Decision flow example ) +[6]: https://www.acquia.com/blog/ask-georgiagov-alexa-skill-citizens-georgia-acquia-labs/12/10/2017/3312516 +[7]: https://allthingsopen.org/talk/talk-over-text-conversational-interface-design-and-usability/ +[8]: https://allthingsopen.org/ From cdd263f6451e39843d77b8fb01b3e57d290c2130 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 10:50:35 +0800 Subject: [PATCH 371/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Getting=20started?= =?UTF-8?q?=20with=20Minikube:=20Kubernetes=20on=20your=20laptop?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ith Minikube- Kubernetes on your laptop.md | 160 ++++++++++++++++++ 1 file changed, 160 insertions(+) create mode 100644 sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md diff --git a/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md b/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md new file mode 100644 index 0000000000..c533a113a3 --- /dev/null +++ b/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md @@ -0,0 +1,160 @@ +Getting started with Minikube: Kubernetes 
on your laptop
+======
+A step-by-step guide for running Minikube.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ)
+
+Minikube is advertised on the [Hello Minikube][1] tutorial page as a simple way to run Kubernetes for Docker. While that documentation is very informative, it is primarily written for MacOS. You can dig deeper for instructions for Windows or a Linux distribution, but they are not very clear. And much of the documentation—like the one on [installing drivers for Minikube][2]—is targeted at Debian/Ubuntu users.
+
+### Prerequisites
+
+ 1. You have [installed Docker][3].
+
+ 2. Your computer is an RHEL/CentOS/Fedora-based workstation.
+
+ 3. You have [installed a working KVM2 hypervisor][4].
+
+ 4. You have a working **docker-machine-driver-kvm2**. The following commands will install the driver:
+
+```
+ curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
+ && chmod +x docker-machine-driver-kvm2 \
+ && sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
+ && rm docker-machine-driver-kvm2
+```
+
+### Download, install, and start Minikube
+
+ 1. Create a directory for the two files you will download: [minikube][5] and [kubectl][6].
+
+
+ 2. Open a terminal window and run the following command to install minikube.
+
+```
+curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
+
+```
+
+Note that the minikube binary name (e.g., minikube-linux-amd64) may differ based on your computer's architecture.
+
+
+
+ 3. **chmod** to make it executable.
+
+```
+chmod +x minikube
+
+```
+
+
+
+ 4. Move the file to the **/usr/local/bin** path so you can run it as a command.
+
+```
+mv minikube /usr/local/bin
+
+```
+
+
+
+ 5. Install kubectl using the following command (similar to the minikube installation process).
+
+```
+curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
+
+```
+
+The nested **curl** command determines the latest stable version of Kubernetes.
+
+
+
+ 6. **chmod** to make kubectl executable.
+
+```
+chmod +x kubectl
+
+```
+
+
+
+ 7. Move kubectl to the **/usr/local/bin** path to run it as a command.
+
+```
+mv kubectl /usr/local/bin
+
+```
+
+
+
+ 8. Run **minikube start**. To do so, you need to have a hypervisor available. I used KVM2; you can also use VirtualBox. Make sure to run the following command as a regular user instead of root so the configuration will be stored for that user instead of root.
+
+```
+minikube start --vm-driver=kvm2
+
+```
+
+It can take quite a while, so wait for it.
+
+
+
+ 9. Minikube should download and start. Use the following command to make sure it was successful.
+
+```
+cat ~/.kube/config
+
+```
+
+
+
+ 10. Execute the following command to set minikube as the current context. The context determines which cluster kubectl is interacting with. You can see all your available contexts in the ~/.kube/config file.
+
+```
+kubectl config use-context minikube
+
+```
+
+
+
+ 11. Run the **config** file command again to verify that the minikube context is present.
+
+```
+cat ~/.kube/config
+
+```
+
+
+
+ 12. Finally, run the following command to open a browser with the Kubernetes dashboard.
+
+```
+minikube dashboard
+
+```
+
+
+
+
+This guide aims to make things easier for RHEL/Fedora/CentOS-based operating system users.
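Before starting Minikube, it can help to confirm that everything installed above is actually on your `PATH`. The sketch below is not part of the original steps; the three binary names are simply the ones used in this guide, so adjust them if you installed to a different location.

```shell
#!/bin/sh
# Report whether each expected binary is reachable on PATH.
check_bin() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found ($(command -v "$1"))"
    else
        echo "$1: missing - revisit the install steps above"
    fi
}

for bin in minikube kubectl docker-machine-driver-kvm2; do
    check_bin "$bin"
done
```

If any line reports `missing`, re-run the corresponding install step before `minikube start`.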
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/getting-started-minikube + +作者:[Bryant Son][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://kubernetes.io/docs/tutorials/hello-minikube +[2]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md +[3]: https://docs.docker.com/install +[4]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver +[5]: https://github.com/kubernetes/minikube/releases +[6]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl +[7]: https://kubernetes.io/docs/setup/minikube From 7e57e926705b4cf471b08bdd060605f3363c6477 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 11:03:04 +0800 Subject: [PATCH 372/736] PUB:20170810 How we built our first full-stack JavaScript web app in three weeks.md @BriFuture https://linux.cn/article-10106-1.html --- ...uilt our first full-stack JavaScript web app in three weeks.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170810 How we built our first full-stack JavaScript web app in three weeks.md (100%) diff --git a/translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md b/published/20170810 How we built our first full-stack JavaScript web app in three weeks.md similarity index 100% rename from translated/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md rename to published/20170810 How we built our first full-stack JavaScript web app in three weeks.md From 822a656e22443f91fd4d94d34b36934f0275fb91 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 11:03:05 +0800 Subject: [PATCH 373/736] 
=?UTF-8?q?=E9=80=89=E9=A2=98:=20Tools=20Used=20in?= =?UTF-8?q?=206.828?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20180911 Tools Used in 6.828.md | 247 +++++++++++++++++++ 1 file changed, 247 insertions(+) create mode 100644 sources/tech/20180911 Tools Used in 6.828.md diff --git a/sources/tech/20180911 Tools Used in 6.828.md b/sources/tech/20180911 Tools Used in 6.828.md new file mode 100644 index 0000000000..c9afeae4ea --- /dev/null +++ b/sources/tech/20180911 Tools Used in 6.828.md @@ -0,0 +1,247 @@ +Tools Used in 6.828 +====== +### Tools Used in 6.828 + +You'll use two sets of tools in this class: an x86 emulator, QEMU, for running your kernel; and a compiler toolchain, including assembler, linker, C compiler, and debugger, for compiling and testing your kernel. This page has the information you'll need to download and install your own copies. This class assumes familiarity with Unix commands throughout. + +We highly recommend using a Debathena machine, such as athena.dialup.mit.edu, to work on the labs. If you use the MIT Athena machines that run Linux, then all the software tools you will need for this course are located in the 6.828 locker: just type 'add -f 6.828' to get access to them. + +If you don't have access to a Debathena machine, we recommend you use a virtual machine with Linux. If you really want to, you can build and install the tools on your own machine. We have instructions below for Linux and MacOS computers. + +It should be possible to get this development environment running under windows with the help of [Cygwin][1]. Install cygwin, and be sure to install the flex and bison packages (they are under the development header). + +For an overview of useful commands in the tools used in 6.828, see the [lab tools guide][2]. 
+ +#### Compiler Toolchain + +A "compiler toolchain" is the set of programs, including a C compiler, assemblers, and linkers, that turn code into executable binaries. You'll need a compiler toolchain that generates code for 32-bit Intel architectures ("x86" architectures) in the ELF binary format. + +##### Test Your Compiler Toolchain + +Modern Linux and BSD UNIX distributions already provide a toolchain suitable for 6.828. To test your distribution, try the following commands: + +``` +% objdump -i + +``` + +The second line should say `elf32-i386`. + +``` +% gcc -m32 -print-libgcc-file-name + +``` + +The command should print something like `/usr/lib/gcc/i486-linux-gnu/version/libgcc.a` or `/usr/lib/gcc/x86_64-linux-gnu/version/32/libgcc.a` + +If both these commands succeed, you're all set, and don't need to compile your own toolchain. + +If the gcc command fails, you may need to install a development environment. On Ubuntu Linux, try this: + +``` +% sudo apt-get install -y build-essential gdb + +``` + +On 64-bit machines, you may need to install a 32-bit support library. The symptom is that linking fails with error messages like "`__udivdi3` not found" and "`__muldi3` not found". On Ubuntu Linux, try this to fix the problem: + +``` +% sudo apt-get install gcc-multilib + +``` + +##### Using a Virtual Machine + +Otherwise, the easiest way to get a compatible toolchain is to install a modern Linux distribution on your computer. With platform virtualization, Linux can cohabitate with your normal computing environment. Installing a Linux virtual machine is a two step process. First, you download the virtualization platform. + + * [**VirtualBox**][3] (free for Mac, Linux, Windows) — [Download page][3] + * [VMware Player][4] (free for Linux and Windows, registration required) + * [VMware Fusion][5] (Downloadable from IS&T for free). + + + +VirtualBox is a little slower and less flexible, but free! 
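Whichever route you take, the two checks from "Test Your Compiler Toolchain" above can be folded into one re-runnable sketch. This is an illustration, not part of the official instructions; it assumes only the `objdump` and `gcc` invocations already shown.

```shell
#!/bin/sh
# Pass/fail wrapper around the toolchain checks described above.
supports_elf32() {
    # Reads `objdump -i` output on stdin; succeeds if elf32-i386 is listed.
    grep -q 'elf32-i386'
}

if objdump -i 2>/dev/null | supports_elf32; then
    echo "objdump: elf32-i386 supported"
else
    echo "objdump: elf32-i386 not listed - see the toolchain instructions"
fi

if gcc -m32 -print-libgcc-file-name >/dev/null 2>&1; then
    echo "gcc: 32-bit libgcc found"
else
    echo "gcc: 32-bit support missing - try installing gcc-multilib"
fi
```

Rerun it after installing packages (or inside your VM); both lines should report success before you start the labs.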
+
+Once the virtualization platform is installed, download a boot disk image for the Linux distribution of your choice.
+
+ * [Ubuntu Desktop][6] is what we use.
+
+
+
+This will download a file named something like `ubuntu-10.04.1-desktop-i386.iso`. Start up your virtualization platform and create a new (32-bit) virtual machine. Use the downloaded Ubuntu image as a boot disk; the procedure differs among VMs but is pretty simple. Type `objdump -i`, as above, to verify that your toolchain is now set up. You will do your work inside the VM.
+
+##### Building Your Own Compiler Toolchain
+
+This will take longer to set up, but gives slightly better performance than a virtual machine and lets you work in your own familiar environment (Unix/MacOS). Fast-forward to the end for MacOS instructions.
+
+###### Linux
+
+You can use your own toolchain by adding the following line to `conf/env.mk`:
+
+```
+GCCPREFIX=
+
+```
+
+We assume that you are installing the toolchain into `/usr/local`. You will need a fair amount of disk space to compile the tools (around 1GiB). If you don't have that much space, delete each directory after its `make install` step.
+
+Download the following packages:
+
++ ftp://ftp.gmplib.org/pub/gmp-5.0.2/gmp-5.0.2.tar.bz2
++ https://www.mpfr.org/mpfr-3.1.2/mpfr-3.1.2.tar.bz2
++ http://www.multiprecision.org/downloads/mpc-0.9.tar.gz
++ http://ftpmirror.gnu.org/binutils/binutils-2.21.1.tar.bz2
++ http://ftpmirror.gnu.org/gcc/gcc-4.6.4/gcc-core-4.6.4.tar.bz2
++ http://ftpmirror.gnu.org/gdb/gdb-7.3.1.tar.bz2
+
+(You may also use newer versions of these packages.) Unpack and build the packages. The commands below use `$PFX` as the install prefix; set it to `/usr/local`, which is what we recommend, or to another directory of your choice. If you have problems, see the troubleshooting notes below.
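As a convenience, the six downloads above can be fetched in one loop. This sketch is not from the original instructions; the URLs are copied verbatim from the list, and some of these mirrors have moved over the years, so a failed fetch just means you need a different GNU mirror.

```shell
#!/bin/sh
# Fetch each source tarball listed above, skipping ones already present.
tarball_name() {
    # Strip everything up to the last '/' to get the local file name.
    echo "${1##*/}"
}

fetch_sources() {
    for url in \
        ftp://ftp.gmplib.org/pub/gmp-5.0.2/gmp-5.0.2.tar.bz2 \
        https://www.mpfr.org/mpfr-3.1.2/mpfr-3.1.2.tar.bz2 \
        http://www.multiprecision.org/downloads/mpc-0.9.tar.gz \
        http://ftpmirror.gnu.org/binutils/binutils-2.21.1.tar.bz2 \
        http://ftpmirror.gnu.org/gcc/gcc-4.6.4/gcc-core-4.6.4.tar.bz2 \
        http://ftpmirror.gnu.org/gdb/gdb-7.3.1.tar.bz2
    do
        file=$(tarball_name "$url")
        if [ -f "$file" ]; then
            echo "already have $file"
        else
            curl -LO "$url" || echo "fetch failed: $url (try another mirror)"
        fi
    done
}
```

Run `fetch_sources` from the directory where you plan to unpack and build; because existing files are skipped, it is safe to re-run after a partial download.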
+ +``` +export PATH=$PFX/bin:$PATH +export LD_LIBRARY_PATH=$PFX/lib:$LD_LIBRARY_PATH + +tar xjf gmp-5.0.2.tar.bz2 +cd gmp-5.0.2 +./configure --prefix=$PFX +make +make install # This step may require privilege (sudo make install) +cd .. + +tar xjf mpfr-3.1.2.tar.bz2 +cd mpfr-3.1.2 +./configure --prefix=$PFX --with-gmp=$PFX +make +make install # This step may require privilege (sudo make install) +cd .. + +tar xzf mpc-0.9.tar.gz +cd mpc-0.9 +./configure --prefix=$PFX --with-gmp=$PFX --with-mpfr=$PFX +make +make install # This step may require privilege (sudo make install) +cd .. + + +tar xjf binutils-2.21.1.tar.bz2 +cd binutils-2.21.1 +./configure --prefix=$PFX --target=i386-jos-elf --disable-werror +make +make install # This step may require privilege (sudo make install) +cd .. + +i386-jos-elf-objdump -i +# Should produce output like: +# BFD header file version (GNU Binutils) 2.21.1 +# elf32-i386 +# (header little endian, data little endian) +# i386... + + +tar xjf gcc-core-4.6.4.tar.bz2 +cd gcc-4.6.4 +mkdir build # GCC will not compile correctly unless you build in a separate directory +cd build +../configure --prefix=$PFX --with-gmp=$PFX --with-mpfr=$PFX --with-mpc=$PFX \ + --target=i386-jos-elf --disable-werror \ + --disable-libssp --disable-libmudflap --with-newlib \ + --without-headers --enable-languages=c MAKEINFO=missing +make all-gcc +make install-gcc # This step may require privilege (sudo make install-gcc) +make all-target-libgcc +make install-target-libgcc # This step may require privilege (sudo make install-target-libgcc) +cd ../.. + +i386-jos-elf-gcc -v +# Should produce output like: +# Using built-in specs. 
+# COLLECT_GCC=i386-jos-elf-gcc
+# COLLECT_LTO_WRAPPER=/usr/local/libexec/gcc/i386-jos-elf/4.6.4/lto-wrapper
+# Target: i386-jos-elf
+
+
+tar xjf gdb-7.3.1.tar.bz2
+cd gdb-7.3.1
+./configure --prefix=$PFX --target=i386-jos-elf --program-prefix=i386-jos-elf- \
+    --disable-werror
+make all
+make install # This step may require privilege (sudo make install)
+cd ..
+
+```
+
+###### Linux troubleshooting
+
+ * Q. I can't run `make install` because I don't have root permission on this machine.
+A. Our instructions assume you are installing into the `/usr/local` directory. However, this may not be allowed in your environment. If you can only install code into your home directory, that's OK. In the instructions above, replace `--prefix=/usr/local` with `--prefix=$HOME` throughout. You will also need to change your `PATH` and `LD_LIBRARY_PATH` environment variables, to inform your shell where to find the tools. For example:
+```
+ export PATH=$HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH
+```
+
+Enter these lines in your `~/.bashrc` file so you don't need to type them every time you log in.
+
+
+
+ * Q. My build fails with an inscrutable message about "library not found".
+A. You need to set your `LD_LIBRARY_PATH`. The environment variable must include the `PREFIX/lib` directory (for instance, `/usr/local/lib`).
+
+
+
+#### MacOS
+
+First begin by installing the developer tools on Mac OS X:
+
+`xcode-select --install`
+
+You can install the QEMU dependencies from Homebrew; however, do not install QEMU itself, as you will need the 6.828 patched version.
+
+`brew install $(brew deps qemu)`
+
+The gettext utility does not add installed binaries to the path, so you will need to run
+
+`PATH=${PATH}:/usr/local/opt/gettext/bin make install`
+
+when installing QEMU below.
+
+### QEMU Emulator
+
+[QEMU][8] is a modern and fast PC emulator.
QEMU version 2.3.0 is set up on Athena for x86 machines in the 6.828 locker (`add -f 6.828`) + +Unfortunately, QEMU's debugging facilities, while powerful, are somewhat immature, so we highly recommend you use our patched version of QEMU instead of the stock version that may come with your distribution. The version installed on Athena is already patched. To build your own patched version of QEMU: + + 1. Clone the IAP 6.828 QEMU git repository `git clone https://github.com/mit-pdos/6.828-qemu.git qemu` + 2. On Linux, you may need to install several libraries. We have successfully built 6.828 QEMU on Debian/Ubuntu 16.04 after installing the following packages: libsdl1.2-dev, libtool-bin, libglib2.0-dev, libz-dev, and libpixman-1-dev. + 3. Configure the source code (optional arguments are shown in square brackets; replace PFX with a path of your choice) + 1. Linux: `./configure --disable-kvm --disable-werror [--prefix=PFX] [--target-list="i386-softmmu x86_64-softmmu"]` + 2. OS X: `./configure --disable-kvm --disable-werror --disable-sdl [--prefix=PFX] [--target-list="i386-softmmu x86_64-softmmu"]` The `prefix` argument specifies where to install QEMU; without it QEMU will install to `/usr/local` by default. The `target-list` argument simply slims down the architectures QEMU will build support for. + 4. 
Run `make && make install` + + + + +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/tools.html + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: http://www.cygwin.com +[2]: labguide.html +[3]: http://www.oracle.com/us/technologies/virtualization/oraclevm/ +[4]: http://www.vmware.com/products/player/ +[5]: http://www.vmware.com/products/fusion/ +[6]: http://www.ubuntu.com/download/desktop +[7]: +[8]: http://www.nongnu.org/qemu/ +[9]: mailto:6828-staff@lists.csail.mit.edu +[10]: https://i.creativecommons.org/l/by/3.0/us/88x31.png +[11]: https://creativecommons.org/licenses/by/3.0/us/ +[12]: https://pdos.csail.mit.edu/6.828/2018/index.html From 695d562c64868bfc7ed560e4ef0c930d0d2ef1c8 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 11:06:54 +0800 Subject: [PATCH 374/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=206.828=20lab=20too?= =?UTF-8?q?ls=20guide?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20180907 6.828 lab tools guide.md | 201 ++++++++++++++++++ 1 file changed, 201 insertions(+) create mode 100644 sources/tech/20180907 6.828 lab tools guide.md diff --git a/sources/tech/20180907 6.828 lab tools guide.md b/sources/tech/20180907 6.828 lab tools guide.md new file mode 100644 index 0000000000..e9061a3097 --- /dev/null +++ b/sources/tech/20180907 6.828 lab tools guide.md @@ -0,0 +1,201 @@ +6.828 lab tools guide +====== +### 6.828 lab tools guide + +Familiarity with your environment is crucial for productive development and debugging. This page gives a brief overview of the JOS environment and useful GDB and QEMU commands. Don't take our word for it, though. Read the GDB and QEMU manuals. 
These are powerful tools that are worth knowing how to use.
+
+#### Debugging tips
+
+##### Kernel
+
+GDB is your friend. Use the qemu-gdb target (or its `qemu-nox-gdb` variant) to make QEMU wait for GDB to attach. See the GDB reference below for some commands that are useful when debugging kernels.
+
+If you're getting unexpected interrupts, exceptions, or triple faults, you can ask QEMU to generate a detailed log of interrupts using the -d argument.
+
+To debug virtual memory issues, try the QEMU monitor commands info mem (for a high-level overview) or info pg (for lots of detail). Note that these commands only display the _current_ page table.
+
+(Lab 4+) To debug multiple CPUs, use GDB's thread-related commands like thread and info threads.
+
+##### User environments (lab 3+)
+
+GDB also lets you debug user environments, but there are a few things you need to watch out for, since GDB doesn't know that there's a distinction between multiple user environments, or between user and kernel.
+
+You can start JOS with a specific user environment using make run- _name_ (or you can edit `kern/init.c` directly). To make QEMU wait for GDB to attach, use the run- _name_ -gdb variant.
+
+You can symbolically debug user code, just like you can kernel code, but you have to tell GDB which symbol table to use with the symbol-file command, since it can only use one symbol table at a time. The provided `.gdbinit` loads the kernel symbol table, `obj/kern/kernel`. The symbol table for a user environment is in its ELF binary, so you can load it using symbol-file obj/user/ _name_. _Don't_ load symbols from any `.o` files, as those haven't been relocated by the linker (libraries are statically linked into JOS user binaries, so those symbols are already included in each user binary). Make sure you get the _right_ user binary; library functions will be linked at different EIPs in different binaries and GDB won't know any better!
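As a small illustration of the point above, a helper like the following (a sketch, assuming the `obj/user/` layout of a built lab tree) prints a ready-to-paste `symbol-file` command for each user binary while skipping the `.o`, `.asm`, and `.sym` files you should not load:

```shell
#!/bin/sh
# Print a GDB symbol-file command for each user ELF binary in a directory.
symbol_file_cmds() {
    dir=${1:-obj/user}
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        case "$f" in
            *.o|*.asm|*.sym) ;;           # object files and listings: skip
            *) echo "symbol-file $f" ;;   # linked ELF binary: loadable in GDB
        esac
    done
}
```

For example, `symbol_file_cmds obj/user` would list one line per user program; paste the line for the environment you are debugging into your GDB session.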
+ +(Lab 4+) Since GDB is attached to the virtual machine as a whole, it sees clock interrupts as just another control transfer. This makes it basically impossible to step through user code because a clock interrupt is virtually guaranteed the moment you let the VM run again. The stepi command works because it suppresses interrupts, but it only steps one assembly instruction. Breakpoints generally work, but watch out because you can hit the same EIP in a different environment (indeed, a different binary altogether!). + +#### Reference + +##### JOS makefile + +The JOS GNUmakefile includes a number of phony targets for running JOS in various ways. All of these targets configure QEMU to listen for GDB connections (the `*-gdb` targets also wait for this connection). To start once QEMU is running, simply run gdb from your lab directory. We provide a `.gdbinit` file that automatically points GDB at QEMU, loads the kernel symbol file, and switches between 16-bit and 32-bit mode. Exiting GDB will shut down QEMU. + + * make qemu +Build everything and start QEMU with the VGA console in a new window and the serial console in your terminal. To exit, either close the VGA window or press `Ctrl-c` or `Ctrl-a x` in your terminal. + * make qemu-nox +Like `make qemu`, but run with only the serial console. To exit, press `Ctrl-a x`. This is particularly useful over SSH connections to Athena dialups because the VGA window consumes a lot of bandwidth. + * make qemu-gdb +Like `make qemu`, but rather than passively accepting GDB connections at any time, this pauses at the first machine instruction and waits for a GDB connection. + * make qemu-nox-gdb +A combination of the `qemu-nox` and `qemu-gdb` targets. + * make run- _name_ +(Lab 3+) Run user program _name_. For example, `make run-hello` runs `user/hello.c`. + * make run- _name_ -nox, run- _name_ -gdb, run- _name_ -gdb-nox, +(Lab 3+) Variants of `run-name` that correspond to the variants of the `qemu` target. 
+
+
+The makefile also accepts a few useful variables:
+
+ * make V=1 ...
+Verbose mode. Print out every command being executed, including arguments.
+ * make V=1 grade
+Stop after any failed grade test and leave the QEMU output in `jos.out` for inspection.
+ * make QEMUEXTRA=' _args_ ' ...
+Specify additional arguments to pass to QEMU.
+
+
+
+##### JOS obj/
+
+When building JOS, the makefile also produces some additional output files that may prove useful while debugging:
+
+ * `obj/boot/boot.asm`, `obj/kern/kernel.asm`, `obj/user/hello.asm`, etc.
+Assembly code listings for the bootloader, kernel, and user programs.
+ * `obj/kern/kernel.sym`, `obj/user/hello.sym`, etc.
+Symbol tables for the kernel and user programs.
+ * `obj/boot/boot.out`, `obj/kern/kernel`, `obj/user/hello`, etc.
+Linked ELF images of the kernel and user programs. These contain symbol information that can be used by GDB.
+
+
+
+##### GDB
+
+See the [GDB manual][1] for a full guide to GDB commands. Here are some particularly useful commands for 6.828, some of which don't typically come up outside of OS development.
+
+ * Ctrl-c
+Halt the machine and break in to GDB at the current instruction. If QEMU has multiple virtual CPUs, this halts all of them.
+ * c (or continue)
+Continue execution until the next breakpoint or `Ctrl-c`.
+ * si (or stepi)
+Execute one machine instruction.
+ * b function or b file:line (or breakpoint)
+Set a breakpoint at the given function or line.
+ * b * _addr_ (or breakpoint) +Set a breakpoint at the EIP _addr_. + * set print pretty +Enable pretty-printing of arrays and structs. + * info registers +Print the general purpose registers, `eip`, `eflags`, and the segment selectors. For a much more thorough dump of the machine register state, see QEMU's own `info registers` command. + * x/ _N_ x _addr_ +Display a hex dump of _N_ words starting at virtual address _addr_. If _N_ is omitted, it defaults to 1. _addr_ can be any expression. + * x/ _N_ i _addr_ +Display the _N_ assembly instructions starting at _addr_. Using `$eip` as _addr_ will display the instructions at the current instruction pointer. + * symbol-file _file_ +(Lab 3+) Switch to symbol file _file_. When GDB attaches to QEMU, it has no notion of the process boundaries within the virtual machine, so we have to tell it which symbols to use. By default, we configure GDB to use the kernel symbol file, `obj/kern/kernel`. If the machine is running user code, say `hello.c`, you can switch to the hello symbol file using `symbol-file obj/user/hello`. + + + +QEMU represents each virtual CPU as a thread in GDB, so you can use all of GDB's thread-related commands to view or manipulate QEMU's virtual CPUs. + + * thread _n_ +GDB focuses on one thread (i.e., CPU) at a time. This command switches that focus to thread _n_ , numbered from zero. + * info threads +List all threads (i.e., CPUs), including their state (active or halted) and what function they're in. + + + +##### QEMU + +QEMU includes a built-in monitor that can inspect and modify the machine state in useful ways. To enter the monitor, press Ctrl-a c in the terminal running QEMU. Press Ctrl-a c again to switch back to the serial console. + +For a complete reference to the monitor commands, see the [QEMU manual][2]. Here are some particularly useful commands: + + * xp/ _N_ x _paddr_ +Display a hex dump of _N_ words starting at _physical_ address _paddr_. If _N_ is omitted, it defaults to 1. 
This is the physical memory analogue of GDB's `x` command. + + * info registers +Display a full dump of the machine's internal register state. In particular, this includes the machine's _hidden_ segment state for the segment selectors and the local, global, and interrupt descriptor tables, plus the task register. This hidden state is the information the virtual CPU read from the GDT/LDT when the segment selector was loaded. Here's the CS when running in the JOS kernel in lab 1 and the meaning of each field: +``` + CS =0008 10000000 ffffffff 10cf9a00 DPL=0 CS32 [-R-] +``` + + * `CS =0008` +The visible part of the code selector. We're using segment 0x8. This also tells us we're referring to the global descriptor table (0x8 &4=0), and our CPL (current privilege level) is 0x8&3=0. + * `10000000` +The base of this segment. Linear address = logical address + 0x10000000. + * `ffffffff` +The limit of this segment. Linear addresses above 0xffffffff will result in segment violation exceptions. + * `10cf9a00` +The raw flags of this segment, which QEMU helpfully decodes for us in the next few fields. + * `DPL=0` +The privilege level of this segment. Only code running with privilege level 0 can load this segment. + * `CS32` +This is a 32-bit code segment. Other values include `DS` for data segments (not to be confused with the DS register), and `LDT` for local descriptor tables. + * `[-R-]` +This segment is read-only. + * info mem +(Lab 2+) Display mapped virtual memory and permissions. For example, +``` + ef7c0000-ef800000 00040000 urw + efbf8000-efc00000 00008000 -rw + +``` + +tells us that the 0x00040000 bytes of memory from 0xef7c0000 to 0xef800000 are mapped read/write and user-accessible, while the memory from 0xefbf8000 to 0xefc00000 is mapped read/write, but only kernel-accessible. + + * info pg +(Lab 2+) Display the current page table structure. 
The output is similar to `info mem`, but distinguishes page directory entries and page table entries and gives the permissions for each separately. Repeated PTE's and entire page tables are folded up into a single line. For example,
+```
+ VPN range Entry Flags Physical page
+ [00000-003ff] PDE[000] -------UWP
+ [00200-00233] PTE[200-233] -------U-P 00380 0037e 0037d 0037c 0037b 0037a ..
+ [00800-00bff] PDE[002] ----A--UWP
+ [00800-00801] PTE[000-001] ----A--U-P 0034b 00349
+ [00802-00802] PTE[002] -------U-P 00348
+
+```
+
+This shows two page directory entries, spanning virtual addresses 0x00000000 to 0x003fffff and 0x00800000 to 0x00bfffff, respectively. Both PDE's are present, writable, and user, and the second PDE has also been accessed. The second of these page tables maps three pages, spanning virtual addresses 0x00800000 through 0x00802fff, of which the first two are present, user, and accessed, and the third is only present and user. The first of these PTE's maps physical page 0x34b.
+
+
+
+
+QEMU also takes some useful command line arguments, which can be passed into the JOS makefile using the QEMUEXTRA variable:
+
+ * make QEMUEXTRA='-d int' ...
+Log all interrupts, along with a full register dump, to `qemu.log`. You can ignore the first two log entries, "SMM: enter" and "SMM: after RMS", as these are generated before entering the boot loader. After this, log entries look like
+```
+ 4: v=30 e=0000 i=1 cpl=3 IP=001b:00800e2e pc=00800e2e SP=0023:eebfdf28 EAX=00000005
+ EAX=00000005 EBX=00001002 ECX=00200000 EDX=00000000
+ ESI=00000805 EDI=00200000 EBP=eebfdf60 ESP=eebfdf28
+ ...
+
+```
+
+The first line describes the interrupt. The `4:` is just a log record counter. `v` gives the vector number in hex. `e` gives the error code. `i=1` indicates that this was produced by an `int` instruction (versus a hardware interrupt). The rest of the line should be self-explanatory. See info registers for a description of the register dump that follows.
+
+Note: If you're running a pre-0.15 version of QEMU, the log will be written to `/tmp` instead of the current directory.
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://pdos.csail.mit.edu/6.828/2018/labguide.html
+
+作者:[csail.mit][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://pdos.csail.mit.edu
+[b]: https://github.com/lujun9972
+[1]: http://sourceware.org/gdb/current/onlinedocs/gdb/
+[2]: http://wiki.qemu.org/download/qemu-doc.html#pcsys_005fmonitor

From 1f52dee0fa2f195e7ff82a8e374bfd72b966d39a Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Fri, 12 Oct 2018 11:10:32 +0800
Subject: [PATCH 375/736] PRF:20180827 Solve -error- failed to commit
 transaction (conflicting files)- In Arch Linux.md

@lujun9972
---
 ...mit transaction (conflicting files)- In Arch Linux.md | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md b/translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
index d77a63be3d..7764b5186e 100644
--- a/translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
+++ b/translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
@@ -3,25 +3,28 @@

 ![](https://www.ostechnix.com/wp-content/uploads/2018/06/arch_linux_wallpaper-720x340.png)

-It has been a month since I last updated my Arch Linux desktop. Today I tried to update my Arch Linux system and ran into the error **"error: failed to commit transaction (conflicting files) stfl: /usr/lib/libstfl.so.0 exists in filesystem"**. It looks like pacman could not update a library (/usr/lib/libstfl.so.0) that already exists on the filesystem. If you have run into the same problem, here is a quick fix.
+It has been a month since I last updated my Arch Linux desktop. Today I tried to update my Arch Linux system and ran into the error "error: failed to commit transaction (conflicting files) stfl: /usr/lib/libstfl.so.0 exists in filesystem". It looks like pacman could not update a library (/usr/lib/libstfl.so.0) that already exists on the filesystem. If you have run into the same problem, here is a quick fix.

 ### Fixing "error: failed to commit transaction (conflicting files)" in Arch Linux

 There are three ways to fix it.

-1. Simply ignore the problematic **stfl** library during the upgrade and try updating the system again. See this guide to learn [**how to ignore packages during an update**][1].
+1. Simply ignore the problematic stfl library during the upgrade and try updating the system again. See this guide to learn [how to ignore packages during an update][1].

 2. Overwrite the package with this command:
+
 ```
 $ sudo pacman -Syu --overwrite /usr/lib/libstfl.so.0
 ```

 3. Delete the stfl library by hand and upgrade the system again. Make sure no other important package depends on it; you can check archlinux.org to see whether such a conflict exists.
+
 ```
 $ sudo rm /usr/lib/libstfl.so.0
 ```

 Now, try updating the system:
+
 ```
 $ sudo pacman -Syu
 ```
@@ -41,7 +44,7 @@ via: https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-c

 作者:[SK][a]
 选题:[lujun9972](https://github.com/lujun9972)
 译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 09b5fca3f2703cc350c3360d817a7b3ae77e45fc Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Fri, 12 Oct 2018 11:10:53 +0800
Subject: [PATCH 376/736] PUB:20180827 Solve -error- failed to commit
 transaction (conflicting files)- In Arch Linux.md

@lujun9972 https://linux.cn/article-10107-1.html
---
 ...ed to commit transaction (conflicting files)- In Arch Linux.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {translated/tech => published}/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md (100%)

diff --git a/translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md b/published/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
similarity index 100%
rename from translated/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
rename to published/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
From 9c3814a225601d2be8b475e758824e9a8ce14bea Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 11:14:11 +0800 Subject: [PATCH 377/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Lab=201:=20PC=20B?= =?UTF-8?q?ootstrap=20and=20GCC=20Calling=20Conventions?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...C Bootstrap and GCC Calling Conventions.md | 616 ++++++++++++++++++ 1 file changed, 616 insertions(+) create mode 100644 sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md diff --git a/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md b/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md new file mode 100644 index 0000000000..365b5eb5f8 --- /dev/null +++ b/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md @@ -0,0 +1,616 @@ +Lab 1: PC Bootstrap and GCC Calling Conventions +====== +### Lab 1: Booting a PC + +#### Introduction + +This lab is split into three parts. The first part concentrates on getting familiarized with x86 assembly language, the QEMU x86 emulator, and the PC's power-on bootstrap procedure. The second part examines the boot loader for our 6.828 kernel, which resides in the `boot` directory of the `lab` tree. Finally, the third part delves into the initial template for our 6.828 kernel itself, named JOS, which resides in the `kernel` directory. + +##### Software Setup + +The files you will need for this and subsequent lab assignments in this course are distributed using the [Git][1] version control system. To learn more about Git, take a look at the [Git user's manual][2], or, if you are already familiar with other version control systems, you may find this [CS-oriented overview of Git][3] useful. + +The URL for the course Git repository is . To install the files in your Athena account, you need to _clone_ the course repository, by running the commands below. 
You must use an x86 Athena machine; that is, `uname -a` should mention `i386 GNU/Linux` or `i686 GNU/Linux` or `x86_64 GNU/Linux`. You can log into a public Athena host with `ssh -X athena.dialup.mit.edu`. + +``` +athena% mkdir ~/6.828 +athena% cd ~/6.828 +athena% add git +athena% git clone https://pdos.csail.mit.edu/6.828/2018/jos.git lab +Cloning into lab... +athena% cd lab +athena% + +``` + +Git allows you to keep track of the changes you make to the code. For example, if you are finished with one of the exercises, and want to checkpoint your progress, you can _commit_ your changes by running: + +``` +athena% git commit -am 'my solution for lab1 exercise 9' +Created commit 60d2135: my solution for lab1 exercise 9 + 1 files changed, 1 insertions(+), 0 deletions(-) +athena% + +``` + +You can keep track of your changes by using the git diff command. Running git diff will display the changes to your code since your last commit, and git diff origin/lab1 will display the changes relative to the initial code supplied for this lab. Here, `origin/lab1` is the name of the git branch with the initial code you downloaded from our server for this assignment. + +We have set up the appropriate compilers and simulators for you on Athena. To use them, run add -f 6.828. You must run this command every time you log in (or add it to your `~/.environment` file). If you get obscure errors while compiling or running `qemu`, double check that you added the course locker. + +If you are working on a non-Athena machine, you'll need to install `qemu` and possibly `gcc` following the directions on the [tools page][4]. We've made several useful debugging changes to `qemu` and some of the later labs depend on these patches, so you must build your own. If your machine uses a native ELF toolchain (such as Linux and most BSD's, but notably _not_ OS X), you can simply install `gcc` from your package manager. Otherwise, follow the directions on the tools page. 
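The clone-commit-diff workflow above can be sketched end to end in a throwaway repository. This is illustrative only: the directory is a scratch `mktemp` path, and the local branch `origin-lab1` stands in for the real `origin/lab1` branch of the course repo.

```shell
# Sketch of the checkpoint workflow described above, in a scratch repo.
# "origin-lab1" stands in for the origin/lab1 branch of the real course repo.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q lab
cd lab
git config user.email "student@example.com"
git config user.name "Student"
echo 'int x;' > hello.c
git add hello.c
git commit -qm 'initial lab code'
git branch origin-lab1                 # marks the starting point
echo 'int x = 5;' > hello.c
git commit -qam 'my solution for lab1 exercise 9'
git diff origin-lab1 --stat            # changes relative to the initial code
```

The final `git diff origin-lab1` plays the same role as `git diff origin/lab1` in the lab: it compares your current work against the code you started from, not just against your last commit.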
+
+##### Hand-In Procedure
+
+You will turn in your assignments using the [submission website][5]. You need to request an API key from the submission website before you can turn in any assignments or labs.
+
+The lab code comes with GNU Make rules to make submission easier. After committing your final changes to the lab, type make handin to submit your lab.
+
+```
+athena% git commit -am "ready to submit my lab"
+[lab1 c2e3c8b] ready to submit my lab
+ 2 files changed, 18 insertions(+), 2 deletions(-)
+
+athena% make handin
+git archive --prefix=lab1/ --format=tar HEAD | gzip > lab1-handin.tar.gz
+Get an API key for yourself by visiting https://6828.scripts.mit.edu/2018/handin.py/
+Please enter your API key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 50199 100 241 100 49958 414 85824 --:--:-- --:--:-- --:--:-- 85986
+athena%
+
+```
+
+make handin will store your API key in _myapi.key_. If you need to change your API key, just remove this file and let make handin generate it again ( _myapi.key_ must not include newline characters).
+
+If you use make handin and you have either uncommitted changes or untracked files, you will see output similar to the following:
+
+```
+ M hello.c
+?? bar.c
+?? foo.pyc
+Untracked files will not be handed in. Continue? [y/N]
+
+```
+
+Inspect the above lines and make sure all files that your lab solution needs are tracked, i.e., not listed in a line that begins with ??.
+
+In the case that make handin does not work properly, try fixing the problem with the curl or Git commands. Or you can run make tarball. This will make a tar file for you, which you can then upload via our [web interface][5].
+
+You can run make grade to test your solutions with the grading program. The [web interface][5] uses the same grading program to assign your lab submission a grade.
You should check the output of the grader (it may take a few minutes since the grader runs periodically) and ensure that you received the grade which you expected. If the grades don't match, your lab submission probably has a bug -- check the output of the grader (resp-lab*.txt) to see which particular test failed.
+
+For Lab 1, you do not need to turn in answers to any of the questions below. (Do answer them for yourself though! They will help with the rest of the lab.)
+
+#### Part 1: PC Bootstrap
+
+The purpose of the first exercise is to introduce you to x86 assembly language and the PC bootstrap process, and to get you started with QEMU and QEMU/GDB debugging. You will not have to write any code for this part of the lab, but you should go through it anyway for your own understanding and be prepared to answer the questions posed below.
+
+##### Getting Started with x86 assembly
+
+If you are not already familiar with x86 assembly language, you will quickly become familiar with it during this course! The [PC Assembly Language Book][6] is an excellent place to start. Hopefully, the book contains a mixture of new and old material for you.
+
+_Warning:_ Unfortunately the examples in the book are written for the NASM assembler, whereas we will be using the GNU assembler. NASM uses the so-called _Intel_ syntax while GNU uses the _AT&T_ syntax. While semantically equivalent, an assembly file will differ quite a lot, at least superficially, depending on which syntax is used. Luckily the conversion between the two is pretty simple, and is covered in [Brennan's Guide to Inline Assembly][7].
+
+Exercise 1. Familiarize yourself with the assembly language materials available on [the 6.828 reference page][8]. You don't have to read them now, but you'll almost certainly want to refer to some of this material when reading and writing x86 assembly.
+
+We do recommend reading the section "The Syntax" in [Brennan's Guide to Inline Assembly][7].
It gives a good (and quite brief) description of the AT&T assembly syntax we'll be using with the GNU assembler in JOS. + +Certainly the definitive reference for x86 assembly language programming is Intel's instruction set architecture reference, which you can find on [the 6.828 reference page][8] in two flavors: an HTML edition of the old [80386 Programmer's Reference Manual][9], which is much shorter and easier to navigate than more recent manuals but describes all of the x86 processor features that we will make use of in 6.828; and the full, latest and greatest [IA-32 Intel Architecture Software Developer's Manuals][10] from Intel, covering all the features of the most recent processors that we won't need in class but you may be interested in learning about. An equivalent (and often friendlier) set of manuals is [available from AMD][11]. Save the Intel/AMD architecture manuals for later or use them for reference when you want to look up the definitive explanation of a particular processor feature or instruction. + +##### Simulating the x86 + +Instead of developing the operating system on a real, physical personal computer (PC), we use a program that faithfully emulates a complete PC: the code you write for the emulator will boot on a real PC too. Using an emulator simplifies debugging; you can, for example, set break points inside of the emulated x86, which is difficult to do with the silicon version of an x86. + +In 6.828 we will use the [QEMU Emulator][12], a modern and relatively fast emulator. While QEMU's built-in monitor provides only limited debugging support, QEMU can act as a remote debugging target for the [GNU debugger][13] (GDB), which we'll use in this lab to step through the early boot process. + +To get started, extract the Lab 1 files into your own directory on Athena as described above in "Software Setup", then type make (or gmake on BSD systems) in the `lab` directory to build the minimal 6.828 boot loader and kernel you will start with. 
(It's a little generous to call the code we're running here a "kernel," but we'll flesh it out throughout the semester.) + +``` +athena% cd lab +athena% make ++ as kern/entry.S ++ cc kern/entrypgdir.c ++ cc kern/init.c ++ cc kern/console.c ++ cc kern/monitor.c ++ cc kern/printf.c ++ cc kern/kdebug.c ++ cc lib/printfmt.c ++ cc lib/readline.c ++ cc lib/string.c ++ ld obj/kern/kernel ++ as boot/boot.S ++ cc -Os boot/main.c ++ ld boot/boot +boot block is 380 bytes (max 510) ++ mk obj/kern/kernel.img + +``` + +(If you get errors like "undefined reference to `__udivdi3'", you probably don't have the 32-bit gcc multilib. If you're running Debian or Ubuntu, try installing the gcc-multilib package.) + +Now you're ready to run QEMU, supplying the file `obj/kern/kernel.img`, created above, as the contents of the emulated PC's "virtual hard disk." This hard disk image contains both our boot loader (`obj/boot/boot`) and our kernel (`obj/kernel`). + +``` +athena% make qemu + +``` + +or + +``` +athena% make qemu-nox + +``` + +This executes QEMU with the options required to set the hard disk and direct serial port output to the terminal. Some text should appear in the QEMU window: + +``` +Booting from Hard Disk... +6828 decimal is XXX octal! +entering test_backtrace 5 +entering test_backtrace 4 +entering test_backtrace 3 +entering test_backtrace 2 +entering test_backtrace 1 +entering test_backtrace 0 +leaving test_backtrace 0 +leaving test_backtrace 1 +leaving test_backtrace 2 +leaving test_backtrace 3 +leaving test_backtrace 4 +leaving test_backtrace 5 +Welcome to the JOS kernel monitor! +Type 'help' for a list of commands. +K> + +``` + +Everything after '`Booting from Hard Disk...`' was printed by our skeletal JOS kernel; the `K>` is the prompt printed by the small _monitor_ , or interactive control program, that we've included in the kernel. 
If you used make qemu, these lines printed by the kernel will appear in both the regular shell window from which you ran QEMU and the QEMU display window. This is because for testing and lab grading purposes we have set up the JOS kernel to write its console output not only to the virtual VGA display (as seen in the QEMU window), but also to the simulated PC's virtual serial port, which QEMU in turn outputs to its own standard output. Likewise, the JOS kernel will take input from both the keyboard and the serial port, so you can give it commands in either the VGA display window or the terminal running QEMU. Alternatively, you can use the serial console without the virtual VGA by running make qemu-nox. This may be convenient if you are SSH'd into an Athena dialup. To quit qemu, type Ctrl+a x. + +There are only two commands you can give to the kernel monitor, `help` and `kerninfo`. + +``` +K> help +help - display this list of commands +kerninfo - display information about the kernel +K> kerninfo +Special kernel symbols: + entry f010000c (virt) 0010000c (phys) + etext f0101a75 (virt) 00101a75 (phys) + edata f0112300 (virt) 00112300 (phys) + end f0112960 (virt) 00112960 (phys) +Kernel executable memory footprint: 75KB +K> + +``` + +The `help` command is obvious, and we will shortly discuss the meaning of what the `kerninfo` command prints. Although simple, it's important to note that this kernel monitor is running "directly" on the "raw (virtual) hardware" of the simulated PC. This means that you should be able to copy the contents of `obj/kern/kernel.img` onto the first few sectors of a _real_ hard disk, insert that hard disk into a real PC, turn it on, and see exactly the same thing on the PC's real screen as you did above in the QEMU window. 
(We don't recommend you do this on a real machine with useful information on its hard disk, though, because copying `kernel.img` onto the beginning of its hard disk will trash the master boot record and the beginning of the first partition, effectively causing everything previously on the hard disk to be lost!) + +##### The PC's Physical Address Space + +We will now dive into a bit more detail about how a PC starts up. A PC's physical address space is hard-wired to have the following general layout: + +``` ++------------------+ <- 0xFFFFFFFF (4GB) +| 32-bit | +| memory mapped | +| devices | +| | +/\/\/\/\/\/\/\/\/\/\ + +/\/\/\/\/\/\/\/\/\/\ +| | +| Unused | +| | ++------------------+ <- depends on amount of RAM +| | +| | +| Extended Memory | +| | +| | ++------------------+ <- 0x00100000 (1MB) +| BIOS ROM | ++------------------+ <- 0x000F0000 (960KB) +| 16-bit devices, | +| expansion ROMs | ++------------------+ <- 0x000C0000 (768KB) +| VGA Display | ++------------------+ <- 0x000A0000 (640KB) +| | +| Low Memory | +| | ++------------------+ <- 0x00000000 + +``` + +The first PCs, which were based on the 16-bit Intel 8088 processor, were only capable of addressing 1MB of physical memory. The physical address space of an early PC would therefore start at 0x00000000 but end at 0x000FFFFF instead of 0xFFFFFFFF. The 640KB area marked "Low Memory" was the _only_ random-access memory (RAM) that an early PC could use; in fact the very earliest PCs only could be configured with 16KB, 32KB, or 64KB of RAM! + +The 384KB area from 0x000A0000 through 0x000FFFFF was reserved by the hardware for special uses such as video display buffers and firmware held in non-volatile memory. The most important part of this reserved area is the Basic Input/Output System (BIOS), which occupies the 64KB region from 0x000F0000 through 0x000FFFFF. In early PCs the BIOS was held in true read-only memory (ROM), but current PCs store the BIOS in updateable flash memory. 
The BIOS is responsible for performing basic system initialization such as activating the video card and checking the amount of memory installed. After performing this initialization, the BIOS loads the operating system from some appropriate location such as floppy disk, hard disk, CD-ROM, or the network, and passes control of the machine to the operating system. + +When Intel finally "broke the one megabyte barrier" with the 80286 and 80386 processors, which supported 16MB and 4GB physical address spaces respectively, the PC architects nevertheless preserved the original layout for the low 1MB of physical address space in order to ensure backward compatibility with existing software. Modern PCs therefore have a "hole" in physical memory from 0x000A0000 to 0x00100000, dividing RAM into "low" or "conventional memory" (the first 640KB) and "extended memory" (everything else). In addition, some space at the very top of the PC's 32-bit physical address space, above all physical RAM, is now commonly reserved by the BIOS for use by 32-bit PCI devices. + +Recent x86 processors can support _more_ than 4GB of physical RAM, so RAM can extend further above 0xFFFFFFFF. In this case the BIOS must arrange to leave a _second_ hole in the system's RAM at the top of the 32-bit addressable region, to leave room for these 32-bit devices to be mapped. Because of design limitations JOS will use only the first 256MB of a PC's physical memory anyway, so for now we will pretend that all PCs have "only" a 32-bit physical address space. But dealing with complicated physical address spaces and other aspects of hardware organization that evolved over many years is one of the important practical challenges of OS development. + +##### The ROM BIOS + +In this portion of the lab, you'll use QEMU's debugging facilities to investigate how an IA-32 compatible computer boots. + +Open two terminal windows and cd both shells into your lab directory. In one, enter make qemu-gdb (or make qemu-nox-gdb). 
This starts up QEMU, but QEMU stops just before the processor executes the first instruction and waits for a debugging connection from GDB. In the second terminal, from the same directory you ran `make`, run make gdb. You should see something like this, + +``` +athena% make gdb +GNU gdb (GDB) 6.8-debian +Copyright (C) 2008 Free Software Foundation, Inc. +License GPLv3+: GNU GPL version 3 or later +This is free software: you are free to change and redistribute it. +There is NO WARRANTY, to the extent permitted by law. Type "show copying" +and "show warranty" for details. +This GDB was configured as "i486-linux-gnu". ++ target remote localhost:26000 +The target architecture is assumed to be i8086 +[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b +0x0000fff0 in ?? () ++ symbol-file obj/kern/kernel +(gdb) + +``` + +We provided a `.gdbinit` file that set up GDB to debug the 16-bit code used during early boot and directed it to attach to the listening QEMU. (If it doesn't work, you may have to add an `add-auto-load-safe-path` in your `.gdbinit` in your home directory to convince `gdb` to process the `.gdbinit` we provided. `gdb` will tell you if you have to do this.) + +The following line: + +``` +[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b + +``` + +is GDB's disassembly of the first instruction to be executed. From this output you can conclude a few things: + + * The IBM PC starts executing at physical address 0x000ffff0, which is at the very top of the 64KB area reserved for the ROM BIOS. + * The PC starts executing with `CS = 0xf000` and `IP = 0xfff0`. + * The first instruction to be executed is a `jmp` instruction, which jumps to the segmented address `CS = 0xf000` and `IP = 0xe05b`. + + + +Why does QEMU start like this? This is how Intel designed the 8088 processor, which IBM used in their original PC. 
Because the BIOS in a PC is "hard-wired" to the physical address range 0x000f0000-0x000fffff, this design ensures that the BIOS always gets control of the machine first after power-up or any system restart - which is crucial because on power-up there _is_ no other software anywhere in the machine's RAM that the processor could execute. The QEMU emulator comes with its own BIOS, which it places at this location in the processor's simulated physical address space. On processor reset, the (simulated) processor enters real mode and sets CS to 0xf000 and the IP to 0xfff0, so that execution begins at that (CS:IP) segment address. How does the segmented address 0xf000:fff0 turn into a physical address?
+
+To answer that we need to know a bit about real mode addressing. In real mode (the mode that PC starts off in), address translation works according to the formula: _physical address_ = 16 * _segment_ \+ _offset_. So, when the PC sets CS to 0xf000 and IP to 0xfff0, the physical address referenced is:
+
+```
+ 16 * 0xf000 + 0xfff0 # in hex multiplication by 16 is
+ = 0xf0000 + 0xfff0 # easy--just append a 0.
+ = 0xffff0
+
+```
+
+`0xffff0` is 16 bytes before the end of the BIOS (`0x100000`). Therefore we shouldn't be surprised that the first thing that the BIOS does is `jmp` backwards to an earlier location in the BIOS; after all how much could it accomplish in just 16 bytes?
+
+Exercise 2. Use GDB's si (Step Instruction) command to trace into the ROM BIOS for a few more instructions, and try to guess what it might be doing. You might want to look at [Phil Storrs I/O Ports Description][14], as well as other materials on the [6.828 reference materials page][8]. No need to figure out all the details - just the general idea of what the BIOS is doing first.
+ +When the BIOS runs, it sets up an interrupt descriptor table and initializes various devices such as the VGA display. This is where the "`Starting SeaBIOS`" message you see in the QEMU window comes from. + +After initializing the PCI bus and all the important devices the BIOS knows about, it searches for a bootable device such as a floppy, hard drive, or CD-ROM. Eventually, when it finds a bootable disk, the BIOS reads the _boot loader_ from the disk and transfers control to it. + +#### Part 2: The Boot Loader + +Floppy and hard disks for PCs are divided into 512 byte regions called _sectors_. A sector is the disk's minimum transfer granularity: each read or write operation must be one or more sectors in size and aligned on a sector boundary. If the disk is bootable, the first sector is called the _boot sector_ , since this is where the boot loader code resides. When the BIOS finds a bootable floppy or hard disk, it loads the 512-byte boot sector into memory at physical addresses 0x7c00 through 0x7dff, and then uses a `jmp` instruction to set the CS:IP to `0000:7c00`, passing control to the boot loader. Like the BIOS load address, these addresses are fairly arbitrary - but they are fixed and standardized for PCs. + +The ability to boot from a CD-ROM came much later during the evolution of the PC, and as a result the PC architects took the opportunity to rethink the boot process slightly. As a result, the way a modern BIOS boots from a CD-ROM is a bit more complicated (and more powerful). CD-ROMs use a sector size of 2048 bytes instead of 512, and the BIOS can load a much larger boot image from the disk into memory (not just one sector) before transferring control to it. For more information, see the ["El Torito" Bootable CD-ROM Format Specification][15]. + +For 6.828, however, we will use the conventional hard drive boot mechanism, which means that our boot loader must fit into a measly 512 bytes. 
The boot loader consists of one assembly language source file, `boot/boot.S`, and one C source file, `boot/main.c`. Look through these source files carefully and make sure you understand what's going on. The boot loader must perform two main functions:
+
+ 1. First, the boot loader switches the processor from real mode to _32-bit protected mode_ , because it is only in this mode that software can access all the memory above 1MB in the processor's physical address space. Protected mode is described briefly in sections 1.2.7 and 1.2.8 of [PC Assembly Language][6], and in great detail in the Intel architecture manuals. At this point you only have to understand that translation of segmented addresses (segment:offset pairs) into physical addresses happens differently in protected mode, and that after the transition offsets are 32 bits instead of 16.
+ 2. Second, the boot loader reads the kernel from the hard disk by directly accessing the IDE disk device registers via the x86's special I/O instructions. If you would like to understand better what the particular I/O instructions here mean, check out the "IDE hard drive controller" section on [the 6.828 reference page][8]. You will not need to learn much about programming specific devices in this class: writing device drivers is in practice a very important part of OS development, but from a conceptual or architectural viewpoint it is also one of the least interesting.
+
+
+
+After you understand the boot loader source code, look at the file `obj/boot/boot.asm`. This file is a disassembly of the boot loader that our GNUmakefile creates _after_ compiling the boot loader. This disassembly file makes it easy to see exactly where in physical memory all of the boot loader's code resides, and makes it easier to track what's happening while stepping through the boot loader in GDB. Likewise, `obj/kern/kernel.asm` contains a disassembly of the JOS kernel, which can often be useful for debugging.
+
+You can set address breakpoints in GDB with the `b` command. For example, b *0x7c00 sets a breakpoint at address 0x7C00. Once at a breakpoint, you can continue execution using the c and si commands: c causes QEMU to continue execution until the next breakpoint (or until you press Ctrl-C in GDB), and si _N_ steps through _N_ instructions at a time.
+
+To examine instructions in memory (besides the immediate next one to be executed, which GDB prints automatically), you use the x/i command. This command has the syntax x/ _N_ i _ADDR_ , where _N_ is the number of consecutive instructions to disassemble and _ADDR_ is the memory address at which to start disassembling.
+
+Exercise 3. Take a look at the [lab tools guide][16], especially the section on GDB commands. Even if you're familiar with GDB, this includes some esoteric GDB commands that are useful for OS work.
+
+Set a breakpoint at address 0x7c00, which is where the boot sector will be loaded. Continue execution until that breakpoint. Trace through the code in `boot/boot.S`, using the source code and the disassembly file `obj/boot/boot.asm` to keep track of where you are. Also use the `x/i` command in GDB to disassemble sequences of instructions in the boot loader, and compare the original boot loader source code with both the disassembly in `obj/boot/boot.asm` and GDB.
+
+Trace into `bootmain()` in `boot/main.c`, and then into `readsect()`. Identify the exact assembly instructions that correspond to each of the statements in `readsect()`. Trace through the rest of `readsect()` and back out into `bootmain()`, and identify the beginning and end of the `for` loop that reads the remaining sectors of the kernel from the disk. Find out what code will run when the loop is finished, set a breakpoint there, and continue to that breakpoint. Then step through the remainder of the boot loader.
+
+Be able to answer the following questions:
+
+ * At what point does the processor start executing 32-bit code?
What exactly causes the switch from 16- to 32-bit mode?
+ * What is the _last_ instruction of the boot loader executed, and what is the _first_ instruction of the kernel it just loaded?
+ * _Where_ is the first instruction of the kernel?
+ * How does the boot loader decide how many sectors it must read in order to fetch the entire kernel from disk? Where does it find this information?
+
+
+
+##### Loading the Kernel
+
+We will now look in further detail at the C language portion of the boot loader, in `boot/main.c`. But before doing so, this is a good time to stop and review some of the basics of C programming.
+
+Exercise 4. Read about programming with pointers in C. The best reference for the C language is _The C Programming Language_ by Brian Kernighan and Dennis Ritchie (known as 'K&R'). We recommend that students purchase this book (here is an [Amazon Link][17]) or find one of [MIT's 7 copies][18].
+
+Read 5.1 (Pointers and Addresses) through 5.5 (Character Pointers and Functions) in K&R. Then download the code for [pointers.c][19], run it, and make sure you understand where all of the printed values come from. In particular, make sure you understand where the pointer addresses in printed lines 1 and 6 come from, how all the values in printed lines 2 through 4 get there, and why the values printed in line 5 are seemingly corrupted.
+
+There are other references on pointers in C (e.g., [A tutorial by Ted Jensen][20] that cites K&R heavily), though not as strongly recommended.
+
+_Warning:_ Unless you are already thoroughly versed in C, do not skip or even skim this reading exercise. If you do not really understand pointers in C, you will suffer untold pain and misery in subsequent labs, and then eventually come to understand them the hard way. Trust us; you don't want to find out what "the hard way" is.
+
+To make sense out of `boot/main.c` you'll need to know what an ELF binary is.
When you compile and link a C program such as the JOS kernel, the compiler transforms each C source ('`.c`') file into an _object_ ('`.o`') file containing assembly language instructions encoded in the binary format expected by the hardware. The linker then combines all of the compiled object files into a single _binary image_ such as `obj/kern/kernel`, which in this case is a binary in the ELF format, which stands for "Executable and Linkable Format". + +Full information about this format is available in [the ELF specification][21] on [our reference page][8], but you will not need to delve very deeply into the details of this format in this class. Although as a whole the format is quite powerful and complex, most of the complex parts are for supporting dynamic loading of shared libraries, which we will not do in this class. The [Wikipedia page][22] has a short description. + +For purposes of 6.828, you can consider an ELF executable to be a header with loading information, followed by several _program sections_ , each of which is a contiguous chunk of code or data intended to be loaded into memory at a specified address. The boot loader does not modify the code or data; it loads it into memory and starts executing it. + +An ELF binary starts with a fixed-length _ELF header_ , followed by a variable-length _program header_ listing each of the program sections to be loaded. The C definitions for these ELF headers are in `inc/elf.h`. The program sections we're interested in are: + + * `.text`: The program's executable instructions. + * `.rodata`: Read-only data, such as ASCII string constants produced by the C compiler. (We will not bother setting up the hardware to prohibit writing, however.) + * `.data`: The data section holds the program's initialized data, such as global variables declared with initializers like `int x = 5;`. 
+ + + +When the linker computes the memory layout of a program, it reserves space for _uninitialized_ global variables, such as `int x;`, in a section called `.bss` that immediately follows `.data` in memory. C requires that "uninitialized" global variables start with a value of zero. Thus there is no need to store contents for `.bss` in the ELF binary; instead, the linker records just the address and size of the `.bss` section. The loader or the program itself must arrange to zero the `.bss` section. + +Examine the full list of the names, sizes, and link addresses of all the sections in the kernel executable by typing: + +``` +athena% objdump -h obj/kern/kernel + +(If you compiled your own toolchain, you may need to use i386-jos-elf-objdump) + +``` + +You will see many more sections than the ones we listed above, but the others are not important for our purposes. Most of the others are to hold debugging information, which is typically included in the program's executable file but not loaded into memory by the program loader. + +Take particular note of the "VMA" (or _link address_ ) and the "LMA" (or _load address_ ) of the `.text` section. The load address of a section is the memory address at which that section should be loaded into memory. + +The link address of a section is the memory address from which the section expects to execute. The linker encodes the link address in the binary in various ways, such as when the code needs the address of a global variable, with the result that a binary usually won't work if it is executing from an address that it is not linked for. (It is possible to generate _position-independent_ code that does not contain any such absolute addresses. This is used extensively by modern shared libraries, but it has performance and complexity costs, so we won't be using it in 6.828.) + +Typically, the link and load addresses are the same. 
For example, look at the `.text` section of the boot loader: + +``` +athena% objdump -h obj/boot/boot.out + +``` + +The boot loader uses the ELF _program headers_ to decide how to load the sections. The program headers specify which parts of the ELF object to load into memory and the destination address each should occupy. You can inspect the program headers by typing: + +``` +athena% objdump -x obj/kern/kernel + +``` + +The program headers are then listed under "Program Headers" in the output of objdump. The areas of the ELF object that need to be loaded into memory are those that are marked as "LOAD". Other information for each program header is given, such as the virtual address ("vaddr"), the physical address ("paddr"), and the size of the loaded area ("memsz" and "filesz"). + +Back in boot/main.c, the `ph->p_pa` field of each program header contains the segment's destination physical address (in this case, it really is a physical address, though the ELF specification is vague on the actual meaning of this field). + +The BIOS loads the boot sector into memory starting at address 0x7c00, so this is the boot sector's load address. This is also where the boot sector executes from, so this is also its link address. We set the link address by passing `-Ttext 0x7C00` to the linker in `boot/Makefrag`, so the linker will produce the correct memory addresses in the generated code. + +Exercise 5. Trace through the first few instructions of the boot loader again and identify the first instruction that would "break" or otherwise do the wrong thing if you were to get the boot loader's link address wrong. Then change the link address in `boot/Makefrag` to something wrong, run make clean, recompile the lab with make, and trace into the boot loader again to see what happens. Don't forget to change the link address back and make clean again afterward! + +Look back at the load and link addresses for the kernel. 
Unlike the boot loader, these two addresses aren't the same: the kernel is telling the boot loader to load it into memory at a low address (1 megabyte), but it expects to execute from a high address. We'll dig in to how we make this work in the next section. + +Besides the section information, there is one more field in the ELF header that is important to us, named `e_entry`. This field holds the link address of the _entry point_ in the program: the memory address in the program's text section at which the program should begin executing. You can see the entry point: + +``` +athena% objdump -f obj/kern/kernel + +``` + +You should now be able to understand the minimal ELF loader in `boot/main.c`. It reads each section of the kernel from disk into memory at the section's load address and then jumps to the kernel's entry point. + +Exercise 6. We can examine memory using GDB's x command. The [GDB manual][23] has full details, but for now, it is enough to know that the command x/ _N_ x _ADDR_ prints _`N`_ words of memory at _`ADDR`_. (Note that both '`x`'s in the command are lowercase.) _Warning_ : The size of a word is not a universal standard. In GNU assembly, a word is two bytes (the 'w' in xorw, which stands for word, means 2 bytes). + +Reset the machine (exit QEMU/GDB and start them again). Examine the 8 words of memory at 0x00100000 at the point the BIOS enters the boot loader, and then again at the point the boot loader enters the kernel. Why are they different? What is there at the second breakpoint? (You do not really need to use QEMU to answer this question. Just think.) + +#### Part 3: The Kernel + +We will now start to examine the minimal JOS kernel in a bit more detail. (And you will finally get to write some code!). Like the boot loader, the kernel begins with some assembly language code that sets things up so that C language code can execute properly. 
+ +##### Using virtual memory to work around position dependence + +When you inspected the boot loader's link and load addresses above, they matched perfectly, but there was a (rather large) disparity between the _kernel's_ link address (as printed by objdump) and its load address. Go back and check both and make sure you can see what we're talking about. (Linking the kernel is more complicated than the boot loader, so the link and load addresses are at the top of `kern/kernel.ld`.) + +Operating system kernels often like to be linked and run at very high _virtual address_ , such as 0xf0100000, in order to leave the lower part of the processor's virtual address space for user programs to use. The reason for this arrangement will become clearer in the next lab. + +Many machines don't have any physical memory at address 0xf0100000, so we can't count on being able to store the kernel there. Instead, we will use the processor's memory management hardware to map virtual address 0xf0100000 (the link address at which the kernel code _expects_ to run) to physical address 0x00100000 (where the boot loader loaded the kernel into physical memory). This way, although the kernel's virtual address is high enough to leave plenty of address space for user processes, it will be loaded in physical memory at the 1MB point in the PC's RAM, just above the BIOS ROM. This approach requires that the PC have at least a few megabytes of physical memory (so that physical address 0x00100000 works), but this is likely to be true of any PC built after about 1990. + +In fact, in the next lab, we will map the _entire_ bottom 256MB of the PC's physical address space, from physical addresses 0x00000000 through 0x0fffffff, to virtual addresses 0xf0000000 through 0xffffffff respectively. You should now see why JOS can only use the first 256MB of physical memory. + +For now, we'll just map the first 4MB of physical memory, which will be enough to get us up and running. 
We do this using the hand-written, statically-initialized page directory and page table in `kern/entrypgdir.c`. For now, you don't have to understand the details of how this works, just the effect that it accomplishes. Up until `kern/entry.S` sets the `CR0_PG` flag, memory references are treated as physical addresses (strictly speaking, they're linear addresses, but boot/boot.S set up an identity mapping from linear addresses to physical addresses and we're never going to change that). Once `CR0_PG` is set, memory references are virtual addresses that get translated by the virtual memory hardware to physical addresses. `entry_pgdir` translates virtual addresses in the range 0xf0000000 through 0xf0400000 to physical addresses 0x00000000 through 0x00400000, as well as virtual addresses 0x00000000 through 0x00400000 to physical addresses 0x00000000 through 0x00400000. Any virtual address that is not in one of these two ranges will cause a hardware exception which, since we haven't set up interrupt handling yet, will cause QEMU to dump the machine state and exit (or endlessly reboot if you aren't using the 6.828-patched version of QEMU). + +Exercise 7. Use QEMU and GDB to trace into the JOS kernel and stop at the `movl %eax, %cr0`. Examine memory at 0x00100000 and at 0xf0100000. Now, single step over that instruction using the stepi GDB command. Again, examine memory at 0x00100000 and at 0xf0100000. Make sure you understand what just happened. + +What is the first instruction _after_ the new mapping is established that would fail to work properly if the mapping weren't in place? Comment out the `movl %eax, %cr0` in `kern/entry.S`, trace into it, and see if you were right. + +##### Formatted Printing to the Console + +Most people take functions like `printf()` for granted, sometimes even thinking of them as "primitives" of the C language. But in an OS kernel, we have to implement all I/O ourselves. 
+
Read through `kern/printf.c`, `lib/printfmt.c`, and `kern/console.c`, and make sure you understand their relationship. It will become clear in later labs why `printfmt.c` is located in the separate `lib` directory.

Exercise 8. We have omitted a small fragment of code - the code necessary to print octal numbers using patterns of the form "%o". Find and fill in this code fragment.

Be able to answer the following questions:

 1. Explain the interface between `printf.c` and `console.c`. Specifically, what function does `console.c` export? How is this function used by `printf.c`?

 2. Explain the following from `console.c`:
```
 1 if (crt_pos >= CRT_SIZE) {
 2 int i;
 3 memmove(crt_buf, crt_buf + CRT_COLS, (CRT_SIZE - CRT_COLS) * sizeof(uint16_t));
 4 for (i = CRT_SIZE - CRT_COLS; i < CRT_SIZE; i++)
 5 crt_buf[i] = 0x0700 | ' ';
 6 crt_pos -= CRT_COLS;
 7 }

```

 3. For the following questions you might wish to consult the notes for Lecture 2. These notes cover GCC's calling convention on the x86.

Trace the execution of the following code step-by-step:
```
 int x = 1, y = 3, z = 4;
 cprintf("x %d, y %x, z %d\n", x, y, z);

```

 * In the call to `cprintf()`, to what does `fmt` point? To what does `ap` point?
 * List (in order of execution) each call to `cons_putc`, `va_arg`, and `vcprintf`. For `cons_putc`, list its argument as well. For `va_arg`, list what `ap` points to before and after the call. For `vcprintf` list the values of its two arguments.
 4. Run the following code.
```
 unsigned int i = 0x00646c72;
 cprintf("H%x Wo%s", 57616, &i);

```

What is the output? Explain how this output is arrived at in the step-by-step manner of the previous exercise. [Here's an ASCII table][24] that maps bytes to characters.
+
The output depends on the fact that the x86 is little-endian. If the x86 were instead big-endian, what would you set `i` to in order to yield the same output? Would you need to change `57616` to a different value?

[Here's a description of little- and big-endian][25] and [a more whimsical description][26].

 5. In the following code, what is going to be printed after `'y='`? (note: the answer is not a specific value.) Why does this happen?
```
 cprintf("x=%d y=%d", 3);

```

 6. Let's say that GCC changed its calling convention so that it pushed arguments on the stack in declaration order, so that the last argument is pushed last. How would you have to change `cprintf` or its interface so that it would still be possible to pass it a variable number of arguments?



Challenge Enhance the console to allow text to be printed in different colors. The traditional way to do this is to make it interpret [ANSI escape sequences][27] embedded in the text strings printed to the console, but you may use any mechanism you like. There is plenty of information on [the 6.828 reference page][8] and elsewhere on the web on programming the VGA display hardware. If you're feeling really adventurous, you could try switching the VGA hardware into a graphics mode and making the console draw text onto the graphical frame buffer.

##### The Stack

In the final exercise of this lab, we will explore in more detail the way the C language uses the stack on the x86, and in the process write a useful new kernel monitor function that prints a _backtrace_ of the stack: a list of the saved Instruction Pointer (IP) values from the nested `call` instructions that led to the current point of execution.

Exercise 9. Determine where the kernel initializes its stack, and exactly where in memory its stack is located. How does the kernel reserve space for its stack? And at which "end" of this reserved area is the stack pointer initialized to point to?
+ +The x86 stack pointer (`esp` register) points to the lowest location on the stack that is currently in use. Everything _below_ that location in the region reserved for the stack is free. Pushing a value onto the stack involves decreasing the stack pointer and then writing the value to the place the stack pointer points to. Popping a value from the stack involves reading the value the stack pointer points to and then increasing the stack pointer. In 32-bit mode, the stack can only hold 32-bit values, and esp is always divisible by four. Various x86 instructions, such as `call`, are "hard-wired" to use the stack pointer register. + +The `ebp` (base pointer) register, in contrast, is associated with the stack primarily by software convention. On entry to a C function, the function's _prologue_ code normally saves the previous function's base pointer by pushing it onto the stack, and then copies the current `esp` value into `ebp` for the duration of the function. If all the functions in a program obey this convention, then at any given point during the program's execution, it is possible to trace back through the stack by following the chain of saved `ebp` pointers and determining exactly what nested sequence of function calls caused this particular point in the program to be reached. This capability can be particularly useful, for example, when a particular function causes an `assert` failure or `panic` because bad arguments were passed to it, but you aren't sure _who_ passed the bad arguments. A stack backtrace lets you find the offending function. + +Exercise 10. To become familiar with the C calling conventions on the x86, find the address of the `test_backtrace` function in `obj/kern/kernel.asm`, set a breakpoint there, and examine what happens each time it gets called after the kernel starts. How many 32-bit words does each recursive nesting level of `test_backtrace` push on the stack, and what are those words? 
+ +Note that, for this exercise to work properly, you should be using the patched version of QEMU available on the [tools][4] page or on Athena. Otherwise, you'll have to manually translate all breakpoint and memory addresses to linear addresses. + +The above exercise should give you the information you need to implement a stack backtrace function, which you should call `mon_backtrace()`. A prototype for this function is already waiting for you in `kern/monitor.c`. You can do it entirely in C, but you may find the `read_ebp()` function in `inc/x86.h` useful. You'll also have to hook this new function into the kernel monitor's command list so that it can be invoked interactively by the user. + +The backtrace function should display a listing of function call frames in the following format: + +``` +Stack backtrace: + ebp f0109e58 eip f0100a62 args 00000001 f0109e80 f0109e98 f0100ed2 00000031 + ebp f0109ed8 eip f01000d6 args 00000000 00000000 f0100058 f0109f28 00000061 + ... + +``` + +Each line contains an `ebp`, `eip`, and `args`. The `ebp` value indicates the base pointer into the stack used by that function: i.e., the position of the stack pointer just after the function was entered and the function prologue code set up the base pointer. The listed `eip` value is the function's _return instruction pointer_ : the instruction address to which control will return when the function returns. The return instruction pointer typically points to the instruction after the `call` instruction (why?). Finally, the five hex values listed after `args` are the first five arguments to the function in question, which would have been pushed on the stack just before the function was called. If the function was called with fewer than five arguments, of course, then not all five of these values will be useful. (Why can't the backtrace code detect how many arguments there actually are? How could this limitation be fixed?) 
+ +The first line printed reflects the _currently executing_ function, namely `mon_backtrace` itself, the second line reflects the function that called `mon_backtrace`, the third line reflects the function that called that one, and so on. You should print _all_ the outstanding stack frames. By studying `kern/entry.S` you'll find that there is an easy way to tell when to stop. + +Here are a few specific points you read about in K&R Chapter 5 that are worth remembering for the following exercise and for future labs. + + * If `int *p = (int*)100`, then `(int)p + 1` and `(int)(p + 1)` are different numbers: the first is `101` but the second is `104`. When adding an integer to a pointer, as in the second case, the integer is implicitly multiplied by the size of the object the pointer points to. + * `p[i]` is defined to be the same as `*(p+i)`, referring to the i'th object in the memory pointed to by p. The above rule for addition helps this definition work when the objects are larger than one byte. + * `&p[i]` is the same as `(p+i)`, yielding the address of the i'th object in the memory pointed to by p. + + + +Although most C programs never need to cast between pointers and integers, operating systems frequently do. Whenever you see an addition involving a memory address, ask yourself whether it is an integer addition or pointer addition and make sure the value being added is appropriately multiplied or not. + +Exercise 11. Implement the backtrace function as specified above. Use the same format as in the example, since otherwise the grading script will be confused. When you think you have it working right, run make grade to see if its output conforms to what our grading script expects, and fix it if it doesn't. _After_ you have handed in your Lab 1 code, you are welcome to change the output format of the backtrace function any way you like. 
+ +If you use `read_ebp()`, note that GCC may generate "optimized" code that calls `read_ebp()` _before_ `mon_backtrace()`'s function prologue, which results in an incomplete stack trace (the stack frame of the most recent function call is missing). While we have tried to disable optimizations that cause this reordering, you may want to examine the assembly of `mon_backtrace()` and make sure the call to `read_ebp()` is happening after the function prologue. + +At this point, your backtrace function should give you the addresses of the function callers on the stack that lead to `mon_backtrace()` being executed. However, in practice you often want to know the function names corresponding to those addresses. For instance, you may want to know which functions could contain a bug that's causing your kernel to crash. + +To help you implement this functionality, we have provided the function `debuginfo_eip()`, which looks up `eip` in the symbol table and returns the debugging information for that address. This function is defined in `kern/kdebug.c`. + +Exercise 12. Modify your stack backtrace function to display, for each `eip`, the function name, source file name, and line number corresponding to that `eip`. + +In `debuginfo_eip`, where do `__STAB_*` come from? This question has a long answer; to help you to discover the answer, here are some things you might want to do: + + * look in the file `kern/kernel.ld` for `__STAB_*` + * run objdump -h obj/kern/kernel + * run objdump -G obj/kern/kernel + * run gcc -pipe -nostdinc -O2 -fno-builtin -I. -MD -Wall -Wno-format -DJOS_KERNEL -gstabs -c -S kern/init.c, and look at init.s. + * see if the bootloader loads the symbol table in memory as part of loading the kernel binary + + + +Complete the implementation of `debuginfo_eip` by inserting the call to `stab_binsearch` to find the line number for an address. 
+ +Add a `backtrace` command to the kernel monitor, and extend your implementation of `mon_backtrace` to call `debuginfo_eip` and print a line for each stack frame of the form: + +``` +K> backtrace +Stack backtrace: + ebp f010ff78 eip f01008ae args 00000001 f010ff8c 00000000 f0110580 00000000 + kern/monitor.c:143: monitor+106 + ebp f010ffd8 eip f0100193 args 00000000 00001aac 00000660 00000000 00000000 + kern/init.c:49: i386_init+59 + ebp f010fff8 eip f010003d args 00000000 00000000 0000ffff 10cf9a00 0000ffff + kern/entry.S:70: +0 +K> + +``` + +Each line gives the file name and line within that file of the stack frame's `eip`, followed by the name of the function and the offset of the `eip` from the first instruction of the function (e.g., `monitor+106` means the return `eip` is 106 bytes past the beginning of `monitor`). + +Be sure to print the file and function names on a separate line, to avoid confusing the grading script. + +Tip: printf format strings provide an easy, albeit obscure, way to print non-null-terminated strings like those in STABS tables. `printf("%.*s", length, string)` prints at most `length` characters of `string`. Take a look at the printf man page to find out why this works. + +You may find that some functions are missing from the backtrace. For example, you will probably see a call to `monitor()` but not to `runcmd()`. This is because the compiler in-lines some function calls. Other optimizations may cause you to see unexpected line numbers. If you get rid of the `-O2` from `GNUMakefile`, the backtraces may make more sense (but your kernel will run more slowly). + +**This completes the lab.** In the `lab` directory, commit your changes with git commit and type make handin to submit your code. 
+ +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: http://www.git-scm.com/ +[2]: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html +[3]: http://eagain.net/articles/git-for-computer-scientists/ +[4]: https://pdos.csail.mit.edu/6.828/2018/tools.html +[5]: https://6828.scripts.mit.edu/2018/handin.py/ +[6]: https://pdos.csail.mit.edu/6.828/2018/readings/pcasm-book.pdf +[7]: http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html +[8]: https://pdos.csail.mit.edu/6.828/2018/reference.html +[9]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm +[10]: http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html +[11]: http://developer.amd.com/resources/developer-guides-manuals/ +[12]: http://www.qemu.org/ +[13]: http://www.gnu.org/software/gdb/ +[14]: http://web.archive.org/web/20040404164813/members.iweb.net.au/~pstorr/pcbook/book2/book2.htm +[15]: https://pdos.csail.mit.edu/6.828/2018/readings/boot-cdrom.pdf +[16]: https://pdos.csail.mit.edu/6.828/2018/labguide.html +[17]: http://www.amazon.com/C-Programming-Language-2nd/dp/0131103628/sr=8-1/qid=1157812738/ref=pd_bbs_1/104-1502762-1803102?ie=UTF8&s=books +[18]: http://library.mit.edu/F/AI9Y4SJ2L5ELEE2TAQUAAR44XV5RTTQHE47P9MKP5GQDLR9A8X-10422?func=item-global&doc_library=MIT01&doc_number=000355242&year=&volume=&sub_library= +[19]: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/pointers.c +[20]: https://pdos.csail.mit.edu/6.828/2018/readings/pointers.pdf +[21]: https://pdos.csail.mit.edu/6.828/2018/readings/elf.pdf +[22]: http://en.wikipedia.org/wiki/Executable_and_Linkable_Format +[23]: 
https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html +[24]: http://web.cs.mun.ca/~michael/c/ascii-table.html +[25]: http://www.webopedia.com/TERM/b/big_endian.html +[26]: http://www.networksorcery.com/enp/ien/ien137.txt +[27]: http://rrbrandt.dee.ufcg.edu.br/en/docs/ansi/ From 5b2a5895e1afa2b2b37abd8c94c3b4e840436155 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 11:29:25 +0800 Subject: [PATCH 378/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Lab=202:=20Memory?= =?UTF-8?q?=20Management?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20180927 Lab 2- Memory Management.md | 272 ++++++++++++++++++ 1 file changed, 272 insertions(+) create mode 100644 sources/tech/20180927 Lab 2- Memory Management.md diff --git a/sources/tech/20180927 Lab 2- Memory Management.md b/sources/tech/20180927 Lab 2- Memory Management.md new file mode 100644 index 0000000000..386bf6ceaf --- /dev/null +++ b/sources/tech/20180927 Lab 2- Memory Management.md @@ -0,0 +1,272 @@ +Lab 2: Memory Management +====== +### Lab 2: Memory Management + +#### Introduction + +In this lab, you will write the memory management code for your operating system. Memory management has two components. + +The first component is a physical memory allocator for the kernel, so that the kernel can allocate memory and later free it. Your allocator will operate in units of 4096 bytes, called _pages_. Your task will be to maintain data structures that record which physical pages are free and which are allocated, and how many processes are sharing each allocated page. You will also write the routines to allocate and free pages of memory. + +The second component of memory management is _virtual memory_ , which maps the virtual addresses used by kernel and user software to addresses in physical memory. The x86 hardware's memory management unit (MMU) performs the mapping when instructions use memory, consulting a set of page tables. 
You will modify JOS to set up the MMU's page tables according to a specification we provide. + +##### Getting started + +In this and future labs you will progressively build up your kernel. We will also provide you with some additional source. To fetch that source, use Git to commit changes you've made since handing in lab 1 (if any), fetch the latest version of the course repository, and then create a local branch called `lab2` based on our lab2 branch, `origin/lab2`: + +``` + athena% cd ~/6.828/lab + athena% add git + athena% git pull + Already up-to-date. + athena% git checkout -b lab2 origin/lab2 + Branch lab2 set up to track remote branch refs/remotes/origin/lab2. + Switched to a new branch "lab2" + athena% +``` + +The git checkout -b command shown above actually does two things: it first creates a local branch `lab2` that is based on the `origin/lab2` branch provided by the course staff, and second, it changes the contents of your `lab` directory to reflect the files stored on the `lab2` branch. Git allows switching between existing branches using git checkout _branch-name_ , though you should commit any outstanding changes on one branch before switching to a different one. + +You will now need to merge the changes you made in your `lab1` branch into the `lab2` branch, as follows: + +``` + athena% git merge lab1 + Merge made by recursive. + kern/kdebug.c | 11 +++++++++-- + kern/monitor.c | 19 +++++++++++++++++++ + lib/printfmt.c | 7 +++---- + 3 files changed, 31 insertions(+), 6 deletions(-) + athena% +``` + +In some cases, Git may not be able to figure out how to merge your changes with the new lab assignment (e.g. if you modified some of the code that is changed in the second lab assignment). In that case, the git merge command will tell you which files are _conflicted_ , and you should first resolve the conflict (by editing the relevant files) and then commit the resulting files with git commit -a. 
+ +Lab 2 contains the following new source files, which you should browse through: + + * `inc/memlayout.h` + * `kern/pmap.c` + * `kern/pmap.h` + * `kern/kclock.h` + * `kern/kclock.c` + + + +`memlayout.h` describes the layout of the virtual address space that you must implement by modifying `pmap.c`. `memlayout.h` and `pmap.h` define the `PageInfo` structure that you'll use to keep track of which pages of physical memory are free. `kclock.c` and `kclock.h` manipulate the PC's battery-backed clock and CMOS RAM hardware, in which the BIOS records the amount of physical memory the PC contains, among other things. The code in `pmap.c` needs to read this device hardware in order to figure out how much physical memory there is, but that part of the code is done for you: you do not need to know the details of how the CMOS hardware works. + +Pay particular attention to `memlayout.h` and `pmap.h`, since this lab requires you to use and understand many of the definitions they contain. You may want to review `inc/mmu.h`, too, as it also contains a number of definitions that will be useful for this lab. + +Before beginning the lab, don't forget to add -f 6.828 to get the 6.828 version of QEMU. + +##### Lab Requirements + +In this lab and subsequent labs, do all of the regular exercises described in the lab and _at least one_ challenge problem. (Some challenge problems are more challenging than others, of course!) Additionally, write up brief answers to the questions posed in the lab and a short (e.g., one or two paragraph) description of what you did to solve your chosen challenge problem. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab2.txt` in the top level of your `lab` directory before handing in your work. 
+ +##### Hand-In Procedure + +When you are ready to hand in your lab code and write-up, add your `answers-lab2.txt` to the Git repository, commit your changes, and then run make handin. + +``` + athena% git add answers-lab2.txt + athena% git commit -am "my answer to lab2" + [lab2 a823de9] my answer to lab2 + 4 files changed, 87 insertions(+), 10 deletions(-) + athena% make handin +``` + +As before, we will be grading your solutions with a grading program. You can run make grade in the `lab` directory to test your kernel with the grading program. You may change any of the kernel source and header files you need to in order to complete the lab, but needless to say you must not change or otherwise subvert the grading code. + +#### Part 1: Physical Page Management + +The operating system must keep track of which parts of physical RAM are free and which are currently in use. JOS manages the PC's physical memory with _page granularity_ so that it can use the MMU to map and protect each piece of allocated memory. + +You'll now write the physical page allocator. It keeps track of which pages are free with a linked list of `struct PageInfo` objects (which, unlike xv6, are not embedded in the free pages themselves), each corresponding to a physical page. You need to write the physical page allocator before you can write the rest of the virtual memory implementation, because your page table management code will need to allocate physical memory in which to store page tables. + +Exercise 1. In the file `kern/pmap.c`, you must implement code for the following functions (probably in the order given). + +`boot_alloc()` +`mem_init()` (only up to the call to `check_page_free_list(1)`) +`page_init()` +`page_alloc()` +`page_free()` + +`check_page_free_list()` and `check_page_alloc()` test your physical page allocator. You should boot JOS and see whether `check_page_alloc()` reports success. Fix your code so that it passes. 
You may find it helpful to add your own `assert()`s to verify that your assumptions are correct. + +This lab, and all the 6.828 labs, will require you to do a bit of detective work to figure out exactly what you need to do. This assignment does not describe all the details of the code you'll have to add to JOS. Look for comments in the parts of the JOS source that you have to modify; those comments often contain specifications and hints. You will also need to look at related parts of JOS, at the Intel manuals, and perhaps at your 6.004 or 6.033 notes. + +#### Part 2: Virtual Memory + +Before doing anything else, familiarize yourself with the x86's protected-mode memory management architecture: namely _segmentation_ and _page translation_. + +Exercise 2. Look at chapters 5 and 6 of the [Intel 80386 Reference Manual][1], if you haven't done so already. Read the sections about page translation and page-based protection closely (5.2 and 6.4). We recommend that you also skim the sections about segmentation; while JOS uses the paging hardware for virtual memory and protection, segment translation and segment-based protection cannot be disabled on the x86, so you will need a basic understanding of it. + +##### Virtual, Linear, and Physical Addresses + +In x86 terminology, a _virtual address_ consists of a segment selector and an offset within the segment. A _linear address_ is what you get after segment translation but before page translation. A _physical address_ is what you finally get after both segment and page translation and what ultimately goes out on the hardware bus to your RAM. + +``` + Selector +--------------+ +-----------+ + ---------->| | | | + | Segmentation | | Paging | +Software | |-------->| |----------> RAM + Offset | Mechanism | | Mechanism | + ---------->| | | | + +--------------+ +-----------+ + Virtual Linear Physical + +``` + +A C pointer is the "offset" component of the virtual address. 
In `boot/boot.S`, we installed a Global Descriptor Table (GDT) that effectively disabled segment translation by setting all segment base addresses to 0 and limits to `0xffffffff`. Hence the "selector" has no effect and the linear address always equals the offset of the virtual address. In lab 3, we'll have to interact a little more with segmentation to set up privilege levels, but as for memory translation, we can ignore segmentation throughout the JOS labs and focus solely on page translation. + +Recall that in part 3 of lab 1, we installed a simple page table so that the kernel could run at its link address of 0xf0100000, even though it is actually loaded in physical memory just above the ROM BIOS at 0x00100000. This page table mapped only 4MB of memory. In the virtual address space layout you are going to set up for JOS in this lab, we'll expand this to map the first 256MB of physical memory starting at virtual address 0xf0000000 and to map a number of other regions of the virtual address space. + +Exercise 3. While GDB can only access QEMU's memory by virtual address, it's often useful to be able to inspect physical memory while setting up virtual memory. Review the QEMU [monitor commands][2] from the lab tools guide, especially the `xp` command, which lets you inspect physical memory. To access the QEMU monitor, press Ctrl-a c in the terminal (the same binding returns to the serial console). + +Use the xp command in the QEMU monitor and the x command in GDB to inspect memory at corresponding physical and virtual addresses and make sure you see the same data. + +Our patched version of QEMU provides an info pg command that may also prove useful: it shows a compact but detailed representation of the current page tables, including all mapped memory ranges, permissions, and flags. Stock QEMU also provides an info mem command that shows an overview of which ranges of virtual addresses are mapped and with what permissions. 
+
From code executing on the CPU, once we're in protected mode (which we entered first thing in `boot/boot.S`), there's no way to directly use a linear or physical address. _All_ memory references are interpreted as virtual addresses and translated by the MMU, which means all pointers in C are virtual addresses.
+
The JOS kernel often needs to manipulate addresses as opaque values or as integers, without dereferencing them, for example in the physical memory allocator. Sometimes these are virtual addresses, and sometimes they are physical addresses. To help document the code, the JOS source distinguishes the two cases: the type `uintptr_t` represents opaque virtual addresses, and `physaddr_t` represents physical addresses. Both these types are really just synonyms for 32-bit integers (`uint32_t`), so the compiler won't stop you from assigning one type to another! Since they are integer types (not pointers), the compiler _will_ complain if you try to dereference them.
+
The JOS kernel can dereference a `uintptr_t` by first casting it to a pointer type. In contrast, the kernel can't sensibly dereference a physical address, since the MMU translates all memory references. If you cast a `physaddr_t` to a pointer and dereference it, you may be able to load and store to the resulting address (the hardware will interpret it as a virtual address), but you probably won't get the memory location you intended.
+
To summarize:
+
| C type | Address type |
|---------------|--------------|
| `T*` | Virtual |
| `uintptr_t` | Virtual |
| `physaddr_t` | Physical |
+
Question
+
 1. Assuming that the following JOS kernel code is correct, what type should variable `x` have, `uintptr_t` or `physaddr_t`?

```
 mystery_t x;
 char* value = return_a_pointer();
 *value = 10;
 x = (mystery_t) value;
```
+
The JOS kernel sometimes needs to read or modify memory for which it knows only the physical address.
For example, adding a mapping to a page table may require allocating physical memory to store a page directory and then initializing that memory. However, the kernel cannot bypass virtual address translation and thus cannot directly load and store to physical addresses. One reason JOS remaps all of physical memory starting from physical address 0 at virtual address 0xf0000000 is to help the kernel read and write memory for which it knows just the physical address. In order to translate a physical address into a virtual address that the kernel can actually read and write, the kernel must add 0xf0000000 to the physical address to find its corresponding virtual address in the remapped region. You should use `KADDR(pa)` to do that addition. + +The JOS kernel also sometimes needs to be able to find a physical address given the virtual address of the memory in which a kernel data structure is stored. Kernel global variables and memory allocated by `boot_alloc()` are in the region where the kernel was loaded, starting at 0xf0000000, the very region where we mapped all of physical memory. Thus, to turn a virtual address in this region into a physical address, the kernel can simply subtract 0xf0000000. You should use `PADDR(va)` to do that subtraction. + +##### Reference counting + +In future labs you will often have the same physical page mapped at multiple virtual addresses simultaneously (or in the address spaces of multiple environments). You will keep a count of the number of references to each physical page in the `pp_ref` field of the `struct PageInfo` corresponding to the physical page. When this count goes to zero for a physical page, that page can be freed because it is no longer used. In general, this count should be equal to the number of times the physical page appears below `UTOP` in all page tables (the mappings above `UTOP` are mostly set up at boot time by the kernel and should never be freed, so there's no need to reference count them). 
We'll also use it to keep track of the number of pointers we keep to the page directory pages and, in turn, of the number of references the page directories have to page table pages. + +Be careful when using `page_alloc`. The page it returns will always have a reference count of 0, so `pp_ref` should be incremented as soon as you've done something with the returned page (like inserting it into a page table). Sometimes this is handled by other functions (for example, `page_insert`) and sometimes the function calling `page_alloc` must do it directly. + +##### Page Table Management + +Now you'll write a set of routines to manage page tables: to insert and remove linear-to-physical mappings, and to create page table pages when needed. + +Exercise 4. In the file `kern/pmap.c`, you must implement code for the following functions. + +``` + + pgdir_walk() + boot_map_region() + page_lookup() + page_remove() + page_insert() + + +``` + +`check_page()`, called from `mem_init()`, tests your page table management routines. You should make sure it reports success before proceeding. + +#### Part 3: Kernel Address Space + +JOS divides the processor's 32-bit linear address space into two parts. User environments (processes), which we will begin loading and running in lab 3, will have control over the layout and contents of the lower part, while the kernel always maintains complete control over the upper part. The dividing line is defined somewhat arbitrarily by the symbol `ULIM` in `inc/memlayout.h`, reserving approximately 256MB of virtual address space for the kernel. This explains why we needed to give the kernel such a high link address in lab 1: otherwise there would not be enough room in the kernel's virtual address space to map in a user environment below it at the same time. + +You'll find it helpful to refer to the JOS memory layout diagram in `inc/memlayout.h` both for this part and for later labs. 
+ +##### Permissions and Fault Isolation + +Since kernel and user memory are both present in each environment's address space, we will have to use permission bits in our x86 page tables to allow user code access only to the user part of the address space. Otherwise bugs in user code might overwrite kernel data, causing a crash or more subtle malfunction; user code might also be able to steal other environments' private data. Note that the writable permission bit (`PTE_W`) affects both user and kernel code! + +The user environment will have no permission to any of the memory above `ULIM`, while the kernel will be able to read and write this memory. For the address range `[UTOP,ULIM)`, both the kernel and the user environment have the same permission: they can read but not write this address range. This range of address is used to expose certain kernel data structures read-only to the user environment. Lastly, the address space below `UTOP` is for the user environment to use; the user environment will set permissions for accessing this memory. + +##### Initializing the Kernel Address Space + +Now you'll set up the address space above `UTOP`: the kernel part of the address space. `inc/memlayout.h` shows the layout you should use. You'll use the functions you just wrote to set up the appropriate linear to physical mappings. + +Exercise 5. Fill in the missing code in `mem_init()` after the call to `check_page()`. + +Your code should now pass the `check_kern_pgdir()` and `check_page_installed_pgdir()` checks. + +Question + + 2. What entries (rows) in the page directory have been filled in at this point? What addresses do they map and where do they point? In other words, fill out this table as much as possible: + | Entry | Base Virtual Address | Points to (logically): | + |-------|----------------------|---------------------------------------| + | 1023 | ? | Page table for top 4MB of phys memory | + | 1022 | ? | ? | + | . | ? | ? | + | . | ? | ? | + | . | ? | ? 
| + | 2 | 0x00800000 | ? | + | 1 | 0x00400000 | ? | + | 0 | 0x00000000 | [see next question] | + 3. We have placed the kernel and user environment in the same address space. Why will user programs not be able to read or write the kernel's memory? What specific mechanisms protect the kernel memory? + 4. What is the maximum amount of physical memory that this operating system can support? Why? + 5. How much space overhead is there for managing memory, if we actually had the maximum amount of physical memory? How is this overhead broken down? + 6. Revisit the page table setup in `kern/entry.S` and `kern/entrypgdir.c`. Immediately after we turn on paging, EIP is still a low number (a little over 1MB). At what point do we transition to running at an EIP above KERNBASE? What makes it possible for us to continue executing at a low EIP between when we enable paging and when we begin running at an EIP above KERNBASE? Why is this transition necessary? + + +``` +Challenge! We consumed many physical pages to hold the page tables for the KERNBASE mapping. Do a more space-efficient job using the PTE_PS ("Page Size") bit in the page directory entries. This bit was _not_ supported in the original 80386, but is supported on more recent x86 processors. You will therefore have to refer to [Volume 3 of the current Intel manuals][3]. Make sure you design the kernel to use this optimization only on processors that support it! +``` + +``` +Challenge! Extend the JOS kernel monitor with commands to: + + * Display in a useful and easy-to-read format all of the physical page mappings (or lack thereof) that apply to a particular range of virtual/linear addresses in the currently active address space. For example, you might enter `'showmappings 0x3000 0x5000'` to display the physical page mappings and corresponding permission bits that apply to the pages at virtual addresses 0x3000, 0x4000, and 0x5000. 
+
 * Explicitly set, clear, or change the permissions of any mapping in the current address space.
 * Dump the contents of a range of memory given either a virtual or physical address range. Be sure the dump code behaves correctly when the range extends across page boundaries!
 * Do anything else that you think might be useful later for debugging the kernel. (There's a good chance it will be!)
```
+
##### Address Space Layout Alternatives
+
The address space layout we use in JOS is not the only one possible. An operating system might map the kernel at low linear addresses while leaving the _upper_ part of the linear address space for user processes. x86 kernels generally do not take this approach, however, because one of the x86's backward-compatibility modes, known as _virtual 8086 mode_ , is "hard-wired" in the processor to use the bottom part of the linear address space, and thus cannot be used at all if the kernel is mapped there.
+
It is even possible, though much more difficult, to design the kernel so as not to have to reserve _any_ fixed portion of the processor's linear or virtual address space for itself, but instead effectively to allow user-level processes unrestricted use of the _entire_ 4GB of virtual address space - while still fully protecting the kernel from these processes and protecting different processes from each other!
+
```
Challenge! Each user-level environment maps the kernel. Change JOS so that the kernel has its own page table and so that a user-level environment runs with a minimal number of kernel pages mapped. That is, each user-level environment maps just enough pages so that it can enter and leave the kernel correctly. You also have to come up with a plan for the kernel to read/write arguments to system calls.
```
+
```
Challenge! Write up an outline of how a kernel could be designed to allow user environments unrestricted use of the full 4GB virtual and linear address space.
Hint: do the previous challenge exercise first, which reduces the kernel to a few mappings in a user environment. Hint: the technique is sometimes known as " _follow the bouncing kernel_. " In your design, be sure to address exactly what has to happen when the processor transitions between kernel and user modes, and how the kernel would accomplish such transitions. Also describe how the kernel would access physical memory and I/O devices in this scheme, and how the kernel would access a user environment's virtual address space during system calls and the like. Finally, think about and describe the advantages and disadvantages of such a scheme in terms of flexibility, performance, kernel complexity, and other factors you can think of. +``` + +``` +Challenge! Since our JOS kernel's memory management system only allocates and frees memory on page granularity, we do not have anything comparable to a general-purpose `malloc`/`free` facility that we can use within the kernel. This could be a problem if we want to support certain types of I/O devices that require _physically contiguous_ buffers larger than 4KB in size, or if we want user-level environments, and not just the kernel, to be able to allocate and map 4MB _superpages_ for maximum processor efficiency. (See the earlier challenge problem about PTE_PS.) + +Generalize the kernel's memory allocation system to support pages of a variety of power-of-two allocation unit sizes from 4KB up to some reasonable maximum of your choice. Be sure you have some way to divide larger allocation units into smaller ones on demand, and to coalesce multiple small allocation units back into larger units when possible. Think about the issues that might arise in such a system. +``` + +**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab2.txt`. 
Commit your changes (including adding `answers-lab2.txt`) and type make handin in the `lab` directory to hand in your lab. + +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab2/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm +[2]: https://pdos.csail.mit.edu/6.828/2018/labguide.html#qemu +[3]: https://pdos.csail.mit.edu/6.828/2018/readings/ia32/IA32-3A.pdf From 2140160b13342ff438a0ca929ec2fb13453affb8 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 12 Oct 2018 11:41:41 +0800 Subject: [PATCH 379/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Lab=203:=20User?= =?UTF-8?q?=20Environments?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20181004 Lab 3- User Environments.md | 524 ++++++++++++++++++ 1 file changed, 524 insertions(+) create mode 100644 sources/tech/20181004 Lab 3- User Environments.md diff --git a/sources/tech/20181004 Lab 3- User Environments.md b/sources/tech/20181004 Lab 3- User Environments.md new file mode 100644 index 0000000000..2dc1522b69 --- /dev/null +++ b/sources/tech/20181004 Lab 3- User Environments.md @@ -0,0 +1,524 @@ +Lab 3: User Environments +====== +### Lab 3: User Environments + +#### Introduction + +In this lab you will implement the basic kernel facilities required to get a protected user-mode environment (i.e., "process") running. You will enhance the JOS kernel to set up the data structures to keep track of user environments, create a single user environment, load a program image into it, and start it running. 
You will also make the JOS kernel capable of handling any system calls the user environment makes and handling any other exceptions it causes. + +**Note:** In this lab, the terms _environment_ and _process_ are interchangeable - both refer to an abstraction that allows you to run a program. We introduce the term "environment" instead of the traditional term "process" in order to stress the point that JOS environments and UNIX processes provide different interfaces, and do not provide the same semantics. + +##### Getting Started + +Use Git to commit your changes after your Lab 2 submission (if any), fetch the latest version of the course repository, and then create a local branch called `lab3` based on our lab3 branch, `origin/lab3`: + +``` + athena% cd ~/6.828/lab + athena% add git + athena% git commit -am 'changes to lab2 after handin' + Created commit 734fab7: changes to lab2 after handin + 4 files changed, 42 insertions(+), 9 deletions(-) + athena% git pull + Already up-to-date. + athena% git checkout -b lab3 origin/lab3 + Branch lab3 set up to track remote branch refs/remotes/origin/lab3. + Switched to a new branch "lab3" + athena% git merge lab2 + Merge made by recursive. 
+ kern/pmap.c | 42 +++++++++++++++++++ + 1 files changed, 42 insertions(+), 0 deletions(-) + athena% +``` + +Lab 3 contains a number of new source files, which you should browse: + +``` +inc/ env.h Public definitions for user-mode environments + trap.h Public definitions for trap handling + syscall.h Public definitions for system calls from user environments to the kernel + lib.h Public definitions for the user-mode support library +kern/ env.h Kernel-private definitions for user-mode environments + env.c Kernel code implementing user-mode environments + trap.h Kernel-private trap handling definitions + trap.c Trap handling code + trapentry.S Assembly-language trap handler entry-points + syscall.h Kernel-private definitions for system call handling + syscall.c System call implementation code +lib/ Makefrag Makefile fragment to build user-mode library, obj/lib/libjos.a + entry.S Assembly-language entry-point for user environments + libmain.c User-mode library setup code called from entry.S + syscall.c User-mode system call stub functions + console.c User-mode implementations of putchar and getchar, providing console I/O + exit.c User-mode implementation of exit + panic.c User-mode implementation of panic +user/ * Various test programs to check kernel lab 3 code +``` + +In addition, a number of the source files we handed out for lab2 are modified in lab3. To see the differences, you can type: + +``` + $ git diff lab2 + +``` + +You may also want to take another look at the [lab tools guide][1], as it includes information on debugging user code that becomes relevant in this lab. + +##### Lab Requirements + +This lab is divided into two parts, A and B. Part A is due a week after this lab was assigned; you should commit your changes and make handin your lab before the Part A deadline, making sure your code passes all of the Part A tests (it is okay if your code does not pass the Part B tests yet). 
You only need to have the Part B tests passing by the Part B deadline at the end of the second week. + +As in lab 2, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem (for the entire lab, not for each part). Write up brief answers to the questions posed in the lab and a one or two paragraph description of what you did to solve your chosen challenge problem in a file called `answers-lab3.txt` in the top level of your `lab` directory. (If you implement more than one challenge problem, you only need to describe one of them in the write-up.) Do not forget to include the answer file in your submission with git add answers-lab3.txt. + +##### Inline Assembly + +In this lab you may find GCC's inline assembly language feature useful, although it is also possible to complete the lab without using it. At the very least, you will need to be able to understand the fragments of inline assembly language ("`asm`" statements) that already exist in the source code we gave you. You can find several sources of information on GCC inline assembly language on the class [reference materials][2] page. + +#### Part A: User Environments and Exception Handling + +The new include file `inc/env.h` contains basic definitions for user environments in JOS. Read it now. The kernel uses the `Env` data structure to keep track of each user environment. In this lab you will initially create just one environment, but you will need to design the JOS kernel to support multiple environments; lab 4 will take advantage of this feature by allowing a user environment to `fork` other environments. 
+ +As you can see in `kern/env.c`, the kernel maintains three main global variables pertaining to environments: + +``` + struct Env *envs = NULL; // All environments + struct Env *curenv = NULL; // The current env + static struct Env *env_free_list; // Free environment list + +``` + +Once JOS gets up and running, the `envs` pointer points to an array of `Env` structures representing all the environments in the system. In our design, the JOS kernel will support a maximum of `NENV` simultaneously active environments, although there will typically be far fewer running environments at any given time. (`NENV` is a constant `#define`'d in `inc/env.h`.) Once it is allocated, the `envs` array will contain a single instance of the `Env` data structure for each of the `NENV` possible environments. + +The JOS kernel keeps all of the inactive `Env` structures on the `env_free_list`. This design allows easy allocation and deallocation of environments, as they merely have to be added to or removed from the free list. + +The kernel uses the `curenv` symbol to keep track of the _currently executing_ environment at any given time. During boot up, before the first environment is run, `curenv` is initially set to `NULL`. 
+
##### Environment State
+
The `Env` structure is defined in `inc/env.h` as follows (although more fields will be added in future labs):
+
```
 struct Env {
 struct Trapframe env_tf; // Saved registers
 struct Env *env_link; // Next free Env
 envid_t env_id; // Unique environment identifier
 envid_t env_parent_id; // env_id of this env's parent
 enum EnvType env_type; // Indicates special system environments
 unsigned env_status; // Status of the environment
 uint32_t env_runs; // Number of times environment has run

 // Address space
 pde_t *env_pgdir; // Kernel virtual address of page dir
 };
```
+
Here's what the `Env` fields are for:
+
 * **env_tf** :
This structure, defined in `inc/trap.h`, holds the saved register values for the environment while that environment is _not_ running: i.e., when the kernel or a different environment is running. The kernel saves these when switching from user to kernel mode, so that the environment can later be resumed where it left off.
 * **env_link** :
This is a link to the next `Env` on the `env_free_list`. `env_free_list` points to the first free environment on the list.
 * **env_id** :
The kernel stores here a value that uniquely identifies the environment currently using this `Env` structure (i.e., using this particular slot in the `envs` array). After a user environment terminates, the kernel may re-allocate the same `Env` structure to a different environment - but the new environment will have a different `env_id` from the old one even though the new environment is re-using the same slot in the `envs` array.
 * **env_parent_id** :
The kernel stores here the `env_id` of the environment that created this environment. In this way the environments can form a “family tree,” which will be useful for making security decisions about which environments are allowed to do what to whom.
 * **env_type** :
This is used to distinguish special environments. For most environments, it will be `ENV_TYPE_USER`.
We'll introduce a few more types for special system service environments in later labs. + * **env_status** : +This variable holds one of the following values: + * `ENV_FREE`: +Indicates that the `Env` structure is inactive, and therefore on the `env_free_list`. + * `ENV_RUNNABLE`: +Indicates that the `Env` structure represents an environment that is waiting to run on the processor. + * `ENV_RUNNING`: +Indicates that the `Env` structure represents the currently running environment. + * `ENV_NOT_RUNNABLE`: +Indicates that the `Env` structure represents a currently active environment, but it is not currently ready to run: for example, because it is waiting for an interprocess communication (IPC) from another environment. + * `ENV_DYING`: +Indicates that the `Env` structure represents a zombie environment. A zombie environment will be freed the next time it traps to the kernel. We will not use this flag until Lab 4. + * **env_pgdir** : +This variable holds the kernel _virtual address_ of this environment's page directory. + + + +Like a Unix process, a JOS environment couples the concepts of "thread" and "address space". The thread is defined primarily by the saved registers (the `env_tf` field), and the address space is defined by the page directory and page tables pointed to by `env_pgdir`. To run an environment, the kernel must set up the CPU with _both_ the saved registers and the appropriate address space. + +Our `struct Env` is analogous to `struct proc` in xv6. Both structures hold the environment's (i.e., process's) user-mode register state in a `Trapframe` structure. In JOS, individual environments do not have their own kernel stacks as processes do in xv6. There can be only one JOS environment active in the kernel at a time, so JOS needs only a _single_ kernel stack. 
+
##### Allocating the Environments Array
+
In lab 2, you allocated memory in `mem_init()` for the `pages[]` array, which is a table the kernel uses to keep track of which pages are free and which are not. You will now need to modify `mem_init()` further to allocate a similar array of `Env` structures, called `envs`.
+
```
Exercise 1. Modify `mem_init()` in `kern/pmap.c` to allocate and map the `envs` array. This array consists of exactly `NENV` instances of the `Env` structure allocated much like how you allocated the `pages` array. Also like the `pages` array, the memory backing `envs` should also be mapped user read-only at `UENVS` (defined in `inc/memlayout.h`) so user processes can read from this array.
```
+
You should run your code and make sure `check_kern_pgdir()` succeeds.
+
##### Creating and Running Environments
+
You will now write the code in `kern/env.c` necessary to run a user environment. Because we do not yet have a filesystem, we will set up the kernel to load a static binary image that is _embedded within the kernel itself_. JOS embeds this binary in the kernel as an ELF executable image.
+
The Lab 3 `GNUmakefile` generates a number of binary images in the `obj/user/` directory. If you look at `kern/Makefrag`, you will notice some magic that "links" these binaries directly into the kernel executable as if they were `.o` files. The `-b binary` option on the linker command line causes these files to be linked in as "raw" uninterpreted binary files rather than as regular `.o` files produced by the compiler. (As far as the linker is concerned, these files do not have to be ELF images at all - they could be anything, such as text files or pictures!) If you look at `obj/kern/kernel.sym` after building the kernel, you will notice that the linker has "magically" produced a number of funny symbols with obscure names like `_binary_obj_user_hello_start`, `_binary_obj_user_hello_end`, and `_binary_obj_user_hello_size`.
The linker generates these symbol names by mangling the file names of the binary files; the symbols provide the regular kernel code with a way to reference the embedded binary files.
+
+In `i386_init()` in `kern/init.c` you'll see code to run one of these binary images in an environment. However, the critical functions to set up user environments are not complete; you will need to fill them in.
+
+```
+Exercise 2. In the file `env.c`, finish coding the following functions:
+
+ * `env_init()`
+Initialize all of the `Env` structures in the `envs` array and add them to the `env_free_list`. Also calls `env_init_percpu`, which configures the segmentation hardware with separate segments for privilege level 0 (kernel) and privilege level 3 (user).
+ * `env_setup_vm()`
+Allocate a page directory for a new environment and initialize the kernel portion of the new environment's address space.
+ * `region_alloc()`
+Allocate and map physical memory for an environment.
+ * `load_icode()`
+You will need to parse an ELF binary image, much like the boot loader already does, and load its contents into the user address space of a new environment.
+ * `env_create()`
+Allocate an environment with `env_alloc` and call `load_icode` to load an ELF binary into it.
+ * `env_run()`
+Start a given environment running in user mode.
+
+
+
+As you write these functions, you might find the new cprintf verb `%e` useful -- it prints a description corresponding to an error code. For example,
+
+ r = -E_NO_MEM;
+ panic("env_alloc: %e", r);
+
+will panic with the message "env_alloc: out of memory".
+```
+
+Below is a call graph of the code up to the point where the user code is invoked. Make sure you understand the purpose of each step.
+ + * `start` (`kern/entry.S`) + * `i386_init` (`kern/init.c`) + * `cons_init` + * `mem_init` + * `env_init` + * `trap_init` (still incomplete at this point) + * `env_create` + * `env_run` + * `env_pop_tf` + + + +Once you are done you should compile your kernel and run it under QEMU. If all goes well, your system should enter user space and execute the `hello` binary until it makes a system call with the `int` instruction. At that point there will be trouble, since JOS has not set up the hardware to allow any kind of transition from user space into the kernel. When the CPU discovers that it is not set up to handle this system call interrupt, it will generate a general protection exception, find that it can't handle that, generate a double fault exception, find that it can't handle that either, and finally give up with what's known as a "triple fault". Usually, you would then see the CPU reset and the system reboot. While this is important for legacy applications (see [this blog post][3] for an explanation of why), it's a pain for kernel development, so with the 6.828 patched QEMU you'll instead see a register dump and a "Triple fault." message. + +We'll address this problem shortly, but for now we can use the debugger to check that we're entering user mode. Use make qemu-gdb and set a GDB breakpoint at `env_pop_tf`, which should be the last function you hit before actually entering user mode. Single step through this function using si; the processor should enter user mode after the `iret` instruction. You should then see the first instruction in the user environment's executable, which is the `cmpl` instruction at the label `start` in `lib/entry.S`. Now use b *0x... to set a breakpoint at the `int $0x30` in `sys_cputs()` in `hello` (see `obj/user/hello.asm` for the user-space address). This `int` is the system call to display a character to the console. 
If you cannot execute as far as the `int`, then something is wrong with your address space setup or program loading code; go back and fix it before continuing. + +##### Handling Interrupts and Exceptions + +At this point, the first `int $0x30` system call instruction in user space is a dead end: once the processor gets into user mode, there is no way to get back out. You will now need to implement basic exception and system call handling, so that it is possible for the kernel to recover control of the processor from user-mode code. The first thing you should do is thoroughly familiarize yourself with the x86 interrupt and exception mechanism. + +``` +Exercise 3. Read Chapter 9, Exceptions and Interrupts in the 80386 Programmer's Manual (or Chapter 5 of the IA-32 Developer's Manual), if you haven't already. +``` + +In this lab we generally follow Intel's terminology for interrupts, exceptions, and the like. However, terms such as exception, trap, interrupt, fault and abort have no standard meaning across architectures or operating systems, and are often used without regard to the subtle distinctions between them on a particular architecture such as the x86. When you see these terms outside of this lab, the meanings might be slightly different. + +##### Basics of Protected Control Transfer + +Exceptions and interrupts are both "protected control transfers," which cause the processor to switch from user to kernel mode (CPL=0) without giving the user-mode code any opportunity to interfere with the functioning of the kernel or other environments. In Intel's terminology, an _interrupt_ is a protected control transfer that is caused by an asynchronous event usually external to the processor, such as notification of external device I/O activity. An _exception_ , in contrast, is a protected control transfer caused synchronously by the currently running code, for example due to a divide by zero or an invalid memory access. 
+ +In order to ensure that these protected control transfers are actually _protected_ , the processor's interrupt/exception mechanism is designed so that the code currently running when the interrupt or exception occurs _does not get to choose arbitrarily where the kernel is entered or how_. Instead, the processor ensures that the kernel can be entered only under carefully controlled conditions. On the x86, two mechanisms work together to provide this protection: + + 1. **The Interrupt Descriptor Table.** The processor ensures that interrupts and exceptions can only cause the kernel to be entered at a few specific, well-defined entry-points _determined by the kernel itself_ , and not by the code running when the interrupt or exception is taken. + +The x86 allows up to 256 different interrupt or exception entry points into the kernel, each with a different _interrupt vector_. A vector is a number between 0 and 255. An interrupt's vector is determined by the source of the interrupt: different devices, error conditions, and application requests to the kernel generate interrupts with different vectors. The CPU uses the vector as an index into the processor's _interrupt descriptor table_ (IDT), which the kernel sets up in kernel-private memory, much like the GDT. From the appropriate entry in this table the processor loads: + + * the value to load into the instruction pointer (`EIP`) register, pointing to the kernel code designated to handle that type of exception. + * the value to load into the code segment (`CS`) register, which includes in bits 0-1 the privilege level at which the exception handler is to run. (In JOS, all exceptions are handled in kernel mode, privilege level 0.) + 2. 
**The Task State Segment.** The processor needs a place to save the _old_ processor state before the interrupt or exception occurred, such as the original values of `EIP` and `CS` before the processor invoked the exception handler, so that the exception handler can later restore that old state and resume the interrupted code from where it left off. But this save area for the old processor state must in turn be protected from unprivileged user-mode code; otherwise buggy or malicious user code could compromise the kernel. + +For this reason, when an x86 processor takes an interrupt or trap that causes a privilege level change from user to kernel mode, it also switches to a stack in the kernel's memory. A structure called the _task state segment_ (TSS) specifies the segment selector and address where this stack lives. The processor pushes (on this new stack) `SS`, `ESP`, `EFLAGS`, `CS`, `EIP`, and an optional error code. Then it loads the `CS` and `EIP` from the interrupt descriptor, and sets the `ESP` and `SS` to refer to the new stack. + +Although the TSS is large and can potentially serve a variety of purposes, JOS only uses it to define the kernel stack that the processor should switch to when it transfers from user to kernel mode. Since "kernel mode" in JOS is privilege level 0 on the x86, the processor uses the `ESP0` and `SS0` fields of the TSS to define the kernel stack when entering kernel mode. JOS doesn't use any other TSS fields. + + + + +##### Types of Exceptions and Interrupts + +All of the synchronous exceptions that the x86 processor can generate internally use interrupt vectors between 0 and 31, and therefore map to IDT entries 0-31. For example, a page fault always causes an exception through vector 14. Interrupt vectors greater than 31 are only used by _software interrupts_ , which can be generated by the `int` instruction, or asynchronous _hardware interrupts_ , caused by external devices when they need attention. 
+ +In this section we will extend JOS to handle the internally generated x86 exceptions in vectors 0-31. In the next section we will make JOS handle software interrupt vector 48 (0x30), which JOS (fairly arbitrarily) uses as its system call interrupt vector. In Lab 4 we will extend JOS to handle externally generated hardware interrupts such as the clock interrupt. + +##### An Example + +Let's put these pieces together and trace through an example. Let's say the processor is executing code in a user environment and encounters a divide instruction that attempts to divide by zero. + + 1. The processor switches to the stack defined by the `SS0` and `ESP0` fields of the TSS, which in JOS will hold the values `GD_KD` and `KSTACKTOP`, respectively. + + 2. The processor pushes the exception parameters on the kernel stack, starting at address `KSTACKTOP`: + +``` + +--------------------+ KSTACKTOP + | 0x00000 | old SS | " - 4 + | old ESP | " - 8 + | old EFLAGS | " - 12 + | 0x00000 | old CS | " - 16 + | old EIP | " - 20 <---- ESP + +--------------------+ + +``` + + 3. Because we're handling a divide error, which is interrupt vector 0 on the x86, the processor reads IDT entry 0 and sets `CS:EIP` to point to the handler function described by the entry. + + 4. The handler function takes control and handles the exception, for example by terminating the user environment. + + + + +For certain types of x86 exceptions, in addition to the "standard" five words above, the processor pushes onto the stack another word containing an _error code_. The page fault exception, number 14, is an important example. See the 80386 manual to determine for which exception numbers the processor pushes an error code, and what the error code means in that case. 
When the processor pushes an error code, the stack would look as follows at the beginning of the exception handler when coming in from user mode: + +``` + +--------------------+ KSTACKTOP + | 0x00000 | old SS | " - 4 + | old ESP | " - 8 + | old EFLAGS | " - 12 + | 0x00000 | old CS | " - 16 + | old EIP | " - 20 + | error code | " - 24 <---- ESP + +--------------------+ +``` + +##### Nested Exceptions and Interrupts + +The processor can take exceptions and interrupts both from kernel and user mode. It is only when entering the kernel from user mode, however, that the x86 processor automatically switches stacks before pushing its old register state onto the stack and invoking the appropriate exception handler through the IDT. If the processor is _already_ in kernel mode when the interrupt or exception occurs (the low 2 bits of the `CS` register are already zero), then the CPU just pushes more values on the same kernel stack. In this way, the kernel can gracefully handle _nested exceptions_ caused by code within the kernel itself. This capability is an important tool in implementing protection, as we will see later in the section on system calls. + +If the processor is already in kernel mode and takes a nested exception, since it does not need to switch stacks, it does not save the old `SS` or `ESP` registers. For exception types that do not push an error code, the kernel stack therefore looks like the following on entry to the exception handler: + +``` + +--------------------+ <---- old ESP + | old EFLAGS | " - 4 + | 0x00000 | old CS | " - 8 + | old EIP | " - 12 + +--------------------+ +``` + +For exception types that push an error code, the processor pushes the error code immediately after the old `EIP`, as before. + +There is one important caveat to the processor's nested exception capability. 
If the processor takes an exception while already in kernel mode, and _cannot push its old state onto the kernel stack_ for any reason such as lack of stack space, then there is nothing the processor can do to recover, so it simply resets itself. Needless to say, the kernel should be designed so that this can't happen. + +##### Setting Up the IDT + +You should now have the basic information you need in order to set up the IDT and handle exceptions in JOS. For now, you will set up the IDT to handle interrupt vectors 0-31 (the processor exceptions). We'll handle system call interrupts later in this lab and add interrupts 32-47 (the device IRQs) in a later lab. + +The header files `inc/trap.h` and `kern/trap.h` contain important definitions related to interrupts and exceptions that you will need to become familiar with. The file `kern/trap.h` contains definitions that are strictly private to the kernel, while `inc/trap.h` contains definitions that may also be useful to user-level programs and libraries. + +Note: Some of the exceptions in the range 0-31 are defined by Intel to be reserved. Since they will never be generated by the processor, it doesn't really matter how you handle them. Do whatever you think is cleanest. + +The overall flow of control that you should achieve is depicted below: + +``` + IDT trapentry.S trap.c + ++----------------+ +| &handler1 |---------> handler1: trap (struct Trapframe *tf) +| | // do stuff { +| | call trap // handle the exception/interrupt +| | // ... } ++----------------+ +| &handler2 |--------> handler2: +| | // do stuff +| | call trap +| | // ... ++----------------+ + . + . + . ++----------------+ +| &handlerX |--------> handlerX: +| | // do stuff +| | call trap +| | // ... ++----------------+ +``` + +Each exception or interrupt should have its own handler in `trapentry.S` and `trap_init()` should initialize the IDT with the addresses of these handlers. 
Each of the handlers should build a `struct Trapframe` (see `inc/trap.h`) on the stack and call `trap()` (in `trap.c`) with a pointer to the Trapframe. `trap()` then handles the exception/interrupt or dispatches to a specific handler function. + +``` +Exercise 4. Edit `trapentry.S` and `trap.c` and implement the features described above. The macros `TRAPHANDLER` and `TRAPHANDLER_NOEC` in `trapentry.S` should help you, as well as the T_* defines in `inc/trap.h`. You will need to add an entry point in `trapentry.S` (using those macros) for each trap defined in `inc/trap.h`, and you'll have to provide `_alltraps` which the `TRAPHANDLER` macros refer to. You will also need to modify `trap_init()` to initialize the `idt` to point to each of these entry points defined in `trapentry.S`; the `SETGATE` macro will be helpful here. + +Your `_alltraps` should: + + 1. push values to make the stack look like a struct Trapframe + 2. load `GD_KD` into `%ds` and `%es` + 3. `pushl %esp` to pass a pointer to the Trapframe as an argument to trap() + 4. `call trap` (can `trap` ever return?) + + + +Consider using the `pushal` instruction; it fits nicely with the layout of the `struct Trapframe`. + +Test your trap handling code using some of the test programs in the `user` directory that cause exceptions before making any system calls, such as `user/divzero`. You should be able to get make grade to succeed on the `divzero`, `softint`, and `badsegment` tests at this point. +``` + +``` +Challenge! You probably have a lot of very similar code right now, between the lists of `TRAPHANDLER` in `trapentry.S` and their installations in `trap.c`. Clean this up. Change the macros in `trapentry.S` to automatically generate a table for `trap.c` to use. Note that you can switch between laying down code and data in the assembler by using the directives `.text` and `.data`. +``` + +``` +Questions + +Answer the following questions in your `answers-lab3.txt`: + + 1. 
What is the purpose of having an individual handler function for each exception/interrupt? (i.e., if all exceptions/interrupts were delivered to the same handler, what feature that exists in the current implementation could not be provided?)
+ 2. Did you have to do anything to make the `user/softint` program behave correctly? The grade script expects it to produce a general protection fault (trap 13), but `softint`'s code says `int $14`. _Why_ should this produce interrupt vector 13? What happens if the kernel actually allows `softint`'s `int $14` instruction to invoke the kernel's page fault handler (which is interrupt vector 14)?
+```
+
+
+This concludes part A of the lab. Don't forget to add `answers-lab3.txt`, commit your changes, and run make handin before the part A deadline.
+
+#### Part B: Page Faults, Breakpoint Exceptions, and System Calls
+
+Now that your kernel has basic exception handling capabilities, you will refine it to provide important operating system primitives that depend on exception handling.
+
+##### Handling Page Faults
+
+The page fault exception, interrupt vector 14 (`T_PGFLT`), is a particularly important one that we will exercise heavily throughout this lab and the next. When the processor takes a page fault, it stores the linear (i.e., virtual) address that caused the fault in a special processor control register, `CR2`. In `trap.c` we have provided the beginnings of a special function, `page_fault_handler()`, to handle page fault exceptions.
+
+```
+Exercise 5. Modify `trap_dispatch()` to dispatch page fault exceptions to `page_fault_handler()`. You should now be able to get make grade to succeed on the `faultread`, `faultreadkernel`, `faultwrite`, and `faultwritekernel` tests. If any of them don't work, figure out why and fix them. Remember that you can boot JOS into a particular user program using make run- _x_ or make run- _x_ -nox. For instance, make run-hello-nox runs the _hello_ user program.
+
+```
+
+You will further refine the kernel's page fault handling below, as you implement system calls.
+
+##### The Breakpoint Exception
+
+The breakpoint exception, interrupt vector 3 (`T_BRKPT`), is normally used to allow debuggers to insert breakpoints in a program's code by temporarily replacing the relevant program instruction with the special 1-byte `int3` software interrupt instruction. In JOS we will abuse this exception slightly by turning it into a primitive pseudo-system call that any user environment can use to invoke the JOS kernel monitor. This usage is actually somewhat appropriate if we think of the JOS kernel monitor as a primitive debugger. The user-mode implementation of `panic()` in `lib/panic.c`, for example, performs an `int3` after displaying its panic message.
+
+```
+Exercise 6. Modify `trap_dispatch()` to make breakpoint exceptions invoke the kernel monitor. You should now be able to get make grade to succeed on the `breakpoint` test.
+```
+
+```
+Challenge! Modify the JOS kernel monitor so that you can 'continue' execution from the current location (e.g., after the `int3`, if the kernel monitor was invoked via the breakpoint exception), and so that you can single-step one instruction at a time. You will need to understand certain bits of the `EFLAGS` register in order to implement single-stepping.
+
+Optional: If you're feeling really adventurous, find some x86 disassembler source code - e.g., by ripping it out of QEMU, or out of GNU binutils, or just write it yourself - and extend the JOS kernel monitor to be able to disassemble and display instructions as you are stepping through them. Combined with the symbol table loading from lab 1, this is the stuff of which real kernel debuggers are made.
+```
+
+```
+Questions
+
+ 3. The breakpoint test case will either generate a breakpoint exception or a general protection fault depending on how you initialized the breakpoint entry in the IDT (i.e., your call to `SETGATE` from `trap_init`).
Why? How do you need to set it up in order to get the breakpoint exception to work as specified above and what incorrect setup would cause it to trigger a general protection fault? + 4. What do you think is the point of these mechanisms, particularly in light of what the `user/softint` test program does? +``` + + +##### System calls + +User processes ask the kernel to do things for them by invoking system calls. When the user process invokes a system call, the processor enters kernel mode, the processor and the kernel cooperate to save the user process's state, the kernel executes appropriate code in order to carry out the system call, and then resumes the user process. The exact details of how the user process gets the kernel's attention and how it specifies which call it wants to execute vary from system to system. + +In the JOS kernel, we will use the `int` instruction, which causes a processor interrupt. In particular, we will use `int $0x30` as the system call interrupt. We have defined the constant `T_SYSCALL` to 48 (0x30) for you. You will have to set up the interrupt descriptor to allow user processes to cause that interrupt. Note that interrupt 0x30 cannot be generated by hardware, so there is no ambiguity caused by allowing user code to generate it. + +The application will pass the system call number and the system call arguments in registers. This way, the kernel won't need to grub around in the user environment's stack or instruction stream. The system call number will go in `%eax`, and the arguments (up to five of them) will go in `%edx`, `%ecx`, `%ebx`, `%edi`, and `%esi`, respectively. The kernel passes the return value back in `%eax`. The assembly code to invoke a system call has been written for you, in `syscall()` in `lib/syscall.c`. You should read through it and make sure you understand what is going on. + +``` +Exercise 7. Add a handler in the kernel for interrupt vector `T_SYSCALL`. 
You will have to edit `kern/trapentry.S` and `kern/trap.c`'s `trap_init()`. You also need to change `trap_dispatch()` to handle the system call interrupt by calling `syscall()` (defined in `kern/syscall.c`) with the appropriate arguments, and then arranging for the return value to be passed back to the user process in `%eax`. Finally, you need to implement `syscall()` in `kern/syscall.c`. Make sure `syscall()` returns `-E_INVAL` if the system call number is invalid. You should read and understand `lib/syscall.c` (especially the inline assembly routine) in order to confirm your understanding of the system call interface. Handle all the system calls listed in `inc/syscall.h` by invoking the corresponding kernel function for each call. + +Run the `user/hello` program under your kernel (make run-hello). It should print "`hello, world`" on the console and then cause a page fault in user mode. If this does not happen, it probably means your system call handler isn't quite right. You should also now be able to get make grade to succeed on the `testbss` test. +``` + +``` +Challenge! Implement system calls using the `sysenter` and `sysexit` instructions instead of using `int 0x30` and `iret`. + +The `sysenter/sysexit` instructions were designed by Intel to be faster than `int/iret`. They do this by using registers instead of the stack and by making assumptions about how the segmentation registers are used. The exact details of these instructions can be found in Volume 2B of the Intel reference manuals. + +The easiest way to add support for these instructions in JOS is to add a `sysenter_handler` in `kern/trapentry.S` that saves enough information about the user environment to return to it, sets up the kernel environment, pushes the arguments to `syscall()` and calls `syscall()` directly. Once `syscall()` returns, set everything up for and execute the `sysexit` instruction. 
You will also need to add code to `kern/init.c` to set up the necessary model specific registers (MSRs). Section 6.1.2 in Volume 2 of the AMD Architecture Programmer's Manual and the reference on SYSENTER in Volume 2B of the Intel reference manuals give good descriptions of the relevant MSRs. You can find an implementation of `wrmsr` to add to `inc/x86.h` for writing to these MSRs [here][4]. + +Finally, `lib/syscall.c` must be changed to support making a system call with `sysenter`. Here is a possible register layout for the `sysenter` instruction: + + eax - syscall number + edx, ecx, ebx, edi - arg1, arg2, arg3, arg4 + esi - return pc + ebp - return esp + esp - trashed by sysenter + +GCC's inline assembler will automatically save registers that you tell it to load values directly into. Don't forget to either save (push) and restore (pop) other registers that you clobber, or tell the inline assembler that you're clobbering them. The inline assembler doesn't support saving `%ebp`, so you will need to add code to save and restore it yourself. The return address can be put into `%esi` by using an instruction like `leal after_sysenter_label, %%esi`. + +Note that this only supports 4 arguments, so you will need to leave the old method of doing system calls around to support 5 argument system calls. Furthermore, because this fast path doesn't update the current environment's trap frame, it won't be suitable for some of the system calls we add in later labs. + +You may have to revisit your code once we enable asynchronous interrupts in the next lab. Specifically, you'll need to enable interrupts when returning to the user process, which `sysexit` doesn't do for you. +``` + +##### User-mode startup + +A user program starts running at the top of `lib/entry.S`. After some setup, this code calls `libmain()`, in `lib/libmain.c`. You should modify `libmain()` to initialize the global pointer `thisenv` to point at this environment's `struct Env` in the `envs[]` array. 
(Note that `lib/entry.S` has already defined `envs` to point at the `UENVS` mapping you set up in Part A.) Hint: look in `inc/env.h` and use `sys_getenvid`. + +`libmain()` then calls `umain`, which, in the case of the hello program, is in `user/hello.c`. Note that after printing "`hello, world`", it tries to access `thisenv->env_id`. This is why it faulted earlier. Now that you've initialized `thisenv` properly, it should not fault. If it still faults, you probably haven't mapped the `UENVS` area user-readable (back in Part A in `pmap.c`; this is the first time we've actually used the `UENVS` area). + +``` +Exercise 8. Add the required code to the user library, then boot your kernel. You should see `user/hello` print "`hello, world`" and then print "`i am environment 00001000`". `user/hello` then attempts to "exit" by calling `sys_env_destroy()` (see `lib/libmain.c` and `lib/exit.c`). Since the kernel currently only supports one user environment, it should report that it has destroyed the only environment and then drop into the kernel monitor. You should be able to get make grade to succeed on the `hello` test. +``` + +##### Page faults and memory protection + +Memory protection is a crucial feature of an operating system, ensuring that bugs in one program cannot corrupt other programs or corrupt the operating system itself. + +Operating systems usually rely on hardware support to implement memory protection. The OS keeps the hardware informed about which virtual addresses are valid and which are not. When a program tries to access an invalid address or one for which it has no permissions, the processor stops the program at the instruction causing the fault and then traps into the kernel with information about the attempted operation. If the fault is fixable, the kernel can fix it and let the program continue running. If the fault is not fixable, then the program cannot continue, since it will never get past the instruction causing the fault. 
+ +As an example of a fixable fault, consider an automatically extended stack. In many systems the kernel initially allocates a single stack page, and then if a program faults accessing pages further down the stack, the kernel will allocate those pages automatically and let the program continue. By doing this, the kernel only allocates as much stack memory as the program needs, but the program can work under the illusion that it has an arbitrarily large stack. + +System calls present an interesting problem for memory protection. Most system call interfaces let user programs pass pointers to the kernel. These pointers point at user buffers to be read or written. The kernel then dereferences these pointers while carrying out the system call. There are two problems with this: + + 1. A page fault in the kernel is potentially a lot more serious than a page fault in a user program. If the kernel page-faults while manipulating its own data structures, that's a kernel bug, and the fault handler should panic the kernel (and hence the whole system). But when the kernel is dereferencing pointers given to it by the user program, it needs a way to remember that any page faults these dereferences cause are actually on behalf of the user program. + 2. The kernel typically has more memory permissions than the user program. The user program might pass a pointer to a system call that points to memory that the kernel can read or write but that the program cannot. The kernel must be careful not to be tricked into dereferencing such a pointer, since that might reveal private information or destroy the integrity of the kernel. + + + +For both of these reasons the kernel must be extremely careful when handling pointers presented by user programs. + +You will now solve these two problems with a single mechanism that scrutinizes all pointers passed from userspace into the kernel. 
When a program passes the kernel a pointer, the kernel will check that the address is in the user part of the address space, and that the page table would allow the memory operation. + +Thus, the kernel will never suffer a page fault due to dereferencing a user-supplied pointer. If the kernel does page fault, it should panic and terminate. + +``` +Exercise 9. Change `kern/trap.c` to panic if a page fault happens in kernel mode. + +Hint: to determine whether a fault happened in user mode or in kernel mode, check the low bits of the `tf_cs`. + +Read `user_mem_assert` in `kern/pmap.c` and implement `user_mem_check` in that same file. + +Change `kern/syscall.c` to sanity check arguments to system calls. + +Boot your kernel, running `user/buggyhello`. The environment should be destroyed, and the kernel should _not_ panic. You should see: + + [00001000] user_mem_check assertion failure for va 00000001 + [00001000] free env 00001000 + Destroyed the only environment - nothing more to do! +Finally, change `debuginfo_eip` in `kern/kdebug.c` to call `user_mem_check` on `usd`, `stabs`, and `stabstr`. If you now run `user/breakpoint`, you should be able to run backtrace from the kernel monitor and see the backtrace traverse into `lib/libmain.c` before the kernel panics with a page fault. What causes this page fault? You don't need to fix it, but you should understand why it happens. +``` + +Note that the same mechanism you just implemented also works for malicious user applications (such as `user/evilhello`). + +``` +Exercise 10. Boot your kernel, running `user/evilhello`. The environment should be destroyed, and the kernel should not panic. You should see: + + [00000000] new env 00001000 + ... 
+ [00001000] user_mem_check assertion failure for va f010000c + [00001000] free env 00001000 +``` + +**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab3.txt`. Commit your changes and type make handin in the `lab` directory to submit your work. + +Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab3.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 3', then make handin and follow the directions. + +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab3/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: https://pdos.csail.mit.edu/6.828/2018/labs/labguide.html +[2]: https://pdos.csail.mit.edu/6.828/2018/labs/reference.html +[3]: http://blogs.msdn.com/larryosterman/archive/2005/02/08/369243.aspx +[4]: http://ftp.kh.edu.tw/Linux/SuSE/people/garloff/linux/k6mod.c From 39b5879505f2dcb12c1ac3c64b4a8c0d8d2ac1f0 Mon Sep 17 00:00:00 2001 From: ivo-wang Date: Fri, 12 Oct 2018 13:22:29 +0800 Subject: [PATCH 380/736] Translating By ivo --- sources/tech/20171222 10 keys to quick game development.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20171222 10 keys to quick game development.md b/sources/tech/20171222 10 keys to quick game development.md index 02f4388044..643ba077d3 100644 --- a/sources/tech/20171222 10 keys to quick game development.md +++ b/sources/tech/20171222 10 keys to quick game development.md @@ -1,3 +1,6 @@ +**translating by [ivo](https://github.com/ivo)** + + 10 keys to quick game 
development ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb) From da4c450e726959ce141ba644f214d0de7b38d846 Mon Sep 17 00:00:00 2001 From: ivo_wang Date: Fri, 12 Oct 2018 13:48:32 +0800 Subject: [PATCH 381/736] change name --- sources/tech/20171222 10 keys to quick game development.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20171222 10 keys to quick game development.md b/sources/tech/20171222 10 keys to quick game development.md index 643ba077d3..4fe6f514a5 100644 --- a/sources/tech/20171222 10 keys to quick game development.md +++ b/sources/tech/20171222 10 keys to quick game development.md @@ -1,4 +1,4 @@ -**translating by [ivo](https://github.com/ivo)** +**translating by [ivo-wang](https://github.com/ivo-wang)** 10 keys to quick game development From 71d6b10fd5cf876f167cbd8914eda010524c98b4 Mon Sep 17 00:00:00 2001 From: sd886393 Date: Fri, 12 Oct 2018 14:22:35 +0800 Subject: [PATCH 382/736] =?UTF-8?q?=E3=80=90=E5=AE=8C=E6=88=90=E7=BF=BB?= =?UTF-8?q?=E8=AF=91=E3=80=9120181010=20Cloc=20-=20Count=20The=20Lines=20O?= =?UTF-8?q?f=20Source=20Code=20In=20Many=20Programming=20Languages?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...urce Code In Many Programming Languages.md | 198 ------------------ ...urce Code In Many Programming Languages.md | 198 ++++++++++++++++++ 2 files changed, 198 insertions(+), 198 deletions(-) delete mode 100644 sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md create mode 100644 translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md diff --git a/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md deleted file mode 100644 
index 44acf43298..0000000000 --- a/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md +++ /dev/null @@ -1,198 +0,0 @@ -translating by littleji -Cloc – Count The Lines Of Source Code In Many Programming Languages -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png) - -As a developer, you may need to share the progress and statistics of your code with your boss or colleagues. Your boss might want to analyze the code and give additional input. In such cases, there are a few programs, as far as I know, available to analyze the source code. One such program is [**Ohcount**][1]. Today, I came across yet another similar utility named **“Cloc”**. Using Cloc, you can easily count the lines of source code in several programming languages. It counts the blank lines, comment lines, and physical lines of source code and displays the result in a neat tabular-column format. Cloc is a free, open source, cross-platform utility written entirely in the **Perl** programming language. - -### Features - -Cloc ships with numerous advantages, including the following: - - * Easy to install/use. Requires no dependencies. - * Portable - * It can produce results in a variety of formats, such as plain text, SQL, JSON, XML, YAML, comma separated values. - * Can count your git commits. - * Count the code in directories and sub-directories. - * Count code within compressed archives like tarballs, Zip files, Java .ear files, etc. - * Open source and cross platform. - - - -### Installing Cloc - -The Cloc utility is available in the default repositories of most Unix-like operating systems. So, you can install it using the default package manager as shown below.
- -On Arch Linux and its variants: - -``` -$ sudo pacman -S cloc - -``` - -On Debian, Ubuntu: - -``` -$ sudo apt-get install cloc - -``` - -On CentOS, Red Hat, Scientific Linux: - -``` -$ sudo yum install cloc - -``` - -On Fedora: - -``` -$ sudo dnf install cloc - -``` - -On FreeBSD: - -``` -$ sudo pkg install cloc - -``` - -It can also be installed using a third-party package manager such as [**NPM**][2]. - -``` -$ npm install -g cloc - -``` - -### Count The Lines Of Source Code In Many Programming Languages - -Let us start with a simple example. I have a “hello world” program written in C in my current working directory. - -``` -$ cat hello.c -#include <stdio.h> -int main() -{ - // printf() displays the string inside quotation - printf("Hello, World!"); - return 0; -} - -``` - -To count the lines of code in the hello.c program, simply run: - -``` -$ cloc hello.c - -``` - -Sample output: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Hello-World-Program.png) - -The first column specifies the **names of the programming languages that the source code consists of**. As you can see in the above output, the source code of the “hello world” program is written in the **C** programming language. - -The second column displays the **number of files in each programming language**. So, our code contains **1 file** in total. - -The third column displays the **total number of blank lines**. We have zero blank lines in our code. - -The fourth column displays the **number of comment lines**. - -And the final and fifth column displays the **total physical lines of the given source code**. - -It is just a 6-line code program, so counting the lines in the code is not a big deal. What about some bigger source code file? Have a look at the following example: - -``` -$ cloc file.tar.gz - -``` - -Sample output: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-1.png) - -As per the above output, it is quite difficult to manually find the exact line count.
But, Cloc displays the result in seconds in a neat tabular format. You can view the grand total of each section at the end, which is quite handy when it comes to analyzing the source code of a program. - -Cloc not only counts individual source code files, but also files inside directories and sub-directories, archives, and even specific git commits. - -**Count the lines of code in a directory:** - -``` -$ cloc dir/ - -``` - -![][4] - -**Sub-directory:** - -``` -$ cloc dir/cloc/tests - -``` - -![][5] - -**Count the lines of code in an archive file:** - -``` -$ cloc archive.zip - -``` - -![][6] - -You can also count lines in a git repository, using a specific commit, like below. - -``` -$ git clone https://github.com/AlDanial/cloc.git - -$ cd cloc - -$ cloc 157d706 - -``` - -![][7] - -Cloc can recognize several programming languages. To view the complete list of recognized languages, run: - -``` -$ cloc --show-lang - -``` - -For more details, refer to the help section. - -``` -$ cloc --help - -``` - -Cheers! 
- - - -------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-programming-languages/ - -作者:[SK][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/ -[2]: https://www.ostechnix.com/install-node-js-linux/ -[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-2-1.png -[5]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-4.png -[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-3.png -[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-5.png diff --git a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md new file mode 100644 index 0000000000..4ef15514e5 --- /dev/null +++ b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -0,0 +1,198 @@ +Cloc – 计算不同编程语言源代码的行数 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png) + +作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。在这种情况下,你就需要用到我所知的这么几个程序,其中一个是[**Ohcount**][1]。今天,我遇到了另一个程序, **"Cloc"**。通过使用 Cloc,你可以很容易地计算出多种语言的源代码行数。它可以计算空行数、注释行数和实际的代码行数,并通过整齐的表格进行结果输出。Cloc 是免费的、开源的、跨平台程序,使用 **Perl** 进行开发。 + +### 特点 + +Cloc 有很多优势: + + * 容易安装和使用,不需要额外的依赖项。 + * 便携式 + * 支持多种的结果格式导出,包括:纯文本、SQL、JSON、XML、YAML、CSV + * 可以计算 git 的提交数 + * 可递归计算文件夹内的代码行数 + * 可计算压缩后的文件,如:tar、zip、Java ear + * 开源跨平台部署 + + + +### 安装 + +Cloc 的安装包在大多数的 *nix 操作系统的默认软件库内,所以你只需要像下面这样使用默认的包管理器安装。 + +Arch Linux: + +``` +$ 
sudo pacman -S cloc + +``` + +Debian, Ubuntu: + +``` +$ sudo apt-get install cloc + +``` + +CentOS, Red Hat, Scientific Linux: + +``` +$ sudo yum install cloc + +``` + +Fedora: + +``` +$ sudo dnf install cloc + +``` + +FreeBSD: + +``` +$ sudo pkg install cloc + +``` + +当然你也可以使用第三方的包管理器,比如 [**NPM**][2]。 + +``` +$ npm install -g cloc + +``` + +### 使用实例 + +首先来几个简单的例子,比如下面在我目前工作目录中的 C 代码。 + +``` +$ cat hello.c +#include <stdio.h> +int main() +{ + // printf() displays the string inside quotation + printf("Hello, World!"); + return 0; +} + +``` + +想要计算行数,只需要简单运行: + +``` +$ cloc hello.c + +``` + +输出: + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Hello-World-Program.png) + +第一列是被分析文件的语言类型,上面我们可以看到该分析的文件语言类型是 **C**。 + +第二列显示的是该种语言类型有多少文件,图中说明只有一个。 + +第三列显示空行的个数,图中显示无。 + +第四列显示注释的行数。 + +第五列显示该文件中总共的行数。 + +这是一个只有 6 行代码的源文件,我们看到算的还算准确,那么如果用来算一个行数较多的源文件,会发生什么呢? + +``` +$ cloc file.tar.gz + +``` + +输出: + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-1.png) + +根据上述输出,手动查找准确的代码计数非常困难。但是,Cloc 以易读的表格格式显示结果。你还可以在最后查看每个部分的总计,这在分析程序的源代码时非常方便。 + +除了源代码文件,Cloc 还能递归地计算各个目录及其子目录下的文件、压缩包、甚至 git 中的 commit 数目等。 + + +**文件夹中使用的例子:** + +``` +$ cloc dir/ + +``` + +![][4] + +**子文件夹中使用的例子:** + +``` +$ cloc dir/cloc/tests + +``` + +![][5] + +**计算一个压缩包中源代码的行数:** + +``` +$ cloc archive.zip + +``` + +![][6] + +**你还可以计算一个 git 项目:** + +``` +$ git clone https://github.com/AlDanial/cloc.git + +$ cd cloc + +$ cloc 157d706 + +``` + +![][7] + +**使用下面的命令,查看 Cloc 支持的语言类型:** + +``` +$ cloc --show-lang + +``` + +当然,help 能告诉你更多关于 Cloc 的使用帮助。 + +``` +$ cloc --help + +``` + +开始使用吧! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-programming-languages/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/littleji) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/ +[2]: https://www.ostechnix.com/install-node-js-linux/ +[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-2-1.png +[5]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-4.png +[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-3.png +[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-5.png From 890d8597ab6d1019538fcdbd5713b1aa1cdd94aa Mon Sep 17 00:00:00 2001 From: ranchong <37066859+ranchong@users.noreply.github.com> Date: Fri, 12 Oct 2018 17:06:58 +0800 Subject: [PATCH 383/736] How technology changes the rules for doing agile --- .../20180117 How technology changes the rules for doing agile.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20180117 How technology changes the rules for doing agile.md b/sources/talk/20180117 How technology changes the rules for doing agile.md index 1b67935509..c212d5cf87 100644 --- a/sources/talk/20180117 How technology changes the rules for doing agile.md +++ b/sources/talk/20180117 How technology changes the rules for doing agile.md @@ -1,3 +1,4 @@ +Translating by ranchong How technology changes the rules for doing agile ====== From 6d54a82c3794ec2e9d8223b24622587869909f58 Mon Sep 17 00:00:00 2001 From: ranchong <37066859+ranchong@users.noreply.github.com> Date: Fri, 12 Oct 2018 17:16:24 +0800 Subject: [PATCH 384/736] Update 20180117 How 
technology changes the rules for doing agile.md --- .../20180117 How technology changes the rules for doing agile.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20180117 How technology changes the rules for doing agile.md b/sources/talk/20180117 How technology changes the rules for doing agile.md index 1b67935509..c212d5cf87 100644 --- a/sources/talk/20180117 How technology changes the rules for doing agile.md +++ b/sources/talk/20180117 How technology changes the rules for doing agile.md @@ -1,3 +1,4 @@ +Translating by ranchong How technology changes the rules for doing agile ====== From cf568db09fdabeddae6f5c04da4f605595b7c701 Mon Sep 17 00:00:00 2001 From: brifuture Date: Fri, 12 Oct 2018 19:50:41 +0800 Subject: [PATCH 385/736] Pick an article --- ...1014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md b/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md index 7c315546fa..41d66c744e 100644 --- a/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md +++ b/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md @@ -1,3 +1,5 @@ +BriFuture is translating this article + # Compiling Lisp to JavaScript From Scratch in 350 In this article we will look at a from-scratch implementation of a compiler from a simple LISP-like calculator language to JavaScript. The complete source code can be found [here][7]. 
From 6d42473356cd09cd2d161009397cf3750c02e6a4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=AB=A0=E5=86=9B?= Date: Fri, 12 Oct 2018 22:08:36 +0800 Subject: [PATCH 386/736] rm 20181002 how to use ssh and sftp - translater complete --- ...and SFTP protocols on your home network.md | 78 ------------------- ...and SFTP protocols on your home network.md | 74 ++++++++++++++++++ 2 files changed, 74 insertions(+), 78 deletions(-) delete mode 100644 sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md create mode 100644 translated/tech/20181002 How use SSH and SFTP protocols on your home network.md diff --git a/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md b/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md deleted file mode 100644 index a58aa55ffd..0000000000 --- a/sources/tech/20181002 How to use the SSH and SFTP protocols on your home network.md +++ /dev/null @@ -1,78 +0,0 @@ -translating by singledo - -How to use the SSH and SFTP protocols on your home network -====== - -Use the SSH and SFTP protocols to access other devices, efficiently and securely transfer files, and more. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab) - -Years ago, I decided to set up an extra computer (I always have extra computers) so that I could access it from work to transfer files I might need. To do this, the basic first step is to have your ISP assign a fixed IP address. - -The not-so-basic but much more important next step is to set up your accessible system safely. In this particular case, I was planning to access it only from work, so I could restrict access to that IP address. Even so, you want to use all possible security features. What is amazing—and scary—is that as soon as you set this up, people from all over the world will immediately attempt to access your system. 
You can discover this by checking the logs. I presume there are bots constantly searching for open doors wherever they can find them. - -Not long after I set up my computer, I decided my access was more a toy than a need, so I turned it off and gave myself one less thing to worry about. Nonetheless, there is another use for SSH and SFTP inside your home network, and it is more or less already set up for you. - -One requirement, of course, is that the other computer in your home must be turned on, although it doesn’t matter whether someone is logged on or not. You also need to know its IP address. There are two ways to find this out. One is to get access to the router, which you can do through a browser. Typically, its address is something like **192.168.1.254**. With some searching, it should be easy enough to find out what is currently on and hooked up to the system by eth0 or WiFi. What can be challenging is recognizing the computer you’re interested in. - -I find it easier to go to the computer in question, bring up a shell, and type: - -``` -ifconfig - -``` - -This spits out a lot of information, but the bit you want is right after `inet` and might look something like **192.168.1.234**. After you find that, go back to the client computer you want to access this host, and on the command line, type: - -``` -ssh gregp@192.168.1.234 - -``` - -For this to work, **gregp** must be a valid user on that system. You will then be asked for his password, and if you enter it correctly, you will be connected to that other computer in a shell environment. I confess that I don’t use SSH in this way very often. I have used it at times so I can run `dnf` to upgrade some other computer than the one I’m sitting at. Usually, I use SFTP: - -``` -sftp gregp@192.168.1.234 - -``` - -because I have a greater need for an easy method of transferring files from one computer to another. It’s certainly more convenient and less time-consuming than using a USB stick or an external drive. 
- -Once you’re connected, the two basic commands for SFTP are `get`, to receive files from the host; and `put`, to send files to the host. I usually migrate to the directory on my client where I either want to save files I will get from the host or send to the host before I connect. When you connect, you will be in the top-level directory—in this example, **home/gregp**. Once connected, you can then use `cd` just as you would in your client, except now you’re changing your working directory on the host. You may need to use `ls` to make sure you know where you are. - -If you need to change the working directory on your client, use the command `lcd` (as in **local change directory** ). Similarly, use `lls` to show the working directory contents on your client system. - -What if the host doesn’t have a directory with the name you would like? Use `mkdir` to make a new directory on it. Or you might copy a whole directory of files to the host with this: - -``` -put -r ThisDir/ - -``` - -which creates the directory and then copies all of its files and subdirectories to the host. These transfers are extremely fast, as fast as your hardware allows, and have none of the bottlenecks you might encounter on the internet. To see a list of commands you can use in an SFTP session, check: - -``` -man sftp - -``` - -I have also been able to put SFTP to use on a Windows VM on my computer, yet another advantage of setting up a VM rather than a dual-boot system. 
This lets me move files to or from the Linux part of the system. So far I have only done this using a client in Windows. - -You can also use SSH and SFTP to access any devices connected to your router by wire or WiFi. For a while, I used an app called [SSHDroid][1], which runs SSH in a passive mode. In other words, you use your computer to access the Android device that is the host. Recently I found another app, [Admin Hands][2], where the tablet or phone is the client and can be used for either SSH or SFTP operations. This app is great for backing up or sharing photos from your phone. - -------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/ssh-sftp-home-network - -作者:[Greg Pittman][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/greg-p -[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid -[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US diff --git a/translated/tech/20181002 How use SSH and SFTP protocols on your home network.md b/translated/tech/20181002 How use SSH and SFTP protocols on your home network.md new file mode 100644 index 0000000000..db202a3043 --- /dev/null +++ b/translated/tech/20181002 How use SSH and SFTP protocols on your home network.md @@ -0,0 +1,74 @@ +如何在家中使用 SSH 和 SFTP 协议 +====== + +通过 SSH 和 SFTP 协议 ,我们能够访问其他设备 ,有效而且安全地传输文件及更多 。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab) + +多年前 ,我决定配置一个额外的电脑 ,以便我在工作时能够访问它来传输我所需要的文件 。最基本的一步是要求你的网络提供商 ( ISP )提供一个固定的地址 ( IP Address )。 + +接下来这一步不那么基础 ,但更加重要 :安全地配置这个可被访问的系统 。在此种特殊情况下 ,我计划只在工作的时候访问它 ,所以我能够把访问限制在那个 IP 地址 。即使如此 ,你依然要尽可能多地采用安全措施 。一旦你建立起来这个 ,全世界的人们都会立即尝试访问你的系统 ,这是非常令人惊奇又恐慌的 
。你能通过日志文件来发现这一点 。我推测有探测机器人在不停地搜索它们所能找到的没有安全措施的系统 。 + +在我建立系统不久后 ,我觉得这种访问更像是一个玩具而不是需求 ,为此 ,我将它关闭了 ,好让我不再为它担心 。尽管如此 ,SSH 和 SFTP 在家庭网络内部还有另一个用途 ,而且它多半已经为你配置好了 。 + +一个必备条件是 ,你家的另一台电脑必须已经开机了 ,至于是否有人登录则没有影响 。你也需要知道另一台电脑的 IP 地址 。有两个方法能够做到 ,一个是通过浏览器进入你的路由器 ,一般情况下它的地址类似于 **192.168.1.254** 。通过一些搜索 ,找出当前哪些设备是开机的 ,以及是通过 eth0 还是 WiFi 挂接到系统上的 ,是足够简单的 。比较有挑战的是认出你所感兴趣的那台电脑 。 + +我发现更简单的方法是走到那台电脑前 ,打开 shell ,输入 : + +``` +ifconfig + +``` + +命令会输出一些信息 ,你所需要的信息在 `inet` 后面 ,看起来和 **192.168.1.234** 类似 。当你发现这个后 ,回到你的客户端电脑 ,在命令行中输入 : + +``` +ssh gregp@192.168.1.234 + +``` + +要让上面的命令正常执行 ,**gregp** 必须是主机系统中的有效用户名 。然后会询问该用户的密码 。如果你键入的密码和用户名都是正确的 ,你将通过 shell 环境连接到那台电脑 。我承认 ,我并不经常这样使用 SSH 。我偶尔使用它 ,这样我就能运行 `dnf` 来升级其他电脑 ,而不用坐到那台电脑前 。通常 ,我用 SFTP : + +``` +sftp gregp@192.168.1.234 + +``` + +因为我更需要的是一种把文件从一台电脑传输到另一台电脑的简单方法 。相对于闪存棒和额外的设备 ,它更加方便 ,耗时更少 。 + +一旦连接建立成功 ,SFTP 有两个基本的命令 :`get` ,从主机接收文件 ;`put` ,向主机发送文件 。在连接之前 ,我通常会在客户端先进入我想保存文件或发送文件的目录 。连接之后 ,你会处在顶层目录 ,在这个例子中是 **home/gregp** 。连接后 ,你可以像在客户端上一样使用 `cd` ,只不过现在改变的是主机上的工作目录 。你可能需要用 `ls` 来确认你的位置 。 + +如果你想改变客户端上的工作目录 ,用 `lcd` 命令 ( 即 **local change directory** )。相同地 ,用 `lls` 来显示客户端工作目录的内容 。 + +如果主机上没有你想要的目录名 ,该怎么办 ?用 `mkdir` 在主机上创建一个新的目录 。或者像下面这样将整个目录拷贝到主机 : + +``` +put -r ThisDir/ + +``` + +这会在主机上创建该目录 ,并把其中所有的文件和子目录拷贝过去 。这种传输非常快 ,能达到硬件允许的上限 ,也不会遇到互联网上的那些瓶颈 。要查看 SFTP 会话中能够使用的命令列表 ,查看 : + +``` +man sftp + +``` + +我也能够在我电脑的 Windows 虚拟机上使用 SFTP ,这是配置虚拟机而不是双系统的又一个优势 。这让我能够在系统的 Linux 部分移入或者移出文件 。到目前为止 ,我只在 Windows 中用过客户端 。 + +你也能用 SSH 和 SFTP 访问任何通过网线或者 WiFi 连接到你路由器的设备 。有一段时间 ,我使用一个叫做 [SSHDroid][1] 的应用 ,它能够在被动模式下运行 SSH 。换句话来说 ,你能够用你的电脑访问作为主机的 Android 设备 。近来我还发现了另外一个应用 ,[Admin Hands][2] ,不管你的客户端是桌面还是手机 ,都能使用 SSH 或者 SFTP 操作 。这个应用对于备份和分享手机里的照片是极好的 。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/ssh-sftp-home-network + +作者:[Greg Pittman][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[singledo](https://github.com/singledo) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/greg-p +[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid +[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US From 2ea5fda4f7d1715ca93af1e9f1c8ed35a4e6474a Mon Sep 17 00:00:00 2001 From: chenliang Date: Fri, 12 Oct 2018 22:41:31 +0800 Subject: [PATCH 387/736] translating by Flowsnow --- ... Getting started with Minikube- Kubernetes on your laptop.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md b/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md index c533a113a3..241b5978c4 100644 --- a/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md +++ b/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md @@ -1,3 +1,5 @@ +translating by Flowsnow + Getting started with Minikube: Kubernetes on your laptop ====== A step-by-step guide for running Minikube. From 21d2d9c7e088e6d4adad49b4634c2a6f9dae5871 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 22:41:39 +0800 Subject: [PATCH 388/736] PRF:20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md @geekpi --- ... 
With browser-mpris2 (Chrome Extension).md | 40 +++++++++---------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md index a72b4cdd8d..b3d262d94c 100644 --- a/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md +++ b/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md @@ -1,17 +1,18 @@ -使用 browser-mpris2(Chrome 扩展)将 YouTube 播放器控件添加到 Linux 桌面 +使用 Chrome 扩展将 YouTube 播放器控件添加到 Linux 桌面 ====== -一个我怀念的 Unity 功能(虽然只使用了一小段时间)是在 Web 浏览器中访问 YouTube 等网站时自动获取 Ubuntu 声音指示器中的播放器控件,因此你可以直接从顶部栏暂停或停止视频,以及浏览视频/歌曲信息和预览。 -这个 Unity 功能已经消失很久了,但我正在为 Gnome Shell 寻找类似的东西,然后我遇到了 **[browser-mpris2][1],这是一个为 Google Chrome/Chromium 实现 MPRIS v2 接口的扩展,目前只支持 YouTube**,我想可能会有一些 Linux Uprising 的读者会喜欢这个。 +一个我怀念的 Unity 功能(虽然只使用了一小段时间)是在 Web 浏览器中访问 YouTube 等网站时在 Ubuntu 声音指示器中自动出现播放器控件,因此你可以直接从顶部栏暂停或停止视频,以及浏览视频/歌曲信息和预览。 -**该扩展还适用于 Opera 和 Vivaldi 等基于 Chromium 的 Web 浏览器。** -** -** **browser-mpris2 也支持 Firefox,但因为通过 about:debugging 加载扩展是临时的,而这是 browser-mpris2 所需要的,因此本文不包括 Firefox 的指导。开发人员[打算][2]将来将扩展提交到 Firefox 插件网站上。** +这个 Unity 功能已经消失很久了,但我正在为 Gnome Shell 寻找类似的东西,然后我遇到了 [browser-mpris2][1],这是一个为 Google Chrome/Chromium 实现 MPRIS v2 接口的扩展,目前只支持 YouTube,我想可能会有一些读者会喜欢这个。 -**使用此 Chrome 扩展,你可以在支持 MPRIS2 的 applets 中获得 YouTube 媒体播放器控件(播放、暂停、停止和查找 -)**。例如,如果你使用 Gnome Shell,你可将 YouTube 媒体播放器控件作为永久通知,或者你可以使用 Media Player Indicator 之类的扩展来实现此目的。在 Cinnamon /Linux Mint with Cinnamon 中,它出现在声音 Applet 中。 +该扩展还适用于 Opera 和 Vivaldi 等基于 Chromium 的 Web 浏览器。 -**我无法在 Unity 上用它**,我不知道为什么。我没有在不同桌面环境(KDE、Xfce、MATE 等)中使用其他支持 MPRIS2 的 applet 尝试此扩展。如果你尝试过,请告诉我们它是否适用于你的桌面环境/支持 MPRIS2 的 applet。 +browser-mpris2 也支持 Firefox,但因为通过 `about:debugging` 加载扩展是临时的,而这是 
browser-mpris2 所需要的,因此本文不包括 Firefox 的指导。开发人员[打算][2]将来将扩展提交到 Firefox 插件网站上。 + +使用此 Chrome 扩展,你可以在支持 MPRIS2 的 applets 中获得 YouTube 媒体播放器控件(播放、暂停、停止和查找 +)。例如,如果你使用 Gnome Shell,你可将 YouTube 媒体播放器控件作为永久显示的控件,或者你可以使用 Media Player Indicator 之类的扩展来实现此目的。在 Cinnamon /Linux Mint with Cinnamon 中,它出现在声音 Applet 中。 + +我无法在 Unity 上用它,我不知道为什么。我没有在不同桌面环境(KDE、Xfce、MATE 等)中使用其他支持 MPRIS2 的 applet 尝试此扩展。如果你尝试过,请告诉我们它是否适用于你的桌面环境/支持 MPRIS2 的 applet。 以下是在使用 Gnome Shell 的 Ubuntu 18.04 并装有 Chromium 浏览器的[媒体播放器指示器][3]的截图,其中显示了有关当前正在播放的 YouTube 视频的信息及其控件(播放/暂停,停止和查找): @@ -19,42 +20,41 @@ 在 Linux Mint 19 Cinnamon 中使用其默认声音 applet 和 Chromium 浏览器的截图: - ![](https://2.bp.blogspot.com/-I2DuYetv7eQ/W3VtUUcg26I/AAAAAAAABXc/Tv-RemkyO60k6CC_mYUxewG-KfVgpFefACLcBGAs/s1600/browser-mpris2-cinnamon-linux-mint.png) ### 如何为 Google Chrom/Chromium安装 browser-mpris2 -**1\. 如果你还没有安装 Git 就安装它** +1、 如果你还没有安装 Git 就安装它 在 Debian/Ubuntu/Linux Mint 中,使用此命令安装 git: + ``` sudo apt install git - ``` -**2\. 下载并安装 [browser-mpris2][1] 所需文件。** +2、 下载并安装 [browser-mpris2][1] 所需文件。 + +下面的命令克隆了 browser-mpris2 的 Git 仓库并将 chrome-mpris2 安装到 `/usr/local/bin/`(在一个你可以保存 browser-mpris2 文件夹的地方运行 `git clone ...` 命令,由于它会被 Chrome/Chromium 使用,你不能删除它): -下面的命令克隆了 browser-mpris2 的 Git 仓库并将 chrome-mpris2 安装到 `/usr/local/bin/`(在一个你可以保存 browser-mpris2 文件夹的地方运行 “git clone ...” 命令,由于它会被 Chrome/Chromium 使用,你不能删除它): ``` git clone https://github.com/otommod/browser-mpris2 sudo install browser-mpris2/native/chrome-mpris2 /usr/local/bin/ - ``` -**3\. 
在基于 Chrome/Chromium 的 Web 浏览器中加载此扩展。** +3、 在基于 Chrome/Chromium 的 Web 浏览器中加载此扩展。 ![](https://3.bp.blogspot.com/-yEoNFj2wAXM/W3Vvewa979I/AAAAAAAABXo/dmltlNZk3J4sVa5jQenFFrT28ecklY92QCLcBGAs/s640/browser-mpris2-chrome-developer-load-unpacked.png) -打开 Goog​​le Chrome、Chromium、Opera 或 Vivaldi 浏览器,进入 Extensions 页面(在 URL 栏中输入 `chrome://extensions`),在屏幕右上角切换到`开发者模式`。然后选择 `Load Unpacked` 并选择 chrome-mpris2 目录(确保没有选择子文件夹)。 +打开 Goog​​le Chrome、Chromium、Opera 或 Vivaldi 浏览器,进入 Extensions 页面(在 URL 栏中输入 `chrome://extensions`),在屏幕右上角切换到“开发者模式”。然后选择 “Load Unpacked” 并选择 chrome-mpris2 目录(确保没有选择子文件夹)。 复制扩展 ID 并保存它,因为你以后需要它(它类似于这样:`emngjajgcmeiligomkgpngljimglhhii`,但它会与你的不一样,因此确保使用你计算机中的 ID!)。 -**4\. 运行 **`install-chrome.py`**(在 `browser-mpris2/native` 文件夹中),指定扩展 id 和 chrome-mpris2 路径。 +4、 运行 `install-chrome.py`(在 `browser-mpris2/native` 文件夹中),指定扩展 id 和 chrome-mpris2 路径。 在终端中使用此命令(将 `REPLACE-THIS-WITH-EXTENSION-ID` 替换为上一步中 `chrome://extensions` 下显示的 browser-mpris2 扩展 ID)安装此扩展: + ``` browser-mpris2/native/install-chrome.py REPLACE-THIS-WITH-EXTENSION-ID /usr/local/bin/chrome-mpris2 - ``` 你只需要运行此命令一次,无需将其添加到启动或其他类似的地方。你在 Google Chrome 或 Chromium 浏览器中播放的任何 YouTube 视频都应显示在你正在使用的任何 MPRISv2 applet 中。你无需重启 Web 浏览器。 @@ -66,7 +66,7 @@ via: https://www.linuxuprising.com/2018/08/add-youtube-player-controls-to-your.h 作者:[Logix][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From fd2218be926c43c28ffae032f0abb37f7d81456e Mon Sep 17 00:00:00 2001 From: chenliang Date: Fri, 12 Oct 2018 22:43:43 +0800 Subject: [PATCH 389/736] translating bu Flowsnow --- sources/tech/20181001 How to Install Pip on Ubuntu.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181001 How to Install Pip on Ubuntu.md b/sources/tech/20181001 How to Install Pip on Ubuntu.md index 8751dc50f9..b465b816a0 100644 
--- a/sources/tech/20181001 How to Install Pip on Ubuntu.md +++ b/sources/tech/20181001 How to Install Pip on Ubuntu.md @@ -1,3 +1,5 @@ +translating by Flowsnow + How to Install Pip on Ubuntu ====== **Pip is a command line tool that allows you to install software packages written in Python. Learn how to install Pip on Ubuntu and how to use it for installing Python applications.** From 8ddf77776be05ed64cb4286463a329f93355252a Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 22:45:31 +0800 Subject: [PATCH 390/736] PUB:20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md @geekpi https://linux.cn/article-10108-1.html --- ...o Your Linux Desktop With browser-mpris2 (Chrome Extension).md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md (100%) diff --git a/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/published/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md similarity index 100% rename from translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md rename to published/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md From 3886d02f1bbf5ac5047d07b3f4878d87a7efab77 Mon Sep 17 00:00:00 2001 From: chenliang Date: Fri, 12 Oct 2018 22:46:11 +0800 Subject: [PATCH 391/736] translating by Flowsnow --- .../tech/20181011 A Front-end For Popular Package Managers.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181011 A Front-end For Popular Package Managers.md b/sources/tech/20181011 A Front-end For Popular Package Managers.md index 6cdef8bd98..9353ba7f75 100644 --- a/sources/tech/20181011 A Front-end For Popular Package 
Managers.md +++ b/sources/tech/20181011 A Front-end For Popular Package Managers.md @@ -1,3 +1,5 @@ +translating by Flowsnow + A Front-end For Popular Package Managers ====== From d9646c82d1be68017280b1f56ee3e7db13505490 Mon Sep 17 00:00:00 2001 From: chenliang Date: Fri, 12 Oct 2018 23:20:55 +0800 Subject: [PATCH 392/736] translated by Flowsnow translated by Flowsnow --- .../20181001 How to Install Pip on Ubuntu.md | 181 ------------------ .../20181001 How to Install Pip on Ubuntu.md | 167 ++++++++++++++++ 2 files changed, 167 insertions(+), 181 deletions(-) delete mode 100644 sources/tech/20181001 How to Install Pip on Ubuntu.md create mode 100644 translated/tech/20181001 How to Install Pip on Ubuntu.md diff --git a/sources/tech/20181001 How to Install Pip on Ubuntu.md b/sources/tech/20181001 How to Install Pip on Ubuntu.md deleted file mode 100644 index b465b816a0..0000000000 --- a/sources/tech/20181001 How to Install Pip on Ubuntu.md +++ /dev/null @@ -1,181 +0,0 @@ -translating by Flowsnow - -How to Install Pip on Ubuntu -====== -**Pip is a command line tool that allows you to install software packages written in Python. Learn how to install Pip on Ubuntu and how to use it for installing Python applications.** - -There are numerous ways to [install software on Ubuntu][1]. You can install applications from the software center, from downloaded DEB files, from PPA, from [Snap packages][2], [using Flatpak][3], using [AppImage][4] and even from the good old source code. - -There is one more way to install packages in [Ubuntu][5]. It’s called Pip and you can use it to install Python-based applications. - -### What is Pip - -[Pip][6] stands for “Pip Installs Packages”. [Pip][7] is a command line based package management system. It is used to install and manage software written in [Python language][8]. - -You can use Pip to install packages listed in the Python Package Index ([PyPI][9]). 
- -As a software developer, you can use pip to install various Python modules and packages for your own Python projects. - -As an end user, you may need pip in order to install some applications that are developed using Python and can be installed easily using pip. One such example is the [Stress Terminal][10] application that you can easily install with pip. - -Let’s see how you can install pip on Ubuntu and other Ubuntu-based distributions. - -### How to install Pip on Ubuntu - -![Install pip on Ubuntu Linux][11] - -Pip is not installed on Ubuntu by default. You’ll have to install it. Installing pip on Ubuntu is really easy. I’ll show it to you in a moment. - -Ubuntu 18.04 has both Python 2 and Python 3 installed by default. And hence, you should install pip for both Python versions. - -Pip, by default, refers to Python 2. Pip in Python 3 is referred to as pip3. - -Note: I am using Ubuntu 18.04 in this tutorial. But the instructions here should be valid for other versions like Ubuntu 16.04, 18.10 etc. You may also use the same commands on other Linux distributions based on Ubuntu such as Linux Mint, Linux Lite, Xubuntu, Kubuntu etc. - -#### Install pip for Python 2 - -First, make sure that you have Python 2 installed. On Ubuntu, use the command below to verify. - -``` -python2 --version - -``` - -If there is no error and a valid output that shows the Python version, you have Python 2 installed. So now you can install pip for Python 2 using this command: - -``` -sudo apt install python-pip - -``` - -It will install pip and a number of other dependencies with it. Once installed, verify that you have pip installed correctly. - -``` -pip --version - -``` - -It should show you a version number, something like this: - -``` -pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7) - -``` - -This means that you have successfully installed pip on Ubuntu. - -#### Install pip for Python 3 - -You have to make sure that Python 3 is installed on Ubuntu. 
To check that, use this command: - -``` -python3 --version - -``` - -If it shows you a number like Python 3.6.6, Python 3 is installed on your Linux system. - -Now, you can install pip3 using the command below: - -``` -sudo apt install python3-pip - -``` - -You should verify that pip3 has been installed correctly using this command: - -``` -pip3 --version - -``` - -It should show you a number like this: - -``` -pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6) - -``` - -It means that pip3 is successfully installed on your system. - -### How to use Pip command - -Now that you have installed pip, let’s quickly see some of the basic pip commands. These commands will help you use pip commands for searching, installing and removing Python packages. - -To search packages from the Python Package Index, you can use the following pip command: - -``` -pip search - -``` - -For example, if you search for stress, it will show all the packages that have the string ‘stress’ in their name or description. - -``` -pip search stress -stress (1.0.0) - A trivial utility for consuming system resources. -s-tui (0.8.2) - Stress Terminal UI stress test and monitoring tool -stressypy (0.0.12) - A simple program for calling stress and/or stress-ng from python -fuzzing (0.3.2) - Tools for stress testing applications. -stressant (0.4.1) - Simple stress-test tool -stressberry (0.1.7) - Stress tests for the Raspberry Pi -mobbage (0.2) - A HTTP stress test and benchmark tool -stresser (0.2.1) - A large-scale stress testing framework. -cyanide (1.3.0) - Celery stress testing and integration test support. -pysle (1.5.7) - An interface to ISLEX, a pronunciation dictionary with stress markings. -ggf (0.3.2) - global geometric factors and corresponding stresses of the optical stretcher -pathod (0.17) - A pathological HTTP/S daemon for testing and stressing clients. 
-MatPy (1.0) - A toolbox for intelligent material design, and automatic yield stress determination -netblow (0.1.2) - Vendor agnostic network testing framework to stress network failures -russtress (0.1.3) - Package that helps you to put lexical stress in russian text -switchy (0.1.0a1) - A fast FreeSWITCH control library purpose-built on traffic theory and stress testing. -nx4_selenium_test (0.1) - Provides a Python class and apps which monitor and/or stress-test the NoMachine NX4 web interface -physical_dualism (1.0.0) - Python library that approximates the natural frequency from stress via physical dualism, and vice versa. -fsm_effective_stress (1.0.0) - Python library that uses the rheological-dynamical analogy (RDA) to compute damage and effective buckling stress in prismatic shell structures. -processpathway (0.3.11) - A nifty little toolkit to create stress-free, frustrationless image processing pathways from your webcam for computer vision experiments. Or observing your cat. - -``` - -If you want to install an application using pip, you can use it in the following manner: - -``` -pip install - -``` - -Pip doesn’t support tab completion so the package name should be exact. It will download all the necessary files and install that package. - -If you want to remove a Python package installed via pip, you can use the remove option in pip. - -``` -pip uninstall - -``` - -You can use pip3 instead of pip in the above commands. - -I hope this quick tip helped you to install pip on Ubuntu. If you have any questions or suggestions, please let me know in the comment section below. 
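Since Ubuntu can carry `pip` (for Python 2) and `pip3` (for Python 3) side by side, it is easy to lose track of which one is actually present. The short shell sketch below is not part of the original article; `report_pips` is a hypothetical helper name, and it only wraps the `--version` checks shown above:

```shell
#!/bin/sh
# report_pips: print the version of each pip binary that is present.
# Handles the case where only one (or neither) of pip/pip3 is installed.
report_pips() {
    for p in pip pip3; do
        if command -v "$p" >/dev/null 2>&1; then
            echo "$p: $("$p" --version 2>/dev/null)"
        else
            echo "$p: not installed"
        fi
    done
}

report_pips
```

On a machine where only `python3-pip` was installed, the first line will read `pip: not installed` while the second reports the pip3 version.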
- -------------------------------------------------------------------------------- - -via: https://itsfoss.com/install-pip-ubuntu/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: https://itsfoss.com/how-to-add-remove-programs-in-ubuntu/ -[2]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/ -[3]: https://itsfoss.com/flatpak-guide/ -[4]: https://itsfoss.com/use-appimage-linux/ -[5]: https://www.ubuntu.com/ -[6]: https://en.wikipedia.org/wiki/Pip_(package_manager) -[7]: https://pypi.org/project/pip/ -[8]: https://www.python.org/ -[9]: https://pypi.org/ -[10]: https://itsfoss.com/stress-terminal-ui/ -[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/install-pip-ubuntu.png diff --git a/translated/tech/20181001 How to Install Pip on Ubuntu.md b/translated/tech/20181001 How to Install Pip on Ubuntu.md new file mode 100644 index 0000000000..01801c078d --- /dev/null +++ b/translated/tech/20181001 How to Install Pip on Ubuntu.md @@ -0,0 +1,167 @@ +如何在Ubuntu上安装Pip +====== +**Pip是一个命令行工具,允许你安装Python编写的软件包。 学习如何在Ubuntu上安装Pip以及如何使用它来安装Python应用程序。** + +有许多方法可以[在Ubuntu上安装软件][1]。 你可以从软件中心安装应用程序,也可以从下载的DEB文件、PPA(LCTT译者注:PPA即Personal Package Archives,个人软件包集)、[Snap软件包][2]安装,也可以[使用Flatpak][3]、[AppImage][4],甚至用经典的源代码方式安装。 + +还有一种方法可以在[Ubuntu][5]中安装软件包。 它被称为Pip,你可以使用它来安装基于Python的应用程序。 + +### 什么是Pip + +[Pip][6]代表“Pip Installs Packages”。 [Pip][7]是一个基于命令行的包管理系统,用于安装和管理用[Python语言][8]编写的软件。 + +你可以使用Pip来安装Python包索引([PyPI][9])中列出的包。 + +作为软件开发人员,你可以使用pip为你自己的Python项目安装各种Python模块和包。 + +作为最终用户,你可能需要使用pip来安装一些Python开发的并且可以使用pip轻松安装的应用程序。 一个这样的例子是[Stress Terminal][10]应用程序,你可以使用pip轻松安装。 + +让我们看看如何在Ubuntu和其他基于Ubuntu的发行版上安装pip。 + +### 如何在Ubuntu上安装Pip + +![Install pip on Ubuntu Linux][11] + +默认情况下,Pip未安装在Ubuntu上。 你必须安装它。 
在Ubuntu上安装pip非常简单。 我马上展示给你。 + +Ubuntu 18.04默认安装了Python 2和Python 3。 因此,你应该为两个Python版本安装pip。 + +Pip,默认情况下是指Python 2。pip3代表Python 3中的Pip。 + +注意:我在本教程中使用的是Ubuntu 18.04。 但是这里的教程应该适用于其他版本,如Ubuntu 16.04,18.10等。你也可以在基于Ubuntu的其他Linux发行版上使用相同的命令,如Linux Mint,Linux Lite,Xubuntu,Kubuntu等。 + +#### 为Python 2安装pip + +首先,确保已经安装了Python 2。 在Ubuntu上,可以使用以下命令进行验证。 + +``` +python2 --version +``` + +如果没有错误并且显示了Python版本的有效输出,则说明安装了Python 2。 所以现在你可以使用这个命令为Python 2安装pip: + +``` +sudo apt install python-pip +``` + +它将安装pip以及许多其他的依赖项。 安装完成后,请确认你已正确安装了pip。 + +``` +pip --version +``` + +它应该显示一个版本号,如下所示: + +``` +pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7) +``` + +这意味着你已经成功在Ubuntu上安装了pip。 + +#### 为Python 3安装pip + +你必须确保在Ubuntu上安装了Python 3。 可以使用以下命令检查一下: + +``` +python3 --version +``` + +如果显示了像Python 3.6.6这样的版本号,则说明Python 3在你的Linux系统上安装好了。 + +现在,你可以使用以下命令安装pip3: + +``` +sudo apt install python3-pip +``` + +你应该使用以下命令验证pip3是否已正确安装: + +``` +pip3 --version +``` + +它应该显示一个这样的版本号: + +``` +pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6) +``` + +这意味着pip3已成功安装在你的系统上。 + +### 如何使用Pip命令 + +现在你已经安装了pip,让我们快速看一些基本的pip命令。 这些命令将帮助你使用pip命令来搜索,安装和删除Python包。 + +要从Python包索引PyPI中搜索包,可以使用以下pip命令: + +``` +pip search +``` + +例如,如果你搜索stress这个词,将会显示名称或描述中包含字符串'stress'的所有包。 + +``` +pip search stress +stress (1.0.0) - A trivial utility for consuming system resources. +s-tui (0.8.2) - Stress Terminal UI stress test and monitoring tool +stressypy (0.0.12) - A simple program for calling stress and/or stress-ng from python +fuzzing (0.3.2) - Tools for stress testing applications. +stressant (0.4.1) - Simple stress-test tool +stressberry (0.1.7) - Stress tests for the Raspberry Pi +mobbage (0.2) - A HTTP stress test and benchmark tool +stresser (0.2.1) - A large-scale stress testing framework. +cyanide (1.3.0) - Celery stress testing and integration test support. +pysle (1.5.7) - An interface to ISLEX, a pronunciation dictionary with stress markings. 
+ggf (0.3.2) - global geometric factors and corresponding stresses of the optical stretcher +pathod (0.17) - A pathological HTTP/S daemon for testing and stressing clients. +MatPy (1.0) - A toolbox for intelligent material design, and automatic yield stress determination +netblow (0.1.2) - Vendor agnostic network testing framework to stress network failures +russtress (0.1.3) - Package that helps you to put lexical stress in russian text +switchy (0.1.0a1) - A fast FreeSWITCH control library purpose-built on traffic theory and stress testing. +nx4_selenium_test (0.1) - Provides a Python class and apps which monitor and/or stress-test the NoMachine NX4 web interface +physical_dualism (1.0.0) - Python library that approximates the natural frequency from stress via physical dualism, and vice versa. +fsm_effective_stress (1.0.0) - Python library that uses the rheological-dynamical analogy (RDA) to compute damage and effective buckling stress in prismatic shell structures. +processpathway (0.3.11) - A nifty little toolkit to create stress-free, frustrationless image processing pathways from your webcam for computer vision experiments. Or observing your cat. 
+``` + +如果要使用pip安装应用程序,可以按以下方式使用它: + +``` +pip install +``` + +Pip不支持使用tab键补全包名,因此包名称应该是准确的。 它将下载所有必需的文件并安装该软件包。 + +如果要删除通过pip安装的Python包,可以使用pip中的remove选项。 + +``` +pip uninstall +``` + +你可以在上面的命令中使用pip3代替pip。 + +我希望这个快速提示可以帮助你在Ubuntu上安装pip。 如果你有任何问题或建议,请在下面的评论部分告诉我。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-pip-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://itsfoss.com/how-to-add-remove-programs-in-ubuntu/ +[2]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/ +[3]: https://itsfoss.com/flatpak-guide/ +[4]: https://itsfoss.com/use-appimage-linux/ +[5]: https://www.ubuntu.com/ +[6]: https://en.wikipedia.org/wiki/Pip_(package_manager) +[7]: https://pypi.org/project/pip/ +[8]: https://www.python.org/ +[9]: https://pypi.org/ +[10]: https://itsfoss.com/stress-terminal-ui/ +[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/install-pip-ubuntu.png From 8914280d599d23af9737e8638b19203dd89e53be Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 23:23:09 +0800 Subject: [PATCH 393/736] PRF:20180821 A checklist for submitting your first Linux kernel patch.md @qhwdw --- ...ubmitting your first Linux kernel patch.md | 136 ++++++++---------- 1 file changed, 58 insertions(+), 78 deletions(-) diff --git a/translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md b/translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md index bf23f20674..92f24808c7 100644 --- a/translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md +++ b/translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md @@ 
-1,150 +1,130 @@ -提交你的第一个 Linux 内核补丁时的一个检查列表 +如何提交你的第一个 Linux 内核补丁 ====== +> 学习如何做出你的首个 Linux 内核贡献,以及在开始之前你应该知道什么。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22) -Linux 内核是最大的且变动最快的开源项目之一,它由大约 53,600 个文件和近 2,000 万行代码组成。在全世界范围内超过 15,600 位程序员为它贡献代码,Linux 内核项目的维护者使用了如下的协作模型。 +Linux 内核是最大且变动最快的开源项目之一,它由大约 53,600 个文件和近 2,000 万行代码组成。在全世界范围内超过 15,600 位程序员为它贡献代码,Linux 内核项目的维护者使用了如下的协作模型。 ![](https://opensource.com/sites/default/files/karnik_figure1.png) -本文中,为了便于在 Linux 内核中提交你的第一个贡献,我将为你提供一个必需的快速检查列表,以告诉你在提交补丁时,应该去查看和了解的内容。对于你贡献的第一个补丁的提交流程方面的更多内容,请阅读 [KernelNewbies 第一个内核补丁教程][1]。 +本文中,为了便于在 Linux 内核中提交你的第一个贡献,我将为你提供一个必需的快速检查列表,以告诉你在提交补丁时,应该去查看和了解的内容。对于你贡献的第一个补丁的提交流程方面的更多内容,请阅读 [KernelNewbies 的第一个内核补丁教程][1]。 ### 为内核作贡献 -#### 第 1 步:准备你的系统 +**第 1 步:准备你的系统。** 本文开始之前,假设你的系统已经具备了如下的工具: + 文本编辑器 + Email 客户端 -+ 版本控制系统(即:git) ++ 版本控制系统(例如:git) + +**第 2 步:下载 Linux 内核代码仓库。** -#### 第 2 步:下载 Linux 内核代码仓库: ``` git clone -b staging-testing - git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git - ``` -### 复制你的当前配置: +复制你的当前配置: + ``` cp /boot/config-`uname -r`* .config - ``` -### 第 3 步:构建/安装你的内核 +**第 3 步:构建/安装你的内核。** + ``` make -jX - sudo make modules_install install - ``` -### 第 4 步:创建一个分支并切换到它 +**第 4 步:创建一个分支并切换到该分支。** + ``` git checkout -b first-patch - ``` -### 第 5 步:更新你的内核并指向到最新的代码 +**第 5 步:更新你的内核并指向到最新的代码。** + ``` git fetch origin - git rebase origin/staging-testing - ``` -### 第 6 步:在最新的代码基础上产生一个变更 +**第 6 步:在最新的代码库上产生一个变更。** 使用 `make` 命令重新编译,确保你的变更没有错误。 -### 第 7 步:提交你的变更并创建一个补丁 +**第 7 步:提交你的变更并创建一个补丁。** + ``` git add - git commit -s -v - git format-patch -o /tmp/ HEAD^ - ``` ![](https://opensource.com/sites/default/files/karnik_figure2.png) -主题是由冒号分隔的文件名组成,接下来是使用祈使语态来描述补丁做了什么。空行之后是强制规定的 `off` 标记,最后是你的补丁的 `diff` 信息。 +主题是由冒号分隔的文件名组成,跟着是使用祈使语态来描述补丁做了什么。空行之后是强制的 `signed off` 标记,最后是你的补丁的 `diff` 信息。 下面是另外一个简单补丁的示例: ![](https://opensource.com/sites/default/files/karnik_figure3.png) -接下来,[使用 email 
从命令行][2](在本例子中使用的是 Mutt)发送这个补丁: +接下来,[从命令行使用邮件][2](在本例子中使用的是 Mutt)发送这个补丁: + ``` mutt -H /tmp/0001- - ``` 使用 [get_maintainer.pl 脚本][11],去了解你的补丁应该发送给哪位维护者的列表。 - ### 提交你的第一个补丁之前,你应该知道的事情 - * [Greg Kroah-Hartman](3) 的 [staging tree][4] 是提交你的 [第一个补丁][1] 的最好的地方,因为他更容易接受新贡献者的补丁。在你熟悉了补丁发送流程以后,你就可以去发送复杂度更高的子系统专用的补丁。 +* [Greg Kroah-Hartman](3) 的 [staging tree][4] 是提交你的 [第一个补丁][1] 的最好的地方,因为他更容易接受新贡献者的补丁。在你熟悉了补丁发送流程以后,你就可以去发送复杂度更高的子系统专用的补丁。 +* 你也可以从纠正代码中的编码风格开始。想学习更多关于这方面的内容,请阅读 [Linux 内核编码风格文档][5]。 +* [checkpatch.pl][6] 脚本可以帮你检测编码风格方面的错误。例如,运行如下的命令:`perl scripts/checkpatch.pl -f drivers/staging/android/* | less` +* 你可以去补全开发者留下的 TODO 注释中未完成的内容:`find drivers/staging -name TODO` +* [Coccinelle][7] 是一个模式匹配的有用工具。 +* 阅读 [归档的内核邮件][8]。 +* 为找到灵感,你可以去遍历 [linux.git 日志][9]去查看以前的作者的提交内容。 +* 注意:不要与你的补丁的审核者在邮件顶部交流!下面就是一个这样的例子: - * 你也可以从纠正代码中的编码风格开始。想学习更多关于这方面的内容,请阅读 [Linux 内核编码风格文档][5]。 - - * [checkpatch.pl][6] 脚本可以检测你的编码风格方面的错误。例如,运行如下的命令: - - ``` - perl scripts/checkpatch.pl -f drivers/staging/android/* | less + **错误的方式:** ``` + Chris, + Yes let’s schedule the meeting tomorrow, on the second floor. + + > On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: + > Hey John, I had some questions: + > 1. Do you want to schedule the meeting tomorrow? + > 2. On which floor in the office? + > 3. What time is suitable to you? +``` + (注意那最后一个问题,在回复中无意中落下了。) + + **正确的方式:** - * 你可以去补全开发者留下的 TODO 注释中未完成的内容: - ``` - find drivers/staging -name TODO ``` + Chris, + See my answers below... - * [Coccinelle][7] 是一个模式匹配的有用工具。 + > On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: + > Hey John, I had some questions: + > 1. Do you want to schedule the meeting tomorrow? + Yes tomorrow is fine. + > 2. On which floor in the office? + Let's keep it on the second floor. + > 3. What time is suitable to you? + 09:00 am would be alright. 
+``` + (所有问题全部回复,并且这种方式还保存了阅读的时间。) +* [Eudyptula challenge][10] 是学习内核基础知识的非常好的方式。 - * 阅读 [归档的内核邮件][8]。 - - * 为找到灵感,你可以去遍历 [linux.git log][9] 查看以前的作者的提交内容。 - - * 注意:不要为了评估你的补丁而在社区置顶帖子!下面就是一个这样的例子: - -**错误的方式:** - -Chris, -_Yes let’s schedule the meeting tomorrow, on the second floor._ - -> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: -> Hey John, I had some questions: -> 1\. Do you want to schedule the meeting tomorrow? -> 2\. On which floor in the office? -> 3\. What time is suitable to you? - -(注意那最后一个问题,在回复中无意中落下了。) - -**正确的方式:** - -Chris, -See my answers below... - -> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: -> Hey John, I had some questions: -> 1\. Do you want to schedule the meeting tomorrow? -_Yes tomorrow is fine._ -> 2\. On which floor in the office? -_Let's keep it on the second floor._ -> 3\. What time is suitable to you? -_09:00 am would be alright._ - -(所有问题全部回复,并且这种方式还保存了阅读的时间。) - - * [Eudyptula challenge][10] 是学习内核基础知识的非常好的方式。 - - -想学习更多内容,阅读 [KernelNewbies 第一个内核补丁教程][1]。之后如果你还有任何问题,可以在 [kernelnewbies 邮件列表][12] 或者 [#kernelnewbies IRC channel][13] 中提问。 +想学习更多内容,阅读 [KernelNewbies 的第一个内核补丁教程][1]。之后如果你还有任何问题,可以在 [kernelnewbies 邮件列表][12] 或者 [#kernelnewbies IRC channel][13] 中提问。 -------------------------------------------------------------------------------- @@ -153,7 +133,7 @@ via: https://opensource.com/article/18/8/first-linux-kernel-patch 作者:[Sayli Karnik][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2eb3374e9605513ab70e08fb4486873092472eb1 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 12 Oct 2018 23:23:35 +0800 Subject: [PATCH 394/736] PUB:20180821 A checklist for submitting your first Linux kernel patch.md @qhwdw https://linux.cn/article-10109-1.html --- ...21 A checklist for submitting your first Linux kernel patch.md | 0 1 file 
changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180821 A checklist for submitting your first Linux kernel patch.md (100%) diff --git a/translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md b/published/20180821 A checklist for submitting your first Linux kernel patch.md similarity index 100% rename from translated/tech/20180821 A checklist for submitting your first Linux kernel patch.md rename to published/20180821 A checklist for submitting your first Linux kernel patch.md From 29780b0566bca37aa47403bcb471a07ec60e1253 Mon Sep 17 00:00:00 2001 From: chenliang Date: Fri, 12 Oct 2018 23:59:35 +0800 Subject: [PATCH 395/736] translated by Flowsnow --- ...ith Minikube- Kubernetes on your laptop.md | 162 ------------------ ...ith Minikube- Kubernetes on your laptop.md | 123 +++++++++++++ 2 files changed, 123 insertions(+), 162 deletions(-) delete mode 100644 sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md create mode 100644 translated/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md diff --git a/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md b/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md deleted file mode 100644 index 241b5978c4..0000000000 --- a/sources/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md +++ /dev/null @@ -1,162 +0,0 @@ -translating by Flowsnow - -Getting started with Minikube: Kubernetes on your laptop -====== -A step-by-step guide for running Minikube. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ) - -Minikube is advertised on the [Hello Minikube][1] tutorial page as a simple way to run Kubernetes for Docker. While that documentation is very informative, it is primarily written for MacOS. 
You can dig deeper for instructions for Windows or a Linux distribution, but they are not very clear. And much of the documentation—like one on [installing drivers for Minikube][2]—is targeted at Debian/Ubuntu users. - -### Prerequisites - - 1. You have [installed Docker][3]. - - 2. Your computer is an RHEL/CentOS/Fedora-based workstation. - - 3. You have [installed a working KVM2 hypervisor][4]. - - 4. You have a working **docker-machine-driver-kvm2**. The following commands will install the driver: - -``` - curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \ - && chmod +x docker-machine-driver-kvm2 \ - && sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \ - && rm docker-machine-driver-kvm2 -``` - -### Download, install, and start Minikube - - 1. Create a directory for the two files you will download: [minikube][5] and [kubectl][6]. - - - 2. Open a terminal window and run the following command to install minikube. - -``` -curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 - -``` - -Note that the minikube version (e.g., minikube-linux-amd64) may differ based on your computer's specs. - - - - 3. **chmod** to make it executable. - -``` -chmod +x minikube - -``` - - - - 4. Move the file to the **/usr/local/bin** path so you can run it as a command. - -``` -mv minikube /usr/local/bin - -``` - - - - 5. Install kubectl using the following command (similar to the minikube installation process). - -``` -curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl - -``` - -Use the **curl** command to determine the latest version of Kubernetes. - - - - 6. **chmod** to make kubectl executable. - -``` -chmod +x kubectl - -``` - - - - 7. Move kubectl to the **/usr/local/bin** path to run it as a command. - -``` -mv kubectl /usr/local/bin - -``` - - - - 8. 
Run **minikube start**. To do so, you need to have a hypervisor available. I used KVM2, and you can also use Virtualbox. Make sure to run the following command as a user instead of root so the configuration will be stored for the user instead of root. - -``` -minikube start --vm-driver=kvm2 - -``` - -It can take quite a while, so wait for it. - - - - 9. Minikube should download and start. Use the following command to make sure it was successful. - -``` -cat ~/.kube/config - -``` - - - - 10. Execute the following command to run Minikube as the context. The context is what determines which cluster kubectl is interacting with. You can see all your available contexts in the ~/.kube/config file. - -``` -kubectl config use-context minikube - -``` - - - - 11. Run the **config** file command again to check that context Minikube is there. - -``` -cat ~/.kube/config - -``` - - - - 12. Finally, run the following command to open a browser with the Kubernetes dashboard. - -``` -minikube dashboard - -``` - - - - -This guide aims to make things easier for RHEL/Fedora/CentOS-based operating system users. - -Now that Minikube is up and running, read [Running Kubernetes Locally via Minikube][7] to start using it. 
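Both binaries above go through the same three motions: download, `chmod +x`, move onto `PATH`. As a minimal sketch (not from the article; `install_binary` and `demo-tool` are made-up names), the chmod-and-move step can be factored into one helper and exercised with a stand-in file instead of a real download:

```shell
#!/bin/sh
# install_binary: mark a downloaded file executable and move it into a
# directory that is (or will be put) on PATH. Mirrors steps 3-4 and 6-7 above.
install_binary() {
    file=$1
    bindir=$2
    chmod +x "$file" && mv "$file" "$bindir/"
}

# Demo with a stand-in file instead of a real curl download:
bindir=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > ./demo-tool
install_binary ./demo-tool "$bindir"
"$bindir/demo-tool"    # prints: ok
```

The same helper would apply unchanged to the downloaded `minikube` and `kubectl` files, with `/usr/local/bin` as the target directory (run via `sudo` in that case).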
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/getting-started-minikube - -作者:[Bryant Son][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[b]: https://github.com/lujun9972 -[1]: https://kubernetes.io/docs/tutorials/hello-minikube -[2]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md -[3]: https://docs.docker.com/install -[4]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver -[5]: https://github.com/kubernetes/minikube/releases -[6]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl -[7]: https://kubernetes.io/docs/setup/minikube diff --git a/translated/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md b/translated/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md new file mode 100644 index 0000000000..84f1187a32 --- /dev/null +++ b/translated/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md @@ -0,0 +1,123 @@ +Minikube入门:笔记本上的Kubernetes +====== +运行Minikube的分步指南。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ) + +在[Hello Minikube][1]教程页面上Minikube被宣传为基于Docker运行Kubernetes的一种简单方法。 虽然该文档非常有用,但它主要是为MacOS编写的。 你可以深入挖掘在Windows或某个Linux发行版上的使用说明,但它们不是很清楚。 许多文档都是针对Debian / Ubuntu用户的,比如[安装Minikube的驱动程序][2]。 + +### 先决条件 + +1. 你已经[安装了Docker][3]。 +2. 你的计算机是一个RHEL / CentOS / 基于Fedora的工作站。 +3. 你已经[安装了正常运行的KVM2虚拟机管理程序][4]。 +4. 
你有一个运行的**docker-machine-driver-kvm2**。 以下命令将安装驱动程序: + +``` +curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \ +&& chmod +x docker-machine-driver-kvm2 \ +&& sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \ +&& rm docker-machine-driver-kvm2 +``` + +### 下载,安装和启动Minikube + + 1. 为你即将下载的两个文件创建一个目录,两个文件分别是:[minikube][5]和[kubectl][6]。 + + + 2. 打开终端窗口并运行以下命令来安装minikube。 + +``` +curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 +``` + +请注意,minikube版本(例如,minikube-linux-amd64)可能因计算机的规格而有所不同。 + + 3. **chmod**加执行权限。 + +``` +chmod +x minikube +``` + + 4. 将文件移动到**/usr/local/bin**路径下,以便你能将其作为命令运行。 + +``` +mv minikube /usr/local/bin +``` + + 5. 使用以下命令安装kubectl(类似于minikube的安装过程)。 + +``` +curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl +``` + +使用**curl**命令确定最新版本的Kubernetes。 + + 6. **chmod**给kubectl加执行权限。 + +``` +chmod +x kubectl +``` + + 7. 将kubectl移动到**/usr/local/bin**路径下作为命令运行。 + +``` +mv kubectl /usr/local/bin +``` + + 8. 运行**minikube start**命令。 为此,你需要有虚拟机管理程序。 我使用过KVM2,你也可以使用Virtualbox。 确保是用户而不是root身份运行以下命令,以便为用户而不是root存储配置。 + +``` +minikube start --vm-driver=kvm2 +``` + +这可能需要一段时间,等一会。 + + 9. Minikube应该下载并启动。 使用以下命令确保成功。 + +``` +cat ~/.kube/config +``` + + 10. 执行以下命令以运行Minikube作为上下文。 上下文决定了kubectl与哪个集群交互。 你可以在~/.kube/config文件中查看所有可用的上下文。 + +``` +kubectl config use-context minikube +``` + + 11. 再次查看**config** 文件以检查 Minikube 上下文是否存在。 + +``` +cat ~/.kube/config +``` + + 12. 
最后,运行以下命令打开浏览器查看Kubernetes仪表板。 + +``` +minikube dashboard +``` + +本指南旨在使RHEL / Fedora / CentOS操作系统用户操作更轻松。 + +现在Minikube已启动并运行,请阅读[通过Minikube在本地运行Kubernetes][7]这篇官网教程开始使用它。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/getting-started-minikube + +作者:[Bryant Son][a] +选题:[lujun9972][b] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://kubernetes.io/docs/tutorials/hello-minikube +[2]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md +[3]: https://docs.docker.com/install +[4]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver +[5]: https://github.com/kubernetes/minikube/releases +[6]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl +[7]: https://kubernetes.io/docs/setup/minikube From 9f8371d5baf3df39e7b7b32fe91dbe532656826b Mon Sep 17 00:00:00 2001 From: chenliang Date: Sat, 13 Oct 2018 00:30:51 +0800 Subject: [PATCH 396/736] translated by Flowsnow --- ... Front-end For Popular Package Managers.md | 188 ------------------ ... 
Front-end For Popular Package Managers.md | 167 ++++++++++++++++ 2 files changed, 167 insertions(+), 188 deletions(-) delete mode 100644 sources/tech/20181011 A Front-end For Popular Package Managers.md create mode 100644 translated/tech/20181011 A Front-end For Popular Package Managers.md diff --git a/sources/tech/20181011 A Front-end For Popular Package Managers.md b/sources/tech/20181011 A Front-end For Popular Package Managers.md deleted file mode 100644 index 9353ba7f75..0000000000 --- a/sources/tech/20181011 A Front-end For Popular Package Managers.md +++ /dev/null @@ -1,188 +0,0 @@ -translating by Flowsnow - -A Front-end For Popular Package Managers -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-720x340.png) - -Are you a distro-hopper who likes to try new Linux OSs every few days? If so, I have something for you. Say hello to **Sysget** , a front-end for popular package managers in Unix-like operating systems. You don’t need to learn about every package managers to do basic stuffs like installing, updating, upgrading and removing packages. You just need to remember one syntax for every package manager on every Unix-like operating systems. Sysget is a wrapper script for package managers and it is written in C++. The source code is freely available on GitHub. - -Using Sysget, you can do all sorts of basic package management operations including the following: - - * Install packages, - * Update packages, - * Upgrade packages, - * Search for packages, - * Remove packages, - * Remove orphan packages, - * Update database, - * Upgrade system, - * Clear package manager cache. - - - -**An Important note to Linux learners:** - -Sysget is not going to replace the package managers and definitely not suitable for everyone. If you’re a newbie who frequently switch to new Linux OS, Sysget may help. 
It is just wrapper script that helps the distro hoppers (or the new Linux users) who become frustrated when they have to learn new commands to install, update, upgrade, search and remove packages when using different package managers in different Linux distributions. - -If you’re a Linux administrator or enthusiast who want to learn the internals of Linux, you should stick with your distribution’s package manager and learn to use it well. - -### Installing Sysget - -Installing sysget is trivial. Go to the [**releases page**][1] and download latest Sysget binary and install it as shown below. As of writing this guide, the latest version was 1.2. - -``` -$ sudo wget -O /usr/local/bin/sysget https://github.com/emilengler/sysget/releases/download/v1.2/sysget - -$ sudo mkdir -p /usr/local/share/sysget - -$ sudo chmod a+x /usr/local/bin/sysget - -``` - -### Usage - -Sysget commands are mostly same as APT package manager, so it should be easy to use for the newbies. - -When you run Sysget for the first time, you will be asked to choose the package manager you want to use. Since I am on Ubuntu, I chose **apt-get**. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-1.png) - -You must choose the right package manager depending upon the distribution you’re running. For instance, if you’re on Arch Linux, choose **pacman**. For CentOS, choose **yum**. For FreeBSD, choose **pkg**. The list of currently supported package managers are: - - 1. apt-get (Debian) - 2. xbps (Void) - 3. dnf (Fedora) - 4. yum (Enterprise Linux/Legacy Fedora) - 5. zypper (OpenSUSE) - 6. eopkg (Solus) - 7. pacman (Arch) - 8. emerge (Gentoo) - 9. pkg (FreeBSD) - 10. chromebrew (ChromeOS) - 11. homebrew (Mac OS) - 12. nix (Nix OS) - 13. snap (Independent) - 14. 
npm (Javascript, Global) - - - -Just in case you assigned a wrong package manager, you can set a new package manager using the following command: - -``` -$ sudo sysget set yum -Package manager changed to yum - -``` - -Just make sure you have chosen your native package manager. - -Now, you can perform the package management operations as the way you do using your native package manager. - -To install a package, for example Emacs, simply run: - -``` -$ sudo sysget install emacs - -``` - -The above command will invoke the native package manager (In my case it is “apt-get”) and install the given package. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Install-package-using-Sysget.png) - -Similarly, to remove a package, simply run: - -``` -$ sudo sysget remove emacs - -``` - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Remove-package-using-Sysget.png) - -Update software repository (database): - -``` -$ sudo sysget update - -``` - -**Search for a specific package:** - -``` -$ sudo sysget search emacs - -``` - -**Upgrade a single package:** - -``` -$ sudo sysget upgrade emacs - -``` - -**Upgrade all packages:** - -``` -$ sudo sysget upgrade - -``` - -**Remove all orphaned packages:** - -``` -$ sudo sysget autoremove - -``` - -**Clear the package manager cache:** - -``` -$ sudo sysget clean - -``` - -For more details, refer the help section: - -``` -$ sysget help -Help of sysget -sysget [OPTION] [ARGUMENT] - -search [query] search for a package in the resporitories -install [package] install a package from the repos -remove [package] removes a package -autoremove removes not needed packages (orphans) -update update the database -upgrade do a system upgrade -upgrade [package] upgrade a specific package -clean clean the download cache -set [NEW MANAGER] set a new package manager - -``` - -Please remember that the sysget syntax is same for all package managers in different Linux distributions. 
You don’t need to memorize the commands for each package manager. - -Again, I must tell you Sysget isn’t a replacement for a package manager. It is just wrapper for popular package managers in Unix-like systems and it performs the basic package management operations only. - -Sysget might be somewhat useful for newbies and distro-hoppers who are lazy to learn new commands for different package manager. Give it a try if you’re interested and see if it helps. - -And, that’s all for now. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/sysget-a-front-end-for-popular-package-managers/ - -作者:[SK][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://github.com/emilengler/sysget/releases diff --git a/translated/tech/20181011 A Front-end For Popular Package Managers.md b/translated/tech/20181011 A Front-end For Popular Package Managers.md new file mode 100644 index 0000000000..adbd03cbda --- /dev/null +++ b/translated/tech/20181011 A Front-end For Popular Package Managers.md @@ -0,0 +1,167 @@ +给受欢迎的包管理器加个前端 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-720x340.png) + +你是一个喜欢每隔几天尝试Linux操作系统的新发行版的人吗? 
如果是这样,我有一些东西对你有用。 来认识一下**Sysget**吧,这是类Unix操作系统中流行软件包管理器的前端。 你不需要了解每个包管理器就能执行基本的操作,例如安装、更新、升级和删除包。 你只需要记住一种语法,就可以用在每个类Unix操作系统的每个包管理器上。 Sysget是包管理器的包装脚本,它是用C++编写的。 源代码可在GitHub上免费获得。
+
+使用Sysget,你可以执行各种基本的包管理操作,包括:
+
+- 安装包,
+- 更新包,
+- 升级包,
+- 搜索包,
+- 删除包,
+- 删除孤立包,
+- 更新数据库,
+- 升级系统,
+- 清除包管理器缓存。
+
+**给Linux学习者的一个重要提示:**
+
+Sysget不会取代软件包管理器,也绝对不适合所有人。 如果你是经常切换新Linux操作系统的新手,Sysget可能会有所帮助。 在不同的Linux发行版中使用不同的软件包管理器时,必须学习新的命令来安装、更新、升级、搜索和删除软件包,这会让人沮丧,而Sysget就是为频繁更换发行版的用户(或Linux新用户)准备的包装脚本。
+
+如果你是Linux管理员或想深入学习Linux的爱好者,你应该坚持使用你的发行版自带的软件包管理器并学会好好使用它。
+
+### 安装Sysget
+
+安装sysget很简单。 转到[**发布页面**][1],下载最新的Sysget二进制文件,然后按如下所示进行安装。 在编写本指南时,sysget最新版本为1.2。
+
+```
+$ sudo wget -O /usr/local/bin/sysget https://github.com/emilengler/sysget/releases/download/v1.2/sysget
+$ sudo mkdir -p /usr/local/share/sysget
+$ sudo chmod a+x /usr/local/bin/sysget
+```
+
+### 用法
+
+Sysget命令与APT包管理器的命令大致相同,因此新手应该很容易上手。
+
+当你第一次运行Sysget时,系统会要求你选择要使用的包管理器。 由于我用的是Ubuntu,我选择了**apt-get**。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-1.png)
+
+你必须根据正在运行的发行版选择正确的包管理器。 例如,如果你使用的是Arch Linux,请选择**pacman**。 对于CentOS,请选择**yum**。 对于FreeBSD,请选择**pkg**。 当前支持的包管理器列表如下:
+
+ 1. apt-get (Debian)
+ 2. xbps (Void)
+ 3. dnf (Fedora)
+ 4. yum (Enterprise Linux/Legacy Fedora)
+ 5. zypper (OpenSUSE)
+ 6. eopkg (Solus)
+ 7. pacman (Arch)
+ 8. emerge (Gentoo)
+ 9. pkg (FreeBSD)
+ 10. chromebrew (ChromeOS)
+ 11. homebrew (Mac OS)
+ 12. nix (Nix OS)
+ 13. snap (Independent)
+ 14. 
npm (Javascript, Global)
+
+如果你选错了包管理器,可以使用以下命令设置新的包管理器:
+
+```
+$ sudo sysget set yum
+Package manager changed to yum
+```
+
+只需确保你选择的是系统本机的包管理器。
+
+现在,你可以像使用本机包管理器一样执行包管理操作。
+
+要安装软件包,例如Emacs,只需运行:
+
+```
+$ sudo sysget install emacs
+```
+
+上面的命令将调用本机包管理器(在我的例子中是“apt-get”)并安装给定的包。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Install-package-using-Sysget.png)
+
+同样,要删除包,只需运行:
+
+```
+$ sudo sysget remove emacs
+```
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Remove-package-using-Sysget.png)
+
+**更新软件仓库(数据库):**
+
+```
+$ sudo sysget update
+```
+
+**搜索特定包:**
+
+```
+$ sudo sysget search emacs
+```
+
+**升级单个包:**
+
+```
+$ sudo sysget upgrade emacs
+```
+
+**升级所有包:**
+
+```
+$ sudo sysget upgrade
+```
+
+**移除所有孤立的包:**
+
+```
+$ sudo sysget autoremove
+```
+
+**清理包管理器的缓存:**
+
+```
+$ sudo sysget clean
+```
+
+有关更多详细信息,请参阅帮助部分:
+
+```
+$ sysget help
+Help of sysget
+sysget [OPTION] [ARGUMENT]
+
+search [query] search for a package in the resporitories
+install [package] install a package from the repos
+remove [package] removes a package
+autoremove removes not needed packages (orphans)
+update update the database
+upgrade do a system upgrade
+upgrade [package] upgrade a specific package
+clean clean the download cache
+set [NEW MANAGER] set a new package manager
+```
+
+请记住,sysget的语法在不同Linux发行版的所有包管理器中都是相同的。 你不需要记住每个包管理器各自的命令。
+
+同样,我必须告诉你,Sysget不是包管理器的替代品。 它只是类Unix系统中流行包管理器的包装器,只执行基本的包管理操作。
+
+对于懒得学习不同包管理器新命令的新手和频繁更换发行版的用户来说,Sysget可能有些用处。 如果你有兴趣,就试一试,看看它是否有帮助。
+
+而且,这就是本次的全部内容了。 更多干货即将到来。 敬请关注!
+
+祝快乐!
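附:为了说明这类“统一前端”的基本思路,下面给出一个极简的 shell 示意脚本。注意:这只是一个假设性的示例,并非 sysget 的真实源码;其中的 `translate` 函数名和打印的命令映射仅用于演示“把统一的动词翻译成本机包管理器命令”的做法,为安全起见它只打印命令(dry-run),不会真正执行安装或删除。

```shell
#!/bin/sh
# 假设性示例:演示包装脚本如何把统一的动词(install/remove)
# 映射为所选包管理器的本机命令。这里只打印命令,不执行。

translate() {
  pm="$1"; verb="$2"; pkg="$3"
  case "$pm:$verb" in
    apt-get:install) echo "sudo apt-get install $pkg" ;;
    pacman:install)  echo "sudo pacman -S $pkg" ;;
    dnf:install)     echo "sudo dnf install $pkg" ;;
    apt-get:remove)  echo "sudo apt-get remove $pkg" ;;
    pacman:remove)   echo "sudo pacman -Rs $pkg" ;;
    dnf:remove)      echo "sudo dnf remove $pkg" ;;
    *) echo "unsupported: $pm $verb" >&2; return 1 ;;
  esac
}

# 同一条“sysget 式”操作,在不同发行版上对应不同的本机命令:
translate apt-get install emacs   # 打印:sudo apt-get install emacs
translate pacman  install emacs   # 打印:sudo pacman -S emacs
```

可以看到,这类工具的核心只是一张命令映射表,这也解释了为什么它只能覆盖各个包管理器公共的基本操作。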
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/sysget-a-front-end-for-popular-package-managers/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/emilengler/sysget/releases From 9220f135197573ee0d118ede548319c862ac0ff1 Mon Sep 17 00:00:00 2001 From: chenliang Date: Sat, 13 Oct 2018 00:54:32 +0800 Subject: [PATCH 397/736] translating by Flowsnow --- sources/tech/20171214 Peeking into your Linux packages.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171214 Peeking into your Linux packages.md b/sources/tech/20171214 Peeking into your Linux packages.md index d354d79b1b..cd1f354250 100644 --- a/sources/tech/20171214 Peeking into your Linux packages.md +++ b/sources/tech/20171214 Peeking into your Linux packages.md @@ -1,3 +1,5 @@ +translating by Flowsnow + Peeking into your Linux packages ====== Do you ever wonder how many _thousands_ of packages are installed on your Linux system? And, yes, I said "thousands." Even a fairly modest Linux system is likely to have well over a thousand packages installed. And there are many ways to get details on what they are. 
From 9b8b05ff7d7ea0c5709cde7899128b3a543f7cd7 Mon Sep 17 00:00:00 2001 From: chenliang Date: Sat, 13 Oct 2018 00:56:10 +0800 Subject: [PATCH 398/736] translating by Flowsnow --- .../20180615 How To Rename Multiple Files At Once In Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md b/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md index d03dd4527b..f5c36573be 100644 --- a/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md +++ b/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md @@ -1,3 +1,5 @@ +translating by Flowsnow + How To Rename Multiple Files At Once In Linux ====== From 543ad11738ef6429923aa4b0fcb90adaea964f5a Mon Sep 17 00:00:00 2001 From: chenliang Date: Sat, 13 Oct 2018 01:09:11 +0800 Subject: [PATCH 399/736] translating by Flowsnow --- .../20180215 Build a bikesharing app with Redis and Python.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180215 Build a bikesharing app with Redis and Python.md b/sources/tech/20180215 Build a bikesharing app with Redis and Python.md index 06e4c6949a..d3232a0b4c 100644 --- a/sources/tech/20180215 Build a bikesharing app with Redis and Python.md +++ b/sources/tech/20180215 Build a bikesharing app with Redis and Python.md @@ -1,3 +1,5 @@ +translating by Flowsnow + Build a bikesharing app with Redis and Python ====== From b5d0e983dd76f11b3bf3d249b1f1ed1b1075cefa Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 13 Oct 2018 11:00:05 +0800 Subject: [PATCH 400/736] PRF:20181001 How to Install Pip on Ubuntu.md @Flowsnow --- .../20181001 How to Install Pip on Ubuntu.md | 75 ++++++++++--------- 1 file changed, 38 insertions(+), 37 deletions(-) diff --git a/translated/tech/20181001 How to Install Pip on Ubuntu.md b/translated/tech/20181001 How to Install Pip on Ubuntu.md index 01801c078d..c873c79960 100644 --- a/translated/tech/20181001 How to Install Pip on 
Ubuntu.md +++ b/translated/tech/20181001 How to Install Pip on Ubuntu.md @@ -1,50 +1,51 @@ -如何在Ubuntu上安装Pip +如何在 Ubuntu 上安装 pip ====== -**Pip是一个命令行工具,允许你安装Python编写的软件包。 学习如何在Ubuntu上安装Pip以及如何使用它来安装Python应用程序。** -有许多方法可以[在Ubuntu上安装软件][1]。 你可以从软件中心安装应用程序,也可以从下载的DEB文件,PPA(LCTT译者注:PPA即Personal Package Archives,个人软件包集),[Snap软件包][2],也可以使用[使用Flatpak][3],使用[AppImage][4],甚至从旧的源代码安装。 +**`pip` 是一个命令行工具,允许你安装 Python 编写的软件包。 学习如何在 Ubuntu 上安装 `pip` 以及如何使用它来安装 Python 应用程序。** -还有一种方法可以在[Ubuntu][5]中安装软件包。 它被称为Pip,你可以使用它来安装基于Python的应用程序。 +有许多方法可以[在 Ubuntu 上安装软件][1]。 你可以从软件中心安装应用程序,也可以从下载的 DEB 文件、PPA(LCTT 译注:PPA 即 Personal Package Archives,个人软件包集)、[Snap 软件包][2],也可以使用 [Flatpak][3]、使用 [AppImage][4],甚至用旧的源代码安装方式。 -### 什么是Pip +还有一种方法可以在 [Ubuntu][5] 中安装软件包。 它被称为 `pip`,你可以使用它来安装基于 Python 的应用程序。 -[Pip][6]代表“Pip Installs Packages”。 [Pip][7]是一个基于命令行的包管理系统。 用于安装和管理[Python语言][8]编写的软件。 +### 什么是 pip -你可以使用Pip来安装Python包索引([PyPI][9])中列出的包。 +[pip][6] 代表 “pip Installs Packages”。 [pip][7] 是一个基于命令行的包管理系统。 用于安装和管理 [Python 语言][8]编写的软件。 -作为软件开发人员,你可以使用pip为你自己的Python项目安装各种Python模块和包。 +你可以使用 `pip` 来安装 Python 包索引([PyPI][9])中列出的包。 -作为最终用户,你可能需要使用pip来安装一些Python开发的并且可以使用pip轻松安装的应用程序。 一个这样的例子是[Stress Terminal][10]应用程序,你可以使用pip轻松安装。 +作为软件开发人员,你可以使用 `pip` 为你自己的 Python 项目安装各种 Python 模块和包。 -让我们看看如何在Ubuntu和其他基于Ubuntu的发行版上安装pip。 +作为最终用户,你可能需要使用 `pip` 来安装一些 Python 开发的并且可以使用 `pip` 轻松安装的应用程序。 一个这样的例子是 [Stress Terminal][10] 应用程序,你可以使用 `pip` 轻松安装。 -### 如何在Ubuntu上安装Pip +让我们看看如何在 Ubuntu 和其他基于 Ubuntu 的发行版上安装 `pip`。 + +### 如何在 Ubuntu 上安装 pip ![Install pip on Ubuntu Linux][11] -默认情况下,Pip未安装在Ubuntu上。 你必须安装它。 在Ubuntu上安装pip非常简单。 我马上展示给你。 +默认情况下,`pip` 未安装在 Ubuntu 上。 你必须首先安装它才能使用。 在 Ubuntu 上安装 `pip` 非常简单。 我马上展示给你。 -Ubuntu 18.04默认安装了Python 2和Python 3。 因此,你应该为两个Python版本安装pip。 +Ubuntu 18.04 默认安装了 Python 2 和 Python 3。 因此,你应该为两个 Python 版本安装 `pip`。 -Pip,默认情况下是指Python 2。pip3代表Python 3中的Pip。 +`pip`,默认情况下是指 Python 2。`pip3` 代表 Python 3 中的 pip。 -注意:我在本教程中使用的是Ubuntu 18.04。 但是这里的教程应该适用于其他版本,如Ubuntu 
16.04,18.10等。你也可以在基于Ubuntu的其他Linux发行版上使用相同的命令,如Linux Mint,Linux Lite,Xubuntu,Kubuntu等。 +注意:我在本教程中使用的是 Ubuntu 18.04。 但是这里的教程应该适用于其他版本,如Ubuntu 16.04、18.10 等。你也可以在基于 Ubuntu 的其他 Linux 发行版上使用相同的命令,如 Linux Mint、Linux Lite、Xubuntu、Kubuntu 等。 -#### 为Python 2安装pip +#### 为 Python 2 安装 pip -首先,确保已经安装了Python 2。 在Ubuntu上,可以使用以下命令进行验证。 +首先,确保已经安装了 Python 2。 在 Ubuntu 上,可以使用以下命令进行验证。 ``` python2 --version ``` -如果没有错误并且显示了Python版本的有效输出,则说明安装了Python 2。 所以现在你可以使用这个命令为Python 2安装pip: +如果没有错误并且显示了 Python 版本的有效输出,则说明安装了 Python 2。 所以现在你可以使用这个命令为 Python 2 安装 `pip`: ``` sudo apt install python-pip ``` -它将使用它安装pip和许多其他的依赖项。 安装完成后,请确认你已正确安装了pip。 +这将安装 `pip` 和它的许多其他依赖项。 安装完成后,请确认你已正确安装了 `pip`。 ``` pip --version @@ -56,25 +57,25 @@ pip --version pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7) ``` -这意味着你已经成功在Ubuntu上安装了pip。 +这意味着你已经成功在 Ubuntu 上安装了 `pip`。 -#### 为Python 3安装pip +#### 为 Python 3 安装 pip -你必须确保在Ubuntu上安装了Python 3。 可以使用以下命令检查一下: +你必须确保在 Ubuntu 上安装了 Python 3。 可以使用以下命令检查一下: ``` python3 --version ``` -如果显示了像Python 3.6.6这样的数字,则说明Python 3在你的Linux系统上安装好了。 +如果显示了像 Python 3.6.6 这样的数字,则说明 Python 3 在你的 Linux 系统上安装好了。 -现在,你可以使用以下命令安装pip3: +现在,你可以使用以下命令安装 `pip3`: ``` sudo apt install python3-pip ``` -你应该使用以下命令验证pip3是否已正确安装: +你应该使用以下命令验证 `pip3` 是否已正确安装: ``` pip3 --version @@ -86,19 +87,19 @@ pip3 --version pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6) ``` -这意味着pip3已成功安装在你的系统上。 +这意味着 `pip3` 已成功安装在你的系统上。 -### 如何使用Pip命令 +### 如何使用 pip 命令 -现在你已经安装了pip,让我们快速看一些基本的pip命令。 这些命令将帮助你使用pip命令来搜索,安装和删除Python包。 +现在你已经安装了 `pip`,让我们快速看一些基本的 `pip` 命令。 这些命令将帮助你使用 `pip` 命令来搜索、安装和删除 Python 包。 -要从Python包索引PypI中搜索包,可以使用以下pip命令: +要从 Python 包索引 PyPI 中搜索包,可以使用以下 `pip` 命令: ``` pip search ``` -例如,如果你搜索stress这个词,将会显示名称或描述中包含字符串'stress'的所有包。 +例如,如果你搜索“stress”这个词,将会显示名称或描述中包含字符串“stress”的所有包。 ``` pip search stress @@ -124,23 +125,23 @@ fsm_effective_stress (1.0.0) - Python library that uses the rheological-dynamica processpathway (0.3.11) - A nifty little toolkit to create stress-free, frustrationless 
image processing pathways from your webcam for computer vision experiments. Or observing your cat. ``` -如果要使用pip安装应用程序,可以按以下方式使用它: +如果要使用 `pip` 安装应用程序,可以按以下方式使用它: ``` pip install ``` -Pip不支持使用tab键补全包名,因此包名称应该是准确的。 它将下载所有必需的文件并安装该软件包。 +`pip` 不支持使用 tab 键补全包名,因此包名称需要准确指定。 它将下载所有必需的文件并安装该软件包。 -如果要删除通过pip安装的Python包,可以使用pip中的remove选项。 +如果要删除通过 `pip` 安装的 Python 包,可以使用 `pip` 中的 `uninstall` 选项。 ``` pip uninstall ``` -你可以在上面的命令中使用pip3代替pip。 +你可以在上面的命令中使用 `pip3` 代替 `pip`。 -我希望这个快速提示可以帮助你在Ubuntu上安装pip。 如果你有任何问题或建议,请在下面的评论部分告诉我。 +我希望这个快速提示可以帮助你在 Ubuntu 上安装 `pip`。 如果你有任何问题或建议,请在下面的评论部分告诉我。 -------------------------------------------------------------------------------- @@ -149,7 +150,7 @@ via: https://itsfoss.com/install-pip-ubuntu/ 作者:[Abhishek Prakash][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -159,7 +160,7 @@ via: https://itsfoss.com/install-pip-ubuntu/ [3]: https://itsfoss.com/flatpak-guide/ [4]: https://itsfoss.com/use-appimage-linux/ [5]: https://www.ubuntu.com/ -[6]: https://en.wikipedia.org/wiki/Pip_(package_manager) +[6]: https://en.wikipedia.org/wiki/pip_(package_manager) [7]: https://pypi.org/project/pip/ [8]: https://www.python.org/ [9]: https://pypi.org/ From 445fa2db90b2c019b45c032e60722981cc03f12d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 13 Oct 2018 11:00:45 +0800 Subject: [PATCH 401/736] PUB:20181001 How to Install Pip on Ubuntu.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @Flowsnow https://linux.cn/article-10110-1.html 排版上再注意点,比如中英文之间留个空格。 --- .../tech => published}/20181001 How to Install Pip on Ubuntu.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20181001 How to Install Pip on Ubuntu.md (100%) diff --git a/translated/tech/20181001 How to 
Install Pip on Ubuntu.md b/published/20181001 How to Install Pip on Ubuntu.md similarity index 100% rename from translated/tech/20181001 How to Install Pip on Ubuntu.md rename to published/20181001 How to Install Pip on Ubuntu.md From 49b2c2c7d907b72e16e769aba60b243da9270e64 Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Sat, 13 Oct 2018 12:06:00 +0800 Subject: [PATCH 402/736] translating request to translate --- .../20181009 How To Create And Maintain Your Own Man Pages.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md index 6d78d132e2..cb93af4b92 100644 --- a/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md +++ b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md @@ -1,3 +1,4 @@ +Translating by way-ww How To Create And Maintain Your Own Man Pages ====== From 47dabb0704b22bc4baa28feca81e7fc59e4b3b71 Mon Sep 17 00:00:00 2001 From: David Dai Date: Sat, 13 Oct 2018 13:13:43 +0800 Subject: [PATCH 403/736] StdioA request for translating Design faster web pages, part 1: Image compression --- ...181010 Design faster web pages, part 1- Image compression.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181010 Design faster web pages, part 1- Image compression.md b/sources/tech/20181010 Design faster web pages, part 1- Image compression.md index f78912cb81..758e3d0710 100644 --- a/sources/tech/20181010 Design faster web pages, part 1- Image compression.md +++ b/sources/tech/20181010 Design faster web pages, part 1- Image compression.md @@ -1,3 +1,5 @@ +Translating by StdioA + Design faster web pages, part 1: Image compression ====== From 93ca14159cb99d858f887971855bab5286b43c23 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 13 Oct 2018 13:45:20 +0800 Subject: [PATCH 404/736] PRF:20180412 A Desktop GUI Application For 
NPM.md @geekpi --- ...80412 A Desktop GUI Application For NPM.md | 67 +++++++++---------- 1 file changed, 31 insertions(+), 36 deletions(-) diff --git a/translated/tech/20180412 A Desktop GUI Application For NPM.md b/translated/tech/20180412 A Desktop GUI Application For NPM.md index 99928a08f2..ef72a39fe0 100644 --- a/translated/tech/20180412 A Desktop GUI Application For NPM.md +++ b/translated/tech/20180412 A Desktop GUI Application For NPM.md @@ -1,9 +1,9 @@ -NPM 的桌面 GUI 程序 +ndm:NPM 的桌面 GUI 程序 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png) -NPM 是 **N** ode **P** ackage **M** anager (node 包管理器)的缩写,它是用于安装 NodeJS 软件包或模块的命令行软件包管理器。我们发布过一个指南描述了如何[**使用 NPM 管理 NodeJS 包**][1]。你可能已经注意到,使用 Npm 管理 NodeJS 包或模块并不是什么大问题。但是,如果你不习惯用 CLI 的方式,这有一个名为 **NDM** 的桌面 GUI 程序,它可用于管理 NodeJS 程序/模块。 NDM,代表 **N** PM **D** esktop **M** anager (npm 桌面管理器),是 NPM 的免费开源图形前端,它允许我们通过简单图形桌面安装、更新、删除 NodeJS 包。 +NPM 是 **N**ode **P**ackage **M**anager (node 包管理器)的缩写,它是用于安装 NodeJS 软件包或模块的命令行软件包管理器。我们发布过一个指南描述了如何[使用 NPM 管理 NodeJS 包][1]。你可能已经注意到,使用 Npm 管理 NodeJS 包或模块并不是什么大问题。但是,如果你不习惯用 CLI 的方式,这有一个名为 **NDM** 的桌面 GUI 程序,它可用于管理 NodeJS 程序/模块。 NDM,代表 **N**PM **D**esktop **M**anager (npm 桌面管理器),是 NPM 的自由开源图形前端,它允许我们通过简单图形桌面安装、更新、删除 NodeJS 包。 在这个简短的教程中,我们将了解 Linux 中的 Ndm。 @@ -11,59 +11,58 @@ NPM 是 **N** ode **P** ackage **M** anager (node 包管理器)的缩写, NDM 在 AUR 中可用,因此你可以在 Arch Linux 及其衍生版(如 Antergos 和 Manjaro Linux)上使用任何 AUR 助手程序安装。 -使用 [**Pacaur**][2]: +使用 [Pacaur][2]: + ``` $ pacaur -S ndm - ``` -使用 [**Packer**][3]: +使用 [Packer][3]: + ``` $ packer -S ndm - ``` -使用 [**Trizen**][4]: +使用 [Trizen][4]: + ``` $ trizen -S ndm - ``` -使用 [**Yay**][5]: +使用 [Yay][5]: + ``` $ yay -S ndm - ``` -使用 [**Yaourt**][6]: +使用 [Yaourt][6]: + ``` $ yaourt -S ndm - ``` 在基于 RHEL 的系统(如 CentOS)上,运行以下命令以安装 NDM。 + ``` $ echo "[fury] name=ndm repository baseurl=https://repo.fury.io/720kb/ enabled=1 gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && - ``` 在 Debian、Ubuntu、Linux Mint: + 
``` $ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm - ``` 也可以使用 **Linuxbrew** 安装 NDM。首先,按照以下链接中的说明安装 Linuxbrew。 安装 Linuxbrew 后,可以使用以下命令安装 NDM: + ``` $ brew update - $ brew install ndm - ``` -在其他 Linux 发行版上,进入[**NDM 发布页面**][7],下载最新版本,自行编译和安装。 +在其他 Linux 发行版上,进入 [NDM 发布页面][7],下载最新版本,自行编译和安装。 ### NDM 使用 @@ -73,15 +72,15 @@ $ brew install ndm 在这里你可以本地或全局安装 NodeJS 包/模块。 -**本地安装 NodeJS 包** +#### 本地安装 NodeJS 包 -要在本地安装软件包,首先通过单击主屏幕上的 **“Add projects”** 按钮选择项目目录,然后选择要保留项目文件的目录。例如,我选择了一个名为 **“demo”** 的目录作为我的项目目录。 +要在本地安装软件包,首先通过单击主屏幕上的 “Add projects” 按钮选择项目目录,然后选择要保留项目文件的目录。例如,我选择了一个名为 “demo” 的目录作为我的项目目录。 -单击项目目录(即 **demo**),然后单击 **Add packages** 按钮。 +单击项目目录(即 demo),然后单击 “Add packages” 按钮。 ![][10] -输入要安装的软件包名称,然后单击 **Install** 按钮。 +输入要安装的软件包名称,然后单击 “Install” 按钮。 ![][11] @@ -91,41 +90,37 @@ $ brew install ndm 同样,你可以创建单独的项目目录并在其中安装 NodeJS 模块。要查看项目中已安装模块的列表,请单击项目目录,右侧将显示软件包。 -**全局安装 NodeJS 包** +#### 全局安装 NodeJS 包 -要全局安装 NodeJS 包,请单击主界面左侧的 **Globals** 按钮。然后,单击 “Add packages” 按钮,输入包的名称并单击 “Install” 按钮。 +要全局安装 NodeJS 包,请单击主界面左侧的 “Globals” 按钮。然后,单击 “Add packages” 按钮,输入包的名称并单击 “Install” 按钮。 -**管理包** +#### 管理包 单击任何已安装的包,不将在顶部看到各种选项,例如: - 1. 版本(查看已安装的版本), -  2. 最新(安装最新版本), -  3. 更新(更新当前选定的包), -  4. 卸载(删除所选包)等。 - - +1. 版本(查看已安装的版本), +2. 最新(安装最新版本), +3. 更新(更新当前选定的包), +4. 卸载(删除所选包)等。 ![][13] -NDM 还有两个选项,即 **“Update npm”** 用于将 node 包管理器更新成最新可用版本, **Doctor** 运行一组检查以确保你的 npm 安装有所需的功能管理你的包/模块。 +NDM 还有两个选项,即 “Update npm” 用于将 node 包管理器更新成最新可用版本, 而 “Doctor” 会运行一组检查以确保你的 npm 安装有所需的功能管理你的包/模块。 -### 结论 +### 总结 NDM 使安装、更新、删除 NodeJS 包的过程更加容易!你无需记住执行这些任务的命令。NDM 让我们在简单的图形界面中点击几下鼠标即可完成所有操作。对于那些懒得输入命令的人来说,NDM 是管理 NodeJS 包的完美伴侣。 干杯! 
- - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/ 作者:[SK][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) 选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 3d9b742d9d088f330a8a8a1dad5df7427d0078e5 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 13 Oct 2018 13:45:36 +0800 Subject: [PATCH 405/736] PUB:20180412 A Desktop GUI Application For NPM.md @geekpi https://linux.cn/article-10111-1.html --- .../20180412 A Desktop GUI Application For NPM.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180412 A Desktop GUI Application For NPM.md (100%) diff --git a/translated/tech/20180412 A Desktop GUI Application For NPM.md b/published/20180412 A Desktop GUI Application For NPM.md similarity index 100% rename from translated/tech/20180412 A Desktop GUI Application For NPM.md rename to published/20180412 A Desktop GUI Application For NPM.md From 6179bfcacb02151cb7837e6b84d5ae23c35e66f8 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 13 Oct 2018 14:10:16 +0800 Subject: [PATCH 406/736] PRF:20180814 Automating backups on a Raspberry Pi NAS.md @jrglinux https://linux.cn/article-10112-1.html --- ...utomating backups on a Raspberry Pi NAS.md | 130 +++++------------- 1 file changed, 31 insertions(+), 99 deletions(-) diff --git a/translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md b/translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md index 111b508245..cbb508ae8f 100644 --- a/translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md +++ b/translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md @@ -1,19 +1,16 @@ -Part-II 树莓派自建 NAS 云盘之数据自动备份 +树莓派自建 NAS 云盘之——数据自动备份 ====== +> 
把你的树莓派变成数据的安全之所。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) -在《树莓派自建 NAS 云盘》系列的 [第一篇][1] 文章中,我们讨论了建立 NAS 的一些基本步骤,添加了两块 1TB 的存储硬盘驱动(一个用于数据存储,一个用于数据备份),并且通过 网络文件系统(NFS)将数据存储盘挂载到远程终端上。本文是此系列的第二篇文章,我们将探讨数据自动备份。数据自动备份保证了数据的安全,为硬件损坏后的数据恢复提供便利以及减少了文件误操作带来的不必要的麻烦。 - - +在《树莓派自建 NAS 云盘》系列的 [第一篇][1] 文章中,我们讨论了建立 NAS 的一些基本步骤,添加了两块 1TB 的存储硬盘驱动(一个用于数据存储,一个用于数据备份),并且通过网络文件系统(NFS)将数据存储盘挂载到远程终端上。本文是此系列的第二篇文章,我们将探讨数据自动备份。数据自动备份保证了数据的安全,为硬件损坏后的数据恢复提供便利以及减少了文件误操作带来的不必要的麻烦。 ![](https://opensource.com/sites/default/files/uploads/nas_part2.png) - - ### 备份策略 -我们就从为小型 NAS 构想一个备份策略着手开始吧。我建议每天有时间节点有计划的去备份数据,以防止干扰到我们正常的访问 NAS,比如备份时间点避开正在访问 NAS 并写入文件的时间点。举个例子,你可以每天凌晨 2 点去进行数据备份。 +我们就从为小型 NAS 构想一个备份策略着手开始吧。我建议每天有时间节点、有计划的去备份数据,以防止干扰到我们正常的访问 NAS,比如备份时间点避开正在访问 NAS 并写入文件的时间点。举个例子,你可以每天凌晨 2 点去进行数据备份。 另外,你还得决定每天的备份需要被保留的时间长短,因为如果没有时间限制,存储空间很快就会被用完。一般每天的备份保留一周便可以,如果数据出了问题,你便可以很方便的从备份中恢复出来原数据。但是如果需要恢复数据到更久之前怎么办?可以将每周一的备份文件保留一个月、每个月的备份保留更长时间。让我们把每月的备份保留一年时间,每一年的备份保留更长时间、例如五年。 @@ -24,27 +21,24 @@ Part-II 树莓派自建 NAS 云盘之数据自动备份 * 每年 12 个月备份 * 每五年 5 个年备份 - 你应该还记得,我们搭建的备份盘和数据盘大小相同(每个 1 TB)。如何将不止 10 个 1TB 数据的备份从数据盘存放到只有 1TB 大小的备份盘呢?如果你创建的是完整备份,这显然不可能。因此,你需要创建增量备份,它是每一份备份都基于上一份备份数据而创建的。增量备份方式不会每隔一天就成倍的去占用存储空间,它每天只会增加一点占用空间。 以下是我的情况:我的 NAS 自 2016 年 8 月开始运行,备份盘上有 20 个备份。目前,我在数据盘上存储了 406GB 的文件。我的备份盘用了 726GB。当然,备份盘空间使用率在很大程度上取决于数据的更改频率,但正如你所看到的,增量备份不会占用 20 个完整备份所需的空间。然而,随着时间的推移,1TB 空间也可能不足以进行备份。一旦数据增长接近 1TB 限制(或任何备份盘容量),应该选择更大的备份盘空间并将数据移动转移过去。 ### 利用 rsync 进行数据备份 -利用 rsync 命令行工具可以生成完整备份。 +利用 `rsync` 命令行工具可以生成完整备份。 ``` pi@raspberrypi:~ $ rsync -a /nas/data/ /nas/backup/2018-08-01 - ``` -这段命令将挂载在 /nas/data/ 目录下的数据盘中的数据进行了完整的复制备份。备份文件保存在 /nas/backup/2018-08-01 目录下。`-a` 参数是以归档模式进行备份,这将会备份所有的元数据,例如文件的修改日期、权限、拥有者以及软连接文件。 +这段命令将挂载在 `/nas/data/` 目录下的数据盘中的数据进行了完整的复制备份。备份文件保存在 `/nas/backup/2018-08-01` 目录下。`-a` 参数是以归档模式进行备份,这将会备份所有的元数据,例如文件的修改日期、权限、拥有者以及软连接文件。 现在,你已经在 8 月 1 日创建了完整的初始备份,你将在 8 月 2 日创建第一个增量备份。 ``` pi@raspberrypi:~ $ rsync -a --link-dest 
/nas/backup/2018-08-01/ /nas/data/ /nas/backup/2018-08-02 - ``` 上面这行代码又创建了一个关于 `/nas/data` 目录中数据的备份。备份路径是 `/nas/backup/2018-08-02`。这里的参数 `--link-dest` 指定了一个备份文件所在的路径。这样,这次备份会与 `/nas/backup/2018-08-01` 的备份进行比对,只备份已经修改过的文件,未做修改的文件将不会被复制,而是创建一个到上一个备份文件中它们的硬链接。 @@ -53,142 +47,81 @@ pi@raspberrypi:~ $ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/ ![](https://opensource.com/sites/default/files/uploads/backup_flow.png) -左侧框是在进行了第二次备份后的原数据状态。中间的盒子是昨天的备份。昨天的备份中只有图片 `file1.jpg` 并没有 `file2.txt` 。右侧的框反映了今天的增量备份。增量备份命令创建昨天不存在的 `file2.txt`。由于 `file1.jpg` 自昨天以来没有被修改,所以今天创建了一个硬链接,它不会额外占用磁盘上的空间。 +左侧框是在进行了第二次备份后的原数据状态。中间的方块是昨天的备份。昨天的备份中只有图片 `file1.jpg` 并没有 `file2.txt` 。右侧的框反映了今天的增量备份。增量备份命令创建昨天不存在的 `file2.txt`。由于 `file1.jpg` 自昨天以来没有被修改,所以今天创建了一个硬链接,它不会额外占用磁盘上的空间。 ### 自动化备份 -你肯定也不想每天凌晨去输入命令进行数据备份吧。你可以创建一个任务定时去调用下面的脚本让它自动化备份 +你肯定也不想每天凌晨去输入命令进行数据备份吧。你可以创建一个任务定时去调用下面的脚本让它自动化备份。 ``` #!/bin/bash - - TODAY=$(date +%Y-%m-%d) - DATADIR=/nas/data/ - BACKUPDIR=/nas/backup/ - SCRIPTDIR=/nas/data/backup_scripts - LASTDAYPATH=${BACKUPDIR}/$(ls ${BACKUPDIR} | tail -n 1) - TODAYPATH=${BACKUPDIR}/${TODAY} - if [[ ! 
-e ${TODAYPATH} ]]; then - -        mkdir -p ${TODAYPATH} - + mkdir -p ${TODAYPATH} fi - - rsync -a --link-dest ${LASTDAYPATH} ${DATADIR} ${TODAYPATH} $@ - - ${SCRIPTDIR}/deleteOldBackups.sh - ``` -第一段代码指定了数据路径、备份路劲、脚本路径以及昨天和今天的备份路径。第二段代码调用 rsync 命令。最后一段代码执行 `deleteOldBackups.sh` 脚本,它会清除一些过期的没有必要的备份数据。如果不想频繁的调用 `deleteOldBackups.sh`,你也可以手动去执行它。 +第一段代码指定了数据路径、备份路径、脚本路径以及昨天和今天的备份路径。第二段代码调用 `rsync` 命令。最后一段代码执行 `deleteOldBackups.sh` 脚本,它会清除一些过期的没有必要的备份数据。如果不想频繁的调用 `deleteOldBackups.sh`,你也可以手动去执行它。 下面是今天讨论的备份策略的一个简单完整的示例脚本。 ``` #!/bin/bash - BACKUPDIR=/nas/backup/ - - function listYearlyBackups() { - -        for i in 0 1 2 3 4 5 - -                do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1 - -        done - + for i in 0 1 2 3 4 5 + do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1 + done } - - function listMonthlyBackups() { - -        for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 - -                do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1 - -        done - + for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 + do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1 + done } - - function listWeeklyBackups() { - -        for i in 0 1 2 3 4 - -                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")" - -        done - + for i in 0 1 2 3 4 + do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")" + done } - - function listDailyBackups() { - -        for i in 0 1 2 3 4 5 6 - -                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")" - -        done - + for i in 0 1 2 3 4 5 6 + do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")" + done } - - function getAllBackups() { - -        listYearlyBackups - -        listMonthlyBackups - -        listWeeklyBackups - -        listDailyBackups - + listYearlyBackups + 
listMonthlyBackups + listWeeklyBackups + listDailyBackups } - - function listUniqueBackups() { - -        getAllBackups | sort -u - + getAllBackups | sort -u } - - function listBackupsToDelete() { - -        ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")" - + ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")" } - - cd ${BACKUPDIR} - listBackupsToDelete | while read file_to_delete; do - -        rm -rf ${file_to_delete} - + rm -rf ${file_to_delete} done - ``` 这段脚本会首先根据你的备份策略列出所有需要保存的备份文件,然后它会删除那些再也不需要了的备份目录。 @@ -197,7 +130,6 @@ done ``` 0 2 * * * /nas/data/backup_scripts/daily.sh - ``` 有关创建定时任务请参考 [cron 创建定时任务][2]。 @@ -218,12 +150,12 @@ via: https://opensource.com/article/18/8/automate-backups-raspberry-pi 作者:[Manuel Dewald][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[jrg](https://github.com/jrglinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/ntlx -[1]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi +[1]: https://linux.cn/article-10104-1.html [2]: https://opensource.com/article/17/11/how-use-cron-linux [3]: https://nextcloud.com/ From e19fbe557cf76b033b0ad9777c97f2bb38c4330a Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 13 Oct 2018 14:10:44 +0800 Subject: [PATCH 407/736] PUB:https://linux.cn/article-10112-1.html @jrglinux https://linux.cn/article-10112-1.html --- .../20180814 Automating backups on a Raspberry Pi NAS.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180814 Automating backups on a Raspberry Pi NAS.md (100%) diff --git a/translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md b/published/20180814 Automating backups on a Raspberry Pi NAS.md similarity index 100% rename from translated/tech/20180814 Automating backups on a Raspberry Pi 
NAS.md rename to published/20180814 Automating backups on a Raspberry Pi NAS.md From e8d4fc23796d4eaac331d23fda9edb71f0f50966 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Sat, 13 Oct 2018 14:34:48 +0800 Subject: [PATCH 408/736] translated --- ...Distro With Another in Dual Boot -Guide.md | 162 ------------------ ...Distro With Another in Dual Boot -Guide.md | 155 +++++++++++++++++ 2 files changed, 155 insertions(+), 162 deletions(-) delete mode 100644 sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md create mode 100644 translated/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md diff --git a/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md b/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md deleted file mode 100644 index 0e473dbc59..0000000000 --- a/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md +++ /dev/null @@ -1,162 +0,0 @@ -HankChow translating - -How to Replace one Linux Distro With Another in Dual Boot [Guide] -====== -**If you have a Linux distribution installed, you can replace it with another distribution in the dual boot. You can also keep your personal documents while switching the distribution.** - -![How to Replace One Linux Distribution With Another From Dual Boot][1] - -Suppose you managed to [successfully dual boot Ubuntu and Windows][2]. But after reading the [Linux Mint versus Ubuntu discussion][3], you realized that [Linux Mint][4] is more suited for your needs. What would you do now? How would you [remove Ubuntu][5] and [install Mint in dual boot][6]? - -You might think that you need to uninstall [Ubuntu][7] from dual boot first and then repeat the dual booting steps with Linux Mint. Let me tell you something. You don’t need to do all of that. 
- -If you already have a Linux distribution installed in dual boot, you can easily replace it with another. You don’t have to uninstall the existing Linux distribution. You simply delete its partition and install the new distribution on the disk space vacated by the previous distribution. - -Another good news is that you may be able to keep your Home directory with all your documents and pictures while switching the Linux distributions. - -Let me show you how to switch Linux distributions. - -### Replace one Linux with another from dual boot - - - -Let me describe the scenario I am going to use here. I have Linux Mint 19 installed on my system in dual boot mode with Windows 10. I am going to replace it with elementary OS 5. I’ll also keep my personal files (music, pictures, videos, documents from my home directory) while switching distributions. - -Let’s first take a look at the requirements: - - * A system with Linux and Windows dual boot - * Live USB of Linux you want to install - * Backup of your important files in Windows and in Linux on an external disk (optional yet recommended) - - - -#### Things to keep in mind for keeping your home directory while changing Linux distribution - -If you want to keep your files from existing Linux install as it is, you must have a separate root and home directory. You might have noticed that in my [dual boot tutorials][8], I always go for ‘Something Else’ option and then manually create root and home partitions instead of choosing ‘Install alongside Windows’ option. This is where all the troubles in manually creating separate home partition pay off. - -Keeping Home on a separate partition is helpful in situations when you want to replace your existing Linux install with another without losing your files. - -Note: You must remember the exact username and password of your existing Linux install in order to use the same home directory as it is in the new distribution. 
- -If you don’t have a separate Home partition, you may create it later as well BUT I won’t recommend that. That process is slightly complicated and I don’t want you to mess up your system. - -With that much background information, it’s time to see how to replace a Linux distribution with another. - -#### Step 1: Create a live USB of the new Linux distribution - -Alright! I already mentioned it in the requirements but I still included it in the main steps to avoid confusion. - -You can create a live USB using a start up disk creator like [Etcher][9] in Windows or Linux. The process is simple so I am not going to list the steps here. - -#### Step 2: Boot into live USB and proceed to installing Linux - -Since you have already dual booted before, you probably know the drill. Plugin the live USB, restart your system and at the boot time, press F10 or F12 repeatedly to enter BIOS settings. - -In here, choose to boot from the USB. And then you’ll see the option to try the live environment or installing it immediately. - -You should start the installation procedure. When you reach the ‘Installation type’ screen, choose the ‘Something else’ option. - -![Replacing one Linux with another from dual boot][10] -Select ‘Something else’ here - -#### Step 3: Prepare the partition - -You’ll see the partitioning screen now. Look closely and you’ll see your Linux installation with Ext4 file system type. - -![Identifying Linux partition in dual boot][11] -Identify where your Linux is installed - -In the above picture, the Ext4 partition labeled as Linux Mint 19 is the root partition. The second Ext4 partition of 82691 MB is the Home partition. I [haven’t used any swap space][12] here. - -Now, if you have just one Ext4 partition, that means that your home directory is on the same partition as root. In this case, you won’t be able to keep your Home directory. I suggest that you copy the important files to an external disk else you’ll lose them forever. 
- -It’s time to delete the root partition. Select the root partition and click the – sign. This will create some free space. - -![Delete root partition of your existing Linux install][13] -Delete root partition - -When you have the free space, click on + sign. - -![Create root partition for the new Linux][14] -Create a new root partition - -Now you should create a new partition out of this free space. If you had just one root partition in your previous Linux install, you should create root and home partitions here. You can also create the swap partition if you want to. - -If you had root and home partition separately, just create a root partition from the deleted root partition. - -![Create root partition for the new Linux][15] -Creating root partition - -You may ask why did I use delete and add instead of using the ‘change’ option. It’s because a few years ago, using change didn’t work for me. So I prefer to do a – and +. Is it superstition? Maybe. - -One important thing to do here is to mark the newly created partition for format. f you don’t change the size of the partition, it won’t be formatted unless you explicitly ask it to format. And if the partition is not formatted, you’ll have issues. - -![][16] -It’s important to format the root partition - -Now if you already had a separate Home partition on your existing Linux install, you should select it and click on change. - -![Recreate home partition][17] -Retouch the already existing home partition (if any) - -You just have to specify that you are mounting it as home partition. - -![Specify the home mount point][18] -Specify the home mount point - -If you had a swap partition, you can repeat the same steps as the home partition. This time specify that you want to use the space as swap. - -At this stage, you should have a root partition (with format option selected) and a home partition (and a swap if you want to). Hit the install now button to start the installation. 
- -![Verify partitions while replacing one Linux with another][19] -Verify the partitions - -The next few screens would be familiar to you. What matters is the screen where you are asked to create user and password. - -If you had a separate home partition previously and you want to use the same home directory, you MUST use the same username and password that you had before. Computer name doesn’t matter. - -![To keep the home partition intact, use the previous user and password][20] -To keep the home partition intact, use the previous user and password - -Your struggle is almost over. You don’t have to do anything else other than waiting for the installation to finish. - -![Wait for installation to finish][21] -Wait for installation to finish - -Once the installation is over, restart your system. You’ll have a new Linux distribution or version. - -In my case, I had the entire home directory of Linux Mint 19 as it is in the elementary OS. All the videos, pictures I had remained as it is. Isn’t that nice? 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/replace-linux-from-dual-boot/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Replace-Linux-Distro-from-dual-boot.png -[2]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ -[3]: https://itsfoss.com/linux-mint-vs-ubuntu/ -[4]: https://www.linuxmint.com/ -[5]: https://itsfoss.com/uninstall-ubuntu-linux-windows-dual-boot/ -[6]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ -[7]: https://www.ubuntu.com/ -[8]: https://itsfoss.com/guide-install-elementary-os-luna/ -[9]: https://etcher.io/ -[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-1.jpg -[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-2.jpg -[12]: https://itsfoss.com/swap-size/ -[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-3.jpg -[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-4.jpg -[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-5.jpg -[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-6.jpg -[17]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-7.jpg -[18]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-8.jpg -[19]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-9.jpg -[20]: 
https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-10.jpg -[21]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-11.jpg diff --git a/translated/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md b/translated/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md new file mode 100644 index 0000000000..b5e2f97c0b --- /dev/null +++ b/translated/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md @@ -0,0 +1,155 @@ +如何在双系统引导下替换 Linux 发行版 +====== +在双系统引导的状态下,你可以将已安装的 Linux 发行版替换为另一个发行版,同时还可以保留原本的个人数据。 + +![How to Replace One Linux Distribution With Another From Dual Boot][1] + +假设你的电脑上已经[以双系统的形式安装了 Ubuntu 和 Windows][2],但经过[将 Linux Mint 与 Ubuntu 比较][3]之后,你又觉得 [Linux Mint][4] 会更适合自己的时候,你会怎样做?又该如何在[删除 Ubuntu][5] 的同时[在双系统中安装 Mint][6] 呢? + +你或许觉得应该首先从双系统中卸载 [Ubuntu][7],然后使用 Linux Mint 重新安装成双系统。但实际上并不需要这么麻烦。 + +如果你已经在双系统引导中安装了一种 Linux 发行版,就可以轻松替换成另一个发行版了,而且也不必卸载已有的 Linux 发行版,只需要删除其所在的分区,然后在腾出的磁盘空间上安装另一个 Linux 发行版就可以了。 + +与此同时,更换 Linux 发行版后,仍然可以保留原本 home 目录中的所有文件。 + +下面就来详细介绍一下。 + +### 在双系统引导中替换 Linux 发行版 + + + +这是我的演示范例。我使用双系统引导同时安装了 Windows 10 和 Linux Mint 19,然后我会把 Linux Mint 19 替换成 Elementary OS 5,同时在替换后保留我的个人文件(包括音乐、图片、视频和 home 目录中的文件)。 + +你需要做好以下这些准备: + + * 使用 Linux 和 Windows 双系统引导 + * 需要安装的 Linux 发行版的 USB live 版 + * 在外部磁盘备份 Windows 和 Linux 中的重要文件(并非必要,但建议备份一下) + + + +#### 在替换 Linux 发行版时要记住保留你的 home 目录 + +如果想让个人文件在安装新 Linux 系统的过程中不受影响,原有的 Linux 系统必须具有单独的 root 目录和 home 目录。你可能会发现我的[双系统引导教程][8]在安装过程中不选择“与 Windows 一起安装”选项,而选择“其它”选项,然后手动创建 root 和 home 分区。所以,手动创建单独的 home 分区也算是一个磨刀不误砍柴工的操作。因为如果要在不丢失文件的情况下,将现有的 Linux 发行版替换为另一个发行版,需要将 home 目录存放在一个单独的分区上。 + +不过,你必须记住现有 Linux 系统的用户名和密码,才能在新系统中继续使用原有的 home 目录。 + +如果你没有单独的 home 分区,也可以后续再进行创建。但这并不是推荐做法,因为这个过程会比较复杂,有可能会把你的系统搞乱。 + +下面来看看如何替换到另一个 Linux 发行版。 + +#### 步骤 1:为新的 Linux 发行版创建一个 USB live 版 + +尽管上文中已经提到了它,但我还是要重复一次以免忽略。 + +你可以使用 Windows 或 Linux 
中的启动盘创建器(例如 [Etcher][9])来创建 USB live 版,这个过程比较简单,这里不再详细叙述。 + +#### 步骤 2:启动 USB live 版并安装 Linux + +你应该已经使用过双系统启动,对这个过程不会陌生。使用 USB live 版重新启动系统,在启动时反复按 F10 或 F12 进入 BIOS 设置。选择从 USB 启动,就可以看到进入 live 环境或立即安装的选项。 + +在安装过程中,进入“安装类型”界面时,选择“其它”选项。 + +![Replacing one Linux with another from dual boot][10] +(在这里选择“其它”选项) + +#### 步骤 3:准备分区操作 + +下图是分区界面。仔细观察,就能找到安装了 Linux 的那个 Ext4 文件系统类型的分区。 + +![Identifying Linux partition in dual boot][11] +(确定 Linux 的安装位置) + +在上图中,标记为 Linux Mint 19 的 Ext4 分区是 root 分区,大小为 82691 MB 的第二个 Ext4 分区是 home 分区。我在这里没有使用[交换空间][12]。 + +如果你只有一个 Ext4 分区,就意味着你的 home 目录与 root 目录位于同一分区。在这种情况下,你就无法保留 home 目录中的文件了,这个时候我建议将重要文件复制到外部磁盘,否则这些文件将不会保留。 + +然后是删除 root 分区。选择 root 分区,然后点击 - 号,这个操作释放了一些磁盘空间。 + +![Delete root partition of your existing Linux install][13] +(删除 root 分区) + +磁盘空间释放出来后,点击 + 号。 + +![Create root partition for the new Linux][14] +(创建新的 root 分区) + +现在可以在释放出来的空间中创建一个新分区。如果你之前的 Linux 系统中只有一个 root 分区,就应该在这里创建 root 分区和 home 分区。如果需要,还可以创建交换分区。 + +如果你之前已经有 root 分区和 home 分区,那么只需要从已删除的 root 分区创建 root 分区就可以了。 + +![Create root partition for the new Linux][15] +(创建 root 分区) + +你可能有疑问,为什么要经过“删除”和“添加”两个过程,而不使用“更改”选项。这是因为以前使用“更改”选项好像没有效果,所以我更喜欢用 - 和 +。这是迷信吗?也许是吧。 + +这里有一个重要的步骤,对新创建的 root 分区进行格式化。在没有更改分区大小的情况下,默认是不会对分区进行格式化的。如果分区没有被格式化,之后可能会出现问题。 + +![][16] +(格式化 root 分区很重要) + +如果你原有的 Linux 系统中已经划分了单独的 home 分区,选中它并点击更改。 + +![Recreate home partition][17] +(修改已有的 home 分区) + +然后指定将其作为 home 分区挂载即可。 + +![Specify the home mount point][18] +(指定 home 分区的挂载点) + +如果你还有交换分区,可以重复与 home 分区相同的步骤,唯一不同的是要指定将空间用作交换空间。 + +现在的状态应该是有一个 root 分区(将被格式化)和一个 home 分区(如果需要,还可以使用交换分区)。点击“立即安装”可以开始安装。 + +![Verify partitions while replacing one Linux with another][19] +(检查分区情况) + +接下来的几个界面就很熟悉了,要重点注意的是创建用户和密码的步骤。如果你之前有一个单独的 home 分区,并且还想使用相同的 home 目录,那你必须使用和之前相同的用户名和密码,至于设备名称则可以任意指定。 + +![To keep the home partition intact, use the previous user and password][20] +(要保持 home 分区不变,请使用之前的用户名和密码) + +接下来只要静待安装完成,不需执行任何操作。 + +![Wait for installation to finish][21] +(等待安装完成) + +安装完成后重新启动系统,你就能使用新的 Linux 发行版。 + 
+在以上的例子中,我可以在新的 Elementary OS 中使用原有 Linux Mint 19 的整个 home 目录,并且其中所有视频和图片都原封不动。岂不美哉? + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/replace-linux-from-dual-boot/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Replace-Linux-Distro-from-dual-boot.png +[2]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ +[3]: https://itsfoss.com/linux-mint-vs-ubuntu/ +[4]: https://www.linuxmint.com/ +[5]: https://itsfoss.com/uninstall-ubuntu-linux-windows-dual-boot/ +[6]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ +[7]: https://www.ubuntu.com/ +[8]: https://itsfoss.com/guide-install-elementary-os-luna/ +[9]: https://etcher.io/ +[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-1.jpg +[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-2.jpg +[12]: https://itsfoss.com/swap-size/ +[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-3.jpg +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-4.jpg +[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-5.jpg +[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-6.jpg +[17]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-7.jpg +[18]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-8.jpg +[19]: 
https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-9.jpg +[20]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-10.jpg +[21]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-11.jpg + From 8de8e0fa3d941d7c2ccdb237f4fa9cfd8ca978d9 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Sat, 13 Oct 2018 14:48:07 +0800 Subject: [PATCH 409/736] hankchow translating --- ...08 Python at the pump- A script for filling your gas tank.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md b/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md index 493a906b3f..265048b93b 100644 --- a/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md +++ b/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md @@ -1,3 +1,5 @@ +HankChow translating + Python at the pump: A script for filling your gas tank ====== Here's how I used Python to discover a strategy for cost-effective fill-ups. 
From d70ceb0d2ee70ec92162107a6561c5676acbc86a Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Sat, 13 Oct 2018 22:50:58 +0800 Subject: [PATCH 410/736] translated --- ...ump- A script for filling your gas tank.md | 103 ------------------ ...ump- A script for filling your gas tank.md | 101 +++++++++++++++++ 2 files changed, 101 insertions(+), 103 deletions(-) delete mode 100644 sources/tech/20181008 Python at the pump- A script for filling your gas tank.md create mode 100644 translated/tech/20181008 Python at the pump- A script for filling your gas tank.md diff --git a/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md b/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md deleted file mode 100644 index 265048b93b..0000000000 --- a/sources/tech/20181008 Python at the pump- A script for filling your gas tank.md +++ /dev/null @@ -1,103 +0,0 @@ -HankChow translating - -Python at the pump: A script for filling your gas tank -====== -Here's how I used Python to discover a strategy for cost-effective fill-ups. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB) - -I recently began driving a car that had traditionally used premium gas (93 octane). According to the maker, though, it requires only 91 octane. The thing is, in the US, you can buy only 87, 89, or 93 octane. Where I live, gas prices jump 30 cents per gallon jump from one grade to the next, so premium costs 60 cents more than regular. So why not try to save some money? - -It’s easy enough to wait until the gas gauge shows that the tank is half full and then fill it with 89 octane, and there you have 91 octane. But it gets tricky to know what to do next—half a tank of 91 octane plus half a tank of 93 ends up being 92, and where do you go from there? You can make continuing calculations, but they get increasingly messy. 
This is where Python came into the picture. - -I wanted to come up with a simple scheme in which I could fill the tank at some level with 93 octane, then at the same or some other level with 89 octane, with the primary goal to never get below 91 octane with the final mixture. What I needed to do was create some recurring calculation that uses the previous octane value for the preceding fill-up. I suppose there would be some polynomial equation that would solve this, but in Python, this sounds like a loop. - -``` -#!/usr/bin/env python -# octane.py - -o = 93.0 -newgas = 93.0   # this represents the octane of the last fillup -i = 1 -while i < 21:                   # 20 iterations (trips to the pump) -    if newgas == 89.0:          # if the last fillup was with 89 octane -                                # switch to 93 -        newgas = 93.0 -        o = newgas/2 + o/2      # fill when gauge is 1/2 full -    else:                       # if it wasn't 89 octane, switch to that -        newgas = 89.0 -        o = newgas/2 + o/2      # fill when gauge says 1/2 full -    print str(i) + ': '+ str(o) -    i += 1 -``` - -As you can see, I am initializing the variable o (the current octane mixture in the tank) and the variable newgas (what I last filled the tank with) at the same value of 93. The loop then will repeat 20 times, for 20 fill-ups, switching from 89 octane and 93 octane for every other trip to the station. - -``` -1: 91.0 -2: 92.0 -3: 90.5 -4: 91.75 -5: 90.375 -6: 91.6875 -7: 90.34375 -8: 91.671875 -9: 90.3359375 -10: 91.66796875 -11: 90.333984375 -12: 91.6669921875 -13: 90.3334960938 -14: 91.6667480469 -15: 90.3333740234 -16: 91.6666870117 -17: 90.3333435059 -18: 91.6666717529 -19: 90.3333358765 -20: 91.6666679382 -``` - -This shows is that I probably need only 10 or 15 loops to see stabilization. It also shows that soon enough, I undershoot my 91 octane target. 
It’s also interesting to see this stabilization of the alternating mixture values, and it turns out this happens with any scheme where you choose the same amounts each time. In fact, it is true even if the amount of the fill-up is different for 89 and 93 octane. - -So at this point, I began playing with fractions, reasoning that I would probably need a bigger 93 octane fill-up than the 89 fill-up. I also didn’t want to make frequent trips to the gas station. What I ended up with (which seemed pretty good to me) was to wait until the tank was about 7⁄12 full and fill it with 89 octane, then wait until it was ¼ full and fill it with 93 octane. - -Here is what the changes in the loop look like: - -``` -    if newgas == 89.0:             -                                  -        newgas = 93.0 -        o = 3*newgas/4 + o/4       -    else:                         -        newgas = 89.0 -        o = 5*newgas/12 + 7*o/12 -``` - -Here are the numbers, starting with the tenth fill-up: - -``` -10: 92.5122272978 -11: 91.0487992571 -12: 92.5121998143 -13: 91.048783225 -14: 92.5121958062 -15: 91.048780887 -``` - -As you can see, this keeps the final octane very slightly above 91 all the time. Of course, my gas gauge isn’t marked in twelfths, but 7⁄12 is slightly less than 5⁄8, and I can handle that. - -An alternative simple solution might have been run the tank to empty and fill with 93 octane, then next time only half-fill it for 89—and perhaps this will be my default plan. Personally, I’m not a fan of running the tank all the way down since this isn’t always convenient. On the other hand, it could easily work on a long trip. And sometimes I buy gas because of a sudden drop in prices. So in the end, this scheme is one of a series of options that I can consider. - -The most important thing for Python users: Don’t code while driving! 
- -------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/python-gas-pump - -作者:[Greg Pittman][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/greg-p diff --git a/translated/tech/20181008 Python at the pump- A script for filling your gas tank.md b/translated/tech/20181008 Python at the pump- A script for filling your gas tank.md new file mode 100644 index 0000000000..396ed17291 --- /dev/null +++ b/translated/tech/20181008 Python at the pump- A script for filling your gas tank.md @@ -0,0 +1,101 @@ +使用 Python 为你的油箱加油 +====== +我来介绍一下我是如何使用 Python 找到一种省钱的加油策略的。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB) + +我最近在开一辆烧 93 号汽油的车子。根据汽车制造商的说法,它只需要加 91 号汽油就可以了。然而,在美国只能买到 87 号、89 号、93 号汽油。而在我家附近,汽油每升高一个标号,每加仑就要多付 30 美分,因此加 93 号汽油要比加 87 号汽油每加仑多花 60 美分。为什么不能节省一些钱呢? 
+ +一开始很简单,只需要先加满 93 号汽油,然后在油量表显示油箱半满的时候,用 89 号汽油加满,就得到一整箱 91 号汽油了。但接下来就麻烦了,剩下半箱 91 号汽油加上半箱 93 号汽油,只会变成一箱 92 号汽油,再接下来呢?如果继续算下去,只会越来越混乱。这个时候 Python 就派上用场了。 + +我的方案是,可以根据汽油的实时状态,不断向油箱中加入 93 号汽油或者 89 号汽油,而最终目标是使油箱内汽油的号数不低于 91。我需要做的只是通过一些算法来判断新旧汽油混合之后的号数。使用多项式方程或许也可以解决这个问题,但如果使用 Python,好像只需要进行循环就可以了。 + +``` +#!/usr/bin/env python +# octane.py + +o = 93.0 +newgas = 93.0 # 这个变量记录上一次加入的汽油号数 +i = 1 +while i < 21: # 20 次迭代 (加油次数) + if newgas == 89.0: # 如果上一次加的是 89 号汽油,改加 93 号汽油 + newgas = 93.0 + o = newgas/2 + o/2 # 当油箱半满的时候就加油 + else: # 如果上一次加的是 93 号汽油,则改加 89 号汽油 + newgas = 89.0 + o = newgas/2 + o/2 # 当油箱半满的时候就加油 + print str(i) + ': '+ str(o) + i += 1 +``` + +在代码中,我首先将变量 `o`(油箱中的当前混合汽油号数)和变量 `newgas`(上一次加入的汽油号数)的初始值都设为 93,然后循环 20 次,模拟 20 次加油,每次去加油站时在 89 号汽油和 93 号汽油之间交替。 + +``` +1: 91.0 +2: 92.0 +3: 90.5 +4: 91.75 +5: 90.375 +6: 91.6875 +7: 90.34375 +8: 91.671875 +9: 90.3359375 +10: 91.66796875 +11: 90.333984375 +12: 91.6669921875 +13: 90.3334960938 +14: 91.6667480469 +15: 90.3333740234 +16: 91.6666870117 +17: 90.3333435059 +18: 91.6666717529 +19: 90.3333358765 +20: 91.6666679382 +``` + +从以上数据来看,只需要 10 到 15 次循环,汽油号数就比较稳定了。但同时也可以看出,混合号数很快就会低于 91 号的目标。这种交替混合直到稳定的现象看起来很有趣,每次交替加入同等量的不同号数汽油,都会趋于稳定。实际上,即使加入的 89 号汽油和 93 号汽油的量不同,也会趋于稳定。 + +因此,我尝试了不同的比例。我认为加入的 93 号汽油需要比 89 号汽油更多一点,同时我也不想太频繁地跑加油站。我最终得到的结果是:89 号汽油要在油箱大约 7/12 满的时候加进去,而 93 号汽油则要在油箱 1/4 满的时候才加进去。 + +我的循环将会更改成这样: + +``` + if newgas == 89.0: + + newgas = 93.0 + o = 3*newgas/4 + o/4 + else: + newgas = 89.0 + o = 5*newgas/12 + 7*o/12 +``` + +以下是从第十次加油开始的混合汽油号数: + +``` +10: 92.5122272978 +11: 91.0487992571 +12: 92.5121998143 +13: 91.048783225 +14: 92.5121958062 +15: 91.048780887 +``` + +如你所见,这个调整会令混合汽油号数始终略高于 91。当然,我的油量表并没有 1/12 的刻度,但是 7/12 略小于 5/8,我可以近似地计算。 + +一个更简单的方案是:每次把油箱开到见底后加满 93 号汽油,下一次则在油箱半满时用 89 号汽油加满,这也许会成为我的默认方案。但就我个人而言,我并不喜欢把油箱开到完全耗尽,因为这样做并不总是很方便。不过对于长途旅行来说,这种方案就很容易实行了。有时我也会因为油价突然下跌而购买一些汽油,所以,这个方案是我可以考虑的一系列选项之一。 + +当然最重要的是:开车不写码,写码不开车! 
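顺带一提,正文中交替加油的混合号数之所以会稳定下来,是因为这个两步过程存在一个不动点。下面是一段补充的 Python 3 示例脚本(非原文代码,仅作演示),直接迭代出 7/12 + 1/4 方案的稳定值:

```python
# 补充示例(Python 3):计算交替加油方案的稳定值
# 两步过程与正文一致:
#   油箱 1/4 满时用 93 号加满:  o' = 3*93/4 + o/4
#   油箱 7/12 满时用 89 号加满: o'' = 5*89/12 + 7*o'/12
def stable_mix(iterations=50):
    after_93 = after_89 = 93.0  # 初始油箱里全是 93 号汽油
    for _ in range(iterations):
        after_93 = 3 * 93.0 / 4 + after_89 / 4        # 加入 93 号之后的号数
        after_89 = 5 * 89.0 / 12 + 7 * after_93 / 12  # 加入 89 号之后的号数
    return after_93, after_89

high, low = stable_mix()
print(round(high, 4), round(low, 4))  # 输出 92.5122 91.0488
```

这两个值与正文中第十次加油之后打印出的数值一致,并且较低的那个值始终不低于 91,可见这个方案确实能满足要求。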
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/python-gas-pump + +作者:[Greg Pittman][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/greg-p + From acced2d8aa01e173384f1c68c013685d10c27f3e Mon Sep 17 00:00:00 2001 From: David Dai Date: Sat, 13 Oct 2018 23:21:35 +0800 Subject: [PATCH 411/736] Finish translation of Design faster web pages, part 1- Image compression --- ...er web pages, part 1- Image compression.md | 185 ------------------ ...er web pages, part 1- Image compression.md | 183 +++++++++++++++++ 2 files changed, 183 insertions(+), 185 deletions(-) delete mode 100644 sources/tech/20181010 Design faster web pages, part 1- Image compression.md create mode 100644 translated/tech/20181010 Design faster web pages, part 1- Image compression.md diff --git a/sources/tech/20181010 Design faster web pages, part 1- Image compression.md b/sources/tech/20181010 Design faster web pages, part 1- Image compression.md deleted file mode 100644 index 758e3d0710..0000000000 --- a/sources/tech/20181010 Design faster web pages, part 1- Image compression.md +++ /dev/null @@ -1,185 +0,0 @@ -Translating by StdioA - -Design faster web pages, part 1: Image compression -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/02/fasterwebsites1-816x345.jpg) - -Lots of web developers want to achieve fast loading web pages. As more page views come from mobile devices, making websites look better on smaller screens using responsive design is just one side of the coin. Browser Calories can make the difference in loading times, which satisfies not just the user but search engines that rank on loading speed. This article series covers how to slim down your web pages with tools Fedora offers. 
- -### Preparation - -Before you sart to slim down your web pages, you need to identify the core issues. For this, you can use [Browserdiet][1]. It’s a browser add-on available for Firefox, Opera and Chrome and other browsers. It analyzes the performance values of the actual open web page, so you know where to start slimming down. - -Next you’ll need some pages to work on. The example screenshot shows a test of [getfedora.org][2]. At first it looks very simple and responsive. - -![Browser Diet - values of getfedora.org][3] - -However, BrowserDiet’s page analysis shows there are 1.8MB in files downloaded. Therefore, there’s some work to do! - -### Web optimization - -There are over 281 KB of JavaScript files, 203 KB more in CSS files, and 1.2 MB in images. Start with the biggest issue — the images. The tool set you need for this is GIMP, ImageMagick, and optipng. You can easily install them using the following command: - -``` -sudo dnf install gimp imagemagick optipng - -``` - -For example, take the [following file][4] which is 6.4 KB: - -![][4] - -First, use the file command to get some basic information about this image: - -``` -$ file cinnamon.png -cinnamon.png: PNG image data, 60 x 60, 8-bit/color RGBA, non-interlaced - -``` - -The image — which is only in grey and white — is saved in 8-bit/color RGBA mode. That’s not as efficient as it could be. - -Start GIMP so you can set a more appropriate color mode. Open cinnamon.png in GIMP. Then go to Image>Mode and set it to greyscale. Export the image as PNG with compression factor 9. All other settings in the export dialog should be the default. - -``` -$ file cinnamon.png -cinnamon.png: PNG image data, 60 x 60, 8-bit gray+alpha, non-interlaced - -``` - -The output shows the file’s now in 8bit gray+alpha mode. The file size has shrunk from 6.4 KB to 2.8 KB. That’s already only 43.75% of the original size. But there’s more you can do! 
- -You can also use the ImageMagick tool identify to provide more information about the image. - -``` -$ identify cinnamon2.png -cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000 - -``` - -This tells you the file is 2831 bytes. Jump back into GIMP, and export the file. In the export dialog disable the storing of the time stamp and the alpha channel color values to reduce this a little more. Now the file output shows: - -``` -$ identify cinnamon.png -cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2798B 0.000u 0:00.000 - -``` - -Next, use optipng to losslessly optimize your PNG images. There are other tools that do similar things, including **advdef** (which is part of advancecomp), **pngquant** and **pngcrush.** - -Run optipng on your file. Note that this will replace your original: - -``` -$ optipng -o7 cinnamon.png -** Processing: cinnamon.png -60x60 pixels, 2x8 bits/pixel, grayscale+alpha -Reducing image to 8 bits/pixel, grayscale -Input IDAT size = 2720 bytes -Input file size = 2812 bytes - -Trying: - zc = 9 zm = 8 zs = 0 f = 0 IDAT size = 1922 - zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920 - -Selecting parameters: - zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920 - -Output IDAT size = 1920 bytes (800 bytes decrease) -Output file size = 2012 bytes (800 bytes = 28.45% decrease) - -``` - -The option -o7 is the slowest to process, but provides the best end results. You’ve knocked 800 more bytes off the file size, which is now 2012 bytes. - -To resize all of the PNGs in a directory, use this command: - -``` -$ optipng -o7 -dir= *.png - -``` - -The option -dir lets you give a target directory for the output. If this option is not used, optipng would overwrite the original images. 
- -### Choosing the right file format - -When it comes to pictures for the usage in the internet, you have the choice between: - - -+ [JPG or JPEG][9] -+ [GIF][10] -+ [PNG][11] -+ [aPNG][12] -+ [JPG-LS][13] -+ [JPG 2000 or JP2][14] -+ [SVG][15] - - -JPG-LS and JPG 2000 are not widely used. Only a few digital cameras support these formats, so they can be ignored. aPNG is an animated PNG, and not widely used either. - -You could save a few bytes more through changing the compression rate or choosing another file format. The first option you can’t do in GIMP, as it’s already using the highest compression rate. As there are no [alpha channels][5] in the picture, you can choose JPG as file format instead. For now use the default value of 90% quality — you could change it down to 85%, but then alias effects become visible. This saves a few bytes more: - -``` -$ identify cinnamon.jpg -cinnamon.jpg JPEG 60x60 60x60+0+0 8-bit sRGB 2676B 0.000u 0:00.000 - -``` - -Alone this conversion to the right color space and choosing JPG as file format brought down the file size from 23 KB to 12.3 KB, a reduction of nearly 50%. - - -#### PNG vs. JPG: quality and compression rate - -So what about the rest of the images? This method would work for all the other pictures, except the Fedora “flavor” logos and the logos for the four foundations. Those are presented on a white background. - -One of the main differences between PNG and JPG is that JPG has no alpha channel. Therefore it can’t handle transparency. If you rework these images by using a JPG on a white background, you can reduce the file size from 40.7 KB to 28.3 KB. - -Now there are four more images you can rework: the backgrounds. For the grey background, set the mode to greyscale again. With this bigger picture, the savings also is bigger. It shrinks from 216.2 KB to 51.0 KB — it’s now barely 25% of its original size. All in all, you’ve shrunk 481.1 KB down to 191.5 KB — only 39.8% of the starting size. - -#### Quality vs. 
Quantity - -Another difference between PNG and JPG is the quality. PNG is a lossless compressed raster graphics format. But JPG loses size through compression, and thus affects quality. That doesn’t mean you shouldn’t use JPG, though. But you have to find a balance between file size and quality. - -### Achievement - -This is the end of Part 1. After following the techniques described above, here are the results: - -![][6] - -You brought image size down to 488.9 KB versus 1.2MB at the start. That’s only about a third of the size, just through optimizing with optipng. This page can probably be made to load faster still. On the scale from snail to hypersonic, it’s not reached racing car speed yet! - -Finally you can check the results in [Google Insights][7], for example: - -![][8] - -In the Mobile area the page gathered 10 points on scoring, but is still in the Medium sector. It looks totally different for the Desktop, which has gone from 62/100 to 91/100 and went up to Good. As mentioned before, this test isn’t the be all and end all. Consider scores such as these to help you go in the right direction. Keep in mind you’re optimizing for the user experience, and not for a search engine. 
- - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression/ - -作者:[Sirko Kemter][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/gnokii/ -[b]: https://github.com/lujun9972 -[1]: https://browserdiet.com/calories/ -[2]: http://getfedora.org -[3]: https://fedoramagazine.org/wp-content/uploads/2018/02/ff-addon-diet.jpg -[4]: https://getfedora.org/static/images/cinnamon.png -[5]: https://www.webopedia.com/TERM/A/alpha_channel.html -[6]: https://fedoramagazine.org/wp-content/uploads/2018/02/ff-addon-diet-i.jpg -[7]: https://developers.google.com/speed/pagespeed/insights/?url=getfedora.org&tab=mobile -[8]: https://fedoramagazine.org/wp-content/uploads/2018/02/PageSpeed_Insights.png -[9]: https://en.wikipedia.org/wiki/JPEG -[10]: https://en.wikipedia.org/wiki/GIF -[11]: https://en.wikipedia.org/wiki/Portable_Network_Graphics -[12]: https://en.wikipedia.org/wiki/APNG -[13]: https://en.wikipedia.org/wiki/JPEG_2000 -[14]: https://en.wikipedia.org/wiki/JPEG_2000 -[15]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics diff --git a/translated/tech/20181010 Design faster web pages, part 1- Image compression.md b/translated/tech/20181010 Design faster web pages, part 1- Image compression.md new file mode 100644 index 0000000000..a34af65920 --- /dev/null +++ b/translated/tech/20181010 Design faster web pages, part 1- Image compression.md @@ -0,0 +1,183 @@ +设计更快的网页——第一部分:图片压缩 +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/02/fasterwebsites1-816x345.jpg) + +很多 Web 开发者都希望做出加载速度很快的网页。在移动设备浏览占比越来越大的背景下,使用响应式设计使得网站在小屏幕下看起来更漂亮只是其中一个方面。Browser Calories 可以展示网页的加载时间——这不单单关系到用户,还会影响到通过加载速度来进行评级的搜索引擎。这个系列的文章介绍了如何使用 Fedora 提供的工具来给网页“瘦身”。 + +### 准备工作 + 
+
+在你开始缩减网页之前,你需要明确核心问题所在。为此,你可以使用 [Browserdiet][1]. 这是一个浏览器插件,适用于 Firefox, Opera, Chrome 和其它浏览器。它会对打开的网页进行性能分析,这样你就可以知道应该从哪里入手来缩减网页。
+
+然后,你需要一些用来处理的页面。下面的例子是针对 [getfedora.org][2] 的测试截图。一开始,它看起来非常简单,也符合响应式设计。
+
+![Browser Diet - getfedora.org 的评分][3]
+
+然而,BrowserDiet 的网页分析表明,这个网页需要加载 1.8MB 的文件。所以,我们现在有活干了!
+
+### Web 优化
+
+网页中包含 281 KB 的 JavaScript 文件,203 KB 的 CSS 文件,还有 1.2 MB 的图片。我们先从最严重的问题——图片开始入手。为了解决问题,你需要的工具集有 GIMP, ImageMagick 和 optipng. 你可以使用如下命令轻松安装它们:
+
+```
+sudo dnf install gimp imagemagick optipng
+
+```
+
+比如,我们先拿到这个 6.4 KB 的[文件][4]:
+
+![][4]
+
+首先,使用 file 命令来获取这张图片的一些基本信息:
+
+```
+$ file cinnamon.png
+cinnamon.png: PNG image data, 60 x 60, 8-bit/color RGBA, non-interlaced
+
+```
+
+这张只由白色和灰色构成的图片使用 8 位 / RGBA 模式来存储。这种方式并没有那么高效。
+
+使用 GIMP,你可以为这张图片设置一个更合适的颜色模式。在 GIMP 中打开 cinnamon.png. 然后,在“图片 > 模式”菜单中将其设置为“灰度模式”。将这张图片以 PNG 格式导出。导出时使用压缩因子 9,导出对话框中的其它配置均使用默认选项。
+
+```
+$ file cinnamon.png
+cinnamon.png: PNG image data, 60 x 60, 8-bit gray+alpha, non-interlaced
+
+```
+
+输出显示,这个文件现在处于 8 位 / 灰阶+alpha 模式。文件大小从 6.4 KB 缩小到了 2.8 KB. 这已经是原来大小的 43.75% 了。但是,我们能做的还有很多!
+
+你可以使用 ImageMagick 工具来查看这张图片的更多信息。
+
+```
+$ identify cinnamon2.png
+cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000
+
+```
+
+它告诉你,这个文件的大小为 2831 字节。我们回到 GIMP,重新导出文件。在导出对话框中,取消存储时间戳和 alpha 通道色值,来让文件更小一点。现在文件输出显示:
+
+```
+$ identify cinnamon.png
+cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2798B 0.000u 0:00.000
+
+```
+
+下面,用 optipng 来无损优化你的 PNG 图片。具有相似功能的工具有很多,包括 **advdef**(这是 advancecomp 的一部分),**pngquant** 和 **pngcrush**。
+
+对你的文件运行 optipng. 
注意,这个操作会覆盖你的原文件:
+
+```
+$ optipng -o7 cinnamon.png
+** Processing: cinnamon.png
+60x60 pixels, 2x8 bits/pixel, grayscale+alpha
+Reducing image to 8 bits/pixel, grayscale
+Input IDAT size = 2720 bytes
+Input file size = 2812 bytes
+
+Trying:
+ zc = 9 zm = 8 zs = 0 f = 0 IDAT size = 1922
+ zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920
+
+Selecting parameters:
+ zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920
+
+Output IDAT size = 1920 bytes (800 bytes decrease)
+Output file size = 2012 bytes (800 bytes = 28.45% decrease)
+
+```
+
+-o7 选项处理起来最慢,但最终效果最好。于是你又将文件缩小了 800 字节,现在它只有 2012 字节了。
+
+要压缩文件夹下的所有 PNG,可以使用这个命令:
+
+```
+$ optipng -o7 -dir= *.png
+
+```
+
+-dir 选项用来指定输出文件夹。如果不加这个选项,optipng 会覆盖原文件。
+
+### 选择正确的文件格式
+
+当涉及到在互联网中使用的图片时,你可以选择:
+
+
++ [JPG 或 JPEG][9]
++ [GIF][10]
++ [PNG][11]
++ [aPNG][12]
++ [JPG-LS][13]
++ [JPG 2000 或 JP2][14]
++ [SVG][15]
+
+
+JPG-LS 和 JPG 2000 没有得到广泛使用。只有一部分数码相机支持这些格式,所以我们可以忽略它们。aPNG 是动态的 PNG 格式,也没有广泛使用。
+
+可以通过更改压缩率或者使用其它文件格式来节省下更多字节。我们无法在 GIMP 中应用第一种方法,因为现在的图片已经使用了最高的压缩率了。因为我们的图片中不再包含 [alpha 通道][5],你可以使用 JPG 类型来替代 PNG. 现在,使用默认值:90% 质量——你可以将它减小至 85%,但这样会导致可见的叠影。这样又省下一些字节:
+
+```
+$ identify cinnamon.jpg
+cinnamon.jpg JPEG 60x60 60x60+0+0 8-bit sRGB 2676B 0.000u 0:00.000
+
+```
+
+只将这张图转成正确的色域,并使用 JPG 作为文件格式,就可以将它从 23 KB 缩小到 12.3 KB,减少了近 50%.
+
+
+#### PNG vs JPG: 质量和压缩率
+
+那么,剩下的文件我们要怎么办呢?除了 Fedora “风味”图标和四个特性图标之外,此方法适用于所有其他图片。我们能够处理的图片都有一个白色的背景。
+
+PNG 和 JPG 的一个主要区别在于,JPG 没有 alpha 通道。所以,它没有透明度选项。如果你使用 JPG 并为它添加白色背景,你可以将文件从 40.7 KB 缩小至 28.3 KB.
+
+现在又有了四个可以处理的图片:背景图。对于灰色背景,你可以再次使用灰阶模式。对更大的图片,我们就可以节省下更多的空间。它从 216.2 KB 缩小到了 51 KB——基本上只有原图的 25% 了。整体下来,你把这些图片从 481.1 KB 缩小到了 191.5 KB——只有一开始的 39.8%.
+
+#### 质量 vs 大小
+
+PNG 和 JPG 的另外一个区别在于质量。PNG 是一种无损压缩光栅图形格式。但是 JPG 虽然使用压缩来缩小体积,可是这会影响到质量。不过,这并不意味着你不应该使用 JPG,只是你需要在文件大小和质量中找到一个平衡。
+
+### 成就
+
+这就是第一部分的结尾了。在使用上述技术后,得到的结果如下:
+
+![][6]
+
+你将一开始 1.2 MB 的图片体积缩小到了 488.9 KB. 只需通过 optipng 进行优化,就可以达到之前体积的三分之一。这可能使得页面更快地加载。不过,要是使用蜗牛到超音速来对比,这个速度还没到达赛车的速度呢!
+ +最后,你可以在 [Google Insights][7] 中查看结果,例如: + +![][8] + +在移动端部分,这个页面的得分提升了 10 分,但它依然处于“中等”水平。对于桌面端,结果看起来完全不同,从 62/100 分提升至了 91/100 分,等级也达到了“好”的水平。如我们之前所说的,这个测试并不意味着我们的工作就做完了。通过参考这些分数可以让你朝着正确的方向前进。请记住,你正在为用户体验来进行优化,而不是搜索引擎。 + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression/ + +作者:[Sirko Kemter][a] +选题:[lujun9972][b] +译者:[StdioA](https://github.com/StdioA) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/gnokii/ +[b]: https://github.com/lujun9972 +[1]: https://browserdiet.com/calories/ +[2]: http://getfedora.org +[3]: https://fedoramagazine.org/wp-content/uploads/2018/02/ff-addon-diet.jpg +[4]: https://getfedora.org/static/images/cinnamon.png +[5]: https://www.webopedia.com/TERM/A/alpha_channel.html +[6]: https://fedoramagazine.org/wp-content/uploads/2018/02/ff-addon-diet-i.jpg +[7]: https://developers.google.com/speed/pagespeed/insights/?url=getfedora.org&tab=mobile +[8]: https://fedoramagazine.org/wp-content/uploads/2018/02/PageSpeed_Insights.png +[9]: https://en.wikipedia.org/wiki/JPEG +[10]: https://en.wikipedia.org/wiki/GIF +[11]: https://en.wikipedia.org/wiki/Portable_Network_Graphics +[12]: https://en.wikipedia.org/wiki/APNG +[13]: https://en.wikipedia.org/wiki/JPEG_2000 +[14]: https://en.wikipedia.org/wiki/JPEG_2000 +[15]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics From 6c15bdce01057d7c179b64682ecb62b0ea87797a Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 10:37:50 +0800 Subject: [PATCH 412/736] PRF:20181011 A Front-end For Popular Package Managers.md @Flowsnow --- ... 
Front-end For Popular Package Managers.md | 76 +++++++++---------- 1 file changed, 38 insertions(+), 38 deletions(-) diff --git a/translated/tech/20181011 A Front-end For Popular Package Managers.md b/translated/tech/20181011 A Front-end For Popular Package Managers.md index adbd03cbda..db839d6475 100644 --- a/translated/tech/20181011 A Front-end For Popular Package Managers.md +++ b/translated/tech/20181011 A Front-end For Popular Package Managers.md @@ -1,11 +1,11 @@ -给受欢迎的包管理器加个前端 +Sysget:给主流的包管理器加个前端 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-720x340.png) -你是一个喜欢每隔几天尝试Linux操作系统的新发行版的人吗? 如果是这样,我有一些东西对你有用。 尝试**Sysget**,这是类Unix操作系统中流行软件包管理器的前端。 你不需要了解每个包管理器来执行基本的操作,例如安装,更新,升级和删除包。 你只需要记住每个类Unix操作系统上每个包管理器的一种语法即可。 Sysget是包管理器的包装脚本,它是用C ++编写的。 源代码可在GitHub上免费获得。 +你是一个喜欢每隔几天尝试 Linux 操作系统的新发行版的发行版收割机吗?如果是这样,我有一些东西对你有用。 尝试 Sysget,这是一个类 Unix 操作系统中的流行软件包管理器的前端。 你不需要学习每个包管理器来执行基本的操作,例如安装、更新、升级和删除包。 你只需要对每个运行在类 Unix 操作系统上的包管理器记住一种语法即可。 Sysget 是包管理器的包装脚本,它是用 C++ 编写的。 源代码可在 GitHub 上免费获得。 -使用Sysget,你可以执行各种基本的包管理操作,包括: +使用 Sysget,你可以执行各种基本的包管理操作,包括: - 安装包, - 更新包, @@ -17,15 +17,15 @@ - 升级系统, - 清除包管理器缓存。 -**给Linux学习者的一个重要提示:** +**给 Linux 学习者的一个重要提示:** -Sysget不会取代软件包管理器,绝对不适合所有人。 如果你是经常切换到新Linux操作系统的新手,Sysget可能会有所帮助。 当在不同的Linux发行版中使用不同的软件包管理器时,就必须学习安装,更新,升级,搜索和删除软件包的新命令,这时Sysget就是帮助发行版收割机用户(或新Linux用户)的包装脚本。 +Sysget 不会取代软件包管理器,绝对不适合所有人。如果你是经常切换到新 Linux 操作系统的新手,Sysget 可能会有所帮助。当在不同的 Linux 发行版中使用不同的软件包管理器时,就必须学习安装、更新、升级、搜索和删除软件包的新命令,这时 Sysget 就是帮助发行版收割机distro hopper(或新 Linux 用户)的包装脚本。 -如果你是Linux管理员或想要学习Linux深层的爱好者,你应该坚持使用你的发行版的软件包管理器并学习如何使用它。 +如果你是 Linux 管理员或想要学习 Linux 深层的爱好者,你应该坚持使用你的发行版的软件包管理器并学习如何使用它。 -### 安装Sysget +### 安装 Sysget -安装sysget很简单。 转到[**发布页面**][1]并下载最新的Sysget二进制文件并按如下所示进行安装。 在编写本指南时,sysget最新版本为1.2。 +安装 Sysget 很简单。 转到[发布页面][1]并下载最新的 Sysget 二进制文件并按如下所示进行安装。 在编写本指南时,Sysget 最新版本为1.2。 ``` $ sudo wget -O /usr/local/bin/sysget https://github.com/emilengler/sysget/releases/download/v1.2/sysget @@ -35,28 +35,28 @@ $ sudo chmod a+x 
/usr/local/bin/sysget ### 用法 -Sysget命令与APT包管理器大致相同,因此它应该适合新手使用。 +Sysget 命令与 APT 包管理器大致相同,因此它应该适合新手使用。 -当你第一次运行Sysget时,系统会要求你选择要使用的包管理器。 由于我在Ubuntu,我选择了**apt-get**。 +当你第一次运行 Sysget 时,系统会要求你选择要使用的包管理器。 由于我在 Ubuntu,我选择了 apt-get。 ![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-1.png) -你必须根据正在运行的发行版选择正确的包管理器。 例如,如果你使用的是Arch Linux,请选择**pacman**。 对于CentOS,请选择**yum**。 对于FreeBSD,请选择**pkg**。 当前支持的包管理器列表是: +你必须根据正在运行的发行版选择正确的包管理器。 例如,如果你使用的是 Arch Linux,请选择 pacman。 对于 CentOS,请选择 yum。 对于 FreeBSD,请选择 pkg。 当前支持的包管理器列表是: - 1. apt-get (Debian) - 2. xbps (Void) - 3. dnf (Fedora) - 4. yum (Enterprise Linux/Legacy Fedora) - 5. zypper (OpenSUSE) - 6. eopkg (Solus) - 7. pacman (Arch) - 8. emerge (Gentoo) - 9. pkg (FreeBSD) - 10. chromebrew (ChromeOS) - 11. homebrew (Mac OS) - 12. nix (Nix OS) - 13. snap (Independent) - 14. npm (Javascript, Global) +1. apt-get (Debian) +2. xbps (Void) +3. dnf (Fedora) +4. yum (Enterprise Linux/Legacy Fedora) +5. zypper (OpenSUSE) +6. eopkg (Solus) +7. pacman (Arch) +8. emerge (Gentoo) +9. pkg (FreeBSD) +10. chromebrew (ChromeOS) +11. homebrew (Mac OS) +12. nix (Nix OS) +13. snap (Independent) +14. 
npm (Javascript, Global) 如果你分配了错误的包管理器,则可以使用以下命令设置新的包管理器: @@ -69,13 +69,13 @@ Package manager changed to yum 现在,你可以像使用本机包管理器一样执行包管理操作。 -要安装软件包,例如Emacs,只需运行: +要安装软件包,例如 Emacs,只需运行: ``` $ sudo sysget install emacs ``` -上面的命令将调用本机包管理器(在我的例子中是“apt-get”)并安装给定的包。 +上面的命令将调用本机包管理器(在我的例子中是 “apt-get”)并安装给定的包。 ![](https://www.ostechnix.com/wp-content/uploads/2018/10/Install-package-using-Sysget.png) @@ -87,37 +87,37 @@ $ sudo sysget remove emacs ![](https://www.ostechnix.com/wp-content/uploads/2018/10/Remove-package-using-Sysget.png) -**更新软件仓库(数据库):** +更新软件仓库(数据库): ``` $ sudo sysget update ``` -**搜索特定包:** +搜索特定包: ``` $ sudo sysget search emacs ``` -**升级单个包:** +升级单个包: ``` $ sudo sysget upgrade emacs ``` -**升级所有包:** +升级所有包: ``` $ sudo sysget upgrade ``` -**移除废弃的包** +移除废弃的包: ``` $ sudo sysget autoremove ``` -**清理包管理器的缓存** +清理包管理器的缓存: ``` $ sudo sysget clean @@ -141,13 +141,13 @@ clean clean the download cache set [NEW MANAGER] set a new package manager ``` -请记住,不同Linux发行版中的所有包管理器的sysget语法都是相同的。 你不需要记住每个包管理器的命令。 +请记住,不同 Linux 发行版中的所有包管理器的 Sysget 语法都是相同的。 你不需要记住每个包管理器的命令。 -同样,我必须告诉你Sysget不是包管理器的替代品。 它只是类Unix系统中流行的包管理器的包装器,它只执行基本的包管理操作。 +同样,我必须告诉你 Sysget 不是包管理器的替代品。 它只是类 Unix 系统中流行的包管理器的包装器,它只执行基本的包管理操作。 -Sysget对于不想去学习不同包管理器的新命令的新手和发行版收割机用户可能有些用处。 如果你有兴趣,试一试,看看它是否有帮助。 +Sysget 对于不想去学习不同包管理器的新命令的新手和发行版收割机用户可能有些用处。 如果你有兴趣,试一试,看看它是否有帮助。 -而且,这就是本次所有的内容了。 更多干活即将到来。 敬请关注! +而且,这就是本次所有的内容了。 更多干货即将到来。 敬请关注! 祝快乐! 
@@ -158,7 +158,7 @@ via: https://www.ostechnix.com/sysget-a-front-end-for-popular-package-managers/ 作者:[SK][a] 选题:[lujun9972][b] 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 11799ce20a8c5062288c5f4944f336170e3b13f5 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 10:38:27 +0800 Subject: [PATCH 413/736] PUB:20181011 A Front-end For Popular Package Managers.md @Flowsnow https://linux.cn/article-10113-1.html --- .../20181011 A Front-end For Popular Package Managers.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20181011 A Front-end For Popular Package Managers.md (100%) diff --git a/translated/tech/20181011 A Front-end For Popular Package Managers.md b/published/20181011 A Front-end For Popular Package Managers.md similarity index 100% rename from translated/tech/20181011 A Front-end For Popular Package Managers.md rename to published/20181011 A Front-end For Popular Package Managers.md From af030a4fabfe2bca626fb27a85d07a3778ae920c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 11:36:38 +0800 Subject: [PATCH 414/736] PRF:20180926 An introduction to swap space on Linux systems.md @heguangzhi --- ...oduction to swap space on Linux systems.md | 190 +++++++----------- 1 file changed, 78 insertions(+), 112 deletions(-) diff --git a/translated/tech/20180926 An introduction to swap space on Linux systems.md b/translated/tech/20180926 An introduction to swap space on Linux systems.md index 0a36a44e9f..da5bd3b8db 100644 --- a/translated/tech/20180926 An introduction to swap space on Linux systems.md +++ b/translated/tech/20180926 An introduction to swap space on Linux systems.md @@ -1,152 +1,131 @@ - -Linux 系统上 swap 空间的介绍 +Linux 系统上交换空间的介绍 ====== +> 学习如何修改你的系统上的交换空间的容量,以及你到底需要多大的交换空间。 + 
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh) -当今无论什么操作系统 Swap 空间是非常常见的。Linux 使用 Swap 空间来增加主机可用的虚拟内存。它可以是常规文件或逻辑卷上使用一个或多个专用swap 分区或 swap 文件。 +当今无论什么操作系统交换Swap空间是非常常见的。Linux 使用交换空间来增加主机可用的虚拟内存。它可以在常规文件或逻辑卷上使用一个或多个专用交换分区或交换文件。 典型计算机中有两种基本类型的内存。第一种类型,随机存取存储器 (RAM),用于存储计算机使用的数据和程序。只有程序和数据存储在 RAM 中,计算机才能使用它们。随机存储器是易失性存储器;也就是说,如果计算机关闭了,存储在 RAM 中的数据就会丢失。 +硬盘是用于长期存储数据和程序的磁性介质。该磁介质可以很好的保存数据;即使计算机断电,存储在磁盘上的数据也会保留下来。CPU(中央处理器)不能直接访问硬盘上的程序和数据;它们必须首先复制到 RAM 中,RAM 是 CPU 访问代码指令和操作数据的地方。在引导过程中,计算机将特定的操作系统程序(如内核、init 或 systemd)以及硬盘上的数据复制到 RAM 中,在 RAM 中,计算机的处理器 CPU 可以直接访问这些数据。 -硬盘是用于长期存储数据和程序的磁性介质。该磁介质可以很好的保存数据;即使计算机断电,存储在磁盘上的数据也会保留下来。CPU (中央处理器)不能直接访问硬盘上的程序和数据;他们必须首先复制到 RAM 中,RAM 是 CPU 访问代码指令和操作数据的地方。在引导过程中,计算机将特定的操作系统程序(如内核、init 或 systemd )以及硬盘上的数据复制到 RAM 中,在 RAM 中,计算机的处理器 CPU 可以直接访问这些数据。 +### 交换空间 -### Swap 空间 +交换空间是现代 Linux 系统中的第二种内存类型。交换空间的主要功能是当全部的 RAM 被占用并且需要更多内存时,用磁盘空间代替 RAM 内存。 -Swap 空间是现代 Linux 系统中的第二种内存类型。Swap 空间的主要功能是当全部的 RAM 被占用并且需要更多内存时,用磁盘空间代替 RAM 内存。 +例如,假设你有一个 8GB RAM 的计算机。如果你启动的程序没有填满 RAM,一切都好,不需要交换。假设你在处理电子表格,当添加更多的行时,你电子表格会增长,加上所有正在运行的程序,将会占用全部的 RAM 。如果这时没有可用的交换空间,你将不得不停止处理电子表格,直到关闭一些其他程序来释放一些 RAM 。 -例如,假设你有一个 8GB RAM 的计算机。如果你启动的程序没有填满 RAM,一切好,不需要 Swap。假设你在处理电子表格,当添加更多的行时,你电子表格会增长,加上所有正在运行的程序,将会占用全部的 RAM 。如果这时没有可用的 Swap 空间,你将不得不停止处理电子表格,直到关闭一些其他程序来释放一些 RAM 。 +内核使用一个内存管理程序来检测最近没有使用的内存块(内存页)。内存管理程序将这些相对不经常使用的内存页交换到硬盘上专门指定用于“分页”或交换的特殊分区。这会释放 RAM,为输入电子表格更多数据腾出了空间。那些换出到硬盘的内存页面被内核的内存管理代码跟踪,如果需要,可以被分页回 RAM。 -内核使用一个内存管理程序来检测最近没有使用的内存块,也就是内存页面。内存管理程序将这些相对不经常使用的内存页交换到硬盘上专门指定用于“分页”或 swap 的特殊分区。释放 RAM ,为输入电子表格更多数据腾出了空间。那些换出到硬盘的内存页面被内核的内存管理代码跟踪,如果需要,可以被分页回 RAM。 +Linux 计算机中的内存总量是 RAM + 交换分区,交换分区被称为虚拟内存. -Linux 计算机中的内存总量是 RAM + swap 分区,swap 分区被称为虚拟内存. 
+### Linux 交换分区类型 -### Linux swap 分区类型 +Linux 提供了两种类型的交换空间。默认情况下,大多数 Linux 在安装时都会创建一个交换分区,但是也可以使用一个特殊配置的文件作为交换文件。交换分区顾名思义就是一个标准磁盘分区,由 `mkswap` 命令指定交换空间。 -Linux 提供了两种类型的 swap 空间。默认情况下,大多数 Linux 在安装时都会创建一个 swap 分区,但是也可以使用一个特殊配置的文件作为 swap 文件。swap 分区顾名思义就是一个标准磁盘分区,由 `mkswap` 命令指定 swap 空间。 - -如果没有可用磁盘空间来创建新的 swap 分区,或者卷组中没有空间为 swap 空间创建逻辑卷,则可以使用 swap 文件。这只是一个创建并预分配指定大小的常规文件。然后运行 `mkswap` 命令将其配置为 swap 空间。除非绝对必要,否则我不建议使用文件来做 swap 空间。 +如果没有可用磁盘空间来创建新的交换分区,或者卷组中没有空间为交换空间创建逻辑卷,则可以使用交换文件。这只是一个创建好并预分配指定大小的常规文件。然后运行 `mkswap` 命令将其配置为交换空间。除非绝对必要,否则我不建议使用文件来做交换空间。(LCTT 译注:Ubuntu 近来的版本采用了交换文件而非交换空间,所以我对于这种说法保留看法) ### 频繁交换 -当总虚拟内存( RAM 和 swap 空间 )变得快满时,可能会发生频繁交换 。系统花了太多时间在 swap 空间和 RAM 之间做内存块页面切换,以至于几乎没有时间用于实际工作。这种情况是显而易见的:系统变得缓慢或完全无反应,硬盘指示灯几乎持续亮起。 +当总虚拟内存(RAM 和交换空间)变得快满时,可能会发生频繁交换。系统花了太多时间在交换空间和 RAM 之间做内存块的页面切换,以至于几乎没有时间用于实际工作。这种情况的典型症状是:系统变得缓慢或完全无反应,硬盘指示灯几乎持续亮起。 -使用 `free` 的命令来显示 CPU 负载和内存使用情况,你会发现 CPU 负载非常高,可能达到系统中 CPU 内核数量的30到40倍。另一个情况是 RAM 和 swap 空间几乎完全被分配了。 +使用 `free` 的命令来显示 CPU 负载和内存使用情况,你会发现 CPU 负载非常高,可能达到系统中 CPU 内核数量的 30 到 40 倍。另一个情况是 RAM 和交换空间几乎完全被分配了。 +事实上,查看 SAR(系统活动报告)数据也可以显示这些内容。在我的每个系统上都安装 SAR ,并将这些用于数据分析。 -事实上,查看 SAR (系统活动报告)数据也可以显示这些内容。在我的每个系统上都安装 SAR ,并将这些用于数据分析。 +### 交换空间的正确大小是多少? +许多年前,硬盘上分配给交换空间大小是计算机上的 RAM 的两倍(当然,这是大多数计算机的 RAM 以 KB 或 MB 为单位的时候)。因此,如果一台计算机有 64KB 的 RAM,应该分配 128KB 的交换分区。该规则考虑到了这样的事实情况,即 RAM 大小在当时非常小,分配超过 2 倍的 RAM 用于交换空间并不能提高性能。使用超过两倍的 RAM 进行交换,比实际执行有用的工作的时候,大多数系统将花费更多的时间。 -### swap 空间的正确大小是多少? 
+RAM 现在已经很便宜了,如今大多数计算机的 RAM 都达到了几十亿字节。我的大多数新电脑至少有 8GB 内存,一台有 32GB 内存,我的主工作站有 64GB 内存。我的旧电脑有 4 到 8GB 的内存。 -许多年前,硬盘上分配给 swap 空间大小是计算机上的 RAM 的两倍(当然,这是大多数计算机的 RAM 以 KB 或 MB 为单位的时候)。因此,如果一台计算机有 64KB 的 RAM,应该分配 128KB 的 swap 分区。该规则考虑到了这样的事实情况,即 RAM 大小在当时非常小,分配超过2倍的 RAM 用于 swap 空间并不能提高性能。使用超过两倍的 RAM 进行交换,比实际执行有用的工作的时候,大多数系统将花费更多的时间。 +当操作具有大量 RAM 的计算机时,交换空间的限制性能系数远低于 2 倍。[Fedora 28 在线安装指南][1] 定义了当前关于交换空间分配的方法。下面内容是我提出的建议。 +下表根据系统中的 RAM 大小以及是否有足够的内存让系统休眠,提供了交换分区的推荐大小。建议的交换分区大小是在安装过程中自动建立的。但是,为了满足系统休眠,您需要在自定义分区阶段编辑交换空间。 -RAM 现在已经很便宜了,如今大多数计算机的 RAM 都达到了几十亿字节。我的大多数新电脑至少有 8GB 内存,一台有32GB 内存,我的主工作站有 64GB 内存。我的旧电脑有4到 8GB 的内存。 +_表 1: Fedora 28 文档中推荐的系统交换空间_ - -当操作具有大 RAM 的计算机时,swap 空间的限制性能系数远低于 2倍。[Fedora 28在线安装指南][1] 定义了当前关于 swap 空间分配的方法。下面内容是我提出的建议。 - -下表根据系统中的 RAM 大小以及是否有足够的内存让系统休眠,提供了交换分区的推荐大小。建议的 swap 分区大小是在安装过程中自动建立的。但是,为了满足系统休眠,您需要在自定义分区阶段编辑 swap 空间。 - -_表 1: Fedora 28文档中推荐的系统 swap 空间_ - -| **系统内存大小 ** | **推荐 swap 空间 ** | **建议 swap 大小用休眠模式 ** | +| **系统内存大小** | **推荐的交换空间** | **推荐的交换空间大小(支持休眠模式)** | |--------------------------|-----------------------------|---------------------------------------| -| 小于 2 GB | 2倍 RAM | 3 倍 RAM | +| 小于 2 GB | 2 倍 RAM | 3 倍 RAM | | 2 GB - 8 GB | 等于 RAM 大小 | 2 倍 RAM | | 8 GB - 64 GB | 0.5 倍 RAM | 1.5 倍 RAM | | 大于 64 GB | 工作量相关 | 不建议休眠模式 | +在上面列出的每个范围之间的边界(例如,具有 2GB、8GB 或 64GB 的系统 RAM),请根据所选交换空间和支持休眠功能请谨慎使用。如果你的系统资源允许,增加交换空间可能会带来更好的性能。 -在上面列出的每个范围之间的边界(例如,具有 2GB、8GB 或 64GB 的系统 RAM),请根据所选 swap 空间和支持休眠功能请谨慎使用。如果你的系统资源允许,增加 swap 空间可能会带来更好的性能。 - -当然,大多数 Linux 管理员对多大的 swap 空间量有自己的想法。下面的表2包含了基于我在多种环境中的个人经历所做出的建议。这些可能不适合你,但是和表1一样,它们可能对你有所帮助。 +当然,大多数 Linux 管理员对多大的交换空间量有自己的想法。下面的表2 包含了基于我在多种环境中的个人经历所做出的建议。这些可能不适合你,但是和表 1 一样,它们可能对你有所帮助。 -_表 2: 作者推荐的系统 swap 空间_ +_表 2: 作者推荐的系统交换空间_ -| RAM 大小 | 推荐 swap 空间 | +| RAM 大小 | 推荐的交换空间 | |---------------|------------------------| | ≤ 2GB | 2X RAM | | 2GB – 8GB | = RAM | | >8GB | 8GB | -这两个表中共同点,随着 RAM 数量的增加,超过某一点增加更多 swap 空间只会导致在 swap 空间几乎被全部使用之前就发生频繁交换。根据以上建议,则应尽可能添加更多 RAM,而不是增加更多 swap 空间。如类似影响系统性能的情况一样,请使用最适合你的建议。根据 
Linux 环境中的条件进行测试和更改是需要时间和精力的。 +这两个表中共同点,随着 RAM 数量的增加,超过某一点增加更多交换空间只会导致在交换空间几乎被全部使用之前就发生频繁交换。根据以上建议,则应尽可能添加更多 RAM,而不是增加更多交换空间。如类似影响系统性能的情况一样,请使用最适合你的建议。根据 Linux 环境中的条件进行测试和更改是需要时间和精力的。 +### 向非 LVM 磁盘环境添加更多交换空间 -### 向非 LVM 磁盘环境添加更多 swap 空间 +面对已安装 Linux 的主机并对交换空间的需求不断变化,有时有必要修改系统定义的交换空间的大小。此过程可用于需要增加交换空间大小的任何情况。它假设有足够的可用磁盘空间。此过程还假设磁盘分区为 “原始的” EXT4 和交换分区,而不是使用逻辑卷管理(LVM)。 -面对已安装 Linux 的主机并对 swap 空间的需求不断变化,有时有必要修改系统定义的 swap 空间的大小。此过程可用于需要增加 swap 空间大小的任何情况。它假设有足够的可用磁盘空间。此过程还假设磁盘在 “raw” EXT4 和 swap 分区中分区,并且不使用逻辑卷 (LVM)。 - - - -要基本步骤很简单: - - 1. 关闭现有的 swap 空间。 - - 2. 创建所需大小的新 swap 分区。 +基本步骤很简单: + 1. 关闭现有的交换空间。 + 2. 创建所需大小的新交换分区。 3. 重读分区表。 + 4. 将分区配置为交换空间。 + 5. 添加新分区到 `/etc/fstab`。 + 6. 打开交换空间。 - 4. 将分区配置为 swap 空间。 +应该不需要重新启动机器。 - 5. 添加新分区到 /etc/fstab。 +为了安全起见,在关闭交换空间前,至少你应该确保没有应用程序在运行,也没有交换空间在使用。`free` 或 `top` 命令可以告诉你交换空间是否在使用中。为了更安全,您可以恢复到运行级别 1 或单用户模式。 - 6. 打开 swap 空间。 - - -不应需要重新启动机器。 - - -为了安全起见,在关闭 swap 空间前,至少你应该确保没有应用程序在运行,也没有 swap 空间在使用。`free` 或 `top` 命令可以告诉你 swap 空间是否在使用中。为了更安全,您可以恢复到运行级别1或单用户模式。 - -使用关闭所有 swap 空间的命令关闭 swap 分区: +使用关闭所有交换空间的命令关闭交换分区: ``` swapoff -a - ``` 现在查看硬盘上的现有分区。 ``` fdisk -l - ``` -这将显示每个驱动器上的分区表。按编号标识当前的 swap 分区。 +这将显示每个驱动器上的分区表。按编号标识当前的交换分区。 - -使用以下命令在交互模式下启动 `fdisk`: +使用以下命令在交互模式下启动 `fdisk`: ``` fdisk /dev/ - ``` -例如: +例如: ``` fdisk /dev/sda - ``` 此时,`fdisk` 是交互方式的,只在指定的磁盘驱动器上进行操作。 -使用 fdisk `p` 子命令验证磁盘上是否有足够的可用空间来创建新的 swap 分区。硬盘上的空间以 512字节 以及起始和结束柱面编号的形式显示,因此您可能需要做一些计算来确定分配分区之间和末尾的可用空间。 +使用 `fdisk` 的 `p` 子命令验证磁盘上是否有足够的可用空间来创建新的交换分区。硬盘上的空间以 512 字节的块以及起始和结束柱面编号的形式显示,因此您可能需要做一些计算来确定分配分区之间和末尾的可用空间。 -使用 `n` 子命令创建新的交换分区。fdisk 会问你开始柱面。默认情况下,它选择编号最低的可用柱面。如果你想改变这一点,输入开始柱面的编号。 +使用 `n` 子命令创建新的交换分区。`fdisk` 会问你开始柱面。默认情况下,它选择编号最低的可用柱面。如果你想改变这一点,输入开始柱面的编号。 -`fdisk` 命令允许你以多种格式输入分区的大小,包括最后一个柱面号或字节、KB 或 MB 的大小。键入 4000M ,这将在新分区上提供大约 4GB 的空间(例如),然后按 Enter 键。 +`fdisk` 命令允许你以多种格式输入分区的大小,包括最后一个柱面号或字节、KB 或 MB 的大小。例如,键入 4000M ,这将在新分区上提供大约 4GB 的空间,然后按回车键。 使用 `p` 子命令来验证分区是否按照指定的方式创建的。请注意,除非使用结束柱面编号,否则分区可能与你指定的不完全相同。`fdisk` 
命令只能在整个柱面上增量的分配磁盘空间,因此你的分区可能比你指定的稍小或稍大。如果分区不是您想要的,你可以删除它并重新创建它。 -现在指定新分区是 swap 分区了 。子命令 `t` 允许你指定定分区的类型。所以输入 `t`,指定分区号,当它要求十六进制分区类型时,输入82,这是Linux swap 分区类型,然后按 Enter。 +现在指定新分区是交换分区了 。子命令 `t` 允许你指定定分区的类型。所以输入 `t`,指定分区号,当它要求十六进制分区类型时,输入 `82`,这是 Linux 交换分区类型,然后按回车键。 - -当你对创建的分区感到满意时,使用 `w` 子命令将新的分区表写入磁盘。`fdisk` 程序将退出,并在完成修改后的分区表的编写后返回命令提示符。当`fdisk` 完成写入新分区表时,会收到以下消息: +当你对创建的分区感到满意时,使用 `w` 子命令将新的分区表写入磁盘。`fdisk` 程序将退出,并在完成修改后的分区表的编写后返回命令提示符。当 `fdisk` 完成写入新分区表时,会收到以下消息: ``` The partition table has been altered! @@ -157,83 +136,70 @@ The new table will be used at the next reboot. Syncing disks. ``` - 此时,你使用 `partprobe` 命令强制内核重新读取分区表,这样就不需要执行重新启动机器。 ``` partprobe ``` +使用命令 `fdisk -l` 列出分区,新交换分区应该在列出的分区中。确保新的分区类型是 “Linux swap”。 -使用命令 `fdisk -l` 列出分区,新 swap 分区应该在列出的分区中。确保新的分区类型是 “Linux swap”。 - -修改 /etc/fstab 文件以指向新的 swap 分区。如下所示: +修改 `/etc/fstab` 文件以指向新的交换分区。如下所示: ``` LABEL=SWAP-sdaX   swap        swap    defaults        0 0 - ``` -其中 `X` 是分区号。根据新 swap 分区的位置,添加以下内容: +其中 `X` 是分区号。根据新交换分区的位置,添加以下内容: ``` /dev/sdaY         swap        swap    defaults        0 0 - ``` -请确保使用正确的分区号。现在,可以执行创建 swap 分区的最后一步。使用 `mkswap` 命令将分区定义为 swap 分区。 +请确保使用正确的分区号。现在,可以执行创建交换分区的最后一步。使用 `mkswap` 命令将分区定义为交换分区。 ``` mkswap /dev/sdaY - ``` -最后一步是使用以下命令启用 swap 空间: +最后一步是使用以下命令启用交换空间: ``` swapon -a - ``` -你的新 swap 分区现在与以前存在的 swap 分区一起在线。您可以使用 `free` 或`top` 命令来验证这一点。 +你的新交换分区现在与以前存在的交换分区一起在线。您可以使用 `free` 或`top` 命令来验证这一点。 -#### 在 LVM 磁盘环境中添加 swap 空间 +#### 在 LVM 磁盘环境中添加交换空间 -如果你的磁盘使用 LVM ,更改 swap 空间将相当容易。同样,假设当前 swap 卷所在的卷组中有可用空间。默认情况下,LVM 环境中的 Fedora Linux 在安装过程将 swap 分区创建为逻辑卷。您可以非常简单地增加 swap 卷的大小。 +如果你的磁盘使用 LVM ,更改交换空间将相当容易。同样,假设当前交换卷所在的卷组中有可用空间。默认情况下,LVM 环境中的 Fedora Linux 在安装过程将交换分区创建为逻辑卷。您可以非常简单地增加交换卷的大小。 -以下是在 LVM 环境中增加 swap 空间大小的步骤: +以下是在 LVM 环境中增加交换空间大小的步骤: - 1. 关闭所有 swap 。 + 1. 关闭所有交换空间。 + 2. 增加指定用于交换空间的逻辑卷的大小。 + 3. 为交换空间调整大小的卷配置。 + 4. 启用交换空间。 - 2. 增加指定用于 swap 的逻辑卷的大小。 - - 3. 为 swap 空间调整大小的卷配置。 - - 4. 
启用 swap。 - - - -首先,让我们使用 `lvs` 命令(列出逻辑卷)来验证 swap 是否存在以及 swap 是否是逻辑卷。 +首先,让我们使用 `lvs` 命令(列出逻辑卷)来验证交换空间是否存在以及交换空间是否是逻辑卷。 ``` [root@studentvm1 ~]# lvs -  LV     VG                Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert -  home   fedora_studentvm1 -wi-ao----  2.00g                                                       -  pool00 fedora_studentvm1 twi-aotz--  2.00g               8.17   2.93                             -  root   fedora_studentvm1 Vwi-aotz--  2.00g pool00        8.17                                   -  swap   fedora_studentvm1 -wi-ao----  8.00g                                                       -  tmp    fedora_studentvm1 -wi-ao----  5.00g                                                       -  usr    fedora_studentvm1 -wi-ao---- 15.00g                                                       -  var    fedora_studentvm1 -wi-ao---- 10.00g                                                       + LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert + home fedora_studentvm1 -wi-ao---- 2.00g + pool00 fedora_studentvm1 twi-aotz-- 2.00g 8.17 2.93 + root fedora_studentvm1 Vwi-aotz-- 2.00g pool00 8.17 + swap fedora_studentvm1 -wi-ao---- 8.00g + tmp fedora_studentvm1 -wi-ao---- 5.00g + usr fedora_studentvm1 -wi-ao---- 15.00g + var fedora_studentvm1 -wi-ao---- 10.00g [root@studentvm1 ~]# ``` -你可以看到当前的 swap 大小为 8GB。在这种情况下,我们希望将 2GB 添加到此 swap 卷中。首先,停止现有的 swap 。如果 swap 空间正在使用,终止正在运行的程序。 - +你可以看到当前的交换空间大小为 8GB。在这种情况下,我们希望将 2GB 添加到此交换卷中。首先,停止现有的交换空间。如果交换空间正在使用,终止正在运行的程序。 ``` swapoff -a - ``` 现在增加逻辑卷的大小。 @@ -245,7 +211,7 @@ swapoff -a [root@studentvm1 ~]# ``` -运行 `mkswap` 命令将整个 10GB 分区变成 swap 空间。 +运行 `mkswap` 命令将整个 10GB 分区变成交换空间。 ``` [root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap @@ -255,14 +221,14 @@ no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a [root@studentvm1 ~]# ``` -重新启用 swap 。 +重新启用交换空间。 ``` [root@studentvm1 ~]# swapon -a [root@studentvm1 ~]# ``` -现在,使用 `lsblk ` 命令验证新 swap 空间是否存在。同样,不需要重新启动机器。 
+现在,使用 `lsblk ` 命令验证新交换空间是否存在。同样,不需要重新启动机器。 ``` [root@studentvm1 ~]# lsblk @@ -287,7 +253,7 @@ sr0                                   11:0    1 1024M  0 rom [root@studentvm1 ~]# ``` -您也可以使用`swapon -s` 命令或 `top` 、`free` 或其他几个命令来验证这一点。 +您也可以使用 `swapon -s` 命令或 `top`、`free` 或其他几个命令来验证这一点。 ``` [root@studentvm1 ~]# free @@ -297,7 +263,7 @@ Swap:      10485756           0    10485756 [root@studentvm1 ~]# ``` -请注意,不同的命令以不同的形式显示或要求输入设备文件。在 /dev 目录中访问特定设备有多种方式。在我的文章[Managing Devices in Linux][2] 中有更多关于 /dev 目录及其内容说明。 +请注意,不同的命令以不同的形式显示或要求输入设备文件。在 `/dev` 目录中访问特定设备有多种方式。在我的文章 [在 Linux 中管理设备][2] 中有更多关于 `/dev` 目录及其内容说明。 -------------------------------------------------------------------------------- @@ -306,10 +272,10 @@ via: https://opensource.com/article/18/9/swap-space-linux-systems 作者:[David Both][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/dboth [1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/ -[2]: https://opensource.com/article/16/11/managing-devices-linux +[2]: https://linux.cn/article-8099-1.html From 96751c6d24454a2c1cd777ab99ab6b85c77acda9 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 11:37:01 +0800 Subject: [PATCH 415/736] PUB:20180926 An introduction to swap space on Linux systems.md @heguangzhi https://linux.cn/article-10114-1.html --- .../20180926 An introduction to swap space on Linux systems.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180926 An introduction to swap space on Linux systems.md (100%) diff --git a/translated/tech/20180926 An introduction to swap space on Linux systems.md b/published/20180926 An introduction to swap space on Linux systems.md similarity index 100% rename from translated/tech/20180926 An introduction 
to swap space on Linux systems.md rename to published/20180926 An introduction to swap space on Linux systems.md From 44e2a01c280f4aecdf87a0de5502e2a1fa811aed Mon Sep 17 00:00:00 2001 From: hopefully2333 <787016457@qq.com> Date: Sun, 14 Oct 2018 11:59:31 +0800 Subject: [PATCH 416/736] translated by hopefully2333 translated by hopefully2333 --- ...8 Play Windows games on Fedora with Steam Play and Proton.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md index 22b4cc8558..459ada125e 100644 --- a/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md +++ b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md @@ -1,3 +1,5 @@ +translated by hopefully2333 + Play Windows games on Fedora with Steam Play and Proton ====== From 09dcd65e41413c55c8d4717cb738827bd75279ba Mon Sep 17 00:00:00 2001 From: pityonline Date: Sat, 13 Oct 2018 22:50:43 +0800 Subject: [PATCH 417/736] =?UTF-8?q?PRF:=20#10644=20=E8=B0=83=E6=95=B4?= =?UTF-8?q?=E7=A9=BA=E7=99=BD?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...urce Code In Many Programming Languages.md | 34 ++++--------------- 1 file changed, 7 insertions(+), 27 deletions(-) diff --git a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md index 4ef15514e5..663b66fe9c 100644 --- a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md +++ b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -9,15 +9,13 @@ Cloc – 计算不同编程语言源代码的行数 Cloc 有很多优势: - * 容易安装和实用,不需要额外的依赖项。 - * 便携式 - * 支持多种的结果格式导出,包括:纯文本、SQL、JSON、XML、YAML、CSV - * 可以计算 git 的提交数 - * 
可递归计算文件夹内的代码行数 - * 可计算压缩后的文件,如:tar、zip、Java ear - * 开源跨平台部署 - - +* 容易安装和实用,不需要额外的依赖项。 +* 便携式 +* 支持多种的结果格式导出,包括:纯文本、SQL、JSON、XML、YAML、CSV +* 可以计算 git 的提交数 +* 可递归计算文件夹内的代码行数 +* 可计算压缩后的文件,如:tar、zip、Java ear +* 开源跨平台部署 ### 安装 @@ -27,42 +25,36 @@ Arch Linux: ``` $ sudo pacman -S cloc - ``` Debian, Ubuntu: ``` $ sudo apt-get install cloc - ``` CentOS, Red Hat, Scientific Linux: ``` $ sudo yum install cloc - ``` Fedora: ``` $ sudo dnf install cloc - ``` FreeBSD: ``` $ sudo pkg install cloc - ``` 当然你也可以使用第三方的包管理器比如[**NPM**][2]。 ``` $ npm install -g cloc - ``` ### 使用实例 @@ -78,14 +70,12 @@ int main() printf("Hello, World!"); return 0; } - ``` 想要计算行数,只需要简单运行: ``` $ cloc hello.c - ``` 输出: @@ -106,7 +96,6 @@ $ cloc hello.c ``` $ cloc file.tar.gz - ``` 输出: @@ -117,12 +106,10 @@ $ cloc file.tar.gz 除了源代码文件,Cloc 还能递归的计算各个目录及其子目录下的文件、压缩包、甚至 git 中的 commit 数目等。 - **文件夹中使用的例子:** ``` $ cloc dir/ - ``` ![][4] @@ -131,7 +118,6 @@ $ cloc dir/ ``` $ cloc dir/cloc/tests - ``` ![][5] @@ -140,7 +126,6 @@ $ cloc dir/cloc/tests ``` $ cloc archive.zip - ``` ![][6] @@ -153,7 +138,6 @@ $ git clone https://github.com/AlDanial/cloc.git $ cd cloc $ cloc 157d706 - ``` ![][7] @@ -162,20 +146,16 @@ $ cloc 157d706 ``` $ cloc --show-lang - ``` 当然,help 能告诉你更多关于 Cloc 的使用帮助。 ``` $ cloc --help - ``` 开始使用吧! 
- - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-programming-languages/ From 48c72ff1a8bee071219fad386765d092082c47e4 Mon Sep 17 00:00:00 2001 From: pityonline Date: Sat, 13 Oct 2018 22:55:57 +0800 Subject: [PATCH 418/736] =?UTF-8?q?PRF:=20#10644=20=E8=B0=83=E6=95=B4?= =?UTF-8?q?=E6=A0=87=E7=82=B9=E7=AC=A6=E5=8F=B7=E4=BD=BF=E7=94=A8?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...urce Code In Many Programming Languages.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md index 663b66fe9c..4c8fec79f7 100644 --- a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md +++ b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -1,9 +1,9 @@ -Cloc – 计算不同编程语言源代码的行数 +Cloc –– 计算不同编程语言源代码的行数 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png) -作为一个开发人员,你可能需要不时的想你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。在这种情况下,你就需要用到我所知的这么几个程序,其中一个是[**Ohcount**][1]。今天,我遇到了另一个程序, **"Cloc"**。通过使用 Cloc,你可以很容易的计算出多种语言的源代码行数。它还可以计算空行数、代码行数、实际占用的行数,并通过整齐的表格进行结果输出。Cloc 是免费的、开源的、跨平台程序,使用的 **Perl** 进行开发。 +作为一个开发人员,你可能需要不时的想你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。在这种情况下,你就需要用到我所知的这么几个程序,其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**Cloc**。通过使用 Cloc,你可以很容易的计算出多种语言的源代码行数。它还可以计算空行数、代码行数、实际占用的行数,并通过整齐的表格进行结果输出。Cloc 是免费的、开源的、跨平台程序,使用 **Perl** 进行开发。 ### 特点 @@ -19,7 +19,7 @@ Cloc 有很多优势: ### 安装 -Cloc 的安装包在大多数的 *nix 操作系统的默认软件库内,所以你只需要使用默认的包管理器安装如下这样。 +Cloc 的安装包在大多数的 \*nix 操作系统的默认软件库内,所以你只需要使用默认的包管理器安装如下这样。 Arch Linux: @@ -27,13 +27,13 @@ Arch Linux: $ sudo pacman -S cloc ``` -Debian, Ubuntu: +Debian/Ubuntu: ``` $ 
sudo apt-get install cloc ``` -CentOS, Red Hat, Scientific Linux: +CentOS/Red Hat/Scientific Linux: ``` $ sudo yum install cloc @@ -51,7 +51,7 @@ FreeBSD: $ sudo pkg install cloc ``` -当然你也可以使用第三方的包管理器比如[**NPM**][2]。 +当然你也可以使用第三方的包管理器比如 [**NPM**][2]。 ``` $ npm install -g cloc @@ -66,9 +66,9 @@ $ cat hello.c #include int main() { - // printf() displays the string inside quotation - printf("Hello, World!"); - return 0; + // printf() displays the string inside quotation + printf("Hello, World!"); + return 0; } ``` @@ -102,7 +102,7 @@ $ cloc file.tar.gz ![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-1.png) -根据上述输出,手动查找准确的代码计数非常困难。但是,Cloc以易读的表格格式显示结果。你还可以在最后查看每个部分的总计,这在分析程序的源代码时非常方便。 +根据上述输出,手动查找准确的代码计数非常困难。但是,Cloc 以易读的表格格式显示结果。你还可以在最后查看每个部分的总计,这在分析程序的源代码时非常方便。 除了源代码文件,Cloc 还能递归的计算各个目录及其子目录下的文件、压缩包、甚至 git 中的 commit 数目等。 @@ -114,7 +114,7 @@ $ cloc dir/ ![][4] -**子文件夹中使用的例子:** +**子文件夹中使用的例子**: ``` $ cloc dir/cloc/tests @@ -122,7 +122,7 @@ $ cloc dir/cloc/tests ![][5] -**计算一个压缩包中源代码的行数:** +**计算一个压缩包中源代码的行数**: ``` $ cloc archive.zip @@ -130,7 +130,7 @@ $ cloc archive.zip ![][6] -**你还可以计算一个 git 项目:** +**你还可以计算一个 git 项目**: ``` $ git clone https://github.com/AlDanial/cloc.git @@ -142,7 +142,7 @@ $ cloc 157d706 ![][7] -**使用下面的命令,查看 Cloc 支持的语言类型:** +**使用下面的命令,查看 Cloc 支持的语言类型**: ``` $ cloc --show-lang From 1e31984cf2b33c6447bc83f177188dce154b5102 Mon Sep 17 00:00:00 2001 From: pityonline Date: Sat, 13 Oct 2018 22:56:31 +0800 Subject: [PATCH 419/736] =?UTF-8?q?PRF:=20#10644=20=E4=BF=AE=E6=AD=A3?= =?UTF-8?q?=E8=AF=91=E8=80=85=E4=BF=A1=E6=81=AF=EF=BC=8C=E6=B7=BB=E5=8A=A0?= =?UTF-8?q?=E6=A0=A1=E5=AF=B9=E8=80=85=E4=BF=A1=E6=81=AF?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
The Lines Of Source Code In Many Programming Languages.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md index 4c8fec79f7..6c321f44cc 100644 --- a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md +++ b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -162,8 +162,8 @@ via: https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-progr 作者:[SK][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/littleji) -校对:[校对者ID](https://github.com/校对者ID) +译者:[littleji](https://github.com/littleji) +校对:[pityonline](https://github.com/pityonline) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1b2cde45b1cde2313ba36b84e9bee924a7bd58e0 Mon Sep 17 00:00:00 2001 From: pityonline Date: Sat, 13 Oct 2018 23:03:35 +0800 Subject: [PATCH 420/736] =?UTF-8?q?PRF:=20#10644=20=E5=88=A0=E9=99=A4?= =?UTF-8?q?=E5=A4=9A=E4=BD=99=E7=9A=84=E9=93=BE=E6=8E=A5=E5=B9=B6=E8=B0=83?= =?UTF-8?q?=E6=95=B4=E9=93=BE=E6=8E=A5=20ID?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Source Code In Many Programming Languages.md | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md index 6c321f44cc..019c1851fd 100644 --- a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md +++ b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -112,7 +112,7 @@ $ cloc file.tar.gz $ cloc dir/ ``` -![][4] 
+![][3] **子文件夹中使用的例子**: @@ -120,7 +120,7 @@ $ cloc dir/ $ cloc dir/cloc/tests ``` -![][5] +![][4] **计算一个压缩包中源代码的行数**: @@ -128,7 +128,7 @@ $ cloc dir/cloc/tests $ cloc archive.zip ``` -![][6] +![][5] **你还可以计算一个 git 项目**: @@ -140,7 +140,7 @@ $ cd cloc $ cloc 157d706 ``` -![][7] +![][6] **使用下面的命令,查看 Cloc 支持的语言类型**: @@ -171,8 +171,7 @@ via: https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-progr [b]: https://github.com/lujun9972 [1]: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/ [2]: https://www.ostechnix.com/install-node-js-linux/ -[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-2-1.png -[5]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-4.png -[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-3.png -[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-5.png +[3]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-2-1.png +[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-4.png +[5]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-3.png +[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-5.png From 489bfd4de754d899880b6905dc00e02902c42ea5 Mon Sep 17 00:00:00 2001 From: pityonline Date: Sun, 14 Oct 2018 16:32:42 +0800 Subject: [PATCH 421/736] =?UTF-8?q?PRF:=20#10644=20=E5=AE=8C=E6=88=90?= =?UTF-8?q?=E6=A0=A1=E5=AF=B9=E8=AF=91=E6=96=87=E5=86=85=E5=AE=B9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...urce Code In Many Programming Languages.md | 46 +++++++++---------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md index 019c1851fd..d90663cd76 100644 --- a/translated/tech/20181010 Cloc - Count The 
Lines Of Source Code In Many Programming Languages.md +++ b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -1,25 +1,25 @@ -Cloc –– 计算不同编程语言源代码的行数 +cloc –– 计算不同编程语言源代码的行数 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png) -作为一个开发人员,你可能需要不时的想你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。在这种情况下,你就需要用到我所知的这么几个程序,其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**Cloc**。通过使用 Cloc,你可以很容易的计算出多种语言的源代码行数。它还可以计算空行数、代码行数、实际占用的行数,并通过整齐的表格进行结果输出。Cloc 是免费的、开源的、跨平台程序,使用 **Perl** 进行开发。 +作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。这时,你就需要用到一些代码统计的工具,我知道其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**cloc**。你可以用 cloc 很容易地统计多种语言的源代码行数。它还可以计算空行数、代码行数、实际代码的行数,并通过整齐的表格进行结果输出。cloc 是免费的、开源的跨平台程序,使用 **Perl** 进行开发。 ### 特点 -Cloc 有很多优势: +cloc 有很多优势: -* 容易安装和实用,不需要额外的依赖项。 -* 便携式 +* 安装方便而且易用,不需要额外的依赖项 +* 可移植 * 支持多种的结果格式导出,包括:纯文本、SQL、JSON、XML、YAML、CSV * 可以计算 git 的提交数 * 可递归计算文件夹内的代码行数 -* 可计算压缩后的文件,如:tar、zip、Java ear -* 开源跨平台部署 +* 可计算压缩后的文件,如:tar、zip、Java 的 .ear 等类型 +* 开源,跨平台 ### 安装 -Cloc 的安装包在大多数的 \*nix 操作系统的默认软件库内,所以你只需要使用默认的包管理器安装如下这样。 +cloc 的安装包在大多数的类 Unix 操作系统的默认软件库内,所以你只需要使用默认的包管理器安装即可。 Arch Linux: @@ -51,13 +51,13 @@ FreeBSD: $ sudo pkg install cloc ``` -当然你也可以使用第三方的包管理器比如 [**NPM**][2]。 +当然你也可以使用第三方的包管理器,比如 [**NPM**][2]。 ``` $ npm install -g cloc ``` -### 使用实例 +### 统计多种语言代码数据的使用举例 首先来几个简单的例子,比如下面在我目前工作目录中的的 C 代码。 @@ -82,17 +82,17 @@ $ cloc hello.c ![](https://www.ostechnix.com/wp-content/uploads/2018/10/Hello-World-Program.png) -第一列是被分析文件的语言类型,上面我们可以看到该分析的文件语言类型是 **C**。 +第一列是被分析文件的编程语言,上面我们可以看到这个文件是用 C 语言编写的。 -第二列显示的是该种语言类型有多少文件,图中说明只有一个。 +第二列显示的是该种语言有多少文件,图中说明只有一个。 -第三列显示空行的个数,图中显示无。 +第三列显示空行的数量,图中显示是 0 行。 第四列显示注释的行数。 -第五列显示该文件中的总共的行数。 +第五列显示该文件中实际的代码总行数。 -这是一个有只有6行代码的源文件,我们看到算的还算准确,那么如果用来算一个行数较多的源文件,会发生什么呢? +这是一个有只有 6 行代码的源文件,我们看到统计的还算准确,那么如果用来统计一个行数较多的源文件呢? 
``` $ cloc file.tar.gz @@ -102,11 +102,11 @@ $ cloc file.tar.gz ![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-1.png) -根据上述输出,手动查找准确的代码计数非常困难。但是,Cloc 以易读的表格格式显示结果。你还可以在最后查看每个部分的总计,这在分析程序的源代码时非常方便。 +上述输出结果如果手动统计准确的代码行数非常困难,但是 cloc 只需要几秒,而且以易读的表格格式显示结果。你还可以在最后查看每个部分的总计,这在分析程序的源代码时非常方便。 -除了源代码文件,Cloc 还能递归的计算各个目录及其子目录下的文件、压缩包、甚至 git 中的 commit 数目等。 +除了源代码文件,cloc 还能递归计算各个目录及其子目录下的文件、压缩包、甚至 git commit 数目等。 -**文件夹中使用的例子:** +文件夹中使用的例子: ``` $ cloc dir/ @@ -114,7 +114,7 @@ $ cloc dir/ ![][3] -**子文件夹中使用的例子**: +子文件夹中使用的例子*: ``` $ cloc dir/cloc/tests @@ -122,7 +122,7 @@ $ cloc dir/cloc/tests ![][4] -**计算一个压缩包中源代码的行数**: +计算一个压缩包中源代码的行数: ``` $ cloc archive.zip @@ -130,7 +130,7 @@ $ cloc archive.zip ![][5] -**你还可以计算一个 git 项目**: +你还可以计算一个 git 项目,也可以像下面这样针对某次提交时的状态统计: ``` $ git clone https://github.com/AlDanial/cloc.git @@ -142,13 +142,13 @@ $ cloc 157d706 ![][6] -**使用下面的命令,查看 Cloc 支持的语言类型**: +cloc 可以自动识别一些语言,使用下面的命令查看 cloc 支持的语言: ``` $ cloc --show-lang ``` -当然,help 能告诉你更多关于 Cloc 的使用帮助。 +更新信息请查阅 cloc 的使用帮助。 ``` $ cloc --help From 272d89667925231e6d3b380dd2748d0227765371 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Sun, 14 Oct 2018 16:39:31 +0800 Subject: [PATCH 422/736] Update 20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md --- ...ds To Shutdown And Reboot The Linux System From Terminal.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md b/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md index 15230ecd0b..c119f69ebf 100644 --- a/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md +++ b/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md @@ -1,3 +1,6 @@ +translating---cyleft +==== + 6 Commands To Shutdown And Reboot The Linux System From Terminal ====== Linux administrator performing many 
tasks in their routine work. The system Shutdown and Reboot task also included in it. From daf2c1c574afa6eb9816d2c74fa5162eeca829c5 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sun, 14 Oct 2018 19:05:50 +0800 Subject: [PATCH 423/736] [Translating] 20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md (#10675) Signed-off-by: Chang Liu --- ...o Lock The Keyboard And Mouse, But Not The Screen In Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md index d3c729f1d0..d671a35457 100644 --- a/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md +++ b/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md @@ -1,3 +1,4 @@ +FSSlc Translating How To Lock The Keyboard And Mouse, But Not The Screen In Linux ====== From d801765fb6ce692dc683edb652422b2711b42f17 Mon Sep 17 00:00:00 2001 From: HankChow <280630620@qq.com> Date: Sun, 14 Oct 2018 20:03:27 +0800 Subject: [PATCH 424/736] hankchow translating --- sources/tech/20180715 Why is Python so slow.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180715 Why is Python so slow.md b/sources/tech/20180715 Why is Python so slow.md index 931d32a4b2..5c39a528a1 100644 --- a/sources/tech/20180715 Why is Python so slow.md +++ b/sources/tech/20180715 Why is Python so slow.md @@ -1,3 +1,5 @@ +HankChow translating + Why is Python so slow? 
============================================================ From 117cca20cee43d8cb431e62bf8987f924b25ec08 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 21:23:17 +0800 Subject: [PATCH 425/736] PRF:20180926 How to use the Scikit-learn Python library for data science projects.md @Flowsnow --- ...ython library for data science projects.md | 125 +++++++++--------- 1 file changed, 63 insertions(+), 62 deletions(-) diff --git a/translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md b/translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md index 6f94cb8327..b7ebe9a6bd 100644 --- a/translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md +++ b/translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md @@ -1,90 +1,92 @@ -如何将Scikit-learn Python库用于数据科学项目 +如何将 Scikit-learn Python 库用于数据科学项目 ====== +> 灵活多样的 Python 库为数据分析和数据挖掘提供了强力的机器学习工具。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) -Scikit-learn Python库最初于2007年发布,从头到尾都通常用于解决机器学习和数据科学问题。 多功能库提供整洁,一致,高效的API和全面的在线文档。 +Scikit-learn Python 库最初于 2007 年发布,通常用于解决各种方面的机器学习和数据科学问题。这个多种功能的库提供了整洁、一致、高效的 API 和全面的在线文档。 -### 什么是Scikit-learn? +### 什么是 Scikit-learn? -[Scikit-learn][1]是一个开源Python库,拥有强大的数据分析和数据挖掘工具。 在BSD许可下可用,并建立在以下机器学习库上: +[Scikit-learn][1] 是一个开源 Python 库,拥有强大的数据分析和数据挖掘工具。 在 BSD 许可下可用,并建立在以下机器学习库上: -- **NumPy**,一个用于操作多维数组和矩阵的库。 它还具有广泛的数学函数汇集,可用于执行各种计算。 -- **SciPy**,一个由各种库组成的生态系统,用于完成技术计算任务。 -- **Matplotlib**,一个用于绘制各种图表和图形的库。 +- `NumPy`,一个用于操作多维数组和矩阵的库。它还具有广泛的数学函数汇集,可用于执行各种计算。 +- `SciPy`,一个由各种库组成的生态系统,用于完成技术计算任务。 +- `Matplotlib`,一个用于绘制各种图表和图形的库。 -Scikit-learn提供了广泛的内置算法,可以充分用于数据科学项目。 +Scikit-learn 提供了广泛的内置算法,可以充分用于数据科学项目。 -以下是使用Scikit-learn库的主要方法。 +以下是使用 Scikit-learn 库的主要方法。 -#### 1. 
分类 +#### 1、分类 -[分类][2]工具识别与提供的数据相关联的类别。 例如,它们可用于将电子邮件分类为垃圾邮件或非垃圾邮件。 +[分类][2]工具识别与提供的数据相关联的类别。例如,它们可用于将电子邮件分类为垃圾邮件或非垃圾邮件。 -Scikit-learn中的分类算法包括: +Scikit-learn 中的分类算法包括: -- 支持向量机(SVM) -- 最邻近 -- 随机森林 +- 支持向量机Support vector machines(SVM) +- 最邻近Nearest neighbors +- 随机森林Random forest -#### 2. 回归 +#### 2、回归 -回归涉及到创建一个模型去试图理解输入和输出数据之间的关系。 例如,回归工具可用于了解股票价格的行为。 +回归涉及到创建一个模型去试图理解输入和输出数据之间的关系。例如,回归工具可用于理解股票价格的行为。 回归算法包括: -- SVM -- 岭回归Ridge regression -- Lasso(LCTT译者注:Lasso 即 least absolute shrinkage and selection operator,又译最小绝对值收敛和选择算子、套索算法) +- 支持向量机Support vector machines(SVM) +- 岭回归Ridge regression +- Lasso(LCTT 译注:Lasso 即 least absolute shrinkage and selection operator,又译为最小绝对值收敛和选择算子、套索算法) -#### 3. 聚类 +#### 3、聚类 -Scikit-learn聚类工具用于自动将具有相同特征的数据分组。 例如,可以根据客户数据的地点对客户数据进行细分。 +Scikit-learn 聚类工具用于自动将具有相同特征的数据分组。 例如,可以根据客户数据的地点对客户数据进行细分。 聚类算法包括: - K-means -- 谱聚类Spectral clustering +- 谱聚类Spectral clustering - Mean-shift -#### 4. 降维 +#### 4、降维 -降维降低了用于分析的随机变量的数量。 例如,为了提高可视化效率,可能不会考虑外围数据。 +降维降低了用于分析的随机变量的数量。例如,为了提高可视化效率,可能不会考虑外围数据。 降维算法包括: -- 主成分分析Principal component analysis(PCA) -- 功能选择Feature selection -- 非负矩阵分解Non-negative matrix factorization +- 主成分分析Principal component analysis(PCA) +- 功能选择Feature selection +- 非负矩阵分解Non-negative matrix factorization -#### 5. 模型选择 +#### 5、模型选择 -模型选择算法提供了用于比较,验证和选择要在数据科学项目中使用的最佳参数和模型的工具。 +模型选择算法提供了用于比较、验证和选择要在数据科学项目中使用的最佳参数和模型的工具。 通过参数调整能够增强精度的模型选择模块包括: -- 网格搜索Grid search -- 交叉验证Cross-validation -- 指标Metrics +- 网格搜索Grid search +- 交叉验证Cross-validation +- 指标Metrics -#### 6. 
预处理 +#### 6、预处理 -Scikit-learn预处理工具在数据分析期间的特征提取和规范化中非常重要。 例如,您可以使用这些工具转换输入数据(如文本)并在分析中应用其特征。 +Scikit-learn 预处理工具在数据分析期间的特征提取和规范化中非常重要。 例如,您可以使用这些工具转换输入数据(如文本)并在分析中应用其特征。 预处理模块包括: - 预处理 - 特征提取 -### Scikit-learn库示例 +### Scikit-learn 库示例 -让我们用一个简单的例子来说明如何在数据科学项目中使用Scikit-learn库。 +让我们用一个简单的例子来说明如何在数据科学项目中使用 Scikit-learn 库。 -我们将使用[鸢尾花花卉数据集][3],该数据集包含在Scikit-learn库中。 鸢尾花数据集包含有关三种花种的150个细节,三种花种分别为: +我们将使用[鸢尾花花卉数据集][3],该数据集包含在 Scikit-learn 库中。 鸢尾花数据集包含有关三种花种的 150 个细节,三种花种分别为: -- Setosa-标记为0 -- Versicolor-标记为1 -- Virginica-标记为2 +- Setosa:标记为 0 +- Versicolor:标记为 1 +- Virginica:标记为 2 数据集包括每种花种的以下特征(以厘米为单位): @@ -93,24 +95,24 @@ Scikit-learn预处理工具在数据分析期间的特征提取和规范化中 - 花瓣长度 - 花瓣宽度 -#### 第1步:导入库 +#### 第 1 步:导入库 -由于Iris数据集包含在Scikit-learn数据科学库中,我们可以将其加载到我们的工作区中,如下所示: +由于鸢尾花花卉数据集包含在 Scikit-learn 数据科学库中,我们可以将其加载到我们的工作区中,如下所示: ``` from sklearn import datasets iris = datasets.load_iris() ``` -这些命令从**sklearn**导入数据集**datasets**模块,然后使用**datasets**中的**load_iris()**方法将数据包含在工作空间中。 +这些命令从 `sklearn` 导入数据集 `datasets` 模块,然后使用 `datasets` 中的 `load_iris()` 方法将数据包含在工作空间中。 -#### 第2步:获取数据集特征 +#### 第 2 步:获取数据集特征 -数据集**datasets**模块包含几种方法,使您更容易熟悉处理数据。 +数据集 `datasets` 模块包含几种方法,使您更容易熟悉处理数据。 -在Scikit-learn中,数据集指的是类似字典的对象,其中包含有关数据的所有详细信息。 使用**.data**键存储数据,该数据列是一个数组列表。 +在 Scikit-learn 中,数据集指的是类似字典的对象,其中包含有关数据的所有详细信息。 使用 `.data` 键存储数据,该数据列是一个数组列表。 -例如,我们可以利用**iris.data**输出有关Iris花卉数据集的信息。 +例如,我们可以利用 `iris.data` 输出有关鸢尾花花卉数据集的信息。 ``` print(iris.data) @@ -139,7 +141,7 @@ print(iris.data)  [5.1 3.5 1.4 0.3] ``` -我们还使用**iris.target**向我们提供有关花朵不同标签的信息。 +我们还使用 `iris.target` 向我们提供有关花朵不同标签的信息。 ``` print(iris.target) @@ -153,22 +155,21 @@ print(iris.target)  1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2  2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2  2 2] - ``` -如果我们使用**iris.target_names**,我们将输出数据集中找到的标签名称的数组。 +如果我们使用 `iris.target_names`,我们将输出数据集中找到的标签名称的数组。 ``` print(iris.target_names) ``` -以下是运行Python代码后的结果: +以下是运行 Python 代码后的结果: ``` ['setosa' 'versicolor' 'virginica'] 
``` -#### 第3步:可视化数据集 +#### 第 3 步:可视化数据集 我们可以使用[箱形图][4]来生成鸢尾花数据集的视觉描绘。 箱形图说明了数据如何通过四分位数在平面上分布的。 @@ -188,16 +189,16 @@ sns.set(rc={'figure.figsize':(2,15)}) 在横轴上: - * 0是萼片长度 - * 1是萼片宽度 - * 2是花瓣长度 - * 3是花瓣宽度 + * 0 是萼片长度 + * 1 是萼片宽度 + * 2 是花瓣长度 + * 3 是花瓣宽度 垂直轴的尺寸以厘米为单位。 ### 总结 -以下是这个简单的Scikit-learn数据科学教程的完整代码。 +以下是这个简单的 Scikit-learn 数据科学教程的完整代码。 ``` from sklearn import datasets @@ -212,9 +213,9 @@ sns.boxplot(data = box_data,width=0.5,fliersize=5) sns.set(rc={'figure.figsize':(2,15)}) ``` -Scikit-learn是一个多功能的Python库,可用于高效完成数据科学项目。 +Scikit-learn 是一个多功能的 Python 库,可用于高效完成数据科学项目。 -如果您想了解更多信息,请查看[LiveEdu][5]上的教程,例如Andrey Bulezyuk关于使用Scikit-learn库创建[机器学习应用程序][6]的视频。 +如果您想了解更多信息,请查看 [LiveEdu][5] 上的教程,例如 Andrey Bulezyuk 关于使用 Scikit-learn 库创建[机器学习应用程序][6]的视频。 有什么评价或者疑问吗? 欢迎在下面分享。 @@ -225,7 +226,7 @@ via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-proje 作者:[Dr.Michael J.Garbade][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -235,4 +236,4 @@ via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-proje [3]: https://en.wikipedia.org/wiki/Iris_flower_data_set [4]: https://en.wikipedia.org/wiki/Box_plot [5]: https://www.liveedu.tv/guides/data-science/ -[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/ \ No newline at end of file +[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/ From f71ca3eedc56002da539782e725201d5a70122c7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 21:23:38 +0800 Subject: [PATCH 426/736] PUB:20180926 How to use the Scikit-learn Python library for data science projects.md @Flowsnow https://linux.cn/article-10115-1.html 
--- ...e the Scikit-learn Python library for data science projects.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180926 How to use the Scikit-learn Python library for data science projects.md (100%) diff --git a/translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md b/published/20180926 How to use the Scikit-learn Python library for data science projects.md similarity index 100% rename from translated/tech/20180926 How to use the Scikit-learn Python library for data science projects.md rename to published/20180926 How to use the Scikit-learn Python library for data science projects.md From cb5592d7e40fa685569d176893f515b5fa24b20f Mon Sep 17 00:00:00 2001 From: ypingcn <1344632698@qq.com> Date: Sun, 14 Oct 2018 21:26:29 +0800 Subject: [PATCH 427/736] Claim: 20180921 Control your data with Syncthing- An open source synchronization tool.md --- ... data with Syncthing- An open source synchronization tool.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md b/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md index 32be152b4c..97aa36801b 100644 --- a/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md +++ b/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md @@ -1,3 +1,5 @@ +translating by ypingcn + Control your data with Syncthing: An open source synchronization tool ====== Decide how to store and share your personal information. 
From b0a8edac3fd6f676a6cf2631cd9457f1a77afd40 Mon Sep 17 00:00:00 2001 From: hopefully2333 <787016457@qq.com> Date: Sun, 14 Oct 2018 21:35:46 +0800 Subject: [PATCH 428/736] translating by hopefully2333 translating by hopefully2333 --- ...8 Play Windows games on Fedora with Steam Play and Proton.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md index 459ada125e..16930083fd 100644 --- a/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md +++ b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md @@ -1,4 +1,4 @@ -translated by hopefully2333 +translating by hopefully2333 Play Windows games on Fedora with Steam Play and Proton ====== From a6c9bc7aa18ffb1dd207c32d0d5b0e0b513164ff Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Sun, 14 Oct 2018 22:06:24 +0800 Subject: [PATCH 429/736] Update 20171204 Improve your Bash scripts with Argbash.md --- .../tech/20171204 Improve your Bash scripts with Argbash.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20171204 Improve your Bash scripts with Argbash.md b/sources/tech/20171204 Improve your Bash scripts with Argbash.md index 826512867e..8b9b565ec1 100644 --- a/sources/tech/20171204 Improve your Bash scripts with Argbash.md +++ b/sources/tech/20171204 Improve your Bash scripts with Argbash.md @@ -1,3 +1,6 @@ +Translating by MjSeven + + # [Improve your Bash scripts with Argbash][1] ![](https://fedoramagazine.org/wp-content/uploads/2017/11/argbash-1-945x400.png) From 1e43fdcb15bdac3554f6cb980746c7a7ab94378c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 22:22:09 +0800 Subject: [PATCH 430/736] PRF:20180910 How To List An Available Package Groups In Linux.md @HankChow --- ...st An Available Package Groups In Linux.md 
| 50 +++++++++---------- 1 file changed, 23 insertions(+), 27 deletions(-) diff --git a/translated/tech/20180910 How To List An Available Package Groups In Linux.md b/translated/tech/20180910 How To List An Available Package Groups In Linux.md index b192e6c5f0..dc4ebafc9f 100644 --- a/translated/tech/20180910 How To List An Available Package Groups In Linux.md +++ b/translated/tech/20180910 How To List An Available Package Groups In Linux.md @@ -1,10 +1,11 @@ 如何在 Linux 中列出可用的软件包组 ====== + 我们知道,如果想要在 Linux 中安装软件包,可以使用软件包管理器来进行安装。由于系统管理员需要频繁用到软件包管理器,所以它是 Linux 当中的一个重要工具。 但是如果想一次性安装一个软件包组,在 Linux 中有可能吗?又如何通过命令去实现呢? -在 Linux 中确实可以用软件包管理器来达到这样的目的。很多软件包管理器都有这样的选项来实现这个功能,但就我所知,`apt` 或 `apt-get` 软件包管理器却并没有这个选项。因此对基于 Debian 的系统,需要使用的命令是 `tasksel`,而不是 `apt`或 `apt-get` 这样的官方软件包管理器。 +在 Linux 中确实可以用软件包管理器来达到这样的目的。很多软件包管理器都有这样的选项来实现这个功能,但就我所知,`apt` 或 `apt-get` 软件包管理器却并没有这个选项。因此对基于 Debian 的系统,需要使用的命令是 `tasksel`,而不是 `apt` 或 `apt-get` 这样的官方软件包管理器。 在 Linux 中安装软件包组有很多好处。对于 LAMP 来说,安装过程会包含多个软件包,但如果安装软件包组命令来安装,只安装一个包就可以了。 @@ -13,19 +14,20 @@ 软件包组是一组用于公共功能的软件包,包括系统工具、声音和视频。 安装软件包组的过程中,会获取到一系列的依赖包,从而大大节省了时间。 **推荐阅读:** -**(#)** [如何在 Linux 上按照大小列出已安装的软件包][1] -**(#)** [如何在 Linux 上查看/列出可用的软件包更新][2] -**(#)** [如何在 Linux 上查看软件包的安装/更新/升级/移除/卸载时间][3] -**(#)** [如何在 Linux 上查看一个软件包的详细信息][4] -**(#)** [如何查看一个软件包是否在你的 Linux 发行版上可用][5] -**(#)** [萌新指导:一个可视化的 Linux 包管理工具][6] -**(#)** [老手必会:命令行软件包管理器的用法][7] + +- [如何在 Linux 上按照大小列出已安装的软件包][1] +- [如何在 Linux 上查看/列出可用的软件包更新][2] +- [如何在 Linux 上查看软件包的安装/更新/升级/移除/卸载时间][3] +- [如何在 Linux 上查看一个软件包的详细信息][4] +- [如何查看一个软件包是否在你的 Linux 发行版上可用][5] +- [萌新指导:一个可视化的 Linux 包管理工具][6] +- [老手必会:命令行软件包管理器的用法][7] ### 如何在 CentOS/RHEL 系统上列出可用的软件包组 RHEL 和 CentOS 系统使用的是 RPM 软件包,因此可以使用 `yum` 软件包管理器来获取相关的软件包信息。 -`yum` 是 Yellowdog Updater, Modified 的缩写,它是一个用于基于 RPM 系统(例如 RHEL 和 CentOS)的,开源的命令行软件包管理工具。它是从分发库或其它第三方库中获取、安装、删除、查询和管理 RPM 包的主要工具。 +`yum` 是 “Yellowdog Updater, Modified” 的缩写,它是一个用于基于 RPM 系统(例如 RHEL 和 CentOS)的,开源的命令行软件包管理工具。它是从发行版仓库或其它第三方库中获取、安装、删除、查询和管理 RPM 包的主要工具。 **推荐阅读:** [使用 
yum 命令在 RHEL/CentOS 系统上管理软件包][8] @@ -69,10 +71,9 @@ Available Language Groups: . . Done - ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 Performance Tools 组相关联的软件包。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “Performance Tools” 组相关联的软件包。 ``` # yum groupinfo "Performance Tools" @@ -103,18 +104,17 @@ Group: Performance Tools tiobench tuned tuned-utils - ``` ### 如何在 Fedora 系统上列出可用的软件包组 Fedora 系统使用的是 DNF 软件包管理器,因此可以通过 DNF 软件包管理器来获取相关的信息。 -DNF 的含义是 Dandified yum。、DNF 软件包管理器是 YUM 软件包管理器的一个分支,它使用 hawkey/libsolv 库作为后端。从 Fedora 18 开始,Aleš Kozumplík 开始着手 DNF 的开发,直到在Fedora 22 开始加入到系统中。 +DNF 的含义是 “Dandified yum”。DNF 软件包管理器是 YUM 软件包管理器的一个分支,它使用 hawkey/libsolv 库作为后端。从 Fedora 18 开始,Aleš Kozumplík 开始着手 DNF 的开发,直到在 Fedora 22 开始加入到系统中。 `dnf` 命令可以在 Fedora 22 及更高版本上安装、更新、搜索和删除软件包, 它可以自动解决软件包的依赖关系并其顺利安装,不会产生问题。 -由于一些长期未被解决的问题的存在,YUM 被 DNF 逐渐取代了。而 Aleš Kozumplík 的 DNF 却并未对 yum 的这些问题作出修补,他认为这是技术上的难题,YUM 团队也从不接受这些更改。而且 YUM 的代码量有 5.6 万行,而 DNF 只有 2.9 万行。因此已经不需要沿着 YUM 的方向继续开发了,重新开一个分支才是更好的选择。 +YUM 被 DNF 取代是由于 YUM 中存在一些长期未被解决的问题。为什么 Aleš Kozumplík 没有对 yum 的这些问题作出修补呢,他认为补丁解决存在技术上的难题,而 YUM 团队也不会马上接受这些更改,还有一些重要的问题。而且 YUM 的代码量有 5.6 万行,而 DNF 只有 2.9 万行。因此已经不需要沿着 YUM 的方向继续开发了,重新开一个分支才是更好的选择。 **推荐阅读:** [在 Fedora 系统上使用 DNF 命令管理软件包][9] @@ -167,13 +167,11 @@ Available Groups: Hardware Support Sound and Video System Tools - ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 Editor 组相关联的软件包。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “Editor” 组相关联的软件包。 ``` - # dnf groupinfo Editors Last metadata expiration check: 0:04:57 ago on Sun 09 Sep 2018 07:10:36 PM IST. 
@@ -267,7 +265,7 @@ i | yast2_basis | 20150918-25.1 | @System | | yast2_install_wf | 20150918-25.1 | Main Repository (OSS) | ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 file_server 组相关联的软件包。另外 `zypper` 还允许用户使用不同的选项执行相同的操作。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “file_server” 组相关联的软件包。另外 `zypper` 还允许用户使用不同的选项执行相同的操作。 ``` # zypper info file_server @@ -346,7 +344,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -如果需要列出相关联的软件包,可以执行以下这个命令。 +如果需要列出相关联的软件包,也可以执行以下这个命令。 ``` # zypper info pattern file_server @@ -385,7 +383,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -如果需要列出相关联的软件包,可以执行以下这个命令。 +如果需要列出相关联的软件包,也可以执行以下这个命令。 ``` # zypper info -t pattern file_server @@ -431,7 +429,7 @@ Contents : [tasksel][11] 是 Debian/Ubuntu 系统上一个很方便的工具,只需要很少的操作就可以用它来安装好一组软件包。可以在 `/usr/share/tasksel` 目录下的 `.desc` 文件中安排软件包的安装任务。 -默认情况下,`tasksel` 工具是作为 Debian 系统的一部分安装的,但桌面版 Ubuntu 则没有自带 `tasksel`,类似软件包管理器中的元包(meta-packages)。 +默认情况下,`tasksel` 工具是作为 Debian 系统的一部分安装的,但桌面版 Ubuntu 则没有自带 `tasksel`,这个功能类似软件包管理器中的元包(meta-packages)。 `tasksel` 工具带有一个基于 zenity 的简单用户界面,例如命令行中的弹出图形对话框。 @@ -483,7 +481,7 @@ u openssh-server OpenSSH server u server Basic Ubuntu server ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 lamp-server 组相关联的软件包。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “lamp-server” 组相关联的软件包。 ``` # tasksel --task-desc "lamp-server" @@ -494,7 +492,7 @@ Selects a ready-made Linux/Apache/MySQL/PHP server. 
基于 Arch Linux 的系统使用的是 pacman 软件包管理器,因此可以通过 pacman 软件包管理器来获取相关的信息。 -pacman 是 package manager 的缩写。`pacman` 可以用于安装、构建、删除和管理 Arch Linux 软件包。`pacman` 使用 libalpm(Arch Linux Package Management 库,ALPM)作为后端来执行所有操作。 +pacman 是 “package manager” 的缩写。`pacman` 可以用于安装、构建、删除和管理 Arch Linux 软件包。`pacman` 使用 libalpm(Arch Linux Package Management 库,ALPM)作为后端来执行所有操作。 **推荐阅读:** [使用 pacman 在基于 Arch Linux 的系统上管理软件包][13] @@ -536,10 +534,9 @@ realtime sugar-fructose tesseract-data vim-plugins - ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 gnome 组相关联的软件包。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “gnome” 组相关联的软件包。 ``` # pacman -Sg gnome @@ -603,7 +600,6 @@ Interrupt signal received ``` # pacman -Sg gnome | wc -l 64 - ``` -------------------------------------------------------------------------------- @@ -613,7 +609,7 @@ via: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ 作者:[Prakash Subramanian][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8a1b2ab0d3243572c56f93ca73d78c18da5109ba Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 22:22:33 +0800 Subject: [PATCH 431/736] PUB:20180910 How To List An Available Package Groups In Linux.md @HankChow https://linux.cn/article-10116-1.html --- .../20180910 How To List An Available Package Groups In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180910 How To List An Available Package Groups In Linux.md (100%) diff --git a/translated/tech/20180910 How To List An Available Package Groups In Linux.md b/published/20180910 How To List An Available Package Groups In Linux.md similarity index 100% rename from translated/tech/20180910 How To List An Available Package Groups In Linux.md rename to published/20180910 How To List An Available Package Groups In 
Linux.md From 661180467ff873e45a18e6fee8413fc0d3ded1f3 Mon Sep 17 00:00:00 2001 From: jrg Date: Sun, 14 Oct 2018 22:32:50 +0800 Subject: [PATCH 432/736] Update 20181010 An introduction to using tcpdump at the Linux command line.md translating by jrg, 20181014 --- ...n introduction to using tcpdump at the Linux command line.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md index 6998661f23..b498d0ca43 100644 --- a/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md +++ b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md @@ -1,3 +1,5 @@ +[translation by jrg] + An introduction to using tcpdump at the Linux command line ====== From dcea3cbc4cc51265a019d046fcc7cc130a6c6fbb Mon Sep 17 00:00:00 2001 From: jrg Date: Sun, 14 Oct 2018 22:44:03 +0800 Subject: [PATCH 433/736] Update 20181001 16 iptables tips and tricks for sysadmins.md translating by jrg, 20181014 --- .../tech/20181001 16 iptables tips and tricks for sysadmins.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md index 9e07971c81..a9b3e77c5a 100644 --- a/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md +++ b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md @@ -1,3 +1,5 @@ +[translating by jrg, 20181014] + 16 iptables tips and tricks for sysadmins ====== Iptables provides powerful capabilities to control traffic coming in and out of your system. 
From 05f7f45efacbed1ba3599847dc1b770adaf1597a Mon Sep 17 00:00:00 2001 From: jrg Date: Sun, 14 Oct 2018 22:49:50 +0800 Subject: [PATCH 434/736] Update 20180928 Using Grails with jQuery and DataTables.md translating by jrg,20181014 --- .../tech/20180928 Using Grails with jQuery and DataTables.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180928 Using Grails with jQuery and DataTables.md b/sources/tech/20180928 Using Grails with jQuery and DataTables.md index 9a9ad08fb0..0f02fabe8a 100644 --- a/sources/tech/20180928 Using Grails with jQuery and DataTables.md +++ b/sources/tech/20180928 Using Grails with jQuery and DataTables.md @@ -1,3 +1,5 @@ +[translating by jrg 20181014] + Using Grails with jQuery and DataTables ====== From 73f8678eb53a8d8695acd03d0636363095c74d03 Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 15 Oct 2018 08:59:14 +0800 Subject: [PATCH 435/736] translated --- .../20181003 Introducing Swift on Fedora.md | 72 ------------------- .../20181003 Introducing Swift on Fedora.md | 70 ++++++++++++++++++ 2 files changed, 70 insertions(+), 72 deletions(-) delete mode 100644 sources/tech/20181003 Introducing Swift on Fedora.md create mode 100644 translated/tech/20181003 Introducing Swift on Fedora.md diff --git a/sources/tech/20181003 Introducing Swift on Fedora.md b/sources/tech/20181003 Introducing Swift on Fedora.md deleted file mode 100644 index 186117cd7c..0000000000 --- a/sources/tech/20181003 Introducing Swift on Fedora.md +++ /dev/null @@ -1,72 +0,0 @@ -translating---geekpi - -Introducing Swift on Fedora -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/09/swift-816x345.jpg) - -Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. It aims to be the best language for a variety of programming projects, ranging from systems programming to desktop applications and scaling up to cloud services. 
Read more about it and how to try it out in Fedora. - -### Safe, Fast, Expressive - -Like many modern programming languages, Swift was designed to be safer than C-based languages. For example, variables are always initialized before they can be used. Arrays and integers are checked for overflow. Memory is automatically managed. - -Swift puts intent right in the syntax. To declare a variable, use the var keyword. To declare a constant, use let. - -Swift also guarantees that objects can never be nil; in fact, trying to use an object known to be nil will cause a compile-time error. When using a nil value is appropriate, it supports a mechanism called **optionals**. An optional may contain nil, but is safely unwrapped using the **?** operator. - -Some additional features include: - - * Closures unified with function pointers - * Tuples and multiple return values - * Generics - * Fast and concise iteration over a range or collection - * Structs that support methods, extensions, and protocols - * Functional programming patterns, e.g., map and filter - * Powerful error handling built-in - * Advanced control flow with do, guard, defer, and repeat keywords - - - -### Try Swift out - -Swift is available in Fedora 28 under then package name **swift-lang**. Once installed, run swift and the REPL console starts up. - -``` -$ swift -Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance. - 1> let greeting="Hello world!" -greeting: String = "Hello world!" - 2> print(greeting) -Hello world! - 3> greeting = "Hello universe!" -error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant -greeting = "Hello universe!" -~~~~~~~~ ^ - - - 3> - -``` - -Swift has a growing community, and in particular, a [work group][1] dedicated to making it an efficient and effective server-side programming language. Be sure to visit [its home page][2] for more ways to get involved. - -Photo by [Uillian Vargas][3] on [Unsplash][4]. 
- - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/introducing-swift-fedora/ - -作者:[Link Dupont][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/linkdupont/ -[1]: https://swift.org/server/ -[2]: http://swift.org -[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/translated/tech/20181003 Introducing Swift on Fedora.md b/translated/tech/20181003 Introducing Swift on Fedora.md new file mode 100644 index 0000000000..ead00d327f --- /dev/null +++ b/translated/tech/20181003 Introducing Swift on Fedora.md @@ -0,0 +1,70 @@ +介绍 Fedora上的 Swift +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/swift-816x345.jpg) + +Swift 是一种使用现代方法构建安全性、性能和软件设计模式的通用编程语言。它旨在成为各种编程项目的最佳语言,从系统编程到桌面应用程序,以及扩展到云服务。阅读更多关于它的内容以及如何在 Fedora 中尝试它。 + +### 安全、快速、富有表现力 + +与许多现代编程语言一样,Swift 被设计为比基于 C 的语言更安全。例如,变量总是在可以使用之前初始化。检查数组和整数是否溢出。内存自动管理。 + +Swift 将意图放在语法中。要声明变量,请使用 var 关键字。要声明常量,请使用 let。 + +Swift 还保证对象永远不会是 nil。实际上,尝试使用已知为 nil 的对象将导致编译时错误。当使用 nil 值时,它支持一种称为 **optional** 的机制。optional 可能包含 nil,但使用 **?** 运算符可以安全地解包。 + +一些额外的功能包括: + + * 与函数指针统一的闭包 +  * 元组和多个返回值 +  * 泛型 +  * 对范围或集合进行快速而简洁的迭代 +  * 支持方法、扩展和协议的结构体 +  * 函数式编程模式,例如 map 和 filter +  * 内置强大的错误处理 +  * 拥有 do、guard、defer 和 repeat 关键字的高级控制流 + + + +### 尝试 Swift + +Swift 在 Fedora 28 中可用,包名为 **swift-lang**。安装完成后,运行 swift 并启动 REPL 控制台。 + +``` +$ swift +Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance. + 1> let greeting="Hello world!" +greeting: String = "Hello world!" + 2> print(greeting) +Hello world! + 3> greeting = "Hello universe!" 
+error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant
+greeting = "Hello universe!"
+~~~~~~~~ ^
+
+
+ 3>
+
+```
+
+Swift 有一个不断发展的社区,特别地,有一个[工作组][1]致力于使其成为一种高效且有力的服务器端编程语言。请访问[主页][2]了解更多参与方式。
+
+图片由 [Uillian Vargas][3] 发布在 [Unsplash][4] 上。
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/introducing-swift-fedora/
+
+作者:[Link Dupont][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/linkdupont/
+[1]: https://swift.org/server/
+[2]: http://swift.org
+[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
From 82a8537038c141c6300fcb04537339d50933a7b6 Mon Sep 17 00:00:00 2001
From: Chang Liu
Date: Mon, 15 Oct 2018 09:02:31 +0800
Subject: [PATCH 436/736] [Translated]
 20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md

Signed-off-by: Chang Liu
---
 ... And Mouse, But Not The Screen In Linux.md | 191 ------------------
And Mouse, But Not The Screen In Linux.md | 171 ++++++++++++++++ 2 files changed, 171 insertions(+), 191 deletions(-) delete mode 100644 sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md create mode 100644 translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md diff --git a/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md deleted file mode 100644 index d671a35457..0000000000 --- a/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md +++ /dev/null @@ -1,191 +0,0 @@ -FSSlc Translating -How To Lock The Keyboard And Mouse, But Not The Screen In Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2017/09/Lock-The-Keyboard-And-Mouse-720x340.jpg) - -My 4-years-old niece is a curious-kid. She loves “Avatar” movie very much. When the Avatar movie is on, she became so focused and her eyes are glued to the screen. But the problem is she often touches a key in the keyboard or move the mouse or click the mouse button while watching the movie. Sometimes, she accidentally close or pause the movie by pressing a key in the keyboard. So I was looking for a way to lock down both the keyboard and mouse, but not the screen. Luckily, I came across a perfect solution in Ubuntu forum. If you don’t want your cat or puppy walking on your keyboard or your kid messing up with the keyboard and mouse while you watching something important on the screen, I suggest you to try **“xtrlock”** utility. It is a simple, yet useful utility to lock the X display till the user enters their password at the keyboard. In this brief tutorial, I will show you how to lock the keyboard and mouse, but not the screen in Linux. This trick will work on all Linux operating systems. 
- -### Install xtrlock - -The xtrlock package is available in the default repositories of most Linux operating systems. So, you can install it using your distribution’s package manager. - -On **Arch Linux** and derivatives, run the following command to install it. -``` -$ sudo pacman -S xtrlock - -``` - -On **Fedora** : -``` -$ sudo dnf install xtrlock - -``` - -On **RHEL, CentOS** : -``` -$ sudo yum install xtrlock - -``` - -On **SUSE/openSUSE** : -``` -$ sudo zypper install xtrlock - -``` - -On **Debian, Ubuntu, Linux Mint** : -``` -$ sudo apt-get install xtrlock - -``` - -### Lock the Keyboard and Mouse, but not the Screen using xtrlock - -Once xtrlock installed, create a keyboard shortcut. You need this to lock the keyboard and mouse using the key combination of your choice. - -Create a new file called **lockkbmouse** in **/usr/local/bin**. -``` -$ sudo vi /usr/local/bin/lockkbmouse - -``` - -Add the following lines into it. -``` -#!/bin/bash -sleep 1 && xtrlock - -``` - -Save the file and close the file. - -Make it as executable using the following command: -``` -$ sudo chmod a+x /usr/local/bin/lockkbmouse - -``` - -Next, we need to create keyboard a shortcut. - -**In Arch Linux MATE desktop:** - -Go to **System - > Preferences -> Hardware -> keyboard Shortcuts**. - -Click **Add** to create a new shortcut. - -![][2] - -Enter the name for your shortcut and add the following line in the command box, and click **Apply** button. -``` -bash -c "sleep 1 && xtrlock" - -``` - -![][3] - -To assign the shortcut key, just select or double click on it and type the key combination of your choice. For example, I use **Alt+k**. - -![][4] - -To clear the key combination, press BACKSPACE key. Once you finished, close the Keyboard Settings window. - -**In Ubuntu GNOME DE:** - -Go to **System Settings - > Devices -> Keyboard**. Click the **+** symbol at the end. - -Enter the name for your shortcut and add the following line in the command box, and click **Add** button. 
-``` -bash -c "sleep 1 && xtrlock" - -``` - -![][5] - -Next, assign the shortcut key to the newly created shortcut. To do so, just select or double click on it and click on **“Set shortcut”** button. - -![][6] - -You will now see the following screen. - -![][7] - -Type the key combination of your choice. For example, I use **Alt+k**. - -![][8] - -To clear the key combination, press BACKSPACE key. The shortcut key has been assigned. Once you finished, close the Keyboard Settings window. - -From now on, whenever you press the keyboard shortcut key (ALT+k in our case), the mouse pointer will turn into a a padlock. Now, the keyboard and mouse have been locked, so you can freely watch the movies or whatever you want to. Even your kid or pet touches some keys on the keyboard or clicks a mouse button, they won’t work. - -Here is xtrclock in action. - -![][9] - -Do you see the a small lock button? It means that the keyboard and mouse have been locked. Even if you move the lock button, nothing will happen. The task in the background will keep running until you unlock your screen and manually close the running task. - -### Unlock keyboard and mouse - -To unlock the keyboard and mouse, simply type your password and hit “Enter”. You will not see the password as you type it. Just type the password anyway and hit ENTER key. The mouse and keyboard will start to work after you entered the correct password. If you entered an incorrect password, you will hear a bell sound. Press **ESC** key to clear the incorrect password and re-enter the correct password again. To remove one character of a partially typed password, press either **BACKSPACE** or **DELETE** keys. - -### What if I permanently get locked out of the screen? - -The xtrclock tool may not work on some DEs, for example GDM. It may permanently lock you out of the screen. Please test it in a virtual machine and then try it in your personal or official desktop if it really works. 
I tested this on Arch Linux MATE desktop and Ubuntu 18.04 GNOME desktop. It worked just fine. - -Just in case, you are locked out of the screen permanently, switch to the TTY (CTRL+ALT+F2) then run: -``` -$ sudo killall xtrlock - -``` - -Alternatively, you can use the **chvt** command to switch between TTY and X session. - -For example, to switch to TTY1, run: -``` -$ sudo chvt 1 - -``` - -To switch back to the X session again, type: -``` -$ sudo chvt 7 - -``` - -Different distros uses different key combinations to switch between TTYs. Please refer your distribution’s official website for more details. - -For more details about xtrlock, refer man pages. -``` -$ man xtrlock - -``` - -And, that’s all for now. Hope this helps. If you find our guides useful, please spend a moment to share them on your social, professional networks and support OSTechNix. - -**Resource:** - - * [**Ubuntu forum**][10] - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/Keyboard-Shortcuts_001.png -[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/Keyboard-Shortcuts_002.png -[4]:http://www.ostechnix.com/wp-content/uploads/2017/09/Keyboard-Shortcuts_003.png -[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/Add-xtrlock-shortcut.png -[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-1.png -[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-2.png -[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-3.png 
-[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/xtrclock-1.png -[10]:https://ubuntuforums.org/showthread.php?t=993800 diff --git a/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md new file mode 100644 index 0000000000..3a0a0592cc --- /dev/null +++ b/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md @@ -0,0 +1,171 @@ +如何在 Linux 下锁住键盘和鼠标而不锁屏 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2017/09/Lock-The-Keyboard-And-Mouse-720x340.jpg) + +我四岁的侄女是个好奇的孩子,她非常喜爱“阿凡达”电影,当阿凡达电影在播放时,她是如此的专注,好似眼睛粘在了屏幕上。但问题是当她观看电影时,她经常会碰到键盘上的某个键或者移动了鼠标,又或者是点击了鼠标的按钮。有时她非常意外地按了键盘上的某个键,从而将电影关闭或者暂停了。所以我就想找个方法来将键盘和鼠标都锁住,但屏幕不会被锁住。幸运的是,我在 Ubuntu 论坛上找到了一个完美的解决方法。假如在你正看着屏幕上的某些重要的事情时,你不想让你的小猫或者小狗在你的键盘上行走,或者让你的孩子在键盘上瞎搞一气,那我建议你试试 **xtrlock** 这个工具。它很简单但非常实用,你可以锁定屏幕的显示直到用户在键盘上输入自己设定的密码(译者注:就是用户自己的密码,例如用来打开屏保的那个密码,不需要单独设定)。在这篇简单的教程中,我将为你展示如何在 Linux 下锁住键盘和鼠标,而不锁掉屏幕。这个技巧几乎可以在所有的 Linux 操作系统中生效。 + +### 安装 xtrlock + +xtrlock 软件包在大多数 Linux 操作系统的默认软件仓库中都可以获取到。所以你可以使用你安装的发行版的包管理器来安装它。 + +在 **Arch Linux** 及其衍生发行版中,运行下面的命令来安装它: +``` +$ sudo pacman -S xtrlock +``` + +在 **Fedora** 上使用: +``` +$ sudo dnf install xtrlock +``` + +在 **RHEL, CentOS** 上使用: +``` +$ sudo yum install xtrlock +``` + +在 **SUSE/openSUSE** 上使用: +``` +$ sudo zypper install xtrlock +``` + +在 **Debian, Ubuntu, Linux Mint** 上使用: +``` +$ sudo apt-get install xtrlock +``` + +### 使用 xtrlock 锁住键盘和鼠标但不锁屏 + +安装好 xtrlock 后,你需要根据你的选择来创建一个快捷键,通过这个快捷键来锁住键盘和鼠标。 + +在 **/usr/local/bin** 目录下创建一个名为 **lockkbmouse** 的新文件: +``` +$ sudo vi /usr/local/bin/lockkbmouse +``` + +然后将下面的命令添加到这个文件中: +``` +#!/bin/bash +sleep 1 && xtrlock +``` +保存并关闭这个文件。 + +然后使用下面的命令来使得它可以被执行: +``` +$ sudo chmod a+x /usr/local/bin/lockkbmouse +``` + +接着,我们就需要创建快捷键了。 + +**在 Arch Linux MATE 桌面中** + +依次点击 **System -> Preferences -> Hardware -> keyboard Shortcuts** + +然后点击 **Add** 来创建快捷键。 + +![][2] + 
+首先键入你的这个快捷键的名称,然后将下面的命令填入命令框中,最后点击 **Apply** 按钮。
+```
+bash -c "sleep 1 && xtrlock"
+```
+
+![][3]
+
+为了能够给这个快捷键赋予快捷方式,需要选中它或者双击它然后输入你选定的快捷键组合,例如我使用 **Alt+k** 这组快捷键。
+
+![][4]
+
+如果要清除这个快捷键组合,按 `BACKSPACE` 键就可以了。完成后,关闭键盘设定窗口。
+
+**在 Ubuntu GNOME 桌面中**
+
+依次进入 **System Settings -> Devices -> Keyboard**,然后点击 **+** 这个符号。
+
+键入你快捷键的名称并将下面的命令加到命令框里面,然后点击 **Add** 按钮。
+```
+bash -c "sleep 1 && xtrlock"
+```
+
+![][5]
+
+接下来为这个新建的快捷键赋予快捷方式。我们只需要选择或者双击 **“Set shortcut”** 这个按钮就可以了。
+
+![][6]
+
+然后你将看到下面的一屏。
+
+![][7]
+
+输入你选定的快捷键组合,例如我使用 **Alt+k**。
+
+![][8]
+
+如果要清除这个快捷键组合,则可以按 `BACKSPACE` 这个键。这样快捷键便设定好了,完成这个后,关闭键盘设定窗口。
+
+从现在起,每当你输入刚才设定的快捷键(在我们的示例中是 `ALT+K`),鼠标的指针便会变成一个挂锁的模样。现在,键盘和鼠标便被锁定了,这时你便可以自在地观看你的电影或者做其他你想做的事儿。即便是你的孩子或者宠物碰了键盘上的某些键或者点击了鼠标,这些操作都不会起作用,这是因为 `xtrlock` 已经在工作了。
+
+![][9]
+
+你看到了那个小的锁按钮了吗?它意味着键盘和鼠标已经被锁定了。即便你移动这个锁按钮,也不会发生任何事情。后台的任务在一直执行,直到你将屏幕解除,然后手动停掉运行中的任务。
+
+### 将键盘和鼠标解锁
+
+要将键盘和鼠标解锁,只需要输入你的密码然后敲击“Enter”键就可以了,在输入的过程中你将看不到密码。只需要输入然后敲 `ENTER` 键就可以了。在你输入了正确的密码后,鼠标和键盘就可以再工作了。假如你输入了一个错误的密码,你将听到警告声。按 **ESC** 来清除输入的错误密码,然后重新输入正确的密码。要去掉未完全输入完的密码中的一个字符,只需要按 **BACKSPACE** 或者 **DELETE** 键就可以了。
+
+### 要是我被永久地锁住了怎么办?
+ +以防你被永久地锁定了屏幕,切换至一个 TTY(例如 CTRL+ALT+F2)然后运行: +``` +$ sudo killall xtrlock +``` + +或者你还可以使用 **chvt** 命令来在 TTY 和 X 会话之间切换。 + +例如,如果要切换到 TTY1,则运行: +``` +$ sudo chvt 1 +``` + +要切换回 X 会话,则键入: +``` +$ sudo chvt 7 +``` + +不同的发行版使用了不同的快捷键组合来在不同的 TTY 间切换。请参考你安装的对应发行版的官方网站了解更多详情。 + +如果想知道更多 xtrlock 的信息,请参考 man 页: +``` +$ man xtrlock +``` + +那么这就是全部了。希望这个指南可以帮到你。假如你发现这个指南很有用,请花点时间将这个指南共享到你的朋友圈并支持我们(OSTechNix)。 + +**资源:** + + * [**Ubuntu 论坛**][10] + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/Keyboard-Shortcuts_001.png +[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/Keyboard-Shortcuts_002.png +[4]:http://www.ostechnix.com/wp-content/uploads/2017/09/Keyboard-Shortcuts_003.png +[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/Add-xtrlock-shortcut.png +[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-1.png +[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-2.png +[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-3.png +[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/xtrlock-1.png +[10]:https://ubuntuforums.org/showthread.php?t=993800 From 17160ba6da0c13aa3d92df2780fadb7f434e5aac Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 15 Oct 2018 09:03:49 +0800 Subject: [PATCH 437/736] translating --- sources/tech/20180927 5 cool tiling window managers.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180927 5 cool tiling window managers.md 
b/sources/tech/20180927 5 cool tiling window managers.md index f687918c65..14da5f3479 100644 --- a/sources/tech/20180927 5 cool tiling window managers.md +++ b/sources/tech/20180927 5 cool tiling window managers.md @@ -1,3 +1,5 @@ +translating---geekpi + 5 cool tiling window managers ====== From 9adac0e4b15c485da64b8a37692b561f452944fe Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:32:39 +0800 Subject: [PATCH 438/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Instal?= =?UTF-8?q?l=20GRUB=20on=20Arch=20Linux=20(UEFI)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ow to Install GRUB on Arch Linux (UEFI).md | 74 +++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md diff --git a/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md b/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md new file mode 100644 index 0000000000..97cb5e0362 --- /dev/null +++ b/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md @@ -0,0 +1,74 @@ +How to Install GRUB on Arch Linux (UEFI) +====== + +![](http://fasterland.net/wp-content/uploads/2018/10/Arch-Linux-Boot-Menu-750x375.jpg) + +Some time ago, I wrote a tutorial on **[how to reinstall Grub][1] on Arch Linux after installing Windows.** + +A few weeks ago, I had to reinstall **Arch Linux** from scratch on my laptop and I discovered installing **Grub** was not as straightforward as I remembered. + +For this reason, I’m going to write this tutorial since **installing Grub on a UEFI bios** during a new **Arch Linux** installation it’s not too easy. + +### Locating the EFI partition + +The first important thing to do for installing **Grub** on **Arch Linux** is to locate the **EFI** partition. 
+Let’s run the following command in order to locate this partition:
+
+```
+# fdisk -l
+```
+
+We need to check the partition marked as **EFI System**.
+In my case it is **/dev/sda2**.
+
+After that, we need to mount this partition, for example, on /boot/efi:
+
+```
+# mkdir /boot/efi
+# mount /dev/sda2 /boot/efi
+```
+
+Another important thing to do is adding this partition into the **/etc/fstab** file.
+
+#### Installing Grub
+
+Now we can install Grub in our system:
+
+```
+# grub-mkconfig -o /boot/grub/grub.cfg
+# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
+```
+
+#### Adding Windows Automatically into the Grub Menu
+
+In order to automatically add the **Windows entry into the Grub menu**, we need to install the **os-prober** program:
+
+```
+# pacman -Sy os-prober
+```
+
+In order to add the entry item, let’s run the following commands:
+
+```
+# os-prober
+# grub-mkconfig -o /boot/grub/grub.cfg
+# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
+```
+
+You can find more about Grub on Arch Linux [here][2].
+
+--------------------------------------------------------------------------------
+
+via: http://fasterland.net/how-to-install-grub-on-arch-linux-uefi.html
+
+作者:[Francesco Mondello][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://fasterland.net/
+[b]: https://github.com/lujun9972
+[1]: http://fasterland.net/reinstall-grub-arch-linux.html
+[2]: https://wiki.archlinux.org/index.php/GRUB
From 004a86f18603fde8a732f0800e549f30b3a9028a Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 15 Oct 2018 10:34:00 +0800
Subject: =?UTF-8?q?=E9=80=89=E9=A2=98:=204=20Must-Have=20T?=
 =?UTF-8?q?ools=20for=20Monitoring=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...
4 Must-Have Tools for Monitoring Linux.md | 102 ++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md diff --git a/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md b/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md new file mode 100644 index 0000000000..beb3bab797 --- /dev/null +++ b/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md @@ -0,0 +1,102 @@ +4 Must-Have Tools for Monitoring Linux +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring-main.jpg?itok=YHLK-gn6) + +Linux. It’s powerful, flexible, stable, secure, user-friendly… the list goes on and on. There are so many reasons why people have adopted the open source operating system. One of those reasons which particularly stands out is its flexibility. Linux can be and do almost anything. In fact, it will (in most cases) go well above what most platforms can. Just ask any enterprise business why they use Linux and open source. + +But once you’ve deployed those servers and desktops, you need to be able to keep track of them. What’s going on? How are they performing? Is something afoot? In other words, you need to be able to monitor your Linux machines. “How?” you ask. That’s a great question, and one with many answers. I want to introduce you to a few such tools—from command line, to GUI, to full-blown web interfaces (with plenty of bells and whistles). From this collection of tools, you can gather just about any kind of information you need. I will stick only with tools that are open source, which will exempt some high-quality, proprietary solutions. But it’s always best to start with open source, and, chances are, you’ll find everything you need to monitor your desktops and servers. So, let’s take a look at four such tools. + +### Top + +We’ll first start with the obvious. 
The top command is a great place to start, when you need to monitor what processes are consuming resources. The top command has been around for a very long time and has, for years, been the first tool I turn to when something is amiss. What top does is provide a real-time view of all running systems on a Linux machine. The top command not only displays dynamic information about each running process (as well as the necessary information to manage those processes), but also gives you an overview of the machine (such as, how many CPUs are found, and how much RAM and swap space is available). When I feel something is going wrong with a machine, I immediately turn to top to see what processes are gobbling up the most CPU and MEM (Figure 1). From there, I can act accordingly. + +![top][2] + +Figure 1: Top running on Elementary OS. + +[Used with permission][3] + +There is no need to install anything to use the top command, because it is installed on almost every Linux distribution by default. For more information on top, issue the command man top. + +### Glances + +If you thought the top command offered up plenty of information, you’ve yet to experience Glances. Glances is another text-based monitoring tool. In similar fashion to top, glances offers a real-time listing of more information about your system than nearly any other monitor of its kind. You’ll see disk/network I/O, thermal readouts, fan speeds, disk usage by hardware device and logical volume, processes, warnings, alerts, and much more. Glances also includes a handy sidebar that displays information about disk, filesystem, network, sensors, and even Docker stats. To enable the sidebar, hit the 2 key (while glances is running). You’ll then see the added information (Figure 2). + +![glances][5] + +Figure 2: The glances monitor displaying docker stats along with all the other information it offers. + +[Used with permission][3] + +You won’t find glances installed by default. 
However, the tool is available in most standard repositories, so it can be installed from the command line or your distribution’s app store, without having to add a third-party repository. + +### GNOME System Monitor + +If you're not a fan of the command line, there are plenty of tools to make your monitoring life a bit easier. One such tool is GNOME System Monitor, which is a front-end for the top tool. But if you prefer a GUI, you can’t beat this app. + +With GNOME System Monitor, you can scroll through the listing of running apps (Figure 3), select an app, and then either end the process (by clicking End Process) or view more details about said process (by clicking the gear icon). + +![GNOME System Monitor][7] + +Figure 3: GNOME System Monitor in action. + +[Used with permission][3] + +You can also click any one of the tabs at the top of the window to get even more information about your system. The Resources tab is a very handy way to get real-time data on CPU, Memory, Swap, and Network (Figure 4). + +![GNOME System Monitor][9] + +Figure 4: The GNOME System Monitor Resources tab in action. + +[Used with permission][3] + +If you don’t find GNOME System Monitor installed by default, it can be found in the standard repositories, so it’s very simple to add to your system. + +### Nagios + +If you’re looking for an enterprise-grade networking monitoring system, look no further than [Nagios][10]. But don’t think Nagios is limited to only monitoring network traffic. This system has over 5,000 different add-ons that can be added to expand the system to perfectly meet (and exceed your needs). The Nagios monitor doesn’t come pre-installed on your Linux distribution and although the install isn’t quite as difficult as some similar tools, it does have some complications. And, because the Nagios version found in many of the default repositories is out of date, you’ll definitely want to install from source. 
Once installed, you can log into the Nagios web GUI and start monitoring (Figure 5). + +![Nagios ][12] + +Figure 5: With Nagios you can even start and stop services. + +[Used with permission][3] + +Of course, at this point, you’ve only installed the core and will also need to walk through the process of installing the plugins. Trust me when I say it’s worth the extra time. +The one caveat with Nagios is that you must manually install any remote hosts to be monitored (outside of the host the system is installed on) via text files. Fortunately, the installation will include sample configuration files (found in /usr/local/nagios/etc/objects) which you can use to create configuration files for remote servers (which are placed in /usr/local/nagios/etc/servers). + +Although Nagios can be a challenge to install, it is very much worth the time, as you will wind up with an enterprise-ready monitoring system capable of handling nearly anything you throw at it. + +### There’s More Where That Came From + +We’ve barely scratched the surface in terms of monitoring tools that are available for the Linux platform. No matter whether you’re looking for a general system monitor or something very specific, a command line or GUI application, you’ll find what you need. These four tools offer an outstanding starting point for any Linux administrator. Give them a try and see if you don’t find exactly the information you need. + +Learn more about Linux through the free ["Introduction to Linux" ][13] course from The Linux Foundation and edX. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/10/4-must-have-tools-monitoring-linux + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: /files/images/monitoring1jpg +[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_1.jpg?itok=UiyNGji0 (top) +[3]: /licenses/category/used-permission +[4]: /files/images/monitoring2jpg +[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_2.jpg?itok=K3OxLcvE (glances) +[6]: /files/images/monitoring3jpg +[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_3.jpg?itok=UKcyEDcT (GNOME System Monitor) +[8]: /files/images/monitoring4jpg +[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_4.jpg?itok=orLRH3m0 (GNOME System Monitor) +[10]: https://www.nagios.org/ +[11]: /files/images/monitoring5jpg +[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_5.jpg?itok=RGcLLWL7 (Nagios ) +[13]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From 68938a14cc5fa2743dcb985a97106d12b762c7ef Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:37:07 +0800 Subject: [PATCH 440/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20level?= =?UTF-8?q?=20up=20your=20organization's=20security=20expertise?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
your organization-s security expertise.md | 147 ++++++++++++++++++ 1 file changed, 147 insertions(+) create mode 100644 sources/talk/20181011 How to level up your organization-s security expertise.md diff --git a/sources/talk/20181011 How to level up your organization-s security expertise.md b/sources/talk/20181011 How to level up your organization-s security expertise.md new file mode 100644 index 0000000000..e67db6a3fb --- /dev/null +++ b/sources/talk/20181011 How to level up your organization-s security expertise.md @@ -0,0 +1,147 @@ +How to level up your organization's security expertise +====== +These best practices will make your employees more savvy and your organization more secure. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk) + +IT security is critical to every company these days. In the words of former FBI director Robert Mueller: “There are only two types of companies: Those that have been hacked, and those that will be.” + +At the same time, IT security is constantly evolving. We all know we need to keep up with the latest trends in cybersecurity and security tooling, but how can we do that without sacrificing our ability to keep moving forward on our business priorities? + +No single person in your organization can handle all of the security work alone; your entire development and operations team will need to develop an awareness of security tooling and best practices, just like they all need to build skills in open source and in agile software delivery. There are a number of best practices that can help you level up the overall security expertise in your company through basic and intermediate education, subject matter experts, and knowledge-sharing. + +### Basic education: Annual cybersecurity education and security contact information + +At IBM, we all complete an online cybersecurity training class each year. 
I recommend this as a best practice for other companies as well. The online training is taught at a basic level, and it doesn’t assume that anyone has a technical background. Topics include social engineering, phishing and spearphishing attacks, problematic websites, viruses and worms, and so on. We learn how to avoid situations that may put us or our systems at risk, how to recognize the signs of an attempted security breach, and how to report a problem if we notice something that seems suspicious. This online education raises the overall security awareness and readiness of the organization at a low per-person cost. A nice side effect of this education is that the basic knowledge can be applied to our personal lives, and we can share what we learned with our family and friends as well.
+
+In addition to the general cybersecurity education, all employees should have annual training on data security and privacy regulations and how to comply with them.
+
+Finally, we make it easy to find the Corporate Security Incident Response team by sharing the link to its website in prominent places, including Slack, and setting up suggested matches to ensure that a search of our internal website will send people to the right place:
+
+![](https://opensource.com/sites/default/files/uploads/security_search_screen.png)
+
+### Intermediate education: Learn from your tools
+
+Another great source of security expertise is pre-built security tools. For example, we have set up a set of automated security tests that run against our web services using IBM AppScan, and the reports it generates include background knowledge about the vulnerabilities it finds, the severity of each threat, how to determine whether your application is susceptible to the vulnerability, and how to fix the problem, with code examples. 
+
+Similarly, the free [npm audit command-line tool from npm, Inc.][1] will scan your open source Node.js modules and report any known vulnerabilities it finds. This tool also generates educational audit reports that include the severity of the threat, the vulnerable package and affected versions, alternative packages or versions that do not have the vulnerability, the dependency path, and a link to more detailed information about the vulnerability. Here’s an example of a report from npm audit:
+
+| High          | Regular Expression Denial of Service      |
+| ------------- | ----------------------------------------- |
+| Package       | minimatch                                 |
+| Dependency of | gulp [dev]                                |
+| Path          | gulp > vinyl-fs > glob-stream > minimatch |
+| More info     | https://nodesecurity.io/advisories/118    |
+
+Any good network-level security tool will also give you information on the types of attacks the tool is blocking and how it recognizes likely attacks. This information is available in the marketing materials online, as well as in the tool’s console and reports if you have access to those.
+
+Each of your development teams or squads should have at least one subject matter expert who takes the time to read and fully understand the vulnerability reports that are relevant to you. This is often the technical lead, but it could be anyone who is interested in learning more about security. Your local subject matter expert will be able to recognize similar security holes earlier in the development and deployment process in the future.
+
+Using the npm audit example above, a developer who reads and understands security advisory #118 from this report will be more likely to notice changes that may allow for a Regular Expression Denial of Service when reviewing code in the future. 
The team’s subject matter expert should also develop the skills needed to determine which of the vulnerability reports don’t actually apply to his or her specific project. + +### Intermediate education: Conferences + +Let’s not forget the value of attending security-related conferences, such as the [OWASP AppSec Conferences][2]. Conferences provide a great way for members of your team to focus on learning for a few days and bring back some of the newest ideas in the field. The “hallway track” of a conference, where we can learn from other practitioners, is also a valuable source of information. As much as most of us dislike being “sold to,” the sponsor hall at a conference is a good place to casually check out new security tools to see which ones you might be interested in evaluating later. + +If your organization is big enough, ask your DevOps and security tool vendors to come to you! If you’ve already procured some great tools, but adoption isn’t going as quickly as you would like, many vendors would be happy to provide your teams with some additional practical training. It’s in their best interests to increase the adoption of their tools (making you more likely to continue paying for their services and to increase your license count), just like it’s in your best interests to maximize the value you get out of the tools you’re paying for. We recently hosted a [Toolbox@IBM][3] \+ DevSecOps summit at our largest sites (those with a couple thousand IT professionals). More than a dozen vendors sponsored each event, came onsite, set up booths, and gave conference talks, just like they would at a technical conference. We also had several of our own presenters speaking about DevOps and security best practices that were working well for them, and we had booths set up by our Corporate Information Security Office, agile coaching, onsite tech support, and internal toolchain teams. We had several hundred attendees at each site. 
It was great for our technical community because we could focus on the tools that we had already procured, learn how other teams in our company were using them, and make connections to help each other in the future. + +When you send someone to a conference, it’s important to set the expectation that they will come back and share what they’ve learned with the team. We usually do this via an informal brown-bag lunch-and-learn, where people are encouraged to discuss new ideas interactively. + +### Subject-matter experts and knowledge-sharing: The secure engineering guild + +In the IBM Digital Business Group, we’ve adopted the squad model as described by [Spotify][4] and tweaked it to make it work for us. One sometimes-forgotten aspect of the squad model is the guild. Guilds are centers of excellence, focused around one topic or skill set, with members from many squads. Guild members learn together, share best practices with each other and their broader teams, and work to advance the state of the art. If you would like to establish your own secure engineering guild, here are some tips that have worked for me in setting up guilds in the past: + +**Step 1: Advertise and recruit** + +Your co-workers are busy people, so for many of them, a secure engineering guild could feel like just one more thing they have to cram into the week that doesn’t involve writing code. It’s important from the outset that the guild has a value proposition that will benefit its members as well as the organization. + +Zane Lackey from [Signal Sciences][5] gave me some excellent advice: It’s important to call out the truth. In the past, he said, security initiatives may have been more of a hindrance or even a blocker to getting work done. Your secure engineering guild needs to focus on ways to make your engineering team’s lives easier and more efficient instead. 
You need to find ways to automate more of the busywork related to security and to make your development teams more self-sufficient so you don’t have to rely on security “gates” or hurdles late in the development process. + +Here are some things that may attract people to your guild: + + * Learn about security vulnerabilities and what you can do to combat them + * Become a subject matter expert + * Participate in penetration testing + * Evaluate and pilot new security tools + * Add “Secure Engineering Guild” to your resume + + + +Here are some additional guild recruiting tips: + + * Reach out directly to your security experts and ask them to join: security architects, network security administrators, people from your corporate security department, and so on. + + * Bring in an external speaker who can get people excited about secure engineering. Advertise it as “sponsored by the Secure Engineering Guild” and collect names and contact information for people who want to join your guild, both before and after the talk. + + * Get executive support for the program. Perhaps one of your VPs will write a blog post extolling the virtues of secure engineering skills and asking people to join the guild (or perhaps you can draft the blog post for her or him to edit and publish). You can combine that blog post with advertising the external speaker if the timing allows. + + * Ask your management team to nominate someone from each squad to join the guild. This hardline approach is important if you have an urgent need to drive rapid improvement in your security posture. + + + + +**Step 2: Build a team** + +Guild meetings should be structured for action. It’s important to keep an agenda so people know what you plan to cover in each meeting, but leave time at the end for members to bring up any topics they want to discuss. Also be sure to take note of action items, and assign an owner and a target date for each of them. 
Finally, keep meeting minutes and send a brief summary out after each meeting.
+
+Your first few guild meetings are your best opportunity to start off on the right foot, with a bit of team-building. I like to run a little design thinking exercise where you ask team members to share their ideas for the guild’s mission statement, vote on their favorites, and use those to craft a simple and exciting mission statement. The mission statement should include three components: WHO will benefit, WHAT the guild will do, and the WOW factor. The exercise itself is valuable because you can learn why people decided to volunteer to be a part of the guild in the first place, and what they hope will come of it.
+
+Another thing I like to do from the outset is ask people what they’re hoping to achieve as a guild. The guild should learn together, have fun, and do real work. Once you have those ideas out on the table, start putting owners and target dates next to those goals.
+
+  * Would they like to run a book club? Get someone to suggest a book and set up book club meetings.
+
+  * Would they like to share useful articles and blogs? Get someone to set up a Slack channel and invite everyone to it, or set up a shared document where people can contribute their favorite resources.
+
+  * Would they like to pilot a new tool? Get someone to set up a free trial, try it out for their own team, and report back in a few weeks.
+
+  * Would they like to continue a series of talks? Get someone to create a list of topics and speakers and send out the invitations.
+
+If a few goals end up without owners or dates, that’s OK; just start a to-do list or backlog for people to refer to when they’ve completed their first task.
+
+Finally, survey the team to find the best time and day of the week for ongoing meetings and set those up. I recommend starting with weekly 30-minute meetings and adjusting as needed.
+
+**Step 3: Keep the energy going, or reboot**
+
+As the months go on, your guild could start to lose energy. Here are some ways to keep the excitement going, or to reboot a guild that’s running out of steam.
+
+  * Don’t be an echo chamber. Invite people in from various parts of the organization to talk for a few minutes about what they’re doing with respect to security engineering, and where they have concerns or see gaps.
+
+  * Show measurable progress. If you’ve been assigning owners to action items and completing them all along, you’ve certainly made progress, but if you look at it only from week to week, the progress can feel small or insignificant. Once per quarter, take a step back, write a blog post about all you’ve accomplished, and send it out to your organization. Showing off your work makes the team proud of what they’ve accomplished, and it’s another opportunity to recruit even more people for your guild.
+
+  * Don’t be afraid to take on a large project. The guild should not be an ivory tower; it should get things done. Your guild may, for example, decide to roll out a new security tool that you love across a large organization. With a little bit of project management and a lot of executive support, you can and should tackle cross-squad projects. The guild members should be responsible for getting stories from the large projects prioritized in their own squads’ backlogs and completed in a timely manner.
+
+  * Periodically brainstorm the next set of action items. As time goes by, the most critical or pressing needs of your organization will likely change. People will be more motivated to work on the things they consider most important and urgent.
+
+  * Reward the extra work. You might offer an executive-sponsored cash award for the most impactful secure engineering projects. You might also have the guild itself choose someone to send to a security conference now and then. 
+ + + + +### Go forth, and make your company more secure + +A more secure company starts with a more educated team. Building upon that expertise, a secure engineering guild can drive real changes by developing and sharing best practices, finding the right owners for each action item, and driving them to closure. I hope you found a few tips here that will help you level up the security expertise in your organization. Please add your own helpful tips in the comments. + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/how-level-security-expertise-your-organization + +作者:[Ann Marie Fred][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/annmarie99 +[b]: https://github.com/lujun9972 +[1]: https://www.npmjs.com/about +[2]: https://www.owasp.org/index.php/Category:OWASP_AppSec_Conference +[3]: mailto:Toolbox@IBM +[4]: https://medium.com/project-management-learnings/spotify-squad-framework-part-i-8f74bcfcd761 +[5]: https://www.signalsciences.com/ From 2218a1c10555f5eb93020cd408e3c1b3b6d81aa4 Mon Sep 17 00:00:00 2001 From: LuMing <784315443@qq.com> Date: Mon, 15 Oct 2018 10:41:39 +0800 Subject: [PATCH 441/736] translated --- ...ration between developers and designers.md | 67 ------------------- ...ration between developers and designers.md | 66 ++++++++++++++++++ 2 files changed, 66 insertions(+), 67 deletions(-) delete mode 100644 sources/talk/20180502 9 ways to improve collaboration between developers and designers.md create mode 100644 translated/talk/20180502 9 ways to improve collaboration between developers and designers.md diff --git a/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md b/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md 
deleted file mode 100644 index 637a54ee91..0000000000 --- a/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md +++ /dev/null @@ -1,67 +0,0 @@ -LuuMing translating -9 ways to improve collaboration between developers and designers -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab1.png?itok=ULQdGjlV) - -This article was co-written with [Jason Porter][1]. - -Design is a crucial element in any software project. Sooner or later, the developers' reasons for writing all this code will be communicated to the designers, human beings who aren't as familiar with its inner workings as the development team. - -Stereotypes exist on both side of the divide; engineers often expect designers to be flaky and irrational, while designers often expect engineers to be inflexible and demanding. The truth is considerably more nuanced and, at the end of the day, the fates of designers and developers are forever intertwined. - -Here are nine things that can improve collaboration between the two. - -### 1\. First, knock down the wall. Seriously. - -There are loads of memes about the "wall of confusion" in just about every industry. No matter what else you do, the first step toward tearing down this wall is getting both sides to agree it needs to be gone. Once everyone agrees the existing processes aren't functioning optimally, you can pick and choose from the rest of these ideas to begin fixing the problems. - -### 2\. Learn to empathize. - -Before rolling up any sleeves to build better communication, take a break. This is a great junction point for team building. A time to recognize that we're all people, we all have strengths and weaknesses, and most importantly, we're all on the same team. Discussions around workflows and productivity can become feisty, so it's crucial to build a foundation of trust and cooperation before diving on in. - -### 3\. Recognize differences. 
- -Designers and developers attack the same problem from different angles. Given a similar problem, designers will seek the solution with the biggest impact while developers will seek the solution with the least amount of waste. These two viewpoints do not have to be mutually exclusive. There is plenty of room for negotiation and compromise, and somewhere in the middle is where the end user receives the best experience possible. - -### 4\. Embrace similarities. - -This is all about workflow. CI/CD, scrum, agile, etc., are all basically saying the same thing: Ideate, iterate, investigate, and repeat. Iteration and reiteration are common denominators for both kinds of work. So instead of running a design cycle followed by a development cycle, it makes much more sense to run them concurrently and in tandem. Syncing cycles allows teams to communicate, collaborate, and influence each other every step of the way. - -### 5\. Manage expectations. - -All conflict can be distilled down to one simple idea: incompatible expectations. Therefore, an easy way to prevent systemic breakdowns is to manage expectations by ensuring that teams are thinking before talking and talking before doing. Setting expectations often evolves organically through everyday conversation. Forcing them to happen by having meetings can be counterproductive. - -### 6\. Meet early and meet often. - -Meeting once at the beginning of work and once at the end simply isn't enough. This doesn't mean you need daily or even weekly meetings. Setting a cadence for meetings can also be counterproductive. Let them happen whenever they're necessary. Great things can happen with impromptu meetings—even at the watercooler! If your team is distributed or has even one remote employee, video conferencing, text chat, or phone calls are all excellent ways to meet. It's important that everyone on the team has multiple ways to communicate with each other. - -### 7\. Build your own lexicon. 
- -Designers and developers sometimes have different terms for similar ideas. One person's card is another person's tile is a third person's box. Ultimately, the fit and accuracy of a term aren't as important as everyone's agreement to use the same term consistently. - -### 8\. Make everyone a communication steward. - -Everyone in the group is responsible for maintaining effective communication, regardless of how or when it happens. Each person should strive to say what they mean and mean what they say. - -### 9\. Give a darn. - -It only takes one member of a team to sabotage progress. Go all in. If every individual doesn't care about the product or the goal, there will be problems with motivation to make changes or continue the process. - -This article is based on [Designers and developers: Finding common ground for effective collaboration][2], a talk the authors will be giving at [Red Hat Summit 2018][3], which will be held May 8-10 in San Francisco. [Register by May 7][3] to save US$ 500 off of registration. Use discount code **OPEN18** on the payment page to apply the discount. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/9-ways-improve-collaboration-developers-designers - -作者:[Jason Brock][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jkbrock -[1]:https://opensource.com/users/lightguardjp -[2]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154267 -[3]:https://www.redhat.com/en/summit/2018 diff --git a/translated/talk/20180502 9 ways to improve collaboration between developers and designers.md b/translated/talk/20180502 9 ways to improve collaboration between developers and designers.md new file mode 100644 index 0000000000..fe53452dd8 --- /dev/null +++ b/translated/talk/20180502 9 ways to improve collaboration between developers and designers.md @@ -0,0 +1,66 @@ +9 个方法,提升开发者与设计师之间的协作 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab1.png?itok=ULQdGjlV) + +本文由我与 [Jason Porter][1] 共同完成。 + +在任何软件项目中,设计至关重要。设计师不像开发团队那样熟悉其内部工作,但迟早都要知道开发人员写代码的意图。 + +两边都有自己的成见。工程师经常认为设计师们古怪不理性,而设计师也认为工程师们死板要求高。在一天的工作快要结束时,情况会变得更加微妙。设计师和开发者们的命运永远交织在一起。 + +做到以下九件事,便可以增强他们之间的合作 + +### 1\. 首先,说实在的,打破壁垒。 + +几乎每一个行业都有“迷惑之墙wall of confusion”的模子。无论你干什么工作,拆除这堵墙的第一步就是要双方都认同它需要拆除。一旦所有的人都认为现有的流程效率低下,你就可以从其他想法中获得灵感,然后解决问题。 + +### 2\. 学会共情 + +在撸起袖子开始干之前,休息一下。这是团队建设的重要的交汇点。一个时机去认识到:我们都是成人,我们都有自己的优点与缺点,更重要的是,我们是一个团队。围绕工作流程与工作效率的讨论会经常发生,因此在开始之前,建立一个信任与协作的基础至关重要。 + +### 3\. 认识差异 + +设计师和开发者从不同的角度攻克问题。对于相同的问题,设计师会追求更好的效果,而开发者会寻求更高的效率。这两种观点不必互相排斥。谈判和妥协的余地很大,并且在二者之间必然存在一个用户满意度最佳的中点。 + +### 4\. 拥抱共性 + +这一切都是与工作流程相关的。持续集成Continuous Integration/持续交付Continuous Delivery,scrum,agille 等等,都基本上说了一件事:构思,迭代,考察,重复。迭代和重复是两种工作的相同点。因此,不再让开发周期紧跟设计周期,而是同时并行地运行它们,这样会更有意义。同步周期Syncing cycles允许团队在每一步上交流、协作、互相影响。 + +### 5\. 
管理期望 + +一切冲突的起因一言以蔽之:期望不符。因此,防止系统性分裂的简单办法就是通过确保团队成员在说之前先想、在做之前先说来管理期望。设定的期望往往会通过日常对话不断演变。强迫团队通过开会以达到其效果可能会适得其反。 + +### 6\. 按需开会 + +只在工作开始和工作结束开一次会远远不够。但也不意味着每天或每周都要开会。定期开会也可能会适得其反。试着按需开会吧。即兴会议可能会发生很棒的事情,即使是在开水房。如果你的团队是分散式的或者甚至有一名远程员工,视频会议,文本聊天或者打电话都是开会的好方法。团队中的每人都有多种方式互相沟通,这一点非常重要。 + +### 7\. 建立词库 + +设计师和开发者有时候对相似的想法有着不同的术语,就像把猫叫了个咪。毕竟,所有人都用的惯比起术语的准确度和适应度更重要。 + +### 8\. 学会沟通 + +无论什么时候,团队中的每个人都有责任去维持一个有效的沟通。每个人都应该努力做到一字一板。 + +### 9\. 不断改善 + +仅一名团队成员就能破坏整个进度。全力以赴。如果每个人都不关心产品或目标,继续项目或者做出改变的动机就会出现问题。 + +本文参考 [Designers and developers: Finding common ground for effective collaboration][2],演讲的作者将会出席在旧金山五月 8-10 号举办的[Red Hat Summit 2018][3]。[五月 7 号][3]注册将节省 500 美元。支付时使用优惠码 **OPEN18** 以获得更多折扣。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/5/9-ways-improve-collaboration-developers-designers + +作者:[Jason Brock][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[LuuMing](https://github.com/LuuMing) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jkbrock +[1]:https://opensource.com/users/lightguardjp +[2]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154267 +[3]:https://www.redhat.com/en/summit/2018 From ca03f63b18a44eab67c3cf3aae661e2ce4d77156 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:45:08 +0800 Subject: [PATCH 442/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Happy=20birthday,?= =?UTF-8?q?=20KDE:=2011=20applications=20you=20never=20knew=20existed?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
11 applications you never knew existed.md | 58 +++++++++++++++++++ 1 file changed, 58 insertions(+) create mode 100644 sources/tech/20181012 Happy birthday, KDE- 11 applications you never knew existed.md diff --git a/sources/tech/20181012 Happy birthday, KDE- 11 applications you never knew existed.md b/sources/tech/20181012 Happy birthday, KDE- 11 applications you never knew existed.md new file mode 100644 index 0000000000..62ad686ccc --- /dev/null +++ b/sources/tech/20181012 Happy birthday, KDE- 11 applications you never knew existed.md @@ -0,0 +1,58 @@ +Happy birthday, KDE: 11 applications you never knew existed +====== +Which fun or quirky app do you need today? +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_DebucketizeOrgChart_A.png?itok=RB3WBeQQ) + +The Linux desktop environment KDE celebrates its 22nd anniversary on October 14 this year. There are a gazillion* applications created by the KDE community of users, many of which provide fun and quirky services. We perused the list and picked out 11 applications you might like to know exist. + +*Not really, but [there are a lot][1]. + +### 11 KDE applications you never knew existed + +1\. [KTeaTime][2] is a timer for steeping tea. Set it by choosing the type of tea you are drinking—green, black, herbal, etc.—and the timer will ding when it's ready to remove the tea bag and drink. + +2\. [KTux][3] is just a screensaver... or is it? Tux is flying in outer space in his green spaceship. + +3\. [Blinken][4] is a memory game based on Simon Says, an electronic game released in 1978. Players are challenged to remember sequences of increasing length. + +4\. [Tellico][5] is a collection manager for organizing your favorite hobby. Maybe you still collect baseball cards. Maybe you're part of a wine club. Maybe you're a serious bookworm. Maybe all three! + +5\. [KRecipes][6] is **not** a simple recipe manager. It's got a lot going on! 
Shopping lists, nutrient analysis, advanced search, recipe ratings, import/export in various formats, and more.
+
+6\. [KHangMan][7] is based on the classic game Hangman, where you guess the word letter by letter. This game is available in several languages, and it can be used to improve your learning of another language. It has four categories, one of which is "animals," which is great for kids.
+
+7\. [KLettres][8] is another app that may help you learn a new language. It teaches the alphabet and challenges the user to read and pronounce syllables.
+
+8\. [KDiamond][9] is similar to Bejeweled or other single-player puzzle games where the goal is to build lines of a certain number of the same type of jewel or object. In this case, diamonds.
+
+9\. [KolourPaint][10] is a very simple editing tool for your images or an app for creating simple vectors.
+
+10\. [Kiriki][11] is a dice game for 2-6 players, similar to Yahtzee.
+
+11\. [RSIBreak][12] doesn't start with a K. What!? It starts with "RSI," for "Repetitive Strain Injury," which can occur from working for long hours, day in and day out, with a mouse and keyboard. This app reminds you to take breaks and can be personalized to meet your needs. 
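Incidentally, the heart of both KTeaTime and RSIBreak is the same humble idea: wait, then nudge. Here is a purely-for-fun shell sketch of that idea (the function name, message, and one-second demo are made up; the real apps add per-tea presets, scheduling, and desktop notifications):

```shell
# A toy "wait, then nudge" timer in the spirit of KTeaTime and RSIBreak:
# sleep for the given number of seconds, then print a reminder.
remind() {
    seconds=$1
    message=$2
    sleep "$seconds"
    printf '%s\n' "$message"
}

remind 1 'Time to remove the tea bag!'   # demo: prints the message after one second
```

Swap the printf for a desktop notification tool such as notify-send and you have the world's most minimal tea timer.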
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/kde-applications + +作者:[Opensource.com][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com +[b]: https://github.com/lujun9972 +[1]: https://www.kde.org/applications/ +[2]: https://www.kde.org/applications/games/kteatime/ +[3]: https://userbase.kde.org/KTux +[4]: https://www.kde.org/applications/education/blinken +[5]: http://tellico-project.org/ +[6]: https://www.kde.org/applications/utilities/krecipes/ +[7]: https://edu.kde.org/khangman/ +[8]: https://edu.kde.org/klettres/ +[9]: https://games.kde.org/game.php?game=kdiamond +[10]: https://www.kde.org/applications/graphics/kolourpaint/ +[11]: https://www.kde.org/applications/games/kiriki/ +[12]: https://userbase.kde.org/RSIBreak From 780385af95912e458cca4245769fb377c75eaa54 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:53:45 +0800 Subject: [PATCH 443/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Command=20line=20?= =?UTF-8?q?quick=20tips:=20Reading=20files=20different=20ways?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...uick tips- Reading files different ways.md | 118 ++++++++++++++++++ 1 file changed, 118 insertions(+) create mode 100644 sources/tech/20181012 Command line quick tips- Reading files different ways.md diff --git a/sources/tech/20181012 Command line quick tips- Reading files different ways.md b/sources/tech/20181012 Command line quick tips- Reading files different ways.md new file mode 100644 index 0000000000..30c82c1843 --- /dev/null +++ b/sources/tech/20181012 Command line quick tips- Reading files different ways.md @@ -0,0 +1,118 @@ +Command line quick tips: Reading files different ways +====== + 
+![](https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg)
+
+Fedora is delightful to use as a graphical operating system. You can point and click your way through just about any task easily. But you’ve probably seen there is a powerful command line under the hood. To try it out in a shell, just open the Terminal application in your Fedora system. This article is one in a series that will show you some common command line utilities.
+
+In this installment you’ll learn how to read files in different ways. If you open a Terminal to do some work on your system, chances are good that you’ll need to read a file or two.
+
+### The whole enchilada
+
+The **cat** command is well known to terminal users. When you **cat** a file, you’re simply displaying the whole file to the screen. What’s really happening under the hood is that the file is read one line at a time, and then each line is written to the screen.
+
+Imagine you have a file with one word per line, called myfile. To make this clear, the file will contain the word equivalent of a number on each line, like this:
+
+```
+
+ one
+ two
+ three
+ four
+ five
+
+```
+
+So if you **cat** that file, you’ll see this output:
+
+```
+
+ $ cat myfile
+ one
+ two
+ three
+ four
+ five
+
+```
+
+Nothing too surprising there, right? But here’s an interesting twist. You can also **cat** that file backward. For this, use the **tac** command. (Note that Fedora takes no blame for this debatable humor!)
+
+```
+
+ $ tac myfile
+ five
+ four
+ three
+ two
+ one
+
+```
+
+The **cat** command also lets you ornament the output in different ways, in case that’s helpful. For instance, you can number lines:
+
+```
+
+ $ cat -n myfile
+ 1 one
+ 2 two
+ 3 three
+ 4 four
+ 5 five
+
+```
+
+There are additional options that will show special characters and other features. To learn more, run the command **man cat**, and when done just hit **q** to exit back to the shell.
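If you'd like to try these commands without typing the file by hand, the sample file can be rebuilt with a one-line **printf** (the one-liner is mine, not from the article), and you can check that reversing the file twice with **tac** round-trips back to the original:

```shell
# Recreate the sample file used in the examples above
printf 'one\ntwo\nthree\nfour\nfive\n' > myfile

# tac reverses the order of the lines
tac myfile

# Reversing twice returns the original order, same as plain cat
tac myfile | tac
```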
+
+### Picking over your food
+
+Often a file is too long to fit on a screen, and you may want to be able to go through it like a document. In that case, try the **less** command:
+
+```
+
+ $ less myfile
+
+```
+
+You can use your arrow keys as well as **PgUp/PgDn** to move around the file. Again, you can use the **q** key to quit back to the shell.
+
+There’s also a **more** command, the older UNIX utility that **less** was modeled on. If it’s important to you to still see the file on screen when you’re done, you might want to use **more** instead. The **less** command brings you back to the shell the way you left it, and clears the display of any sign of the file you looked at.
+
+### Just the appetizer (or dessert)
+
+Sometimes the output you want is just the beginning of a file. For instance, the file might be so long that when you **cat** the whole thing, the first few lines scroll past before you can see them. The **head** command will help you grab just those lines:
+
+```
+
+ $ head -n 2 myfile
+ one
+ two
+
+```
+
+In the same way, you can use **tail** to just grab the end of a file:
+
+```
+
+ $ tail -n 3 myfile
+ three
+ four
+ five
+
+```
+
+Of course these are only a few simple commands in this area. But they’ll get you started when it comes to reading files.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/commandline-quick-tips-reading-files-different-ways/
+
+作者:[Paul W. 
Frields][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/pfrields/
+[b]: https://github.com/lujun9972

From 1188f5f1e2456d9cc9921b0458ffa0857dc5a30d Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 15 Oct 2018 10:55:38 +0800
Subject: [PATCH 444/736]
 =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Lock?=
 =?UTF-8?q?=20Virtual=20Console=20Sessions=20On=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ... Lock Virtual Console Sessions On Linux.md | 129 ++++++++++++++++++
 1 file changed, 129 insertions(+)
 create mode 100644 sources/tech/20181012 How To Lock Virtual Console Sessions On Linux.md

diff --git a/sources/tech/20181012 How To Lock Virtual Console Sessions On Linux.md b/sources/tech/20181012 How To Lock Virtual Console Sessions On Linux.md
new file mode 100644
index 0000000000..8ec111c32c
--- /dev/null
+++ b/sources/tech/20181012 How To Lock Virtual Console Sessions On Linux.md
@@ -0,0 +1,129 @@
+How To Lock Virtual Console Sessions On Linux
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png)
+
+When you’re working on a shared system, you might not want other users to sneak a peek at your console to see what you’re actually doing. If so, I know a simple trick to lock your own session while still allowing other users to use the system on other virtual consoles. The trick is **Vlock**, short for **V**irtual Console **lock**, a command line program that locks one or more sessions on the Linux console. If necessary, you can lock the entire console and disable the virtual console switching functionality altogether. Vlock is especially useful for shared Linux systems that have multiple users with access to the console.
+
+### Installing Vlock
+
+On Arch-based systems, Vlock is provided by the **kbd** package, which is preinstalled by default, so you don't need to install anything.
+
+On Debian, Ubuntu, and Linux Mint, run the following command to install Vlock:
+
+```
+ $ sudo apt-get install vlock
+```
+
+On Fedora:
+
+```
+ $ sudo dnf install vlock
+```
+
+On RHEL, CentOS:
+
+```
+ $ sudo yum install vlock
+```
+
+### Lock Virtual Console Sessions On Linux
+
+The general syntax for Vlock is:
+
+```
+ vlock [ -acnshv ] [ -t timeout ] [ plugins... ]
+```
+
+Where,
+
+ * **a** – Lock all virtual console sessions,
+ * **c** – Lock current virtual console session,
+ * **n** – Switch to new empty console before locking all sessions,
+ * **s** – Disable SysRq key mechanism,
+ * **t** – Specify the timeout for the screensaver plugins,
+ * **h** – Display help section,
+ * **v** – Display version.
+
+
+
+Let me show you some examples.
+
+**1\. Lock current console session**
+
+When running Vlock without any arguments, it locks the current console session (TTY) by default. To unlock the session, you need to enter either the current user’s password or the root password.
+
+```
+ $ vlock
+```
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif)
+
+You can also use the **-c** flag to lock the current console session.
+
+```
+ $ vlock -c
+```
+
+Please note that this command will only lock the current console. You can switch to other consoles by pressing **ALT+F2**. For more details about switching between TTYs, refer to the following guide.
+
+Also, if the system has multiple users, the other users can still access their respective TTYs.
+
+**2\. Lock all console sessions**
+
+To lock all TTYs at the same time and also disable the virtual console switching functionality, run:
+
+```
+ $ vlock -a
+```
+
+Again, to unlock the console sessions, just press the ENTER key and type your current user’s password or the root user's password.
+
+Please keep in mind that the **root user can always unlock any vlock session** at any time, unless disabled at compile time.
+
+**3\. Switch to a new virtual console before locking all consoles**
+
+It is also possible to make Vlock switch to a new, empty virtual console from your X session before locking all consoles. To do so, use the **-n** flag.
+
+```
+ $ vlock -n
+```
+
+**4\. Disable the SysRq mechanism**
+
+As you may know, the Magic SysRq key mechanism allows users to perform certain operations even when the system freezes, so a user could get around a locked console using SysRq. To prevent this, pass the **-s** option to disable the SysRq mechanism. Please remember, this only works if the **-a** option is given.
+
+```
+ $ vlock -sa
+```
+
+For more options and their usage, refer to the help section or the man pages.
+
+```
+ $ vlock -h
+ $ man vlock
+```
+
+Vlock prevents unauthorized users from gaining access to the console. If you’re looking for a simple console locking mechanism for your Linux machine, Vlock is worth checking out!
+
+And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
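One more practical note, beyond what the article covers: the **-a** and **-n** options only make sense on a real virtual console (/dev/tty1, /dev/tty2, and so on), not inside a graphical terminal emulator window. If you ever script vlock, a small guard like the following sketch helps; the helper function name and messages are my own, purely illustrative:

```shell
# Check that we are on a real virtual console before using vlock -a;
# on a pseudo-terminal (/dev/pts/N) locking all consoles is not useful.
# The helper name is illustrative, not part of vlock itself.
on_virtual_console() {
    case "${1:-$(tty)}" in
        /dev/tty[0-9]*) return 0 ;;
        *) return 1 ;;
    esac
}

if on_virtual_console && command -v vlock >/dev/null 2>&1; then
    vlock -a
else
    echo "Not on a virtual console; try plain 'vlock' in this terminal instead."
fi
```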
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 From b9faa61fef48545d6d35e503591862dd6824dc97 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 11:01:42 +0800 Subject: [PATCH 445/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Exploring=20the?= =?UTF-8?q?=20Linux=20kernel:=20The=20secrets=20of=20Kconfig/kbuild?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...x kernel- The secrets of Kconfig-kbuild.md | 257 ++++++++++++++++++ 1 file changed, 257 insertions(+) create mode 100644 sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md diff --git a/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md new file mode 100644 index 0000000000..8ee4f34897 --- /dev/null +++ b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md @@ -0,0 +1,257 @@ +Exploring the Linux kernel: The secrets of Kconfig/kbuild +====== +Dive into understanding how the Linux config/build system works. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/compass_map_explore_adventure.jpg?itok=ecCoVTrZ) + +The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time, ever since the Linux kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even kernel developers who use it in their daily work never really think about it. 
+
+To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the .config file and the vmlinux/bzImage files are produced, and introduce a smart trick for dependency tracking.
+
+### Kconfig
+
+The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable. Kconfig offers the user many config targets:
+
+| Target | Description |
+| ------ | ----------- |
+| config | Update current config utilizing a line-oriented program |
+| nconfig | Update current config utilizing an ncurses menu-based program |
+| menuconfig | Update current config utilizing a menu-based program |
+| xconfig | Update current config utilizing a Qt-based frontend |
+| gconfig | Update current config utilizing a GTK+ based frontend |
+| oldconfig | Update current config utilizing a provided .config as base |
+| localmodconfig | Update current config disabling modules not loaded |
+| localyesconfig | Update current config converting local mods to core |
+| defconfig | New config with default from Arch-supplied defconfig |
+| savedefconfig | Save current config as ./defconfig (minimal config) |
+| allnoconfig | New config where all options are answered with 'no' |
+| allyesconfig | New config where all options are accepted with 'yes' |
+| allmodconfig | New config selecting modules when possible |
+| alldefconfig | New config with all symbols set to default |
+| randconfig | New config with a random answer to all options |
+| listnewconfig | List new options |
+| olddefconfig | Same as oldconfig but sets new symbols to their default value without prompting |
+| kvmconfig | Enable additional options for KVM guest kernel support |
+| xenconfig | Enable additional options for xen dom0 and guest kernel support |
+| tinyconfig | Configure the tiniest possible kernel |
+
+I think **menuconfig** is the most popular of these targets. 
The targets are processed by different host programs, which are provided by the kernel and built during kernel building. Some targets have a GUI (for the user's convenience) while most don't. Kconfig-related tools and source code reside mainly under **scripts/kconfig/** in the kernel source. As we can see from **scripts/kconfig/Makefile** , there are several host programs, including **conf** , **mconf** , and **nconf**. Except for **conf** , each of them is responsible for one of the GUI-based config targets, so, **conf** deals with most of them. + +Logically, Kconfig's infrastructure has two parts: one implements a [new language][1] to define the configuration items (see the Kconfig files under the kernel source), and the other parses the Kconfig language and deals with configuration actions. + +Most of the config targets have roughly the same internal process (shown below): + +![](https://opensource.com/sites/default/files/uploads/kconfig_process.png) + +Note that all configuration items have a default value. + +The first step reads the Kconfig file under source root to construct an initial configuration database; then it updates the initial database by reading an existing configuration file according to this priority: + +> .config +> /lib/modules/$(shell,uname -r)/.config +> /etc/kernel-config +> /boot/config-$(shell,uname -r) +> ARCH_DEFCONFIG +> arch/$(ARCH)/defconfig + +If you are doing GUI-based configuration via **menuconfig** or command-line-based configuration via **oldconfig** , the database is updated according to your customization. Finally, the configuration database is dumped into the .config file. + +But the .config file is not the final fodder for kernel building; this is why the **syncconfig** target exists. **syncconfig** used to be a config target called **silentoldconfig** , but it doesn't do what the old name says, so it was renamed. Also, because it is for internal use (not for users), it was dropped from the list. 
+
+Here is an illustration of what **syncconfig** does:
+
+![](https://opensource.com/sites/default/files/uploads/syncconfig.png)
+
+**syncconfig** takes .config as input and outputs many other files, which fall into three categories:
+
+ * **auto.conf & tristate.conf** are used for makefile text processing. For example, you may see statements like this in a component's makefile:
+
+```
+ obj-$(CONFIG_GENERIC_CALIBRATE_DELAY) += calibrate.o
+```
+
+ * **autoconf.h** is used in C-language source files.
+
+ * Empty header files under **include/config/** are used for configuration-dependency tracking during kbuild, which is explained below.
+
+
+
+After configuration, we will know which files and code pieces are not compiled.
+
+### kbuild
+
+Component-wise building, called _recursive make_, is a common way for GNU `make` to manage a large project. Kbuild is a good example of recursive make. By dividing source files into different modules/components, each component is managed by its own makefile. When you start building, a top makefile invokes each component's makefile in the proper order, builds the components, and collects them into the final executable.
+
+Kbuild refers to different kinds of makefiles:
+
+ * **Makefile** is the top makefile, located in the source root.
+ * **.config** is the kernel configuration file.
+ * **arch/$(ARCH)/Makefile** is the arch makefile, which supplements the top makefile.
+ * **scripts/Makefile.*** describes common rules for all kbuild makefiles.
+ * Finally, there are about 500 **kbuild makefiles**.
+
+
+
+The top makefile includes the arch makefile, reads the .config file, descends into subdirectories, invokes **make** on each component's makefile with the help of routines defined in **scripts/Makefile.***, builds up each intermediate object, and links all the intermediate objects into vmlinux. The kernel document [Documentation/kbuild/makefiles.txt][2] describes all aspects of these makefiles.
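The recursive-make idea is easy to see in miniature outside the kernel. Here is a toy two-level sketch (the directory names and messages are made up for illustration) that mimics a top makefile descending into a component directory:

```shell
# A minimal imitation of kbuild's recursive make: the top Makefile
# invokes make on a component's own Makefile, then finishes its own
# work. All paths and messages here are illustrative only.
mkdir -p demo/component

printf 'all:\n\t$(MAKE) --no-print-directory -C component\n\t@echo "top: link objects"\n' > demo/Makefile
printf 'all:\n\t@echo "component: build built-in.a"\n' > demo/component/Makefile

make -s --no-print-directory -C demo
# component: build built-in.a
# top: link objects
```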
+ +As an example, let's look at how vmlinux is produced on x86-64: + +![vmlinux overview][4] + +(The illustration is based on Richard Y. Steven's [blog][5]. It was updated and is used with the author's permission.) + +All the **.o** files that go into vmlinux first go into their own **built-in.a** , which is indicated via variables **KBUILD_VMLINUX_INIT** , **KBUILD_VMLINUX_MAIN** , **KBUILD_VMLINUX_LIBS** , then are collected into the vmlinux file. + +Take a look at how recursive make is implemented in the Linux kernel, with the help of simplified makefile code: + +``` + # In top Makefile + vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) +                 +$(call if_changed,link-vmlinux) + + # Variable assignments + vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS) + + export KBUILD_VMLINUX_INIT := $(head-y) $(init-y) + export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-y) + export KBUILD_VMLINUX_LIBS := $(libs-y1) + export KBUILD_LDS          := arch/$(SRCARCH)/kernel/vmlinux.lds + + init-y          := init/ + drivers-y       := drivers/ sound/ firmware/ + net-y           := net/ + libs-y          := lib/ + core-y          := usr/ + virt-y          := virt/ + + # Transform to corresponding built-in.a + init-y          := $(patsubst %/, %/built-in.a, $(init-y)) + core-y          := $(patsubst %/, %/built-in.a, $(core-y)) + drivers-y       := $(patsubst %/, %/built-in.a, $(drivers-y)) + net-y           := $(patsubst %/, %/built-in.a, $(net-y)) + libs-y1         := $(patsubst %/, %/lib.a, $(libs-y)) + libs-y2         := $(patsubst %/, %/built-in.a, $(filter-out %.a, $(libs-y))) + virt-y          := $(patsubst %/, %/built-in.a, $(virt-y)) + + # Setup the dependency. vmlinux-deps are all intermediate objects, vmlinux-dirs + # are phony targets, so every time comes to this rule, the recipe of vmlinux-dirs + # will be executed. 
Refer "4.6 Phony Targets" of `info make`
+ $(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
+
+ # Variable vmlinux-dirs is the directory part of each built-in.a
+ vmlinux-dirs    := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
+                      $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
+                      $(net-y) $(net-m) $(libs-y) $(libs-m) $(virt-y)))
+
+ # The entry of recursive make
+ $(vmlinux-dirs):
+                 $(Q)$(MAKE) $(build)=$@ need-builtin=1
+```
+
+The recursive make recipe is expanded, for example:
+
+```
+ make -f scripts/Makefile.build obj=init need-builtin=1
+```
+
+This means **make** will go into **scripts/Makefile.build** to continue the work of building each **built-in.a**. With the help of **scripts/link-vmlinux.sh**, the vmlinux file is finally under source root.
+
+#### Understanding vmlinux vs. bzImage
+
+Many Linux kernel developers may not be clear about the relationship between vmlinux and bzImage. For example, here is their relationship in x86-64:
+
+![](https://opensource.com/sites/default/files/uploads/vmlinux-bzimage.png)
+
+The source root vmlinux is stripped, compressed, put into **piggy.S**, then linked with other peer objects into **arch/x86/boot/compressed/vmlinux**. Meanwhile, a file called setup.bin is produced under **arch/x86/boot**. There may be an optional third file that has relocation info, depending on the configuration of **CONFIG_X86_NEED_RELOCS**.
+
+A host program called **build**, provided by the kernel, builds these two (or three) parts into the final bzImage file.
+
+#### Dependency tracking
+
+Kbuild tracks three kinds of dependencies:
+
+ 1. All prerequisite files (both **\*.c** and **\*.h** files)
+ 2. **CONFIG_** options used in all prerequisite files
+ 3. Command-line dependencies used to compile the target
+
+
+
+The first one is easy to understand, but what about the second and third? 
Kernel developers often see code pieces like this:
+
+```
+ #ifdef CONFIG_SMP
+ __boot_cpu_id = cpu;
+ #endif
+```
+
+When **CONFIG_SMP** changes, this piece of code should be recompiled. The command line for compiling a source file also matters, because different command lines may result in different object files.
+
+When a **.c** file uses a header file via a **#include** directive, you need to write a rule like this:
+
+```
+ main.o: defs.h
+         recipe...
+```
+
+When managing a large project, you need a lot of these kinds of rules; writing them all would be tedious and boring. Fortunately, most modern C compilers can write these rules for you by looking at the **#include** lines in the source file. For the GNU Compiler Collection (GCC), it is just a matter of adding a command-line parameter: **-MD depfile**.
+
+```
+ # In scripts/Makefile.lib
+ c_flags        = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
+                  -include $(srctree)/include/linux/compiler_types.h       \
+                  $(__c_flags) $(modkern_cflags)                           \
+                  $(basename_flags) $(modname_flags)
+```
+
+This would generate a **.d** file with content like:
+
+```
+ init_task.o: init/init_task.c include/linux/kconfig.h \
+  include/generated/autoconf.h include/linux/init_task.h \
+  include/linux/rcupdate.h include/linux/types.h \
+  ...
+```
+
+Then the host program **[fixdep][6]** takes care of the other two dependencies by taking the **depfile** and command line as input, then outputting a **.\<target\>.cmd** file in makefile syntax, which records the command line and all the prerequisites (including the configuration) for a target. It looks like this:
+
+```
+ # The command line used to compile the target
+ cmd_init/init_task.o := gcc -Wp,-MD,init/.init_task.o.d  -nostdinc ...
+ ... 
+ # The dependency files
+ deps_init/init_task.o := \
+ $(wildcard include/config/posix/timers.h) \
+ $(wildcard include/config/arch/task/struct/on/stack.h) \
+ $(wildcard include/config/thread/info/in/task.h) \
+ ...
+   include/uapi/linux/types.h \
+   arch/x86/include/uapi/asm/types.h \
+   include/uapi/asm-generic/types.h \
+   ...
+```
+
+A **.\<target\>.cmd** file will be included during recursive make, providing all the dependency info and helping to decide whether to rebuild a target or not.
+
+The secret behind this is that **fixdep** will parse the **depfile** (**.d** file), then parse all the dependency files inside, search the text for all the **CONFIG_** strings, convert them to the corresponding empty header file, and add them to the target's prerequisites. Every time the configuration changes, the corresponding empty header file will be updated, too, so kbuild can detect that change and rebuild the target that depends on it. Because the command line is also recorded, it is easy to compare the last and current compiling parameters.
+
+### Looking ahead
+
+Kconfig/kbuild remained the same for a long time until the new maintainer, Masahiro Yamada, joined in early 2017, and now kbuild is under active development again. Don't be surprised if you soon see something different from what's in this article. 
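To make the CONFIG_-symbol-to-header transform concrete, here is a small shell sketch of the text substitution that fixdep performs: drop the CONFIG_ prefix, lowercase the rest, and turn underscores into path separators. This only imitates the idea; it is not fixdep's actual code.

```shell
# Imitate fixdep's transform: find CONFIG_ symbols in a source file
# and map each one to the empty header under include/config/ that
# kbuild uses for dependency tracking. A sketch, not fixdep itself.
cat > sample.c <<'EOF'
#ifdef CONFIG_SMP
__boot_cpu_id = cpu;
#endif
#ifdef CONFIG_THREAD_INFO_IN_TASK
#endif
EOF

grep -o 'CONFIG_[A-Z0-9_]*' sample.c | sort -u | while read -r sym; do
    path=$(printf '%s' "${sym#CONFIG_}" | tr 'A-Z_' 'a-z/')
    echo "include/config/$path.h"
done
# include/config/smp.h
# include/config/thread/info/in/task.h
```

Note how the second result matches the `$(wildcard include/config/thread/info/in/task.h)` prerequisite shown in the deps listing above.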
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/kbuild-and-kconfig + +作者:[Cao Jin][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/pinocchio +[b]: https://github.com/lujun9972 +[1]: https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kconfig-language.txt +[2]: https://www.mjmwired.net/kernel/Documentation/kbuild/makefiles.txt +[3]: https://opensource.com/file/411516 +[4]: https://opensource.com/sites/default/files/uploads/vmlinux_generation_process.png (vmlinux overview) +[5]: https://blog.csdn.net/richardysteven/article/details/52502734 +[6]: https://github.com/torvalds/linux/blob/master/scripts/basic/fixdep.c From df725121ec08b87bb030a94a0ac5fbf028740d31 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 15 Oct 2018 12:09:51 +0800 Subject: [PATCH 446/736] PRF:20180925 Hegemon - A Modular System Monitor Application Written In Rust.md @geekpi --- ...tem Monitor Application Written In Rust.md | 30 +++++++++---------- 1 file changed, 14 insertions(+), 16 deletions(-) diff --git a/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md index 71aace4ce4..b25f2ecb0c 100644 --- a/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md +++ b/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md @@ -1,27 +1,25 @@ -Hegemon - 使用 Rust 编写的模块化系统监视程序 +Hegemon:使用 Rust 编写的模块化系统监视程序 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png) -在类 Unix 系统中监视运行进程时,最常用的程序是 **top** 和 top 的增强版 **htop**。我个人最喜欢的是 htop。但是,开发人员不时会发布这些程序的替代品。top 和 htop 工具的一个替代品是 **Hegemon**。它是使用 **Rust** 
语言编写的模块化系统监视程序。 +在类 Unix 系统中监视运行进程时,最常用的程序是 `top` 和它的增强版 `htop`。我个人最喜欢的是 `htop`。但是,开发人员不时会发布这些程序的替代品。`top` 和 `htop` 工具的一个替代品是 `Hegemon`。它是使用 Rust 语言编写的模块化系统监视程序。 关于 Hegemon 的功能,我们可以列出以下这些: - * Hegemon 会监控 CPU、内存和交换页的使用情况。 -  * 它监控系统的温度和风扇速度。 -  * 更新间隔时间可以调整。默认值为 3 秒。 -  * 我们可以通过扩展数据流来展示更详细的图表和其他信息。 -  * 单元测试 -  * 干净的界面 -  * 免费且开源。 - - +* Hegemon 会监控 CPU、内存和交换页的使用情况。 +* 它监控系统的温度和风扇速度。 +* 更新间隔时间可以调整。默认值为 3 秒。 +* 我们可以通过扩展数据流来展示更详细的图表和其他信息。 +* 单元测试。 +* 干净的界面。 +* 自由开源。 ### 安装 Hegemon -确保已安装 **Rust 1.26** 或更高版本。要在 Linux 发行版中安装 Rust,请参阅以下指南: +确保已安装 Rust 1.26 或更高版本。要在 Linux 发行版中安装 Rust,请参阅以下指南: -[Install Rust Programming Language In Linux][2] +- [在 Linux 中安装 Rust 编程语言][2] 另外要安装 [libsensors][1] 库。它在大多数 Linux 发行版的默认仓库中都有。例如,你可以使用以下命令将其安装在基于 RPM 的系统(如 Fedora)中: @@ -51,10 +49,10 @@ $ hegemon ![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif) -要退出,请按 **Q**。 +要退出,请按 `Q`。 -请注意,hegemon 仍处于早期开发阶段,并不能完全取代 **top** 命令。它可能存在 bug 和功能缺失。如果你遇到任何 bug,请在项目的 github 页面中报告它们。开发人员计划在即将推出的版本中引入更多功能。所以,请关注这个项目。 +请注意,hegemon 仍处于早期开发阶段,并不能完全取代 `top` 命令。它可能存在 bug 和功能缺失。如果你遇到任何 bug,请在项目的 GitHub 页面中报告它们。开发人员计划在即将推出的版本中引入更多功能。所以,请关注这个项目。 就是这些了。希望这篇文章有用。还有更多的好东西。敬请关注! 
@@ -69,7 +67,7 @@ via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-writ 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5d528e4e0ab704b95cdc2d62c2305a7363324802 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 15 Oct 2018 12:14:53 +0800 Subject: [PATCH 447/736] PUB:20180925 Hegemon - A Modular System Monitor Application Written In Rust.md @geekpi https://linux.cn/article-10117-1.html --- ...emon - A Modular System Monitor Application Written In Rust.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md (100%) diff --git a/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/published/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md similarity index 100% rename from translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md rename to published/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md From 77972b7e8878148a2f7908c9439dfd2d1e7cfcaa Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 15 Oct 2018 12:19:00 +0800 Subject: [PATCH 448/736] PUB:20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @littleji https://linux.cn/article-10118-1.html --- ... 
The Lines Of Source Code In Many Programming Languages.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md (96%) diff --git a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/published/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md similarity index 96% rename from translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md rename to published/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md index d90663cd76..c6cd182cb2 100644 --- a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md +++ b/published/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -1,9 +1,9 @@ -cloc –– 计算不同编程语言源代码的行数 +cloc:计算不同编程语言源代码的行数 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png) -作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。这时,你就需要用到一些代码统计的工具,我知道其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**cloc**。你可以用 cloc 很容易地统计多种语言的源代码行数。它还可以计算空行数、代码行数、实际代码的行数,并通过整齐的表格进行结果输出。cloc 是免费的、开源的跨平台程序,使用 **Perl** 进行开发。 +作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。这时,你就需要用到一些代码统计的工具,我知道其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**cloc**。你可以用 cloc 很容易地统计多种语言的源代码行数。它还可以计算空行数、代码行数、实际代码的行数,并通过整齐的表格进行结果输出。cloc 是自由开源的跨平台程序,使用 **Perl** 进行开发。 ### 特点 From 448c9b7aac2492c7c3f62ff4990c1926f6447096 Mon Sep 17 00:00:00 2001 From: distant1219 Date: Mon, 15 Oct 2018 16:31:03 +0800 Subject: [PATCH 449/736] Update 20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md I want to translate this eaasy --- ...orch 1.0 Preview Release- Facebook-s newest Open Source AI.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md 
b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md index 6418db9444..08551028b2 100644 --- a/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md +++ b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md @@ -1,3 +1,4 @@ +distant1219 is translating PyTorch 1.0 Preview Release: Facebook’s newest Open Source AI ====== Facebook already uses its own Open Source AI, PyTorch quite extensively in its own artificial intelligence projects. Recently, they have gone a league ahead by releasing a pre-release preview version 1.0. From 529cd5c1a25ea915aed368d4e2c140555d76247a Mon Sep 17 00:00:00 2001 From: belitex Date: Mon, 15 Oct 2018 17:06:13 +0800 Subject: [PATCH 450/736] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?How=20Writing=20Can=20Expand=20Your=20Skills=20and=20Grow=20You?= =?UTF-8?q?r=20Career?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Expand Your Skills and Grow Your Career.md | 47 ------------------- ...Expand Your Skills and Grow Your Career.md | 45 ++++++++++++++++++ 2 files changed, 45 insertions(+), 47 deletions(-) delete mode 100644 sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md create mode 100644 translated/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md diff --git a/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md b/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md deleted file mode 100644 index 324d3c8700..0000000000 --- a/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md +++ /dev/null @@ -1,47 +0,0 @@ -translating by belitex -How Writing Can Expand Your Skills and Grow Your Career -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/graffiti-1281310_1920.jpg?itok=RCayfGKv) - -At the recent [Open Source 
Summit in Vancouver][1], I participated in a panel discussion called [How Writing can Change Your Career for the Better (Even if You don't Identify as a Writer][2]. The panel was moderated by Rikki Endsley, Community Manager and Editor for Opensource.com, and it included VM (Vicky) Brasseur, Open Source Strategy Consultant; Alex Williams, Founder, Editor in Chief, The New Stack; and Dawn Foster, Consultant, The Scale Factory. - -The talk was [inspired by this article][3], in which Rikki examined some ways that writing can "spark joy" and improve your career in unexpected ways. Full disclosure: I have known Rikki for a long time. We worked at the same company for many years, raised our children together, and remain close friends. - -### Write and learn - -As Rikki noted in the talk description, “even if you don't consider yourself to be ‘a writer,’ you should consider writing about your open source contributions, project, or community.” Writing can be a great way to share knowledge and engage others in your work, but it has personal benefits as well. It can help you meet new people, learn new skills, and improve your communication style. - -I find that writing often clarifies for me what I don’t know about a particular topic. The process highlights gaps in my understanding and motivates me to fill in those gaps through further research, reading, and asking questions. - -“Writing about what you don't know can be much harder and more time consuming, but also much more fulfilling and help your career. I've found that writing about what I don't know helps me learn, because I have to research it and understand it well enough to explain it,” Rikki said. - -Writing about what you’ve just learned can be valuable to other learners as well. In her blog, [Julia Evans][4] often writes about learning new technical skills. She has a friendly, approachable style along with the ability to break down topics into bite-sized pieces. 
In her posts, Evans takes readers through her learning process, identifying what was and was not helpful to her along the way, essentially removing obstacles for her readers and clearing a path for those new to the topic. - -### Communicate more clearly - -Writing can help you practice thinking and speaking more precisely, especially if you’re writing (or speaking) for an international audience. [In this article,][5] for example, Isabel Drost-Fromm provides tips for removing ambiguity for non-native English speakers. Writing can also help you organize your thoughts before a presentation, whether you’re speaking at a conference or to your team. - -“The process of writing the articles helps me organize my talks and slides, and it was a great way to provide ‘notes’ for conference attendees, while sharing the topic with a larger international audience that wasn't at the event in person,” Rikki stated. - -If you’re interested in writing, I encourage you to do it. I highly recommend the articles mentioned here as a way to get started thinking about the story you have to tell. Unfortunately, our discussion at Open Source Summit was not recorded, but I hope we can do another talk in the future and share more ideas. 
- -Check out the schedule of talks for Open Source Summit Europe and sign up to receive updates: - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/2018/9/how-writing-can-help-you-learn-new-skills-and-grow-your-career - -作者:[Amber Ankerholz][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/aankerholz -[1]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/ -[2]: https://ossna18.sched.com/event/FAOF/panel-discussion-how-writing-can-change-your-career-for-the-better-even-if-you-dont-identify-as-a-writer-moderated-by-rikki-endsley-opensourcecom-red-hat?iframe=no# -[3]: https://opensource.com/article/18/2/career-changing-magic-writing -[4]: https://jvns.ca/ -[5]: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience diff --git a/translated/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md b/translated/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md new file mode 100644 index 0000000000..f75c55b892 --- /dev/null +++ b/translated/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md @@ -0,0 +1,45 @@ +写作是如何帮助技能拓展和事业成长的 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/graffiti-1281310_1920.jpg?itok=RCayfGKv) + +在最近的[温哥华开源峰会][1]上,我参加了一个小组讨论,叫做“写作是如何改变你的职业生涯的(即使你不是个作家)”。主持人是 Opensource.com 的社区经理兼编辑 Rikki Endsley,成员有开源策略顾问 VM (Vicky) Brasseur,The New Stack 的创始人兼主编 Alex Williams,还有 The Scale Factory 的顾问 Dawn Foster。 + +Rikki 在她的[这篇文章][3]中总结了一些能愉悦你,并且能以意想不到的方式改善你职业生涯的写作方法,我在峰会上的发言是受她这篇文章的启发。透露一下,我认识 Rikki 很久了,我们在同一家公司共事了很多年,一起带过孩子,到现在还是很亲密的朋友。 + +### 写作和学习 + +正如 Rikki 
对这个小组讨论的描述,“即使你自认为不是一个‘作家’,你也应该考虑写一下对开源的贡献,还有你的项目或者社区”。写作是一种很好的方式,来分享自己的知识并让别人参与到你的工作中来,当然它对个人也有好处。写作能帮助你结识新人,学习新技能,还能改善你的沟通。 + +我发现写作能让我搞清楚自己对某个主题有哪些不懂的地方。写作的过程会让知识体系的空白很突出,这激励了我通过进一步的研究、阅读和提问来填补空白。 + +Rikki 说:“写那些你不知道的东西会更加困难也更加耗时,但是也更有成就感,更有益于你的事业。我发现写我不知道的东西有助于自己学习,因为得研究透彻才能给读者解释清楚。” + +把你刚学到的东西写出来对其他也在学习这些知识的人是很有价值的。[Julia Evans][4] 经常在她的博客里写有关学习新技能的文章。她能把主题分解成一个个小的部分,这种方法对读者很友好,容易上手。Evans 在自己的博客中带领读者了解她的学习过程,指出在这个过程中哪些是对她有用的,哪些是没用的,基本消除了读者的学习障碍,为新手清扫了道路。 + +### 更明确的沟通 + + +写作有助于练习思考和准确讲话,尤其是面向国际受众写作(或演讲)时。例如,在[这篇文章中][5],Isabel Drost-Fromm 为那些母语不是英语的演讲者提供了几个技巧来消除歧义。不管是在会议上还是在自己团队内发言,写作还能帮你在演示之前理清思路。 + +Rikki 说:“写文章的过程有助于我组织整理自己的发言和演示稿,也是一个给参会者提供笔记的好方式,还可以分享给没有参加活动的更多国际观众。” + +如果你有兴趣,我鼓励你去写作。我强烈建议你参考这里提到的文章,开始思考你要写的内容。 不幸的是,我们在开源峰会上的讨论没有记录,但我希望将来能再做一次讨论,分享更多的想法。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/9/how-writing-can-help-you-learn-new-skills-and-grow-your-career + +作者:[Amber Ankerholz][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[belitex](https://github.com/belitex) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/aankerholz +[1]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/ +[2]: https://ossna18.sched.com/event/FAOF/panel-discussion-how-writing-can-change-your-career-for-the-better-even-if-you-dont-identify-as-a-writer-moderated-by-rikki-endsley-opensourcecom-red-hat?iframe=no# +[3]: https://opensource.com/article/18/2/career-changing-magic-writing +[4]: https://jvns.ca/ +[5]: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience From 8bbe4640dafd97f97bed001048644f9c133bc797 Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Mon, 15 Oct 2018 18:01:50 +0800 Subject: [PATCH 451/736] Delete 20181009 How To Create And Maintain Your 
Own Man Pages.md --- ... Create And Maintain Your Own Man Pages.md | 199 ------------------ 1 file changed, 199 deletions(-) delete mode 100644 sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md diff --git a/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md deleted file mode 100644 index cb93af4b92..0000000000 --- a/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md +++ /dev/null @@ -1,199 +0,0 @@ -Translating by way-ww -How To Create And Maintain Your Own Man Pages -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png) - -We already have discussed about a few [**good alternatives to Man pages**][1]. Those alternatives are mainly used for learning concise Linux command examples without having to go through the comprehensive man pages. If you’re looking for a quick and dirty way to easily and quickly learn a Linux command, those alternatives are worth trying. Now, you might be thinking – how can I create my own man-like help pages for a Linux command? This is where **“Um”** comes in handy. Um is a command line utility, used to easily create and maintain your own Man pages that contains only what you’ve learned about a command so far. - -By creating your own alternative to man pages, you can avoid lots of unnecessary, comprehensive details in a man page and include only what is necessary to keep in mind. If you ever wanted to created your own set of man-like pages, Um will definitely help. In this brief tutorial, we will see how to install “Um” command line utility and how to create our own man pages. - -### Installing Um - -Um is available for Linux and Mac OS. At present, it can only be installed using **Linuxbrew** package manager in Linux systems. Refer the following link if you haven’t installed Linuxbrew yet. - -Once Linuxbrew installed, run the following command to install Um utility. 
- -``` -$ brew install sinclairtarget/wst/um - -``` - -If you will see an output something like below, congratulations! Um has been installed and ready to use. - -``` -[...] -==> Installing sinclairtarget/wst/um -==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz -==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0 --=#=# # # -==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem -######################################################################## 100.0% -==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939 -==> Caveats -Bash completion has been installed to: -/home/linuxbrew/.linuxbrew/etc/bash_completion.d -==> Summary -🍺 /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds -==> Caveats -==> openssl -A CA file has been bootstrapped using certificates from the SystemRoots -keychain. To add additional certificates (e.g. the certificates added in -the System keychain), place .pem files in -/home/linuxbrew/.linuxbrew/etc/openssl/certs - -and run -/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash -==> ruby -Emacs Lisp files have been installed to: -/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby -==> um -Bash completion has been installed to: -/home/linuxbrew/.linuxbrew/etc/bash_completion.d - -``` - -Before going to use to make your man pages, you need to enable bash completion for Um. - -To do so, open your **~/.bash_profile** file: - -``` -$ nano ~/.bash_profile - -``` - -And, add the following lines in it: - -``` -if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then - . $(brew --prefix)/etc/bash_completion.d/um-completion.sh -fi - -``` - -Save and close the file. Run the following commands to update the changes. - -``` -$ source ~/.bash_profile - -``` - -All done. let us go ahead and create our first man page. 
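Before creating the first page, it can be worth sanity-checking the install. The following shell sketch is not from the original article: it assumes Linuxbrew's default prefix (`/home/linuxbrew/.linuxbrew`) as a fallback when `brew` itself is not yet on the `PATH`, and it only sources the completion file if that file actually exists, so the snippet stays safe in a `~/.bash_profile` on machines where Um is missing:

```shell
# Sketch: verify the Um install before wiring up bash completion.
# BREW_PREFIX falls back to Linuxbrew's default location (an assumption;
# adjust it if your prefix differs).
BREW_PREFIX="${BREW_PREFIX:-$(brew --prefix 2>/dev/null || echo /home/linuxbrew/.linuxbrew)}"
COMPLETION="$BREW_PREFIX/etc/bash_completion.d/um-completion.sh"

if command -v um >/dev/null 2>&1; then
    echo "um found at: $(command -v um)"
else
    echo "um not found; is $BREW_PREFIX/bin in your PATH?"
fi

# Source the completion script only if it exists, so the profile
# does not error out on machines where Um is not installed.
[ -f "$COMPLETION" ] && . "$COMPLETION" || true
```

If `um` is not found, re-run the `brew install` step shown above before continuing.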
- -### Create And Maintain Your Own Man Pages - -Let us say, you want to create your own man page for “dpkg” command. To do so, run: - -``` -$ um edit dpkg - -``` - -The above command will open a markdown template in your default editor: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png) - -My default editor is Vi, so the above commands open it in the Vi editor. Now, start adding everything you want to remember about “dpkg” command in this template. - -Here is a sample: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png) - -As you see in the above output, I have added Synopsis, description and two options for dpkg command. You can add as many as sections you want in the man pages. Make sure you have given proper and easily-understandable titles for each section. Once done, save and quit the file (If you use Vi editor, Press **ESC** key and type **:wq** ). - -Finally, view your newly created man page using command: - -``` -$ um dpkg - -``` - -![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png) - -As you can see, the the dpkg man page looks exactly like the official man pages. If you want to edit and/or add more details in a man page, again run the same command and add the details. - -``` -$ um edit dpkg - -``` - -To view the list of newly created man pages using Um, run: - -``` -$ um list - -``` - -All man pages will be saved under a directory named**`.um`**in your home directory - -Just in case, if you don’t want a particular page, simply delete it as shown below. - -``` -$ um rm dpkg - -``` - -To view the help section and all available general options, run: - -``` -$ um --help -usage: um - um [ARGS...] - -The first form is equivalent to `um read `. - -Subcommands: - um (l)ist List the available pages for the current topic. - um (r)ead Read the given page under the current topic. - um (e)dit Create or edit the given page under the current topic. 
- um rm Remove the given page. - um (t)opic [topic] Get or set the current topic. - um topics List all topics. - um (c)onfig [config key] Display configuration environment. - um (h)elp [sub-command] Display this help message, or the help message for a sub-command. - -``` - -### Configure Um - -To view the current configuration, run: - -``` -$ um config -Options prefixed by '*' are set in /home/sk/.um/umconfig. -editor = vi -pager = less -pages_directory = /home/sk/.um/pages -default_topic = shell -pages_ext = .md - -``` - -In this file, you can edit and change the values for **pager** , **editor** , **default_topic** , **pages_directory** , and **pages_ext** options as you wish. Say for example, if you want to save the newly created Um pages in your **[Dropbox][2]** folder, simply change the value of **pages_directory** directive and point it to the Dropbox folder in **~/.um/umconfig** file. - -``` -pages_directory = /Users/myusername/Dropbox/um - -``` - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/ - -作者:[SK][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/ -[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ From 5f07a08baaf2448c518293f6cefae798a0c45ea4 Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Mon, 15 Oct 2018 18:02:37 +0800 Subject: [PATCH 452/736] Create 20181009 How To Create And Maintain Your Own Man Pages.md --- ... 
Create And Maintain Your Own Man Pages.md | 199 ++++++++++++++++++
 1 file changed, 199 insertions(+)
 create mode 100644 translated/tech/20181009 How To Create And Maintain Your Own Man Pages.md

diff --git a/translated/tech/20181009 How To Create And Maintain Your Own Man Pages.md b/translated/tech/20181009 How To Create And Maintain Your Own Man Pages.md
new file mode 100644
index 0000000000..407ad90947
--- /dev/null
+++ b/translated/tech/20181009 How To Create And Maintain Your Own Man Pages.md
@@ -0,0 +1,199 @@
+如何创建和维护你的Man手册
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png)
+
+我们已经讨论了一些[Man手册的替代方案][1]。 这些替代方案主要用于学习简洁的Linux命令示例,而无需通读全面而详尽的手册页。 如果你正在寻找一种快速而简单的方法来轻松快速地学习Linux命令,那么这些替代方案值得尝试。 现在,你可能正在考虑 - 如何为Linux命令创建自己的man-like帮助页面? 这时**“Um”**就派上用场了。 Um是一个命令行实用程序,用于轻松创建和维护包含你到目前为止所了解的所有命令的Man页面。
+
+通过创建自己的手册页,你可以在手册页中避免大量不必要的细节,并且只包含你需要记住的内容。 如果你想创建自己的一套man-like页面,“Um”也能为你提供帮助。 在这个简短的教程中,我们将学习如何安装“Um”命令以及如何创建自己的man手册页。
+
+### 安装 Um
+
+Um适用于Linux和Mac OS。 目前,它只能在Linux系统中使用 **Linuxbrew** 软件包管理器来进行安装。 如果你尚未安装Linuxbrew,请参考以下链接。
+
+安装Linuxbrew后,运行以下命令安装Um实用程序。
+
+```
+$ brew install sinclairtarget/wst/um
+
+```
+
+如果你看到类似下面的输出,恭喜你! Um已经安装好并且可以使用了。
+
+```
+[...]
+==> Installing sinclairtarget/wst/um
+==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz
+==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0
+-=#=# # #
+==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem
+######################################################################## 100.0%
+==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939
+==> Caveats
+Bash completion has been installed to:
+/home/linuxbrew/.linuxbrew/etc/bash_completion.d
+==> Summary
+🍺 /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds
+==> Caveats
+==> openssl
+A CA file has been bootstrapped using certificates from the SystemRoots
+keychain.
To add additional certificates (e.g. the certificates added in
+the System keychain), place .pem files in
+/home/linuxbrew/.linuxbrew/etc/openssl/certs
+
+and run
+/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash
+==> ruby
+Emacs Lisp files have been installed to:
+/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby
+==> um
+Bash completion has been installed to:
+/home/linuxbrew/.linuxbrew/etc/bash_completion.d
+
+```
+
+在制作你的man手册页之前,你需要为Um启用bash补全。
+
+要开启 bash 补全,首先你需要打开 **~/.bash_profile** 文件:
+
+```
+$ nano ~/.bash_profile
+
+```
+
+并在其中添加以下内容:
+
+```
+if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then
+ . $(brew --prefix)/etc/bash_completion.d/um-completion.sh
+fi
+
+```
+
+保存并关闭文件。运行以下命令以更新更改。
+
+```
+$ source ~/.bash_profile
+
+```
+
+准备工作全部完成。让我们继续创建我们的第一个man手册页。
+
+
+### 创建并维护自己的Man手册
+
+如果你想为“dpkg”命令创建自己的Man手册,请运行:
+
+```
+$ um edit dpkg
+
+```
+
+上面的命令将在默认编辑器中打开markdown模板:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png)
+
+我的默认编辑器是Vi,因此上面的命令会在Vi编辑器中打开它。现在,开始在此模板中添加有关“dpkg”命令的所有内容。
+
+下面是一个示例:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png)
+
+正如你在上图的输出中看到的,我为dpkg命令添加了概要,描述和两个参数选项。 你可以在Man手册中添加你所需要的所有部分。不过你也要确保为每个部分提供了适当且易于理解的标题。 完成后,保存并退出文件(如果使用Vi编辑器,请按ESC键并键入:wq)。
+
+最后,使用以下命令查看新创建的Man手册页:
+
+```
+$ um dpkg
+
+```
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png)
+
+如你所见,dpkg的Man手册页看起来与官方手册页完全相同。 如果要在手册页中编辑和/或添加更多详细信息,请再次运行相同的命令并添加更多详细信息。
+
+```
+$ um edit dpkg
+
+```
+
+要使用Um查看新创建的Man手册页列表,请运行:
+
+```
+$ um list
+
+```
+
+所有手册页将保存在主目录中名为 **`.um`** 的目录下。
+
+以防万一,如果你不想要某个特定页面,只需删除它,如下所示。
+
+```
+$ um rm dpkg
+
+```
+
+要查看帮助部分和所有可用的常规选项,请运行:
+
+```
+$ um --help
+usage: um
+ um [ARGS...]
+
+The first form is equivalent to `um read `.
+
+Subcommands:
+ um (l)ist List the available pages for the current topic.
+ um (r)ead Read the given page under the current topic.
+ um (e)dit Create or edit the given page under the current topic.
+ um rm Remove the given page.
+ um (t)opic [topic] Get or set the current topic.
+ um topics List all topics.
+ um (c)onfig [config key] Display configuration environment.
+ um (h)elp [sub-command] Display this help message, or the help message for a sub-command.
+
+```
+
+### 配置 Um
+
+要查看当前配置,请运行:
+
+```
+$ um config
+Options prefixed by '*' are set in /home/sk/.um/umconfig.
+editor = vi
+pager = less
+pages_directory = /home/sk/.um/pages
+default_topic = shell
+pages_ext = .md
+
+```
+
+在此文件中,你可以根据需要编辑和更改 **pager**、**editor**、**default_topic**、**pages_directory** 和 **pages_ext** 选项的值。 比如说,如果你想在 **[Dropbox][2]** 文件夹中保存新创建的Um页面,只需将 **~/.um/umconfig** 文件中 **pages_directory** 的值更改为Dropbox文件夹即可。
+
+```
+pages_directory = /Users/myusername/Dropbox/um
+
+```
+
+这就是全部内容,希望这些能对你有用,更多好的内容敬请关注!
+
+干杯!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[way-ww](https://github.com/way-ww)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/
+[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ From 69e4ebdb113750a82b185b27d699030846b55b87 Mon Sep 17 00:00:00 2001 From: sd886393 Date: Mon, 15 Oct 2018 19:13:01 +0800 Subject: [PATCH 453/736] =?UTF-8?q?=E3=80=90=E7=94=B3=E9=A2=86=E3=80=91201?= =?UTF-8?q?80829=204=20open=20source=20monitoring=20tools?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20180829 4 open source monitoring tools.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180829 4 open source
monitoring tools.md b/sources/tech/20180829 4 open source monitoring tools.md index a5b8bf6806..dbc59d8a29 100644 --- a/sources/tech/20180829 4 open source monitoring tools.md +++ b/sources/tech/20180829 4 open source monitoring tools.md @@ -1,3 +1,4 @@ +translating by sd886393 4 open source monitoring tools ====== From 80d11d997c3bd78d470e01f47dc9ae4f56de1b61 Mon Sep 17 00:00:00 2001 From: belitex Date: Mon, 15 Oct 2018 20:44:24 +0800 Subject: [PATCH 454/736] translating by belitex: A sysadmin-s guide to containers --- sources/tech/20180827 A sysadmin-s guide to containers.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180827 A sysadmin-s guide to containers.md b/sources/tech/20180827 A sysadmin-s guide to containers.md index a6529d8842..6acf9c2a45 100644 --- a/sources/tech/20180827 A sysadmin-s guide to containers.md +++ b/sources/tech/20180827 A sysadmin-s guide to containers.md @@ -1,3 +1,5 @@ +translating by belitex + A sysadmin's guide to containers ====== From 1a297c4363ad9d7de97b2c42610e079b26b09753 Mon Sep 17 00:00:00 2001 From: thecyanbird <2534930703@qq.com> Date: Mon, 15 Oct 2018 20:55:03 +0800 Subject: [PATCH 455/736] Create 20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md --- ...nduct and Not Everyone is Happy With it.md | 166 ++++++++++++++++++ 1 file changed, 166 insertions(+) create mode 100644 translated/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md diff --git a/translated/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md b/translated/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md new file mode 100644 index 0000000000..32a839ed81 --- /dev/null +++ b/translated/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md @@ -0,0 +1,166 @@ +Linux 拥有了新的行为准则,但是许多人都对此表示不满 +===== + +**Linux 内核有了新的行为准则Code of Conduct(CoC)。但在这条行为准则被签署以及发布仅仅 30 分钟之后,Linus Torvalds 就暂时离开了 Linux 
内核的开发工作。因为新行为准则的作者那富有争议的过去,现在这件事成为了热点话题。许多人都对这新的行为准则表示不满。**
+
+如果你还不了解这件事,请参阅 [Linus Torvalds 对于自己之前的不良态度致歉并开始休假,以改善自己的行为态度][1]
+
+### Linux 内核开发遵守的新行为准则
+
+Linux 内核开发者以前并不是没有需要遵守的行为准则,但是之前的[冲突准则code of conflict][2]现在被替换成了以“给内核开发社区营造更加热情,更方便他人参与的氛围”为目的的行为准则。
+
+>“为营造一个开放并且热情的社区环境,我们,贡献者与维护者,许诺让每一个参与进我们项目和社区的人享受一个没有骚扰的体验。无关于他们的年纪,体型,身体残疾,种族,性别,性别认知与表达,社会经验,教育水平,社会或者经济地位,国籍,外表,人种,信仰,性认同和性取向。
+
+你可以在这里阅读整篇行为准则
+
+[Linux 行为准则][33]
+
+### Linus Torvalds 是被迫道歉并且休假的吗?
+
+![Linus Torvalds 的道歉][3]
+
+这个新的行为准则由 Linus Torvalds 和 Greg Kroah-Hartman (仅次于 Torvalds 的二把手)签发。来自 Intel 的 Dan Williams 和来自 Facebook 的 Chris Mason 也是该准则的签署者。
+
+如果我正确地解读了时间线,在签署这个行为准则的半小时之后,Torvalds [发送了一封邮件,对自己之前的不良态度致歉][4]。他同时宣布会进行休假,以改善自己的行为态度。
+
+不过有些人开始阅读这封邮件的话外之音,并对如下文字报以特别关注:
+
+>**在这周,许多社区成员批评了我之前种种不解人意的行为。我以前在邮件里进行的,对他人轻率的批评是非专业以及不必要的**。这种情况在我将事情放在私人渠道处理的时候尤为严重。我理解这件事情的严重性,这是不好的行为,我对此感到十分抱歉。
+
+他是否是因为新的行为准则被强迫做出道歉,并且决定休假,可以通过这几行来判断。不过这也可能是一种预防措施,以避免 Torvalds 违反这个新制定的行为准则。
+
+### 有关贡献者盟约作者 Coraline Ada Ehmke 的争议
+
+Linux 的行为准则基于[贡献者盟约Contributor Covenant 1.4 版本][5]。贡献者盟约[被上百个开源项目所接纳][6],包括 Eclipse, Angular, Ruby, Kubernetes等项目。
+
+贡献者盟约由 [Coraline Ada Ehmke][7] 创作,她是一个软件工程师,开源支持者,以及 [LGBT][8] 活动家。她对于促进开源世界的多样性做了显著的贡献。
+
+Coraline 对于唯才是用的反对立场同样十分鲜明。[唯才是用meritocracy][9]这个词语源自拉丁文,本意为个人在系统内部的进步取决于他的“功绩”,例如智力水平,取得的证书以及教育程度。但[类似 Coraline 的活动家们认为][10]唯才是用是个糟糕的体系,因为他们只是通过人的智力产出来度量一个人,而并不重视他们的人性。
+
+[![croraline meritocracy][11]][12]
+图片来源:推特用户@nickmon1112
+
+[Linus Torvalds 不止一次地说到,他在意的只是代码而并非写代码的人][13]。所以很明显,这忤逆了 Coraline 有关唯才是用体系的观点。
+
+具体来说,Coraline 那被人关注饱受争议的过去,是一个关于 [Opal 项目][14]的事件。那是一个发生[在推特上的讨论][15],Elia,来自意大利的 Opal 项目核心开发者说“(那些变性人)不接受现实才是问题所在。”
+
+Coraline 并没有参加讨论,也不是 Opal 项目的贡献者。不过作为 LGBT 活动家,她以 Elia 发表“冒犯变性人群体的发言”为由,[要求他退出 Opal 项目][16]。 Coraline 和她的支持者——他们从未给这个项目做过贡献,通过在 GitHub 仓库平台上冗长且激烈的争论,试图将 Elia——此项目的核心开发者移出项目。
+
+虽然 Elia 并没有离开这个项目,不过 Opal 项目的维护者同意实行一个行为准则。这个行为准则就是 Coraline 不停向维护者们宣扬的,她那著名的贡献者盟约。
+
+不过故事到这里并没有结束。贡献者盟约稍后被更改,[加入了一些针对 Elia 
的新条款][17]。这些新条款将行为准则的管束范围扩展到公共领域。不过这些更改稍后[被维护者们标记为恶意篡改][18]。最后 Opal 项目摆脱了贡献者盟约,并用自己的行为准则取而代之。
+
+这个例子非常好地说明了,某些被冒犯的少数人群——他们并没有给这个项目哪怕一点贡献,是怎样试图去驱逐这个项目的核心开发者的。
+
+### 人们对于 Linux 新的行为准则以及 Torvalds 道歉的反应
+
+Linux 行为准则以及 Torvalds 的道歉一发布,社交媒体与论坛上就开始盛传种种谣言与[推测][19]。虽然很多人对新的行为准则感到满意,但仍有些人认为这是 [SJW 尝试渗透 Linux 社区][20]的阴谋。
+
+Coraline 发布的一条富有嘲讽意味的推文让争论愈发激烈。
+
+>我迫不及待期待看到大批的人离开 Linux 社区的场景了。现在它已经被 SJW 的成员渗透了。哈哈哈哈。
+[pic.twitter.com/eFeY6r4ENv][21]
+>
+>— Coraline Ada Ehmke (@CoralineAda) [9 月 16 日, 2018][22]
+
+随着对于 Linux 行为准则的争论持续发酵,Coraline 公开宣称贡献者盟约是一份政治文件。这并不能被那些试图将政治因素排除在开源项目之外的人所接受。
+
+>有些人说贡献者盟约是一份政治文件,他们说的没错。
+>
+>— Coraline Ada Ehmke (@CoralineAda) [9 月 16 日, 2018][23]
+
+Nick Monroe,一位自由记者,宣称 Linux 行为准则远没有表面上看上去那么简单。为了证明自己的观点,他挖掘出了 Coraline 的过去。如果你愿意,可以阅读以下材料。
+
+>好啦,你们已经看到过几千次了。这是一个行为准则。
+>
+>它包含了社会认同的正义行为。
+>
+>不过它或许没有看上去来得那么简单。[pic.twitter.com/8NUL2K1gu2][24]
+>
+>— Nick Monroe (@nickmon1112) [9 月 17 日, 2018][25]
+
+Nick 并不是唯一一个反对 Linux 新的行为准则的人。[SJW][26] 的参与引发了更多的阴谋论猜测。
+
+>我猜今天关于 Linux 的大新闻就是现在,Linux 内核被一个 post meritocracy 世界观下的行为准则给掌控了。
+>
+>这个行为准则的宗旨看起来不错。不过在实际操作中,它们通常被当作 SJW 分子攻击他们不喜之人的工具。况且,很多人都被 SJW 分子所厌恶。
+>
+> — Mark Kern (@Grummz) [September 17, 2018][27]
+
+虽然很多人对于 Torvalds 的道歉感到欣慰,仍有一些人在责备 Torvalds 的态度。
+
+>我是不是唯一一个认为 Linus Torvalds 这十几年来的态度恰好就是 Linux 和开源“社区”特有的那种,居高临下,粗鲁,鄙视一切新人的行为作风?反正作为一个新用户,我从来没有在 Linux 社区里感受到自己是受欢迎的。
+>
+>— Jonathan Frappier (@jfrappier) [9 月 17 日, 2018][28]
+
+还有些人并不能接受 Torvalds 的道歉。
+
+>哦快看啊,一个喜欢辱骂他人的开源软件维护者,在十几年的恶行之后,终于承认了他的行为**可能**是不正确的。
+>
+>我关注的那些人因为这件事都惊讶到平地摔,并且决定给他(Linus Torvalds)寄饼干来庆祝。 🙄🙄🙄
+>
+>— Kelly Ellis (@justkelly_ok) [9 月 17 日, 2018][29]
+
+Torvalds 的道歉引起了广泛关注 ;)
+
+>我现在要在我的个人档案里写上“我不知是否该原谅 Linus Torvalds”吗?
+>
+>— Verónica. (@maria_fibonacci) [9 月 17 日, 2018][30]
+
+不继续开玩笑了。有关 Linus 道歉的关注是由 Sharp 挑起的。他因为“恶劣的社区环境”于 2015 年退出了 Linux 内核的开发。
+
+>现在我们要面对的问题是,这个成就 Linus,给予他肆意辱骂特权的社区能否迎来改变。不仅仅是 Linus 个人,Linux 内核开发社区也急需改变。
+>
+>— Sage Sharp (@sagesharp) 9 月 17 日, 2018
+
+### 你对于 Linux 行为准则怎么看? 
+ +如果你问我的观点,我认为目前社区的确是需要一个行为准则。它能指导人们尊重他人,不因为他人的种族,宗教信仰,国籍,政治观点(左派或者右派)而歧视,营造出一个积极向上的社区氛围。 + +对于这个事件,你怎么看?你认为这个行为准则能够帮助 Linux 内核的开发,或者说因为 SJW 成员们的加入,情况会变得更糟? + +在 FOSS 里我们没有行为准则,不过我们都会持着文明友好的态度讨论问题。 + + via: https://itsfoss.com/linux-code-of-conduct/ + + 作者:[Abhishek Prakash][a] + 选题:[lujun9972](https://github.com/lujun9972) + 译者:[thecyanbird](https://github.com/thecyanbird) + 校对:[校对者ID](https://github.com/校对者ID) + + 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + + [a]: https://itsfoss.com/author/abhishek/ + [1]: https://itsfoss.com/torvalds-takes-a-break-from-linux/ + [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e + [3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linus-torvalds-apologizes.jpeg + [4]: https://lkml.org/lkml/2018/9/16/167 + [5]: https://www.contributor-covenant.org/version/1/4/code-of-conduct.html + [6]: https://www.contributor-covenant.org/adopters + [7]: https://en.wikipedia.org/wiki/Coraline_Ada_Ehmke + [8]: https://en.wikipedia.org/wiki/LGBT + [9]: https://en.wikipedia.org/wiki/Meritocracy + [10]: https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy + [11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/croraline-meritocracy.jpg + [12]: https://pbs.twimg.com/media/DnTTfi7XoAAdk08.jpg + [13]: https://arstechnica.com/information-technology/2015/01/linus-torvalds-on-why-he-isnt-nice-i-dont-care-about-you/ + [14]: https://opalrb.com/ + [15]: https://twitter.com/krainboltgreene/status/611569515315507200 + [16]: https://github.com/opal/opal/issues/941 + [17]: https://github.com/opal/opal/pull/948/commits/817321e27eccfffb3841f663815c17eecb8ef061#diff-a1ee87dafebc22cbd96979f1b2b7e837R11 + [18]: https://github.com/opal/opal/pull/948#issuecomment-113486020 + [19]: 
https://www.reddit.com/r/linux/comments/9go8cp/linus_torvalds_daughter_has_signed_the/ + [20]: https://snew.github.io/r/linux/comments/9ghrrj/linuxs_new_coc_is_a_piece_of_shit/ + [21]: https://t.co/eFeY6r4ENv + [22]: https://twitter.com/CoralineAda/status/1041441155874009093?ref_src=twsrc%5Etfw + [23]: https://twitter.com/CoralineAda/status/1041465346656530432?ref_src=twsrc%5Etfw + [24]: https://t.co/8NUL2K1gu2 + [25]: https://twitter.com/nickmon1112/status/1041668315947708416?ref_src=twsrc%5Etfw + [26]: https://www.urbandictionary.com/define.php?term=SJW + [27]: https://twitter.com/Grummz/status/1041524170331287552?ref_src=twsrc%5Etfw + [28]: https://twitter.com/jfrappier/status/1041486055038492674?ref_src=twsrc%5Etfw + [29]: https://twitter.com/justkelly_ok/status/1041522269002985473?ref_src=twsrc%5Etfw + [30]: https://twitter.com/maria_fibonacci/status/1041538148121997313?ref_src=twsrc%5Etfw + [31]: https://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html + [32]: https://twitter.com/_sagesharp_/status/1041480963287539712?ref_src=twsrc%5Etfw +[33]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8a104f8b5867c682d994ffa7a74093c54469c11f From c709a3bbb70507682b573f99729b79c40617c9d4 Mon Sep 17 00:00:00 2001 From: thecyanbird <2534930703@qq.com> Date: Mon, 15 Oct 2018 21:00:13 +0800 Subject: [PATCH 456/736] Delete 20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md --- ...nduct and Not Everyone is Happy With it.md | 169 ------------------ 1 file changed, 169 deletions(-) delete mode 100644 sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md diff --git a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md b/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md deleted file mode 100644 index 971a91f94f..0000000000 --- 
a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md +++ /dev/null @@ -1,169 +0,0 @@ -thecyanbird translating - -Linux Has a Code of Conduct and Not Everyone is Happy With it -====== -**Linux kernel has a new code of conduct (CoC). Linus Torvalds took a break from Linux kernel development just 30 minutes after signing this code of conduct. And since **the writer of this code of conduct has had a controversial past,** it has now become a point of heated discussion. With all the politics involved, not many people are happy with this new CoC.** - -If you do not know already, [Linux creator Linus Torvalds has apologized for his past behavior and has taken a temporary break from Linux kernel development to improve his behavior][1]. - -### The new code of conduct for Linux kernel development - -Linux kernel developers have a code of conduct. It’s not like they didn’t have a code before, but the previous [code of conflict][2] is now replaced by this new code of conduct to “help make the kernel community a welcoming environment to participate in.” - -> “In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.” - -You can read the entire code of conduct on this commit page. - -[Linux Code of Conduct][33] - - -### Was Linus Torvalds forced to apologize and take a break? - -![Linus Torvalds Apologizes][3] - -The code of conduct was signed off by Linus Torvalds and Greg Kroah-Hartman (kind of second-in-command after Torvalds). Dan Williams of Intel and Chris Mason from Facebook were some of the other signees. 
- -If I have read through the timeline correctly, half an hour after signing this code of conduct, Torvalds sent a [mail apologizing for his past behavior][4]. He also announced taking a temporary break to improve upon his behavior. - -But at this point some people started reading between the lines, with a special attention to this line from his mail: - -> **This week people in our community confronted me about my lifetime of not understanding emotions**. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry. - -This particular line could be read as if he was coerced into apologizing and taking a break because of the new code of conduct. Though it could also be a precautionary measure to prevent Torvalds from violating the newly created code of conduct. - -### The controversy around Contributor Convent creator Coraline Ada Ehmke - -The Linux code of conduct is based on the [Contributor Covenant, version 1.4][5]. Contributor Convent has been adopted by hundreds of open source projects. Eclipse, Angular, Ruby, Kubernetes are some of the [many adopters of Contributor Convent][6]. - -Contributor Covenant has been created by [Coraline Ada Ehmke][7], a software developer, an open-source advocate, and an [LGBT][8] activist. She has been instrumental in promoting diversity in the open source world. - -Coraline has also been vocal about her stance against [meritocracy][9]. The Latin word meritocracy originally refers to a “system under which advancement within the system turns on “merits”, like intelligence, credentials, and education.” But activists like [Coraline believe][10] that meritocracy is a negative system where the worth of an individual is measured not by their humanity, but solely by their intellectual output. 
- -[![croraline meritocracy][11]][12] -Image credit: Twitter user @nickmon1112 - -Remember that [Linus Torvalds has repeatedly said that he cares about the code, not the person who writes it][13]. Clearly, this goes against Coraline’s view on meritocracy. - -Coraline has had a troubled incident in the past with a contributor to the [Opal project][14]. There was a [discussion taking place on Twitter][15] where Elia, a core contributor to the Opal project from Italy, said “(trans people) not accepting reality is the problem here”. - -Coraline was neither in the discussion nor was she a contributor to the Opal project. But as an LGBT activist, she took it upon herself and [demanded that Elia be removed from the Opal Project][16] for his ‘views against trans people’. A lengthy and heated discussion took place on Opal’s GitHub repository. Coraline and her supporters, who never contributed to Opal, tried to coerce the moderators into removing Elia, a core contributor of the project. - -While Elia wasn’t removed from the project, Opal project maintainers agreed to put a code of conduct in place. And this code of conduct was nothing else but Coraline’s famed Contributor Covenant that she had pitched to the maintainers herself. - -But the story didn’t end here. The Contributor Covenant was then modified and a [new clause added in order to get at Elia][17]. The new clause widened the scope of conduct in public spaces. This malicious change was [spotted by the maintainers][18] and they edited the clause. Opal eventually got rid of the Contributor Covenant and put in place its own guidelines. - -This is a classic example of how a few offended people, who never contributed a single line of code to the project, tried to oust its core contributor. - -### People’s reactions to the Linux Code of Conduct and Torvalds’ apology - -As soon as the Linux code of conduct and Torvalds’ apology went public, social media and forums were rife with rumors and [speculations][19].
While many people appreciated this new development, there were some who saw a conspiracy by [SJWs infiltrating Linux][20]. - -A sarcastic tweet by Coraline only fueled the fire. - -> I can’t wait for the mass exodus from Linux now that it’s been infiltrated by SJWs. Hahahah [pic.twitter.com/eFeY6r4ENv][21] -> -> — Coraline Ada Ehmke (@CoralineAda) [September 16, 2018][22] - -In the wake of the Linux CoC controversy, Coraline openly said that the Contributor Covenant code of conduct is a political document. This did not go down well with the people who want political stuff kept out of open source projects. - -> Some people are saying that the Contributor Covenant is a political document, and they’re right. -> -> — Coraline Ada Ehmke (@CoralineAda) [September 16, 2018][23] - -Nick Monroe, a freelance journalist, dug up Coraline’s past in order to validate his claim that there is more to the Linux CoC than meets the eye. You can go through the entire thread if you want. - -> Alright. You've seen this a million times before. It's a code of conduct blah blah blah -> -> that has social justice baked right into it. blah blah blah. -> -> But something is different about this. [pic.twitter.com/8NUL2K1gu2][24] -> -> — Nick Monroe (@nickmon1112) [September 17, 2018][25] - -Nick wasn’t the only one to disapprove of the new Linux CoC. The [SJW][26] involvement led to more skepticism. - -> I guess the big news in Linux today is that the Linux kernel is now governed by a Code of Conduct and a “post meritocracy” world view. -> -> In principle these CoCs look great. In practice they are abused tools to hunt people SJWs don’t like. And they don’t like a lot of people.
- -> — Mark Kern (@Grummz) [September 17, 2018][27] - -While there were many who appreciated Torvalds’ apology, there were a few who blamed Torvalds’ attitude: - -> Am I the only one who thinks Linus Torvalds attitude for decades was a prime contributors to how many of the condescending, rudes, jerks in Linux and open source "communities" behaved? I've never once felt welcomed into the Linux community as a new user. -> -> — Jonathan Frappier (@jfrappier) [September 17, 2018][28] - -And some were simply not amused by his apology: - -> Oh look, an abusive OSS maintainer finally admitted, after *decades* of abusive and toxic behavior, that his behavior *might* be an issue. -> -> And a bunch of people I follow are tripping all over themselves to give him cookies for that. 🙄🙄🙄 -> -> — Kelly Ellis (@justkelly_ok) [September 17, 2018][29] - -The entire Torvalds apology episode has raised a genuine concern ;) - -> Do we have to put "I don't/do forgive Linus Torvalds" in our bio now? -> -> — Verónica. (@maria_fibonacci) [September 17, 2018][30] - -Jokes apart, the genuine concern was raised by Sage Sharp, who had [quit Linux kernel development][31] in 2015 due to the ‘toxic community’. - -> The real test here is whether the community that built Linus up and protected his right to be verbally abusive will change. Linus not only needs to change himself, but the Linux kernel community needs to change as well. -> -> — Sage Sharp (@_sagesharp_) [September 17, 2018][32] - -### What do you think of the Linux Code of Conduct? - -If you ask my opinion, I do think that a code of conduct is the need of the hour. It guides people in behaving in a respectable way and helps create a positive environment for all kinds of people, irrespective of their race, ethnicity, religion, nationality and political views (both left and right). - -What are your views on the entire episode? Do you think the CoC will help Linux kernel development?
Or will it deteriorate with the involvement of anti-meritocracy SJWs? - -We don’t have a code of conduct at It’s FOSS but let’s keep the discussion civil :) - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/linux-code-of-conduct/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: https://itsfoss.com/torvalds-takes-a-break-from-linux/ -[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e -[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linus-torvalds-apologizes.jpeg -[4]: https://lkml.org/lkml/2018/9/16/167 -[5]: https://www.contributor-covenant.org/version/1/4/code-of-conduct.html -[6]: https://www.contributor-covenant.org/adopters -[7]: https://en.wikipedia.org/wiki/Coraline_Ada_Ehmke -[8]: https://en.wikipedia.org/wiki/LGBT -[9]: https://en.wikipedia.org/wiki/Meritocracy -[10]: https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy -[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/croraline-meritocracy.jpg -[12]: https://pbs.twimg.com/media/DnTTfi7XoAAdk08.jpg -[13]: https://arstechnica.com/information-technology/2015/01/linus-torvalds-on-why-he-isnt-nice-i-dont-care-about-you/ -[14]: https://opalrb.com/ -[15]: https://twitter.com/krainboltgreene/status/611569515315507200 -[16]: https://github.com/opal/opal/issues/941 -[17]: https://github.com/opal/opal/pull/948/commits/817321e27eccfffb3841f663815c17eecb8ef061#diff-a1ee87dafebc22cbd96979f1b2b7e837R11 -[18]: https://github.com/opal/opal/pull/948#issuecomment-113486020 -[19]: 
https://www.reddit.com/r/linux/comments/9go8cp/linus_torvalds_daughter_has_signed_the/ -[20]: https://snew.github.io/r/linux/comments/9ghrrj/linuxs_new_coc_is_a_piece_of_shit/ -[21]: https://t.co/eFeY6r4ENv -[22]: https://twitter.com/CoralineAda/status/1041441155874009093?ref_src=twsrc%5Etfw -[23]: https://twitter.com/CoralineAda/status/1041465346656530432?ref_src=twsrc%5Etfw -[24]: https://t.co/8NUL2K1gu2 -[25]: https://twitter.com/nickmon1112/status/1041668315947708416?ref_src=twsrc%5Etfw -[26]: https://www.urbandictionary.com/define.php?term=SJW -[27]: https://twitter.com/Grummz/status/1041524170331287552?ref_src=twsrc%5Etfw -[28]: https://twitter.com/jfrappier/status/1041486055038492674?ref_src=twsrc%5Etfw -[29]: https://twitter.com/justkelly_ok/status/1041522269002985473?ref_src=twsrc%5Etfw -[30]: https://twitter.com/maria_fibonacci/status/1041538148121997313?ref_src=twsrc%5Etfw -[31]: https://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html -[32]: https://twitter.com/_sagesharp_/status/1041480963287539712?ref_src=twsrc%5Etfw -[33]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8a104f8b5867c682d994ffa7a74093c54469c11f From 71623588fb73761f9fb5728d1aa561dfe8143d32 Mon Sep 17 00:00:00 2001 From: David Chen Date: Sat, 13 Oct 2018 21:34:58 +0800 Subject: [PATCH 457/736] Third commit --- sources/tech/20180823 CLI- improved.md | 121 ++++++++++++++++++++++++- 1 file changed, 120 insertions(+), 1 deletion(-) diff --git a/sources/tech/20180823 CLI- improved.md b/sources/tech/20180823 CLI- improved.md index 52edaa28c8..8fb3ea9100 100644 --- a/sources/tech/20180823 CLI- improved.md +++ b/sources/tech/20180823 CLI- improved.md @@ -2,16 +2,29 @@ Translating by DavidChenLiang CLI: improved ====== +命令行:增强 +====== + +我不确定有多少 web 开发者能完全避开命令行。就我来说,我从 1997 年上大学起就开始使用命令行了,那时我觉得自己既是个超酷的 l33t 黑客,同时又完全不得要领。 + I'm not sure many web developers can get away without visiting the
command line. As for me, I've been using the command line since 1997, first at university when I felt both super cool l33t-hacker and simultaneously utterly out of my depth. +过去这些年,我的命令行习惯在逐步改善,我经常会去搜寻在日常工作中能用上的更聪明的命令行工具。就此来说,下面就是我现在使用的增强版命令行工具的列表。 + Over the years my command line habits have improved and I often search for smarter tools for the jobs I commonly do. With that said, here's my current list of improved CLI tools. ### Ignoring my improvements +### 如何忽略我的增强 + +在一些情况下,我会用别名把新的增强版命令行工具覆盖到原来的命令上(如`cat`和`ping`)。 In a number of cases I've aliased the new and improved command line tool over the original (as with `cat` and `ping`). +如果我想运行原来的命令(有时我确实需要这么做),有两种方法可以做到(我用的是 Mac,你的情况可能不一样): + If I want to run the original command, which is sometimes I do need to do, then there's two ways I can do this (I'm on a Mac so your mileage may vary): + ``` $ \cat # ignore aliases named "cat" - explanation: https://stackoverflow.com/a/16506263/22617 $ command cat # ignore functions and aliases @@ -20,13 +33,22 @@ $ command cat # ignore functions and aliases ### bat > cat +`cat`用于打印文件的内容,但是如果你在命令行上花了很多时间,像语法高亮之类的功能就会非常有用。我先发现了[ccat][3]这个有语法高亮功能的工具,然后又发现了[bat][4],它有语法高亮、分页、行号和 git 集成。 + `cat` is used to print the contents of a file, but given more time spent in the command line, features like syntax highlighting come in very handy. I found [ccat][3] which offers highlighting then I found [bat][4] which has highlighting, paging, line numbers and git integration. +`bat`命令还能让我在输出中(只要输出比屏幕的高度长)使用`/`按键绑定来搜索(和`less`的搜索功能一样)。 + The `bat` command also allows me to search during output (only if the output is longer than the screen height) using the `/` key binding (similarly to `less` searching).
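顺着上面绕过别名的两种方法,下面给出一个可以直接运行的小脚本来演示这套机制(这里用 `echo` 假扮 `bat`,文件路径 `/tmp/alias-demo.txt` 也只是演示用的假设,并不要求真的装好 `bat`):

As a runnable sketch of the alias-override and bypass mechanics above (using `echo` as a stand-in for `bat`; the path `/tmp/alias-demo.txt` is just an illustrative assumption, so nothing extra needs to be installed):

```shell
#!/usr/bin/env bash
# 脚本里需要显式打开别名展开 / scripts must enable alias expansion explicitly
shopt -s expand_aliases

printf 'hello\n' > /tmp/alias-demo.txt

# 假装把 cat 换成了 bat / pretend we aliased cat over to bat
alias cat='echo "(aliased cat intercepted)"'

cat /tmp/alias-demo.txt          # 走别名:打印拦截信息(外加文件名参数)
command cat /tmp/alias-demo.txt  # 绕过别名和函数:真正的 cat,打印 hello
\cat /tmp/alias-demo.txt         # 反斜杠同样绕过别名 / backslash also bypasses the alias
```

把其中的 `echo` 换成真正的 `bat`,就是本文 `alias cat='bat'` 的效果。(Swap the `echo` for the real `bat` and you get the article's `alias cat='bat'` behaviour.)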
![Simple bat output][5] +我还把`cat`设置成了`bat`的别名: + I've also aliased `bat` to the `cat` command: + + ``` alias cat='bat' @@ -35,12 +57,18 @@ alias cat='bat' 💾 [Installation directions][4] ### prettyping > ping +### prettyping > ping + +`ping`非常有用,当我碰到“糟了,是不是什么服务挂了/我的网不通了?”这种情况时,我最先想到的工具就是它。但是`prettyping`(是“pretty ping”而不是“pre typing”!)给`ping`加上了非常漂亮的输出,这让我感觉命令行友好了很多。 `ping` is incredibly useful, and probably my goto tool for the "oh crap is X down/does my internet work!!!". But `prettyping` ("pretty ping" not "pre typing"!) gives ping a really nice output and just makes me feel like the command line is a bit more welcoming. ![/images/cli-improved/ping.gif][6] +我也把`ping`设置成了`prettyping`的别名: + I've also aliased `ping` to the `prettyping` command: + ``` alias ping='prettyping --nolegend' @@ -50,13 +78,22 @@ alias ping='prettyping --nolegend' ### fzf > ctrl+r +在终端中使用`ctrl+r`可以让你[反向搜索][8]命令历史。这是个挺好的小技巧,但是操作起来有点繁琐。 + In the terminal, using `ctrl+r` will allow you to [search backwards][8] through your history. It's a nice trick, albeit a bit fiddly. +`fzf`这个工具相比于`ctrl+r`有了**巨大的**进步。它能对命令行历史进行模糊搜索,并且提供了对可能匹配结果的完整交互式预览。 + The `fzf` tool is a **huge** enhancement on `ctrl+r`. It's a fuzzy search against the terminal history, with a fully interactive preview of the possible matches.
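fzf 的“模糊”搜索本质上是按顺序的子序列匹配。下面用纯 bash 写一个玩具版的匹配函数来说明这个思路(纯属示意,并不是 fzf 真实的算法,函数名 `fuzzy_match` 也是随手起的):

The “fuzzy” part of fzf is essentially in-order subsequence matching. Here is a toy, pure-bash matcher sketching the idea (an illustration only — not fzf’s real algorithm, and the name `fuzzy_match` is made up):

```shell
#!/usr/bin/env bash
# 玩具版子序列匹配:模式的每个字符必须按顺序出现在候选串中
# Toy subsequence match: every pattern char must appear, in order, in the candidate.
fuzzy_match() {
  local pattern=$1 rest=$2 ch i
  for ((i = 0; i < ${#pattern}; i++)); do
    ch=${pattern:i:1}
    case $rest in
      *"$ch"*) rest=${rest#*"$ch"} ;;  # 消耗到该字符为止 / consume up to this char
      *) return 1 ;;                   # 字符缺失,匹配失败 / char missing: no match
    esac
  done
}

# 用它从一份“历史记录”里筛出匹配行 / filter a fake history with it
while IFS= read -r line; do
  if fuzzy_match "gco" "$line"; then
    printf '%s\n' "$line"   # 三行里只有 "git checkout main" 能匹配 "gco"
  fi
done <<< $'git checkout main\nls -la\ngrep -rn TODO src'
```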
+除了搜索命令历史,`fzf`还能预览和打开文件,我在下面的视频里展示了这些功能。 + In addition to searching through the history, `fzf` can also preview and open files, which is what I've done in the video below: +为了实现预览效果,我创建了一个叫`preview`的别名,它把`fzf`和前文提到的`bat`组合起来完成预览,还绑定了一个自定义快捷键 Ctrl+o 来打开 VS Code: + For this preview effect, I created an alias called `preview` which combines `fzf` with `bat` for the preview and a custom key binding to open VS Code: + ``` alias preview="fzf --preview 'bat --color \"always\" {}'" # add support for ctrl+o to open selected file in VS Code export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'" @@ -68,10 +105,21 @@ export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'" ### htop > top +当我想快速诊断为什么机器上的 CPU 跑得那么猛、或者风扇为什么突然呼呼大作时,`top`是我首先想到的工具。我在生产环境中也会使用这类工具。让我恼火的是,Mac 上的`top`和 Linux 上的`top`有着极大的不同(而且在我看来要逊色得多)。 + `top` is my goto tool for quickly diagnosing why the CPU on the machine is running hard or my fan is whirring. I also use these tools in production. Annoyingly (to me!) `top` on the Mac is vastly different (and inferior IMHO) to `top` on linux. +不过,`htop`是对 Linux 上常规的`top`和 Mac 上蹩脚的`top`的极大改进。它有颜色编码、键盘快捷键以及不同的视图,这些都帮助我理清了各个进程之间的归属关系。 + However, `htop` is an improvement on both regular `top` and crappy-mac `top`. Lots of colour coding, keyboard bindings and different views which have helped me in the past to understand which processes belong to which.
+方便的热键绑定包括: + + * P - 按 CPU 使用率排序 + * M - 按内存使用排序 + * F4 - 用字符串过滤进程(例如只看包含 “node” 的进程) + * space - 锚定一个单独的进程,这样我能观察它是否有尖峰状态 + Handy key bindings include: * P - sort by CPU @@ -83,7 +131,10 @@ Handy key bindings include: ![htop output][10] +在 Mac Sierra 上,htop 有个奇怪的 bug,可以通过以 root 运行来绕过(我实在记不清这个 bug 是什么了,但是这个别名能搞定它,有点讨厌的是我得时不时输入一次 root 密码): + There is a weird bug in Mac Sierra that can be overcome by running `htop` as root (I can't remember exactly what the bug is, but this alias fixes it - though annoying that I have to enter my password every now and again): + ``` alias top="sudo htop" # alias top and fix high sierra bug @@ -93,11 +144,16 @@ alias top="sudo htop" # alias top and fix high sierra bug ### diff-so-fancy > diff +我非常确定这个技巧是我几年前从 Paul Irish 那儿学来的。尽管我很少手动运行`diff`,但我的 git 命令一直在使用它。`diff-so-fancy`不仅有颜色编码,还有字符级的更改高亮。 + I'm pretty sure I picked this one up from Paul Irish some years ago. Although I rarely fire up `diff` manually, my git commands use diff all the time. `diff-so-fancy` gives me both colour coding but also character highlight of changes. ![diff so fancy][12] +我在`~/.gitconfig`文件里加入了下面的条目,来为`git diff`和`git show`启用`diff-so-fancy`: + Then in my `~/.gitconfig` I have included the following entry to enable `diff-so-fancy` on `git diff` and `git show`: + ``` [pager] diff = diff-so-fancy | less --tabs=1,5 -RFX @@ -109,13 +165,22 @@ Then in my `~/.gitconfig` I have included the following entry to enable `diff-so ### fd > find +尽管我用的是 Mac,但我从来不是 Spotlight 的拥趸:我觉得它反应迟钝、关键字难记,更新它自己的数据库时还会拖慢 CPU,简直一无是处。我经常使用[Alfred][14],但是它的搜索功能也没法让我满意。 + Although I use a Mac, I've never been a fan of Spotlight (I found it sluggish, hard to remember the keywords, the database update would hammer my CPU and generally useless!). I use [Alfred][14] a lot, but even the finder feature doesn't serve me well.
+我倾向于在命令行中搜索文件,但是`find`的难用之处在于,很难记住描述我想要的文件的正确表达式。(而且 Mac 上的 find 命令和非 Mac 的 find 命令还有些许不同,这更加深了我的挫败感。) + I tend to turn the command line to find files, but `find` is always a bit of a pain to remember the right expression to find what I want (and indeed the Mac flavour is slightly different non-mac find which adds to frustration). +`fd`是一个很好的替代品(它的作者和`bat`的作者是同一个人)。它非常快,而且我日常需要的搜索用法也都很好记。 + `fd` is a great replacement (by the same individual who wrote `bat`). It is very fast and the common use cases I need to search with are simple to remember. A few handy commands: + +几个非常方便的例子: + ``` $ fd cli # all filenames containing "cli" $ fd -e md # all with .md extension $ fd cli -x wc -w # find "cli" and run `wc -w` on each file @@ -129,19 +194,33 @@ $ fd cli -x wc -w # find "cli" and run `wc -w` on each file ### ncdu > du +对我来说,了解当前磁盘空间的占用情况是一项非常重要的任务。我用过 Mac 上的[Disk Daisy][17],但是我觉得它产生结果有点慢。 + Knowing where disk space is being taken up is a fairly important task for me. I've used the Mac app [Disk Daisy][17] but I find that it can be a little slow to actually yield results. +`du -sh`是我经常会运行的命令(`-sh`是指汇总结果并以人类可读的方式显示),不过我经常会想要深入挖掘那些占用了大量空间的目录。 + The `du -sh` command is what I'll use in the terminal (`-sh` means summary and human readable), but often I'll want to dig into the directories taking up the space. +`ncdu`是一个非常棒的替代。它提供了一个交互式的界面,能够快速扫描出哪些目录和文件占用了大量磁盘空间,而且浏览起来非常快。(尽管不管用哪个工具,扫描我的整个 home 目录都要很长时间,它有 550GB。) + + `ncdu` is a nice alternative. It offers an interactive interface and allows for quickly scanning which folders or files are responsible for taking up space and it's very quick to navigate. (Though any time I want to scan my entire home directory, it's going to take a long time, regardless of the tool - my directory is about 550gb). +一旦我找到一个想要“处理”的目录(如删除、移动或压缩其中的文件),我就会在[iTerm2][18]中用 cmd+点击屏幕上方的路径名,在访达中打开那个目录。 + Once I've found a directory I want to manage (to delete, move or compress files), I'll use the cmd + click the pathname at the top of the screen in [iTerm2][18] to launch finder to that directory.
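在装不上 ncdu 的机器上,用 `du` 加 `sort` 也能粗略得到“谁占了空间”的排行。下面是一个只用 coreutils 的权宜写法(示意用,目录布局是假设的):

On a machine where ncdu isn’t available, plain `du` piped into `sort` gives a rough “what’s eating my disk” ranking. A coreutils-only stopgap (for illustration; the directory layout is assumed):

```shell
#!/usr/bin/env bash
# 列出当前目录下占空间最多的前 10 个子目录(以 KB 计)
# List the 10 biggest subdirectories of the current directory, in KB.
du -sk -- */ 2>/dev/null | sort -rn | head -n 10 |
while IFS=$'\t' read -r kb dir; do
  printf '%8s KB  %s\n' "$kb" "$dir"
done
```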
![ncdu output][19] +还有另外一个选择叫[nnn][20],它提供了一个更漂亮的界面,尽管它默认也显示文件大小和使用情况,但它实际上是一个全功能的文件管理器。 + There's another [alternative called nnn][20] which offers a slightly nicer interface and although it does file sizes and usage by default, it's actually a fully fledged file manager. My `ncdu` is aliased to the following: + +我的`ncdu`使用下面的别名: + ``` alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules" ``` The options are: +选项有: + + * `--color dark` 使用深色配色方案 + * `-rr` 只读模式(防止删除文件和启动新的 shell) + * `--exclude` 忽略不想统计的目录 + + * `--color dark` \- use a colour scheme * `-rr` \- read-only mode (prevents delete and spawn shell) * `--exclude` ignore directories I won't do anything about @@ -159,13 +245,20 @@ The options are: ### tldr > man +神奇的是,几乎每个命令行工具都附带一份手册,可以通过`man`来调出,但是在`man`的输出里找到东西有时有点让人困惑,而且面对一份包含了所有技术细节的手册,难免让人望而生畏。 + It's amazing that nearly every single command line tool comes with a manual via `man `, but navigating the `man` output can be sometimes a little confusing, plus it can be daunting given all the technical information that's included in the manual output. +这就是 TL;DR(译注:英文“太长不读”的缩写)项目的用武之地。这是一个由社区驱动的、可以在命令行中使用的文档系统。就我目前的使用来看,我还没碰到过哪个命令没有相应的文档,不过你[也可以做贡献][22]。 + This is where the TL;DR project comes in. It's a community driven documentation system that's available from the command line. So far in my own usage, I've not come across a command that's not been documented, but you can [also contribute too][22]. ![TLDR output for 'fd'][23] +另外一个小技巧:我把`help`设置成了`tldr`的别名(因为这样打起来更快!): + As a nicety, I've also aliased `tldr` to `help` (since it's quicker to type!): + ``` alias help='tldr' @@ -175,17 +268,29 @@ alias help='tldr' ### ack || ag > grep +`grep`毫无疑问是命令行上的一个强力工具,但是这些年来它已经被一些工具超越了,其中两个就是`ack`和`ag`。 + `grep` is no doubt a powerful tool on the command line, but over the years it's been superseded by a number of tools. Two of which are `ack` and `ag`.
+就我个人来说,`ack`和`ag`我都试过,并没有非常明显的偏好(也就是说它们都很棒,并且非常相似)。我倾向于默认使用`ack`,只是因为这三个字母打起来更顺手。另外,`ack`还自带了神奇的`ack --bar`参数(你自己试试就知道了)! + I personally flitter between `ack` and `ag` without really remembering which I prefer (that's to say they're both very good and very similar!). I tend to default to `ack` only because it rolls of my fingers a little easier. Plus, `ack` comes with the mega `ack --bar` argument (I'll let you experiment)! +`ack`和`ag`默认都使用正则表达式来搜索,而且与我的工作密切相关的是,我可以使用类似`--js`或`--html`这样的标志来指定要搜索的文件类型(尽管`ag`的 js 过滤器比`ack`包括的文件类型更多)。 + Both `ack` and `ag` will (by default) use a regular expression to search, and extremely pertinent to my work, I can specify the file types to search within using flags like `--js` or `--html` (though here `ag` includes more files in the js filter than `ack`). +两个工具都支持`grep`的常见选项,如`-B`和`-A`,用于显示匹配行之前和之后的上下文。 + Both tools also support the usual `grep` options, like `-B` and `-A` for before and after context in the grep. ![ack in action][25] +因为`ack`不支持 markdown(而我又恰好写了很多 markdown),我在`~/.ackrc`文件里加了如下的定制配置: + Since `ack` doesn't come with markdown support (and I write a lot in markdown), I've got this customisation in my `~/.ackrc` file: + + ``` --type-set=md=.md,.mkd,.markdown --pager=less -FRX @@ -198,11 +303,19 @@ Since `ack` doesn't come with markdown support (and I write a lot in markdown), ### jq > grep et al +我是[jq][29]的超级粉丝。一开始我在它的语法里苦苦挣扎,但后来我逐渐掌握了它的查询语言,现在`jq`几乎每天都要用。(而在此之前,我要么切换到 node,要么使用 grep,要么使用一个叫[json][30]的工具,后者相比之下非常基础。) + I'm a massive fanboy of [jq][29]. At first I struggled with the syntax, but I've since come around to the query language and use `jq` on a near daily basis (whereas before I'd either drop into node, use grep or use a tool called [json][30] which is very basic in comparison).
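在继续深入 `jq` 之前,顺带用普通的 grep 演示一下上面 ack/ag 一节提到的 `-B`/`-A` 上下文选项(这两个工具的行为与之一致;示例文件路径是假设的):

Before going further into `jq`, here is the `-B`/`-A` context behaviour from the ack/ag section above, demonstrated with plain grep (both tools behave the same way; the sample file path is an assumption):

```shell
#!/usr/bin/env bash
# 演示在匹配行前后各显示一行上下文
# Show one line of context before and one after the matching line.
printf 'one\ntwo\nthree\nfour\nfive\n' > /tmp/context-demo.txt

grep -B 1 -A 1 'three' /tmp/context-demo.txt
# 输出 / prints:
#   two
#   three
#   four
```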
+我甚至开始撰写一个`jq`的教程系列(已有 2500 字,还在继续增加),还发布了一个[web tool][31]和一个 Mac 原生应用(后者尚未发布)。 + I've even started the process of writing a jq tutorial series (2,500 words and counting) and have published a [web tool][31] and a native mac app (yet to be released). +`jq`允许我传入 JSON,并非常容易地对其进行变换,让输出的 JSON 符合我的要求。下面这个例子让我用一条命令更新我的所有 node 依赖(为了便于阅读,我将其分成了多行): + `jq` allows me to pass in JSON and transform the source very easily so that the JSON result fits my requirements. One such example allows me to update all my node dependencies in one command (broken into multiple lines for readability): + + ``` $ npm i $(echo $(\ npm outdated --json | \ @@ -235,6 +348,9 @@ The above command will list all the node dependencies that are out of date, and That result is then fed into the `npm install` command and voilà, I'm all upgraded (using the sledgehammer approach). ### Honourable mentions +### 其他值得一提的工具 + +我也在开始尝试一些别的工具,但还用得不多(`ponysay`除外,每当我新启动一个命令行会话时,它就会出现)。 Some of the other tools that I've started poking around with, but haven't used too often (with the exception of ponysay, which appears when I start a new terminal session!): @@ -246,6 +362,9 @@ Some of the other tools that I've started poking around with, but haven't used t ### What about you? +### 你有什么好点子吗? + +上面就是我的命令行工具清单。也来说说你的吧?你有没有增强过你每天都会用到的某个命令呢?我非常乐意了解。 So that's my list. How about you? What daily command line tools have you improved? I'd love to know. @@ -256,7 +375,7 @@ via: https://remysharp.com/2018/08/23/cli-improved 作者:[Remy Sharp][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) +译者:DavidChenLiang(https://github.com/DavidChenLiang) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From cafcbaa0523e00d37d2883abcb121b46154ea741 Mon Sep 17 00:00:00 2001 From: thecyanbird <2534930703@qq.com> Date: Mon, 15 Oct 2018 21:09:54 +0800 Subject: [PATCH 458/736] thecyanbird translating --- ...
To Record Your Terminal And Generate Animated Gif Images.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md index 26d1941cc1..7b77a9cf73 100644 --- a/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md +++ b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md @@ -1,3 +1,5 @@ +thecyanbird translating + Terminalizer – A Tool To Record Your Terminal And Generate Animated Gif Images ====== This is a known topic for most of us and I don’t want to give you detailed information about this flow. Also, we have written many articles under this topic. From c7e00f9d275eb807a131975d27064a61d1477c50 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 15 Oct 2018 23:00:39 +0800 Subject: [PATCH 459/736] PRF:20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md @FSSlc --- ...
And Mouse, But Not The Screen In Linux.md | 55 +++++++++++++------ 1 file changed, 37 insertions(+), 18 deletions(-) diff --git a/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md index 3a0a0592cc..9b0c6608dd 100644 --- a/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md +++ b/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md @@ -3,33 +3,38 @@ ![](https://www.ostechnix.com/wp-content/uploads/2017/09/Lock-The-Keyboard-And-Mouse-720x340.jpg) -我四岁的侄女是个好奇的孩子,她非常喜爱“阿凡达”电影,当阿凡达电影在播放时,她是如此的专注,好似眼睛粘在了屏幕上。但问题是当她观看电影时,她经常会碰到键盘上的某个键或者移动了鼠标,又或者是点击了鼠标的按钮。有时她非常意外地按了键盘上的某个键,从而将电影关闭或者暂停了。所以我就想找个方法来将键盘和鼠标都锁住,但屏幕不会被锁住。幸运的是,我在 Ubuntu 论坛上找到了一个完美的解决方法。假如在你正看着屏幕上的某些重要的事情时,你不想让你的小猫或者小狗在你的键盘上行走,或者让你的孩子在键盘上瞎搞一气,那我建议你试试 **xtrlock** 这个工具。它很简单但非常实用,你可以锁定屏幕的显示直到用户在键盘上输入自己设定的密码(译者注:就是用户自己的密码,例如用来打开屏保的那个密码,不需要单独设定)。在这篇简单的教程中,我将为你展示如何在 Linux 下锁住键盘和鼠标,而不锁掉屏幕。这个技巧几乎可以在所有的 Linux 操作系统中生效。 +我四岁的侄女是个好奇的孩子,她非常喜爱“阿凡达”电影,当阿凡达电影在播放时,她是如此的专注,好似眼睛粘在了屏幕上。但问题是当她观看电影时,她经常会碰到键盘上的某个键或者移动了鼠标,又或者是点击了鼠标的按钮。有时她非常意外地按了键盘上的某个键,从而将电影关闭或者暂停了。所以我就想找个方法来将键盘和鼠标都锁住,但屏幕不会被锁住。幸运的是,我在 Ubuntu 论坛上找到了一个完美的解决方法。假如在你正看着屏幕上的某些重要的事情时,你不想让你的小猫或者小狗在你的键盘上行走,或者让你的孩子在键盘上瞎搞一气,那我建议你试试 **xtrlock** 这个工具。它很简单但非常实用,你可以锁定屏幕的显示直到用户在键盘上输入自己设定的密码(LCTT 译注:就是用户自己的密码,例如用来打开屏保的那个密码,不需要单独设定)。在这篇简单的教程中,我将为你展示如何在 Linux 下锁住键盘和鼠标,而不锁掉屏幕。这个技巧几乎可以在所有的 Linux 操作系统中生效。 ### 安装 xtrlock xtrlock 软件包在大多数 Linux 操作系统的默认软件仓库中都可以获取到。所以你可以使用你安装的发行版的包管理器来安装它。 在 **Arch Linux** 及其衍生发行版中,运行下面的命令来安装它: + ``` $ sudo pacman -S xtrlock ``` 在 **Fedora** 上使用: + ``` $ sudo dnf install xtrlock ``` -在 **RHEL, CentOS** 上使用: +在 **RHEL、CentOS** 上使用: + ``` $ sudo yum install xtrlock ``` 在 **SUSE/openSUSE** 上使用: + ``` $ sudo zypper install xtrlock ``` -在 **Debian, Ubuntu, Linux Mint** 上使用: +在 **Debian、Ubuntu、Linux Mint** 上使用: + ``` $ sudo apt-get install xtrlock ``` @@ -38,41 
+43,50 @@ $ sudo apt-get install xtrlock 安装好 xtrlock 后,你需要根据你的选择来创建一个快捷键,通过这个快捷键来锁住键盘和鼠标。 -在 **/usr/local/bin** 目录下创建一个名为 **lockkbmouse** 的新文件: +(LCTT 译注:译者在自己的系统(Arch + Deepin)中发现,从这里起到下面创建快捷键的部分可以不必做,依然生效。) + +在 `/usr/local/bin` 目录下创建一个名为 `lockkbmouse` 的新文件: + ``` $ sudo vi /usr/local/bin/lockkbmouse ``` 然后将下面的命令添加到这个文件中: + ``` #!/bin/bash sleep 1 && xtrlock ``` + 保存并关闭这个文件。 然后使用下面的命令来使得它可以被执行: + ``` $ sudo chmod a+x /usr/local/bin/lockkbmouse ``` 接着,我们就需要创建快捷键了。 +#### 创建快捷键 + **在 Arch Linux MATE 桌面中** -依次点击 **System -> Preferences -> Hardware -> keyboard Shortcuts** +依次点击 “System -> Preferences -> Hardware -> keyboard Shortcuts” -然后点击 **Add** 来创建快捷键。 +然后点击 “Add” 来创建快捷键。 ![][2] -首先键入你的这个快捷键的名称,然后将下面的命令填入命令框中,最后点击 **Apply** 按钮。 +首先键入你的这个快捷键的名称,然后将下面的命令填入命令框中,最后点击 “Apply” 按钮。 + ``` bash -c "sleep 1 && xtrlock" ``` ![][3] -为了能够给这个快捷键赋予快捷方式,需要选中它或者双击它然后输入你选定的快捷键组合,例如我使用 **Alt+k** 这组快捷键。 +为了能够给这个快捷键赋予快捷方式,需要选中它或者双击它然后输入你选定的快捷键组合,例如我使用 `Alt+k` 这组快捷键。 ![][4] @@ -80,16 +94,17 @@
`DELETE` 键就可以了。 ### 要是我被永久地锁住了怎么办? -以防你被永久地锁定了屏幕,切换至一个 TTY(例如 CTRL+ALT+F2)然后运行: +以防你被永久地锁定了屏幕,切换至一个 TTY(例如 `CTRL+ALT+F2`)然后运行: + ``` $ sudo killall xtrlock ``` -或者你还可以使用 **chvt** 命令来在 TTY 和 X 会话之间切换。 +或者你还可以使用 `chvt` 命令来在 TTY 和 X 会话之间切换。 例如,如果要切换到 TTY1,则运行: + ``` $ sudo chvt 1 ``` 要切换回 X 会话,则键入: + ``` $ sudo chvt 7 ``` @@ -137,6 +155,7 @@ $ sudo chvt 7 不同的发行版使用了不同的快捷键组合来在不同的 TTY 间切换。请参考你安装的对应发行版的官方网站了解更多详情。 如果想知道更多 xtrlock 的信息,请参考 man 页: + ``` $ man xtrlock ``` @@ -145,7 +164,7 @@ $ man xtrlock **资源:** - * [**Ubuntu 论坛**][10] +* [**Ubuntu 论坛**][10] -------------------------------------------------------------------------------- @@ -154,7 +173,7 @@ via: https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/ 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -167,5 +186,5 @@ via: https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/ [6]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-1.png [7]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-2.png [8]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-3.png -[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/xtrlock-1.png +[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/xtrclock-1.png [10]:https://ubuntuforums.org/showthread.php?t=993800 From e497745c703489f326dd3ff8df548c1c2c9e58a7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 15 Oct 2018 23:00:57 +0800 Subject: [PATCH 460/736] PUB:20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md @FSSlc https://linux.cn/article-10119-1.html --- ...To Lock The Keyboard And Mouse, But Not The Screen In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180817 How To Lock The Keyboard And Mouse, But Not 
The Screen In Linux.md (100%) diff --git a/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/published/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md similarity index 100% rename from translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md rename to published/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md From d67b663d79cadcf8294d6cbcf512ceb1e6495f5a Mon Sep 17 00:00:00 2001 From: lctt-bot Date: Mon, 15 Oct 2018 17:00:22 +0000 Subject: [PATCH 461/736] Revert "[Translating] Know Your Storage- Block, File - Object" This reverts commit 8ab0b6ccd7fb6da546290ec9a663ee23f4ac8a84. --- sources/tech/20180911 Know Your Storage- Block, File - Object.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/tech/20180911 Know Your Storage- Block, File - Object.md b/sources/tech/20180911 Know Your Storage- Block, File - Object.md index 186b41d41a..24f179d9d5 100644 --- a/sources/tech/20180911 Know Your Storage- Block, File - Object.md +++ b/sources/tech/20180911 Know Your Storage- Block, File - Object.md @@ -1,4 +1,3 @@ -translating by name1e5s Know Your Storage: Block, File & Object ====== From e1ebeb8486560ec920a217bb5488f084f18729c5 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 16 Oct 2018 08:50:21 +0800 Subject: [PATCH 462/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Browse?= =?UTF-8?q?=20And=20Read=20Entire=20Arch=20Wiki=20As=20Linux=20Man=20Pages?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ead Entire Arch Wiki As Linux Man Pages.md | 151 ++++++++++++++++++ 1 file changed, 151 insertions(+) create mode 100644 sources/tech/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md diff --git a/sources/tech/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md b/sources/tech/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md new 
file mode 100644 index 0000000000..fe61f32dda --- /dev/null +++ b/sources/tech/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md @@ -0,0 +1,151 @@ +How To Browse And Read Entire Arch Wiki As Linux Man Pages +====== +![](https://www.ostechnix.com/wp-content/uploads/2018/10/arch-wiki-720x340.jpg) + +A while ago, I wrote a guide that described how to browse the Arch Wiki from your Terminal using a command line script named [**arch-wiki-cli**][1]. Using this script, anyone can easily navigate through the entire Arch Wiki website and read it with a text browser of their choice. Obviously, an active Internet connection is required to use that script. Today, I stumbled upon a similar utility named **“Arch-wiki-man”**. As the name says, it is also used to read the Arch Wiki from the command line, but it doesn’t require an Internet connection. The Arch-wiki-man program helps you browse and read the entire Arch Wiki as Linux man pages. It will display any Arch Wiki article in man page format, and you don’t need to be online to read it. The entire Arch Wiki is downloaded locally, and updates are pushed automatically every two days, so you always have an up-to-date, local copy of the Arch Wiki on your system. + +### Installing Arch-wiki-man + +Arch-wiki-man is available in the [**AUR**][2], so you can install it using any AUR helper program, for example [**Yay**][3]. + +``` + $ yay -S arch-wiki-man +``` + +Alternatively, it can be installed using the NPM package manager as shown below. Make sure you have [**installed NodeJS**][4], then run the following command: + +``` + $ npm install -g arch-wiki-man +``` + +### Browse And Read Entire Arch Wiki As Linux Man Pages + +The typical syntax of Arch-wiki-man is: + +``` + $ awman +``` + +Let me show you some examples. + +**Search with one or more matches** + +Let us search for an [**Arch Linux installation guide**][5].
To do so, simply run: + +``` + $ awman Installation guide +``` + +The above command will search for matches that contain the search term “Installation guide” in the Arch Wiki. If there are multiple matches for the given search term, a selection menu will appear. Choose the guide you want to read using the **UP/DOWN arrows** or Vim-style keybindings (**j/k**) and hit ENTER to open it. The resulting guide will open in man page format as shown below. + +![][6] + +Here, **awman** stands for **a**rch **w**iki **man**. + +All man command options are supported, so you can navigate through the guide just as you would when reading a man page. To view the help section, press **h**. + +![][7] + +To exit the selection menu without entering **man**, simply press **Ctrl+c**. + +To go back and/or quit man, type **q**. + +**Search matches in titles and descriptions** + +By default, Awman searches for matches in titles only. You can, however, direct it to search both titles and descriptions: + +``` + $ awman -d vim +``` + +Or, + +``` + $ awman --desc-search vim +``` + +**Search for matches in contents** + +Apart from searching titles and descriptions, it is also possible to scan the contents for a match. Please note that this will significantly slow down the search. + +``` + $ awman -k emacs +``` + +Or, + +``` + $ awman --apropos emacs +``` + +**Open the search results in web browser** + +If you don’t want to view the Arch Wiki guides in man page format, you can open them in a web browser. To do so, run: + +``` + $ awman -w pacman +``` + +Or, + +``` + $ awman --web pacman +``` + +This command will open the resulting match in the default web browser rather than with the **man** command. Please note that you need an Internet connection to use this option. + +**Search in other languages** + +By default, Awman opens the Arch Wiki pages in English.
If you want to view the results in other languages, for example **Spanish** , simply do: + +``` + $ awman -l spanish codecs +``` + +![][8] + +To view the list of available language options, run: + +``` + + $ awman --list-languages + +``` + +**Update the local copy of Arch Wiki** + +Like I already said, the updates are pushed automatically every two days. If you want to update it manually, simply run: + +``` +$ awman-update +arch-wiki-man@1.3.0 /usr/lib/node_modules/arch-wiki-man +└── arch-wiki-md-repo@0.10.84 + +arch-wiki-md-repo has been successfully updated or reinstalled. +``` + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-browse-and-read-entire-arch-wiki-as-linux-man-pages/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/search-arch-wiki-website-commandline/ +[2]: https://aur.archlinux.org/packages/arch-wiki-man/ +[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[4]: https://www.ostechnix.com/install-node-js-linux/ +[5]: https://www.ostechnix.com/install-arch-linux-latest-version/ +[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-1.gif +[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-2.png +[8]: https://www.ostechnix.com/wp-content/uploads/2018/10/awman-3-1.png From 8e336162c7618734bed4ff5e5667db5db64f45ca Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 16 Oct 2018 09:01:59 +0800 Subject: [PATCH 463/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Lab=204:=20Preemp?= =?UTF-8?q?tive=20Multitasking?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20181016 Lab 4- Preemptive Multitasking.md | 595 ++++++++++++++++++ 1 
file changed, 595 insertions(+) create mode 100644 sources/tech/20181016 Lab 4- Preemptive Multitasking.md diff --git a/sources/tech/20181016 Lab 4- Preemptive Multitasking.md b/sources/tech/20181016 Lab 4- Preemptive Multitasking.md new file mode 100644 index 0000000000..9e510ed7c6 --- /dev/null +++ b/sources/tech/20181016 Lab 4- Preemptive Multitasking.md @@ -0,0 +1,595 @@ +Lab 4: Preemptive Multitasking +====== +### Lab 4: Preemptive Multitasking + +**Part A due Thursday, October 18, 2018 +Part B due Thursday, October 25, 2018 +Part C due Thursday, November 1, 2018** + +#### Introduction + +In this lab you will implement preemptive multitasking among multiple simultaneously active user-mode environments. + +In part A you will add multiprocessor support to JOS, implement round-robin scheduling, and add basic environment management system calls (calls that create and destroy environments, and allocate/map memory). + +In part B, you will implement a Unix-like `fork()`, which allows a user-mode environment to create copies of itself. + +Finally, in part C you will add support for inter-process communication (IPC), allowing different user-mode environments to communicate and synchronize with each other explicitly. You will also add support for hardware clock interrupts and preemption. + +##### Getting Started + +Use Git to commit your Lab 3 source, fetch the latest version of the course repository, and then create a local branch called `lab4` based on our lab4 branch, `origin/lab4`: + +``` + athena% cd ~/6.828/lab + athena% add git + athena% git pull + Already up-to-date. + athena% git checkout -b lab4 origin/lab4 + Branch lab4 set up to track remote branch refs/remotes/origin/lab4. + Switched to a new branch "lab4" + athena% git merge lab3 + Merge made by recursive. + ... 
+ athena% +``` + +Lab 4 contains a number of new source files, some of which you should browse before you start: +| kern/cpu.h | Kernel-private definitions for multiprocessor support | +| kern/mpconfig.c | Code to read the multiprocessor configuration | +| kern/lapic.c | Kernel code driving the local APIC unit in each processor | +| kern/mpentry.S | Assembly-language entry code for non-boot CPUs | +| kern/spinlock.h | Kernel-private definitions for spin locks, including the big kernel lock | +| kern/spinlock.c | Kernel code implementing spin locks | +| kern/sched.c | Code skeleton of the scheduler that you are about to implement | + +##### Lab Requirements + +This lab is divided into three parts, A, B, and C. We have allocated one week in the schedule for each part. + +As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. (You do not need to do one challenge problem per part, just one for the whole lab.) Additionally, you will need to write up a brief description of the challenge problem that you implemented. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab4.txt` in the top level of your `lab` directory before handing in your work. + +#### Part A: Multiprocessor Support and Cooperative Multitasking + +In the first part of this lab, you will first extend JOS to run on a multiprocessor system, and then implement some new JOS kernel system calls to allow user-level environments to create additional new environments. You will also implement _cooperative_ round-robin scheduling, allowing the kernel to switch from one environment to another when the current environment voluntarily relinquishes the CPU (or exits). 
Later in part C you will implement _preemptive_ scheduling, which allows the kernel to re-take control of the CPU from an environment after a certain time has passed even if the environment does not cooperate. + +##### Multiprocessor Support + +We are going to make JOS support "symmetric multiprocessing" (SMP), a multiprocessor model in which all CPUs have equivalent access to system resources such as memory and I/O buses. While all CPUs are functionally identical in SMP, during the boot process they can be classified into two types: the bootstrap processor (BSP) is responsible for initializing the system and for booting the operating system; and the application processors (APs) are activated by the BSP only after the operating system is up and running. Which processor is the BSP is determined by the hardware and the BIOS. Up to this point, all your existing JOS code has been running on the BSP. + +In an SMP system, each CPU has an accompanying local APIC (LAPIC) unit. The LAPIC units are responsible for delivering interrupts throughout the system. The LAPIC also provides its connected CPU with a unique identifier. In this lab, we make use of the following basic functionality of the LAPIC unit (in `kern/lapic.c`): + + * Reading the LAPIC identifier (APIC ID) to tell which CPU our code is currently running on (see `cpunum()`). + * Sending the `STARTUP` interprocessor interrupt (IPI) from the BSP to the APs to bring up other CPUs (see `lapic_startap()`). + * In part C, we program LAPIC's built-in timer to trigger clock interrupts to support preemptive multitasking (see `apic_init()`). + + + +A processor accesses its LAPIC using memory-mapped I/O (MMIO). In MMIO, a portion of _physical_ memory is hardwired to the registers of some I/O devices, so the same load/store instructions typically used to access memory can be used to access device registers. You've already seen one IO hole at physical address `0xA0000` (we use this to write to the VGA display buffer). 
The LAPIC lives in a hole starting at physical address `0xFE000000` (32MB short of 4GB), so it's too high for us to access using our usual direct map at KERNBASE. The JOS virtual memory map leaves a 4MB gap at `MMIOBASE` so we have a place to map devices like this. Since later labs introduce more MMIO regions, you'll write a simple function to allocate space from this region and map device memory to it. + +``` +Exercise 1. Implement `mmio_map_region` in `kern/pmap.c`. To see how this is used, look at the beginning of `lapic_init` in `kern/lapic.c`. You'll have to do the next exercise, too, before the tests for `mmio_map_region` will run. +``` + +###### Application Processor Bootstrap + +Before booting up APs, the BSP should first collect information about the multiprocessor system, such as the total number of CPUs, their APIC IDs and the MMIO address of the LAPIC unit. The `mp_init()` function in `kern/mpconfig.c` retrieves this information by reading the MP configuration table that resides in the BIOS's region of memory. + +The `boot_aps()` function (in `kern/init.c`) drives the AP bootstrap process. APs start in real mode, much like how the bootloader started in `boot/boot.S`, so `boot_aps()` copies the AP entry code (`kern/mpentry.S`) to a memory location that is addressable in the real mode. Unlike with the bootloader, we have some control over where the AP will start executing code; we copy the entry code to `0x7000` (`MPENTRY_PADDR`), but any unused, page-aligned physical address below 640KB would work. + +After that, `boot_aps()` activates APs one after another, by sending `STARTUP` IPIs to the LAPIC unit of the corresponding AP, along with an initial `CS:IP` address at which the AP should start running its entry code (`MPENTRY_PADDR` in our case). The entry code in `kern/mpentry.S` is quite similar to that of `boot/boot.S`. 
After some brief setup, it puts the AP into protected mode with paging enabled, and then calls the C setup routine `mp_main()` (also in `kern/init.c`). `boot_aps()` waits for the AP to signal a `CPU_STARTED` flag in `cpu_status` field of its `struct CpuInfo` before going on to wake up the next one. + +``` +Exercise 2. Read `boot_aps()` and `mp_main()` in `kern/init.c`, and the assembly code in `kern/mpentry.S`. Make sure you understand the control flow transfer during the bootstrap of APs. Then modify your implementation of `page_init()` in `kern/pmap.c` to avoid adding the page at `MPENTRY_PADDR` to the free list, so that we can safely copy and run AP bootstrap code at that physical address. Your code should pass the updated `check_page_free_list()` test (but might fail the updated `check_kern_pgdir()` test, which we will fix soon). +``` + +``` +Question + + 1. Compare `kern/mpentry.S` side by side with `boot/boot.S`. Bearing in mind that `kern/mpentry.S` is compiled and linked to run above `KERNBASE` just like everything else in the kernel, what is the purpose of macro `MPBOOTPHYS`? Why is it necessary in `kern/mpentry.S` but not in `boot/boot.S`? In other words, what could go wrong if it were omitted in `kern/mpentry.S`? +Hint: recall the differences between the link address and the load address that we have discussed in Lab 1. +``` + + +###### Per-CPU State and Initialization + +When writing a multiprocessor OS, it is important to distinguish between per-CPU state that is private to each processor, and global state that the whole system shares. `kern/cpu.h` defines most of the per-CPU state, including `struct CpuInfo`, which stores per-CPU variables. `cpunum()` always returns the ID of the CPU that calls it, which can be used as an index into arrays like `cpus`. Alternatively, the macro `thiscpu` is shorthand for the current CPU's `struct CpuInfo`. + +Here is the per-CPU state you should be aware of: + + * **Per-CPU kernel stack**. 
+Because multiple CPUs can trap into the kernel simultaneously, we need a separate kernel stack for each processor to prevent them from interfering with each other's execution. The array `percpu_kstacks[NCPU][KSTKSIZE]` reserves space for NCPU's worth of kernel stacks. + +In Lab 2, you mapped the physical memory that `bootstack` refers to as the BSP's kernel stack just below `KSTACKTOP`. Similarly, in this lab, you will map each CPU's kernel stack into this region with guard pages acting as a buffer between them. CPU 0's stack will still grow down from `KSTACKTOP`; CPU 1's stack will start `KSTKGAP` bytes below the bottom of CPU 0's stack, and so on. `inc/memlayout.h` shows the mapping layout. + + * **Per-CPU TSS and TSS descriptor**. +A per-CPU task state segment (TSS) is also needed in order to specify where each CPU's kernel stack lives. The TSS for CPU _i_ is stored in `cpus[i].cpu_ts`, and the corresponding TSS descriptor is defined in the GDT entry `gdt[(GD_TSS0 >> 3) + i]`. The global `ts` variable defined in `kern/trap.c` will no longer be useful. + + * **Per-CPU current environment pointer**. +Since each CPU can run different user process simultaneously, we redefined the symbol `curenv` to refer to `cpus[cpunum()].cpu_env` (or `thiscpu->cpu_env`), which points to the environment _currently_ executing on the _current_ CPU (the CPU on which the code is running). + + * **Per-CPU system registers**. +All registers, including system registers, are private to a CPU. Therefore, instructions that initialize these registers, such as `lcr3()`, `ltr()`, `lgdt()`, `lidt()`, etc., must be executed once on each CPU. Functions `env_init_percpu()` and `trap_init_percpu()` are defined for this purpose. + + + +``` +Exercise 3. Modify `mem_init_mp()` (in `kern/pmap.c`) to map per-CPU stacks starting at `KSTACKTOP`, as shown in `inc/memlayout.h`. The size of each stack is `KSTKSIZE` bytes plus `KSTKGAP` bytes of unmapped guard pages. 
Your code should pass the new check in `check_kern_pgdir()`. +``` + +``` +Exercise 4. The code in `trap_init_percpu()` (`kern/trap.c`) initializes the TSS and TSS descriptor for the BSP. It worked in Lab 3, but is incorrect when running on other CPUs. Change the code so that it can work on all CPUs. (Note: your new code should not use the global `ts` variable any more.) +``` + +When you finish the above exercises, run JOS in QEMU with 4 CPUs using make qemu CPUS=4 (or make qemu-nox CPUS=4), you should see output like this: + +``` + ... + Physical memory: 66556K available, base = 640K, extended = 65532K + check_page_alloc() succeeded! + check_page() succeeded! + check_kern_pgdir() succeeded! + check_page_installed_pgdir() succeeded! + SMP: CPU 0 found 4 CPU(s) + enabled interrupts: 1 2 + SMP: CPU 1 starting + SMP: CPU 2 starting + SMP: CPU 3 starting +``` + +###### Locking + +Our current code spins after initializing the AP in `mp_main()`. Before letting the AP get any further, we need to first address race conditions when multiple CPUs run kernel code simultaneously. The simplest way to achieve this is to use a _big kernel lock_. The big kernel lock is a single global lock that is held whenever an environment enters kernel mode, and is released when the environment returns to user mode. In this model, environments in user mode can run concurrently on any available CPUs, but no more than one environment can run in kernel mode; any other environments that try to enter kernel mode are forced to wait. + +`kern/spinlock.h` declares the big kernel lock, namely `kernel_lock`. It also provides `lock_kernel()` and `unlock_kernel()`, shortcuts to acquire and release the lock. You should apply the big kernel lock at four locations: + + * In `i386_init()`, acquire the lock before the BSP wakes up the other CPUs. + * In `mp_main()`, acquire the lock after initializing the AP, and then call `sched_yield()` to start running environments on this AP. 
+ * In `trap()`, acquire the lock when trapped from user mode. To determine whether a trap happened in user mode or in kernel mode, check the low bits of the `tf_cs`. + * In `env_run()`, release the lock _right before_ switching to user mode. Do not do that too early or too late, otherwise you will experience races or deadlocks. + + +``` +Exercise 5. Apply the big kernel lock as described above, by calling `lock_kernel()` and `unlock_kernel()` at the proper locations. +``` + +How to test if your locking is correct? You can't at this moment! But you will be able to after you implement the scheduler in the next exercise. + +``` +Question + + 2. It seems that using the big kernel lock guarantees that only one CPU can run the kernel code at a time. Why do we still need separate kernel stacks for each CPU? Describe a scenario in which using a shared kernel stack will go wrong, even with the protection of the big kernel lock. +``` + +``` +Challenge! The big kernel lock is simple and easy to use. Nevertheless, it eliminates all concurrency in kernel mode. Most modern operating systems use different locks to protect different parts of their shared state, an approach called _fine-grained locking_. Fine-grained locking can increase performance significantly, but is more difficult to implement and error-prone. If you are brave enough, drop the big kernel lock and embrace concurrency in JOS! + +It is up to you to decide the locking granularity (the amount of data that a lock protects). As a hint, you may consider using spin locks to ensure exclusive access to these shared components in the JOS kernel: + + * The page allocator. + * The console driver. + * The scheduler. + * The inter-process communication (IPC) state that you will implement in the part C. +``` + + +##### Round-Robin Scheduling + +Your next task in this lab is to change the JOS kernel so that it can alternate between multiple environments in "round-robin" fashion. 
Round-robin scheduling in JOS works as follows: + + * The function `sched_yield()` in the new `kern/sched.c` is responsible for selecting a new environment to run. It searches sequentially through the `envs[]` array in circular fashion, starting just after the previously running environment (or at the beginning of the array if there was no previously running environment), picks the first environment it finds with a status of `ENV_RUNNABLE` (see `inc/env.h`), and calls `env_run()` to jump into that environment. + * `sched_yield()` must never run the same environment on two CPUs at the same time. It can tell that an environment is currently running on some CPU (possibly the current CPU) because that environment's status will be `ENV_RUNNING`. + * We have implemented a new system call for you, `sys_yield()`, which user environments can call to invoke the kernel's `sched_yield()` function and thereby voluntarily give up the CPU to a different environment. + + + +``` +Exercise 6. Implement round-robin scheduling in `sched_yield()` as described above. Don't forget to modify `syscall()` to dispatch `sys_yield()`. + +Make sure to invoke `sched_yield()` in `mp_main`. + +Modify `kern/init.c` to create three (or more!) environments that all run the program `user/yield.c`. + +Run make qemu. You should see the environments switch back and forth between each other five times before terminating, like below. + +Test also with several CPUS: make qemu CPUS=2. + + ... + Hello, I am environment 00001000. + Hello, I am environment 00001001. + Hello, I am environment 00001002. + Back in environment 00001000, iteration 0. + Back in environment 00001001, iteration 0. + Back in environment 00001002, iteration 0. + Back in environment 00001000, iteration 1. + Back in environment 00001001, iteration 1. + Back in environment 00001002, iteration 1. + ... + +After the `yield` programs exit, there will be no runnable environment in the system, the scheduler should invoke the JOS kernel monitor. 
If any of this does not happen, then fix your code before proceeding. +``` + +``` +Question + + 3. In your implementation of `env_run()` you should have called `lcr3()`. Before and after the call to `lcr3()`, your code makes references (at least it should) to the variable `e`, the argument to `env_run`. Upon loading the `%cr3` register, the addressing context used by the MMU is instantly changed. But a virtual address (namely `e`) has meaning relative to a given address context--the address context specifies the physical address to which the virtual address maps. Why can the pointer `e` be dereferenced both before and after the addressing switch? + 4. Whenever the kernel switches from one environment to another, it must ensure the old environment's registers are saved so they can be restored properly later. Why? Where does this happen? +``` + +``` +Challenge! Add a less trivial scheduling policy to the kernel, such as a fixed-priority scheduler that allows each environment to be assigned a priority and ensures that higher-priority environments are always chosen in preference to lower-priority environments. If you're feeling really adventurous, try implementing a Unix-style adjustable-priority scheduler or even a lottery or stride scheduler. (Look up "lottery scheduling" and "stride scheduling" in Google.) + +Write a test program or two that verifies that your scheduling algorithm is working correctly (i.e., the right environments get run in the right order). It may be easier to write these test programs once you have implemented `fork()` and IPC in parts B and C of this lab. +``` + +``` +Challenge! The JOS kernel currently does not allow applications to use the x86 processor's x87 floating-point unit (FPU), MMX instructions, or Streaming SIMD Extensions (SSE). 
Extend the `Env` structure to provide a save area for the processor's floating point state, and extend the context switching code to save and restore this state properly when switching from one environment to another. The `FXSAVE` and `FXRSTOR` instructions may be useful, but note that these are not in the old i386 user's manual because they were introduced in more recent processors. Write a user-level test program that does something cool with floating-point. +``` + +##### System Calls for Environment Creation + +Although your kernel is now capable of running and switching between multiple user-level environments, it is still limited to running environments that the _kernel_ initially set up. You will now implement the necessary JOS system calls to allow _user_ environments to create and start other new user environments. + +Unix provides the `fork()` system call as its process creation primitive. Unix `fork()` copies the entire address space of calling process (the parent) to create a new process (the child). The only differences between the two observable from user space are their process IDs and parent process IDs (as returned by `getpid` and `getppid`). In the parent, `fork()` returns the child's process ID, while in the child, `fork()` returns 0. By default, each process gets its own private address space, and neither process's modifications to memory are visible to the other. + +You will provide a different, more primitive set of JOS system calls for creating new user-mode environments. With these system calls you will be able to implement a Unix-like `fork()` entirely in user space, in addition to other styles of environment creation. The new system calls you will write for JOS are as follows: + + * `sys_exofork`: +This system call creates a new environment with an almost blank slate: nothing is mapped in the user portion of its address space, and it is not runnable. 
The new environment will have the same register state as the parent environment at the time of the `sys_exofork` call. In the parent, `sys_exofork` will return the `envid_t` of the newly created environment (or a negative error code if the environment allocation failed). In the child, however, it will return 0. (Since the child starts out marked as not runnable, `sys_exofork` will not actually return in the child until the parent has explicitly allowed this by marking the child runnable using....) + * `sys_env_set_status`: +Sets the status of a specified environment to `ENV_RUNNABLE` or `ENV_NOT_RUNNABLE`. This system call is typically used to mark a new environment ready to run, once its address space and register state has been fully initialized. + * `sys_page_alloc`: +Allocates a page of physical memory and maps it at a given virtual address in a given environment's address space. + * `sys_page_map`: +Copy a page mapping ( _not_ the contents of a page!) from one environment's address space to another, leaving a memory sharing arrangement in place so that the new and the old mappings both refer to the same page of physical memory. + * `sys_page_unmap`: +Unmap a page mapped at a given virtual address in a given environment. + + + +For all of the system calls above that accept environment IDs, the JOS kernel supports the convention that a value of 0 means "the current environment." This convention is implemented by `envid2env()` in `kern/env.c`. + +We have provided a very primitive implementation of a Unix-like `fork()` in the test program `user/dumbfork.c`. This test program uses the above system calls to create and run a child environment with a copy of its own address space. The two environments then switch back and forth using `sys_yield` as in the previous exercise. The parent exits after 10 iterations, whereas the child exits after 20. + +``` +Exercise 7. Implement the system calls described above in `kern/syscall.c` and make sure `syscall()` calls them. 
You will need to use various functions in `kern/pmap.c` and `kern/env.c`, particularly `envid2env()`. For now, whenever you call `envid2env()`, pass 1 in the `checkperm` parameter. Be sure you check for any invalid system call arguments, returning `-E_INVAL` in that case. Test your JOS kernel with `user/dumbfork` and make sure it works before proceeding. +``` + +``` +Challenge! Add the additional system calls necessary to _read_ all of the vital state of an existing environment as well as set it up. Then implement a user mode program that forks off a child environment, runs it for a while (e.g., a few iterations of `sys_yield()`), then takes a complete snapshot or _checkpoint_ of the child environment, runs the child for a while longer, and finally restores the child environment to the state it was in at the checkpoint and continues it from there. Thus, you are effectively "replaying" the execution of the child environment from an intermediate state. Make the child environment perform some interaction with the user using `sys_cgetc()` or `readline()` so that the user can view and mutate its internal state, and verify that with your checkpoint/restart you can give the child environment a case of selective amnesia, making it "forget" everything that happened beyond a certain point. +``` + +This completes Part A of the lab; make sure it passes all of the Part A tests when you run make grade, and hand it in using make handin as usual. If you are trying to figure out why a particular test case is failing, run ./grade-lab4 -v, which will show you the output of the kernel builds and QEMU runs for each test, until a test fails. When a test fails, the script will stop, and then you can inspect `jos.out` to see what the kernel actually printed. + +#### Part B: Copy-on-Write Fork + +As mentioned earlier, Unix provides the `fork()` system call as its primary process creation primitive. 
The `fork()` system call copies the address space of the calling process (the parent) to create a new process (the child). + +xv6 Unix implements `fork()` by copying all data from the parent's pages into new pages allocated for the child. This is essentially the same approach that `dumbfork()` takes. The copying of the parent's address space into the child is the most expensive part of the `fork()` operation. + +However, a call to `fork()` is frequently followed almost immediately by a call to `exec()` in the child process, which replaces the child's memory with a new program. This is what the shell typically does, for example. In this case, the time spent copying the parent's address space is largely wasted, because the child process will use very little of its memory before calling `exec()`. + +For this reason, later versions of Unix took advantage of virtual memory hardware to allow the parent and child to _share_ the memory mapped into their respective address spaces until one of the processes actually modifies it. This technique is known as _copy-on-write_. To do this, on `fork()` the kernel would copy the address space _mappings_ from the parent to the child instead of the contents of the mapped pages, and at the same time mark the now-shared pages read-only. When one of the two processes tries to write to one of these shared pages, the process takes a page fault. At this point, the Unix kernel realizes that the page was really a "virtual" or "copy-on-write" copy, and so it makes a new, private, writable copy of the page for the faulting process. In this way, the contents of individual pages aren't actually copied until they are written to. This optimization makes a `fork()` followed by an `exec()` in the child much cheaper: the child will probably only need to copy one page (the current page of its stack) before it calls `exec()`.
+
+In the next piece of this lab, you will implement a "proper" Unix-like `fork()` with copy-on-write, as a user space library routine. Implementing `fork()` and copy-on-write support in user space has the benefit that the kernel remains much simpler and thus more likely to be correct. It also lets individual user-mode programs define their own semantics for `fork()`. A program that wants a slightly different implementation (for example, the expensive always-copy version like `dumbfork()`, or one in which the parent and child actually share memory afterward) can easily provide its own.
+
+##### User-level page fault handling
+
+A user-level copy-on-write `fork()` needs to know about page faults on write-protected pages, so that's what you'll implement first. Copy-on-write is only one of many possible uses for user-level page fault handling.
+
+It's common to set up an address space so that page faults indicate when some action needs to take place. For example, most Unix kernels initially map only a single page in a new process's stack region, and allocate and map additional stack pages later "on demand" as the process's stack consumption increases and causes page faults on stack addresses that are not yet mapped. A typical Unix kernel must keep track of what action to take when a page fault occurs in each region of a process's space. For example, a fault in the stack region will typically allocate and map a new page of physical memory. A fault in the program's BSS region will typically allocate a new page, fill it with zeroes, and map it. In systems with demand-paged executables, a fault in the text region will read the corresponding page of the binary off of disk and then map it.
+
+This is a lot of information for the kernel to keep track of. Instead of taking the traditional Unix approach, you will decide what to do about each page fault in user space, where bugs are less damaging. 
This design has the added benefit of allowing programs great flexibility in defining their memory regions; you'll use user-level page fault handling later for mapping and accessing files on a disk-based file system. + +###### Setting the Page Fault Handler + +In order to handle its own page faults, a user environment will need to register a _page fault handler entrypoint_ with the JOS kernel. The user environment registers its page fault entrypoint via the new `sys_env_set_pgfault_upcall` system call. We have added a new member to the `Env` structure, `env_pgfault_upcall`, to record this information. + +``` +Exercise 8. Implement the `sys_env_set_pgfault_upcall` system call. Be sure to enable permission checking when looking up the environment ID of the target environment, since this is a "dangerous" system call. +``` + +###### Normal and Exception Stacks in User Environments + +During normal execution, a user environment in JOS will run on the _normal_ user stack: its `ESP` register starts out pointing at `USTACKTOP`, and the stack data it pushes resides on the page between `USTACKTOP-PGSIZE` and `USTACKTOP-1` inclusive. When a page fault occurs in user mode, however, the kernel will restart the user environment running a designated user-level page fault handler on a different stack, namely the _user exception_ stack. In essence, we will make the JOS kernel implement automatic "stack switching" on behalf of the user environment, in much the same way that the x86 _processor_ already implements stack switching on behalf of JOS when transferring from user mode to kernel mode! + +The JOS user exception stack is also one page in size, and its top is defined to be at virtual address `UXSTACKTOP`, so the valid bytes of the user exception stack are from `UXSTACKTOP-PGSIZE` through `UXSTACKTOP-1` inclusive. 
While running on this exception stack, the user-level page fault handler can use JOS's regular system calls to map new pages or adjust mappings so as to fix whatever problem originally caused the page fault. Then the user-level page fault handler returns, via an assembly language stub, to the faulting code on the original stack. + +Each user environment that wants to support user-level page fault handling will need to allocate memory for its own exception stack, using the `sys_page_alloc()` system call introduced in part A. + +###### Invoking the User Page Fault Handler + +You will now need to change the page fault handling code in `kern/trap.c` to handle page faults from user mode as follows. We will call the state of the user environment at the time of the fault the _trap-time_ state. + +If there is no page fault handler registered, the JOS kernel destroys the user environment with a message as before. Otherwise, the kernel sets up a trap frame on the exception stack that looks like a `struct UTrapframe` from `inc/trap.h`: + +``` + <-- UXSTACKTOP + trap-time esp + trap-time eflags + trap-time eip + trap-time eax start of struct PushRegs + trap-time ecx + trap-time edx + trap-time ebx + trap-time esp + trap-time ebp + trap-time esi + trap-time edi end of struct PushRegs + tf_err (error code) + fault_va <-- %esp when handler is run + +``` + +The kernel then arranges for the user environment to resume execution with the page fault handler running on the exception stack with this stack frame; you must figure out how to make this happen. The `fault_va` is the virtual address that caused the page fault. + +If the user environment is _already_ running on the user exception stack when an exception occurs, then the page fault handler itself has faulted. In this case, you should start the new stack frame just under the current `tf->tf_esp` rather than at `UXSTACKTOP`. You should first push an empty 32-bit word, then a `struct UTrapframe`. 
+ +To test whether `tf->tf_esp` is already on the user exception stack, check whether it is in the range between `UXSTACKTOP-PGSIZE` and `UXSTACKTOP-1`, inclusive. + +``` +Exercise 9. Implement the code in `page_fault_handler` in `kern/trap.c` required to dispatch page faults to the user-mode handler. Be sure to take appropriate precautions when writing into the exception stack. (What happens if the user environment runs out of space on the exception stack?) +``` + +###### User-mode Page Fault Entrypoint + +Next, you need to implement the assembly routine that will take care of calling the C page fault handler and resume execution at the original faulting instruction. This assembly routine is the handler that will be registered with the kernel using `sys_env_set_pgfault_upcall()`. + +``` +Exercise 10. Implement the `_pgfault_upcall` routine in `lib/pfentry.S`. The interesting part is returning to the original point in the user code that caused the page fault. You'll return directly there, without going back through the kernel. The hard part is simultaneously switching stacks and re-loading the EIP. +``` + +Finally, you need to implement the C user library side of the user-level page fault handling mechanism. + +``` +Exercise 11. Finish `set_pgfault_handler()` in `lib/pgfault.c`. +``` + +###### Testing + +Run `user/faultread` (make run-faultread). You should see: + +``` + ... + [00000000] new env 00001000 + [00001000] user fault va 00000000 ip 0080003a + TRAP frame ... + [00001000] free env 00001000 +``` + +Run `user/faultdie`. You should see: + +``` + ... + [00000000] new env 00001000 + i faulted at va deadbeef, err 6 + [00001000] exiting gracefully + [00001000] free env 00001000 +``` + +Run `user/faultalloc`. You should see: + +``` + ... 
+ [00000000] new env 00001000 + fault deadbeef + this string was faulted in at deadbeef + fault cafebffe + fault cafec000 + this string was faulted in at cafebffe + [00001000] exiting gracefully + [00001000] free env 00001000 +``` + +If you see only the first "this string" line, it means you are not handling recursive page faults properly. + +Run `user/faultallocbad`. You should see: + +``` + ... + [00000000] new env 00001000 + [00001000] user_mem_check assertion failure for va deadbeef + [00001000] free env 00001000 +``` + +Make sure you understand why `user/faultalloc` and `user/faultallocbad` behave differently. + +``` +Challenge! Extend your kernel so that not only page faults, but _all_ types of processor exceptions that code running in user space can generate, can be redirected to a user-mode exception handler. Write user-mode test programs to test user-mode handling of various exceptions such as divide-by-zero, general protection fault, and illegal opcode. +``` + +##### Implementing Copy-on-Write Fork + +You now have the kernel facilities to implement copy-on-write `fork()` entirely in user space. + +We have provided a skeleton for your `fork()` in `lib/fork.c`. Like `dumbfork()`, `fork()` should create a new environment, then scan through the parent environment's entire address space and set up corresponding page mappings in the child. The key difference is that, while `dumbfork()` copied _pages_ , `fork()` will initially only copy page _mappings_. `fork()` will copy each page only when one of the environments tries to write it. + +The basic control flow for `fork()` is as follows: + + 1. The parent installs `pgfault()` as the C-level page fault handler, using the `set_pgfault_handler()` function you implemented above. + + 2. The parent calls `sys_exofork()` to create a child environment. + + 3. 
For each writable or copy-on-write page in its address space below UTOP, the parent calls `duppage`, which should map the page copy-on-write into the address space of the child and then _remap_ the page copy-on-write in its own address space. [ Note: The ordering here (i.e., marking a page as COW in the child before marking it in the parent) actually matters! Can you see why? Try to think of a specific case where reversing the order could cause trouble. ] `duppage` sets both PTEs so that the page is not writeable, and to contain `PTE_COW` in the "avail" field to distinguish copy-on-write pages from genuine read-only pages. + +The exception stack is _not_ remapped this way, however. Instead you need to allocate a fresh page in the child for the exception stack. Since the page fault handler will be doing the actual copying and the page fault handler runs on the exception stack, the exception stack cannot be made copy-on-write: who would copy it? + +`fork()` also needs to handle pages that are present, but not writable or copy-on-write. + + 4. The parent sets the user page fault entrypoint for the child to look like its own. + + 5. The child is now ready to run, so the parent marks it runnable. + + + + +Each time one of the environments writes a copy-on-write page that it hasn't yet written, it will take a page fault. Here's the control flow for the user page fault handler: + + 1. The kernel propagates the page fault to `_pgfault_upcall`, which calls `fork()`'s `pgfault()` handler. + 2. `pgfault()` checks that the fault is a write (check for `FEC_WR` in the error code) and that the PTE for the page is marked `PTE_COW`. If not, panic. + 3. `pgfault()` allocates a new page mapped at a temporary location and copies the contents of the faulting page into it. Then the fault handler maps the new page at the appropriate address with read/write permissions, in place of the old read-only mapping. 
+
+
+
+The user-level `lib/fork.c` code must consult the environment's page tables for several of the operations above (e.g., that the PTE for a page is marked `PTE_COW`). The kernel maps the environment's page tables at `UVPT` exactly for this purpose. It uses a [clever mapping trick][1] to make it easy to look up PTEs for user code. `lib/entry.S` sets up `uvpt` and `uvpd` so that you can easily look up page-table information in `lib/fork.c`.
+
+```
+Exercise 12. Implement `fork`, `duppage` and `pgfault` in `lib/fork.c`.
+
+Test your code with the `forktree` program. It should produce the following messages, with interspersed 'new env', 'free env', and 'exiting gracefully' messages. The messages may not appear in this order, and the environment IDs may be different.
+
+    1000: I am ''
+    1001: I am '0'
+    2000: I am '00'
+    2001: I am '000'
+    1002: I am '1'
+    3000: I am '11'
+    3001: I am '10'
+    4000: I am '100'
+    1003: I am '01'
+    5000: I am '010'
+    4001: I am '011'
+    2002: I am '110'
+    1004: I am '001'
+    1005: I am '111'
+    1006: I am '101'
+```
+
+```
+Challenge! Implement a shared-memory `fork()` called `sfork()`. This version should have the parent and child _share_ all their memory pages (so writes in one environment appear in the other) except for pages in the stack area, which should be treated in the usual copy-on-write manner. Modify `user/forktree.c` to use `sfork()` instead of regular `fork()`. Also, once you have finished implementing IPC in part C, use your `sfork()` to run `user/pingpongs`. You will have to find a new way to provide the functionality of the global `thisenv` pointer.
+```
+
+```
+Challenge! Your implementation of `fork` makes a huge number of system calls. On the x86, switching into the kernel using interrupts has non-trivial cost. Augment the system call interface so that it is possible to send a batch of system calls at once. Then change `fork` to use this interface.
+
+How much faster is your new `fork`?
+ +You can answer this (roughly) by using analytical arguments to estimate how much of an improvement batching system calls will make to the performance of your `fork`: How expensive is an `int 0x30` instruction? How many times do you execute `int 0x30` in your `fork`? Is accessing the `TSS` stack switch also expensive? And so on... + +Alternatively, you can boot your kernel on real hardware and _really_ benchmark your code. See the `RDTSC` (read time-stamp counter) instruction, defined in the IA32 manual, which counts the number of clock cycles that have elapsed since the last processor reset. QEMU doesn't emulate this instruction faithfully (it can either count the number of virtual instructions executed or use the host TSC, neither of which reflects the number of cycles a real CPU would require). +``` + +This ends part B. Make sure you pass all of the Part B tests when you run make grade. As usual, you can hand in your submission with make handin. + +#### Part C: Preemptive Multitasking and Inter-Process communication (IPC) + +In the final part of lab 4 you will modify the kernel to preempt uncooperative environments and to allow environments to pass messages to each other explicitly. + +##### Clock Interrupts and Preemption + +Run the `user/spin` test program. This test program forks off a child environment, which simply spins forever in a tight loop once it receives control of the CPU. Neither the parent environment nor the kernel ever regains the CPU. This is obviously not an ideal situation in terms of protecting the system from bugs or malicious code in user-mode environments, because any user-mode environment can bring the whole system to a halt simply by getting into an infinite loop and never giving back the CPU. In order to allow the kernel to _preempt_ a running environment, forcefully retaking control of the CPU from it, we must extend the JOS kernel to support external hardware interrupts from the clock hardware. 
+ +###### Interrupt discipline + +External interrupts (i.e., device interrupts) are referred to as IRQs. There are 16 possible IRQs, numbered 0 through 15. The mapping from IRQ number to IDT entry is not fixed. `pic_init` in `picirq.c` maps IRQs 0-15 to IDT entries `IRQ_OFFSET` through `IRQ_OFFSET+15`. + +In `inc/trap.h`, `IRQ_OFFSET` is defined to be decimal 32. Thus the IDT entries 32-47 correspond to the IRQs 0-15. For example, the clock interrupt is IRQ 0. Thus, IDT[IRQ_OFFSET+0] (i.e., IDT[32]) contains the address of the clock's interrupt handler routine in the kernel. This `IRQ_OFFSET` is chosen so that the device interrupts do not overlap with the processor exceptions, which could obviously cause confusion. (In fact, in the early days of PCs running MS-DOS, the `IRQ_OFFSET` effectively _was_ zero, which indeed caused massive confusion between handling hardware interrupts and handling processor exceptions!) + +In JOS, we make a key simplification compared to xv6 Unix. External device interrupts are _always_ disabled when in the kernel (and, like xv6, enabled when in user space). External interrupts are controlled by the `FL_IF` flag bit of the `%eflags` register (see `inc/mmu.h`). When this bit is set, external interrupts are enabled. While the bit can be modified in several ways, because of our simplification, we will handle it solely through the process of saving and restoring `%eflags` register as we enter and leave user mode. + +You will have to ensure that the `FL_IF` flag is set in user environments when they run so that when an interrupt arrives, it gets passed through to the processor and handled by your interrupt code. Otherwise, interrupts are _masked_ , or ignored until interrupts are re-enabled. We masked interrupts with the very first instruction of the bootloader, and so far we have never gotten around to re-enabling them. + +``` +Exercise 13. 
Modify `kern/trapentry.S` and `kern/trap.c` to initialize the appropriate entries in the IDT and provide handlers for IRQs 0 through 15. Then modify the code in `env_alloc()` in `kern/env.c` to ensure that user environments are always run with interrupts enabled. + +Also uncomment the `sti` instruction in `sched_halt()` so that idle CPUs unmask interrupts. + +The processor never pushes an error code when invoking a hardware interrupt handler. You might want to re-read section 9.2 of the [80386 Reference Manual][2], or section 5.8 of the [IA-32 Intel Architecture Software Developer's Manual, Volume 3][3], at this time. + +After doing this exercise, if you run your kernel with any test program that runs for a non-trivial length of time (e.g., `spin`), you should see the kernel print trap frames for hardware interrupts. While interrupts are now enabled in the processor, JOS isn't yet handling them, so you should see it misattribute each interrupt to the currently running user environment and destroy it. Eventually it should run out of environments to destroy and drop into the monitor. +``` + +###### Handling Clock Interrupts + +In the `user/spin` program, after the child environment was first run, it just spun in a loop, and the kernel never got control back. We need to program the hardware to generate clock interrupts periodically, which will force control back to the kernel where we can switch control to a different user environment. + +The calls to `lapic_init` and `pic_init` (from `i386_init` in `init.c`), which we have written for you, set up the clock and the interrupt controller to generate interrupts. You now need to write the code to handle these interrupts. + +``` +Exercise 14. Modify the kernel's `trap_dispatch()` function so that it calls `sched_yield()` to find and run a different environment whenever a clock interrupt takes place. 
+ +You should now be able to get the `user/spin` test to work: the parent environment should fork off the child, `sys_yield()` to it a couple times but in each case regain control of the CPU after one time slice, and finally kill the child environment and terminate gracefully. +``` + +This is a great time to do some _regression testing_. Make sure that you haven't broken any earlier part of that lab that used to work (e.g. `forktree`) by enabling interrupts. Also, try running with multiple CPUs using make CPUS=2 _target_. You should also be able to pass `stresssched` now. Run make grade to see for sure. You should now get a total score of 65/80 points on this lab. + +##### Inter-Process communication (IPC) + +(Technically in JOS this is "inter-environment communication" or "IEC", but everyone else calls it IPC, so we'll use the standard term.) + +We've been focusing on the isolation aspects of the operating system, the ways it provides the illusion that each program has a machine all to itself. Another important service of an operating system is to allow programs to communicate with each other when they want to. It can be quite powerful to let programs interact with other programs. The Unix pipe model is the canonical example. + +There are many models for interprocess communication. Even today there are still debates about which models are best. We won't get into that debate. Instead, we'll implement a simple IPC mechanism and then try it out. + +###### IPC in JOS + +You will implement a few additional JOS kernel system calls that collectively provide a simple interprocess communication mechanism. You will implement two system calls, `sys_ipc_recv` and `sys_ipc_try_send`. Then you will implement two library wrappers `ipc_recv` and `ipc_send`. + +The "messages" that user environments can send to each other using JOS's IPC mechanism consist of two components: a single 32-bit value, and optionally a single page mapping. 
Allowing environments to pass page mappings in messages provides an efficient way to transfer more data than will fit into a single 32-bit integer, and also allows environments to set up shared memory arrangements easily. + +###### Sending and Receiving Messages + +To receive a message, an environment calls `sys_ipc_recv`. This system call de-schedules the current environment and does not run it again until a message has been received. When an environment is waiting to receive a message, _any_ other environment can send it a message - not just a particular environment, and not just environments that have a parent/child arrangement with the receiving environment. In other words, the permission checking that you implemented in Part A will not apply to IPC, because the IPC system calls are carefully designed so as to be "safe": an environment cannot cause another environment to malfunction simply by sending it messages (unless the target environment is also buggy). + +To try to send a value, an environment calls `sys_ipc_try_send` with both the receiver's environment id and the value to be sent. If the named environment is actually receiving (it has called `sys_ipc_recv` and not gotten a value yet), then the send delivers the message and returns 0. Otherwise the send returns `-E_IPC_NOT_RECV` to indicate that the target environment is not currently expecting to receive a value. + +A library function `ipc_recv` in user space will take care of calling `sys_ipc_recv` and then looking up the information about the received values in the current environment's `struct Env`. + +Similarly, a library function `ipc_send` will take care of repeatedly calling `sys_ipc_try_send` until the send succeeds. + +###### Transferring Pages + +When an environment calls `sys_ipc_recv` with a valid `dstva` parameter (below `UTOP`), the environment is stating that it is willing to receive a page mapping. 
If the sender sends a page, then that page should be mapped at `dstva` in the receiver's address space. If the receiver already had a page mapped at `dstva`, then that previous page is unmapped. + +When an environment calls `sys_ipc_try_send` with a valid `srcva` (below `UTOP`), it means the sender wants to send the page currently mapped at `srcva` to the receiver, with permissions `perm`. After a successful IPC, the sender keeps its original mapping for the page at `srcva` in its address space, but the receiver also obtains a mapping for this same physical page at the `dstva` originally specified by the receiver, in the receiver's address space. As a result this page becomes shared between the sender and receiver. + +If either the sender or the receiver does not indicate that a page should be transferred, then no page is transferred. After any IPC the kernel sets the new field `env_ipc_perm` in the receiver's `Env` structure to the permissions of the page received, or zero if no page was received. + +###### Implementing IPC + +``` +Exercise 15. Implement `sys_ipc_recv` and `sys_ipc_try_send` in `kern/syscall.c`. Read the comments on both before implementing them, since they have to work together. When you call `envid2env` in these routines, you should set the `checkperm` flag to 0, meaning that any environment is allowed to send IPC messages to any other environment, and the kernel does no special permission checking other than verifying that the target envid is valid. + +Then implement the `ipc_recv` and `ipc_send` functions in `lib/ipc.c`. + +Use the `user/pingpong` and `user/primes` functions to test your IPC mechanism. `user/primes` will generate for each prime number a new environment until JOS runs out of environments. You might find it interesting to read `user/primes.c` to see all the forking and IPC going on behind the scenes. +``` + +``` +Challenge! Why does `ipc_send` have to loop? Change the system call interface so it doesn't have to. 
Make sure you can handle multiple environments trying to send to one environment at the same time.
+```
+
+```
+Challenge! The prime sieve is only one neat use of message passing between a large number of concurrent programs. Read C. A. R. Hoare, ``Communicating Sequential Processes,'' _Communications of the ACM_ 21(8) (August 1978), 666-677, and implement the matrix multiplication example.
+```
+
+```
+Challenge! One of the most impressive examples of the power of message passing is Doug McIlroy's power series calculator, described in [M. Douglas McIlroy, ``Squinting at Power Series,'' _Software--Practice and Experience_ , 20(7) (July 1990), 661-683][4]. Implement his power series calculator and compute the power series for _sin_ ( _x_ + _x_ ^3).
+```
+
+```
+Challenge! Make JOS's IPC mechanism more efficient by applying some of the techniques from Liedtke's paper, [Improving IPC by Kernel Design][5], or any other tricks you may think of. Feel free to modify the kernel's system call API for this purpose, as long as your code is backwards compatible with what our grading scripts expect.
+```
+
+**This ends part C.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab4.txt`.
+
+Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab4.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 4', then make handin and follow the directions.
+ +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/uvpt.html +[2]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/i386/toc.htm +[3]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/ia32/IA32-3A.pdf +[4]: https://swtch.com/~rsc/thread/squint.pdf +[5]: http://dl.acm.org/citation.cfm?id=168633 From 1320bb4d156696eccf7af7c00be71afbdc9aa38e Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 16 Oct 2018 09:03:56 +0800 Subject: [PATCH 464/736] translated --- ...files with ls at the Linux command line.md | 75 ------------------- ...files with ls at the Linux command line.md | 73 ++++++++++++++++++ 2 files changed, 73 insertions(+), 75 deletions(-) delete mode 100644 sources/tech/20181003 Tips for listing files with ls at the Linux command line.md create mode 100644 translated/tech/20181003 Tips for listing files with ls at the Linux command line.md diff --git a/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md deleted file mode 100644 index fda48f1622..0000000000 --- a/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md +++ /dev/null @@ -1,75 +0,0 @@ -translating---geekpi - -Tips for listing files with ls at the Linux command line -====== -Learn some of the Linux 'ls' command's most useful variations. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx) - -One of the first commands I learned in Linux was `ls`. 
Knowing what’s in a directory where a file on your system resides is important. Being able to see and modify not just some but all of the files is also important.
-
-My first Linux cheat sheet was the [One Page Linux Manual][1], which was released in 1999 and became my go-to reference. I taped it over my desk and referred to it often as I began to explore Linux. Listing files with `ls -l` is introduced on the first page, at the bottom of the first column.
-
-Later, I would learn other iterations of this most basic command. Through the `ls` command, I began to learn about the complexity of the Linux file permissions and what was mine and what required root or sudo permission to change. I became very comfortable on the command line over time, and while I still use `ls -l` to find files in the directory, I frequently use `ls -al` so I can see hidden files that might need to be changed, like configuration files.
-
-According to an article by Eric Fischer about the `ls` command in the [Linux Documentation Project][2], the command's roots go back to the `listf` command on MIT’s Compatible Time Sharing System in 1961. When CTSS was replaced by [Multics][3], the command became `list`, with switches like `list -all`. According to [Wikipedia][4], `ls` appeared in the original version of AT&T Unix. The `ls` command we use today on Linux systems comes from the [GNU Core Utilities][5].
-
-Most of the time, I use only a couple of iterations of the command. Looking inside a directory with `ls` or `ls -al` is how I generally use the command, but there are many other options that you should be familiar with.
-
-`$ ls -l` provides a simple list of the directory:
-
-![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png)
-
-Using the man pages of my Fedora 28 system, I find that there are many other options to `ls`, all of which provide interesting and useful information about the Linux file system. 
By entering `man ls` at the command prompt, we can begin to explore some of the other options:
-
-![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png)
-
-To sort the directory by file sizes, use `ls -lS`:
-
-![](https://opensource.com/sites/default/files/uploads/linux_ls_3_0.png)
-
-To list the contents in reverse order, use `ls -lr`:
-
-![](https://opensource.com/sites/default/files/uploads/linux_ls_4.png)
-
-To list contents by columns, use `ls -c`:
-
-![](https://opensource.com/sites/default/files/uploads/linux_ls_5.png)
-
-`ls -al` provides a list of all the files in the same directory:
-
-![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png)
-
-Here are some additional options that I find useful and interesting:
-
-  * List only the .txt files in the directory: `ls *.txt`
-  * List by file size: `ls -s`
-  * Sort by time and date: `ls -d`
-  * Sort by extension: `ls -X`
-  * Sort by file size: `ls -S`
-  * Long format with file size: `ls -ls`
-
-
-
-To generate a directory list in the specified format and send it to a file for later viewing, enter `ls -al > mydirectorylist`. Finally, one of the more exotic commands I found is `ls -R`, which provides a recursive list of all the directories on your computer and their contents.
-
-For a complete list of all the iterations of the `ls` command, refer to the [GNU Core Utilities][6].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/10/ls-command
-
-作者:[Don Watkins][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/don-watkins
-[1]: http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf
-[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html
-[3]: https://en.wikipedia.org/wiki/Multics
-[4]: https://en.wikipedia.org/wiki/Ls
-[5]: http://www.gnu.org/s/coreutils/
-[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation
diff --git a/translated/tech/20181003 Tips for listing files with ls at the Linux command line.md b/translated/tech/20181003 Tips for listing files with ls at the Linux command line.md
new file mode 100644
index 0000000000..b0fe9643da
--- /dev/null
+++ b/translated/tech/20181003 Tips for listing files with ls at the Linux command line.md
@@ -0,0 +1,73 @@
+在 Linux 命令行中使用 ls 列出文件的提示
+======
+学习 Linux `ls` 命令的一些最有用的用法。
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
+
+我在 Linux 中最先学到的命令之一就是 `ls`。了解系统中文件所在目录里有什么非常重要。能够查看和修改的不仅仅是某些文件,而是所有文件,这一点同样重要。
+
+我的第一个 Linux 备忘录是[单页 Linux 手册][1],它发布于 1999 年,成为我的首选参考资料。当我开始探索 Linux 时,我把它贴在桌子上并经常参考它。它的第一页第一列的底部有使用 `ls -l` 列出文件的命令。
+
+之后,我学会了这个最基本命令的其他用法。通过 `ls` 命令,我开始了解 Linux 文件权限的复杂性,以及哪些是我的文件,哪些需要 root 或者 sudo 权限才能修改。随着时间的推移,我习惯了使用命令行,虽然我仍然使用 `ls -l` 来查找目录中的文件,但我经常使用 `ls -al`,这样我就可以看到可能需要更改的隐藏文件,比如那些配置文件。
+
+根据 Eric Fischer 在[Linux 文档项目][2]中关于 `ls` 命令的文章,该命令的根源可以追溯到 1961 年 MIT 的相容分时系统(CTSS)上的 `listf` 命令。
+当 CTSS 被 [Multics][3] 代替时,命令变为 `list`,并有像 `list -all` 这样的开关。根据[维基百科][4],`ls` 出现在 AT&T Unix 的原始版本中。我们今天在 Linux 系统上使用的 `ls` 命令来自 [GNU Core Utilities][5]。
+
+大多数时候,我只使用这个命令的几种用法。使用 
`ls` 或 `ls -al` 查看目录内容是我最常用的方式,但你还应该熟悉它的许多其他选项。
+
+`$ ls -l` 提供了一个简单的目录列表:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png)
+
+通过我的 Fedora 28 系统中的手册页,我发现 `ls` 还有许多其他选项,所有这些选项都提供了有关 Linux 文件系统的有趣且有用的信息。在命令提示符下输入 `man ls`,我们就可以开始探索其他一些选项:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png)
+
+要按文件大小对目录进行排序,请使用 `ls -lS`:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_3_0.png)
+
+要以相反的顺序列出内容,请使用 `ls -lr`:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_4.png)
+
+要按列列出内容,请使用 `ls -C`:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_5.png)
+
+`ls -al` 提供了同一目录中所有文件(包括隐藏文件)的列表:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png)
+
+以下是我认为有用且有趣的一些其他选项:
+
+  * 仅列出目录中的 .txt 文件:`ls *.txt`
+  * 列出文件及其占用大小:`ls -s`
+  * 按时间和日期排序:`ls -t`
+  * 按扩展名排序:`ls -X`
+  * 按文件大小排序:`ls -S`
+  * 带有文件大小的长格式:`ls -ls`
+
+要生成指定格式的目录列表并将其重定向到文件供以后查看,请输入 `ls -al > mydirectorylist`。最后,我找到的一个更奇特的命令是 `ls -R`,它提供了计算机上所有目录及其内容的递归列表。
+
+有关 `ls` 命令所有变体的完整列表,请参阅 [GNU Core Utilities][6]。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/ls-command
+
+作者:[Don Watkins][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[1]: http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf
+[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html
+[3]: https://en.wikipedia.org/wiki/Multics
+[4]: https://en.wikipedia.org/wiki/Ls
+[5]: http://www.gnu.org/s/coreutils/
+[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation
From 9e3e000850e898efde26b06da0cacf868042e848 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 16 Oct 2018 09:07:03 +0800
Subject: 
[PATCH 465/736] translating --- .../tech/20181013 How to Install GRUB on Arch Linux (UEFI).md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md b/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md index 97cb5e0362..e456c1ee0e 100644 --- a/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md +++ b/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md @@ -1,3 +1,5 @@ +translating---geekpi + How to Install GRUB on Arch Linux (UEFI) ====== From 7c430e0a34873d2f7353f5557aac57f02311dcd5 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 16 Oct 2018 09:08:29 +0800 Subject: [PATCH 466/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Lab=205:=20File?= =?UTF-8?q?=20system,=20Spawn=20and=20Shell?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...016 Lab 5- File system, Spawn and Shell.md | 345 ++++++++++++++++++ 1 file changed, 345 insertions(+) create mode 100644 sources/tech/20181016 Lab 5- File system, Spawn and Shell.md diff --git a/sources/tech/20181016 Lab 5- File system, Spawn and Shell.md b/sources/tech/20181016 Lab 5- File system, Spawn and Shell.md new file mode 100644 index 0000000000..e7e623db11 --- /dev/null +++ b/sources/tech/20181016 Lab 5- File system, Spawn and Shell.md @@ -0,0 +1,345 @@ +Lab 5: File system, Spawn and Shell +====== + +**Due Thursday, November 15, 2018 +** + +### Introduction + +In this lab, you will implement `spawn`, a library call that loads and runs on-disk executables. You will then flesh out your kernel and library operating system enough to run a shell on the console. These features need a file system, and this lab introduces a simple read/write file system. 
+
+#### Getting Started
+
+Use Git to fetch the latest version of the course repository, and then create a local branch called `lab5` based on our lab5 branch, `origin/lab5`:
+
+```
+ athena% cd ~/6.828/lab
+ athena% add git
+ athena% git pull
+ Already up-to-date.
+ athena% git checkout -b lab5 origin/lab5
+ Branch lab5 set up to track remote branch refs/remotes/origin/lab5.
+ Switched to a new branch "lab5"
+ athena% git merge lab4
+ Merge made by recursive.
+ .....
+ athena%
+```
+
+The main new component for this part of the lab is the file system environment, located in the new `fs` directory. Scan through all the files in this directory to get a feel for what is new. Also, there are some new file system-related source files in the `user` and `lib` directories:
+
+| File | Description |
+| --- | --- |
+| fs/fs.c | Code that manipulates the file system's on-disk structure. |
+| fs/bc.c | A simple block cache built on top of our user-level page fault handling facility. |
+| fs/ide.c | Minimal PIO-based (non-interrupt-driven) IDE driver code. |
+| fs/serv.c | The file system server that interacts with client environments using file system IPCs. |
+| lib/fd.c | Code that implements the general UNIX-like file descriptor interface. |
+| lib/file.c | The driver for the on-disk file type, implemented as a file system IPC client. |
+| lib/console.c | The driver for the console input/output file type. |
+| lib/spawn.c | Code skeleton of the spawn library call. |
+
+You should run the pingpong, primes, and forktree test cases from lab 4 again after merging in the new lab 5 code. You will need to comment out the `ENV_CREATE(fs_fs)` line in `kern/init.c` because `fs/fs.c` tries to do some I/O, which JOS does not allow yet. Similarly, temporarily comment out the call to `close_all()` in `lib/exit.c`; this function calls subroutines that you will implement later in the lab, and therefore will panic if called. If your lab 4 code doesn't contain any bugs, the test cases should run fine. Don't proceed until they work. 
Don't forget to un-comment these lines when you start Exercise 1. + +If they don't work, use git diff lab4 to review all the changes, making sure there isn't any code you wrote for lab4 (or before) missing from lab 5. Make sure that lab 4 still works. + +#### Lab Requirements + +As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. Additionally, you will need to write up brief answers to the questions posed in the lab and a short (e.g., one or two paragraph) description of what you did to solve your chosen challenge problem. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab5.txt` in the top level of your `lab5` directory before handing in your work. + +### File system preliminaries + +The file system you will work with is much simpler than most "real" file systems including that of xv6 UNIX, but it is powerful enough to provide the basic features: creating, reading, writing, and deleting files organized in a hierarchical directory structure. + +We are (for the moment anyway) developing only a single-user operating system, which provides protection sufficient to catch bugs but not to protect multiple mutually suspicious users from each other. Our file system therefore does not support the UNIX notions of file ownership or permissions. Our file system also currently does not support hard links, symbolic links, time stamps, or special device files like most UNIX file systems do. + +### On-Disk File System Structure + +Most UNIX file systems divide available disk space into two main types of regions: _inode_ regions and _data_ regions. UNIX file systems assign one _inode_ to each file in the file system; a file's inode holds critical meta-data about the file such as its `stat` attributes and pointers to its data blocks. 
The data regions are divided into much larger (typically 8KB or more) _data blocks_ , within which the file system stores file data and directory meta-data. Directory entries contain file names and pointers to inodes; a file is said to be _hard-linked_ if multiple directory entries in the file system refer to that file's inode. Since our file system will not support hard links, we do not need this level of indirection and therefore can make a convenient simplification: our file system will not use inodes at all and instead will simply store all of a file's (or sub-directory's) meta-data within the (one and only) directory entry describing that file. + +Both files and directories logically consist of a series of data blocks, which may be scattered throughout the disk much like the pages of an environment's virtual address space can be scattered throughout physical memory. The file system environment hides the details of block layout, presenting interfaces for reading and writing sequences of bytes at arbitrary offsets within files. The file system environment handles all modifications to directories internally as a part of performing actions such as file creation and deletion. Our file system does allow user environments to _read_ directory meta-data directly (e.g., with `read`), which means that user environments can perform directory scanning operations themselves (e.g., to implement the `ls` program) rather than having to rely on additional special calls to the file system. The disadvantage of this approach to directory scanning, and the reason most modern UNIX variants discourage it, is that it makes application programs dependent on the format of directory meta-data, making it difficult to change the file system's internal layout without changing or at least recompiling application programs as well. + +#### Sectors and Blocks + +Most disks cannot perform reads and writes at byte granularity and instead perform reads and writes in units of _sectors_. 
In JOS, sectors are 512 bytes each. File systems actually allocate and use disk storage in units of _blocks_. Be wary of the distinction between the two terms: _sector size_ is a property of the disk hardware, whereas _block size_ is an aspect of the operating system using the disk. A file system's block size must be a multiple of the sector size of the underlying disk. + +The UNIX xv6 file system uses a block size of 512 bytes, the same as the sector size of the underlying disk. Most modern file systems use a larger block size, however, because storage space has gotten much cheaper and it is more efficient to manage storage at larger granularities. Our file system will use a block size of 4096 bytes, conveniently matching the processor's page size. + +#### Superblocks + +![Disk layout][1] + +File systems typically reserve certain disk blocks at "easy-to-find" locations on the disk (such as the very start or the very end) to hold meta-data describing properties of the file system as a whole, such as the block size, disk size, any meta-data required to find the root directory, the time the file system was last mounted, the time the file system was last checked for errors, and so on. These special blocks are called _superblocks_. + +Our file system will have exactly one superblock, which will always be at block 1 on the disk. Its layout is defined by `struct Super` in `inc/fs.h`. Block 0 is typically reserved to hold boot loaders and partition tables, so file systems generally do not use the very first disk block. Many "real" file systems maintain multiple superblocks, replicated throughout several widely-spaced regions of the disk, so that if one of them is corrupted or the disk develops a media error in that region, the other superblocks can still be found and used to access the file system. + +#### File Meta-data + +![File structure][2] +The layout of the meta-data describing a file in our file system is described by `struct File` in `inc/fs.h`. 
This meta-data includes the file's name, size, type (regular file or directory), and pointers to the blocks comprising the file. As mentioned above, we do not have inodes, so this meta-data is stored in a directory entry on disk. Unlike in most "real" file systems, for simplicity we will use this one `File` structure to represent file meta-data as it appears _both on disk and in memory_. + +The `f_direct` array in `struct File` contains space to store the block numbers of the first 10 (`NDIRECT`) blocks of the file, which we call the file's _direct_ blocks. For small files up to 10*4096 = 40KB in size, this means that the block numbers of all of the file's blocks will fit directly within the `File` structure itself. For larger files, however, we need a place to hold the rest of the file's block numbers. For any file greater than 40KB in size, therefore, we allocate an additional disk block, called the file's _indirect block_ , to hold up to 4096/4 = 1024 additional block numbers. Our file system therefore allows files to be up to 1034 blocks, or just over four megabytes, in size. To support larger files, "real" file systems typically support _double-_ and _triple-indirect blocks_ as well. + +#### Directories versus Regular Files + +A `File` structure in our file system can represent either a _regular_ file or a directory; these two types of "files" are distinguished by the `type` field in the `File` structure. The file system manages regular files and directory-files in exactly the same way, except that it does not interpret the contents of the data blocks associated with regular files at all, whereas the file system interprets the contents of a directory-file as a series of `File` structures describing the files and subdirectories within the directory. + +The superblock in our file system contains a `File` structure (the `root` field in `struct Super`) that holds the meta-data for the file system's root directory. 
The contents of this directory-file are a sequence of `File` structures describing the files and directories located within the root directory of the file system. Any subdirectories in the root directory may in turn contain more `File` structures representing sub-subdirectories, and so on.
+
+### The File System
+
+The goal for this lab is not to have you implement the entire file system, but for you to implement only certain key components. In particular, you will be responsible for reading blocks into the block cache and flushing them back to disk; allocating disk blocks; mapping file offsets to disk blocks; and implementing read, write, and open in the IPC interface. Because you will not be implementing all of the file system yourself, it is very important that you familiarize yourself with the provided code and the various file system interfaces.
+
+### Disk Access
+
+The file system environment in our operating system needs to be able to access the disk, but we have not yet implemented any disk access functionality in our kernel. Instead of taking the conventional "monolithic" operating system strategy of adding an IDE disk driver to the kernel along with the necessary system calls to allow the file system to access it, we instead implement the IDE disk driver as part of the user-level file system environment. We will still need to modify the kernel slightly, in order to set things up so that the file system environment has the privileges it needs to implement disk access itself.
+
+It is easy to implement disk access in user space this way as long as we rely on polling, "programmed I/O" (PIO)-based disk access and do not use disk interrupts. It is possible to implement interrupt-driven device drivers in user mode as well (the L3 and L4 kernels do this, for example), but it is more difficult since the kernel must field device interrupts and dispatch them to the correct user-mode environment. 
+
+The x86 processor uses the IOPL bits in the EFLAGS register to determine whether protected-mode code is allowed to perform special device I/O instructions such as the IN and OUT instructions. Since all of the IDE disk registers we need to access are located in the x86's I/O space rather than being memory-mapped, giving "I/O privilege" to the file system environment is the only thing we need to do in order to allow the file system to access these registers. In effect, the IOPL bits in the EFLAGS register provide the kernel with a simple "all-or-nothing" method of controlling whether user-mode code can access I/O space. In our case, we want the file system environment to be able to access I/O space, but we do not want any other environments to be able to access I/O space at all.
+
+```
+Exercise 1. `i386_init` identifies the file system environment by passing the type `ENV_TYPE_FS` to your environment creation function, `env_create`. Modify `env_create` in `env.c`, so that it gives the file system environment I/O privilege, but never gives that privilege to any other environment.
+
+Make sure you can start the file system environment without causing a General Protection fault. You should pass the "fs i/o" test in make grade.
+```
+
+```
+Question
+
+ 1. Do you have to do anything else to ensure that this I/O privilege setting is saved and restored properly when you subsequently switch from one environment to another? Why?
+```
+
+
+Note that the `GNUmakefile` file in this lab sets up QEMU to use the file `obj/kern/kernel.img` as the image for disk 0 (typically "Drive C" under DOS/Windows) as before, and to use the (new) file `obj/fs/fs.img` as the image for disk 1 ("Drive D"). In this lab our file system should only ever touch disk 1; disk 0 is used only to boot the kernel. 
If you manage to corrupt either disk image in some way, you can reset both of them to their original, "pristine" versions simply by typing: + +``` + $ rm obj/kern/kernel.img obj/fs/fs.img + $ make +``` + +or by doing: + +``` + $ make clean + $ make +``` + +Challenge! Implement interrupt-driven IDE disk access, with or without DMA. You can decide whether to move the device driver into the kernel, keep it in user space along with the file system, or even (if you really want to get into the micro-kernel spirit) move it into a separate environment of its own. + +### The Block Cache + +In our file system, we will implement a simple "buffer cache" (really just a block cache) with the help of the processor's virtual memory system. The code for the block cache is in `fs/bc.c`. + +Our file system will be limited to handling disks of size 3GB or less. We reserve a large, fixed 3GB region of the file system environment's address space, from 0x10000000 (`DISKMAP`) up to 0xD0000000 (`DISKMAP+DISKMAX`), as a "memory mapped" version of the disk. For example, disk block 0 is mapped at virtual address 0x10000000, disk block 1 is mapped at virtual address 0x10001000, and so on. The `diskaddr` function in `fs/bc.c` implements this translation from disk block numbers to virtual addresses (along with some sanity checking). + +Since our file system environment has its own virtual address space independent of the virtual address spaces of all other environments in the system, and the only thing the file system environment needs to do is to implement file access, it is reasonable to reserve most of the file system environment's address space in this way. It would be awkward for a real file system implementation on a 32-bit machine to do this since modern disks are larger than 3GB. Such a buffer cache management approach may still be reasonable on a machine with a 64-bit address space. 
+
+Of course, it would take a long time to read the entire disk into memory, so instead we'll implement a form of _demand paging_ , wherein we only allocate pages in the disk map region and read the corresponding block from the disk in response to a page fault in this region. This way, we can pretend that the entire disk is in memory.
+
+```
+Exercise 2. Implement the `bc_pgfault` and `flush_block` functions in `fs/bc.c`. `bc_pgfault` is a page fault handler, just like the one you wrote in the previous lab for copy-on-write fork, except that its job is to load pages in from the disk in response to a page fault. When writing this, keep in mind that (1) `addr` may not be aligned to a block boundary and (2) `ide_read` operates in sectors, not blocks.
+
+The `flush_block` function should write a block out to disk _if necessary_. `flush_block` shouldn't do anything if the block isn't even in the block cache (that is, the page isn't mapped) or if it's not dirty. We will use the VM hardware to keep track of whether a disk block has been modified since it was last read from or written to disk. To see whether a block needs writing, we can just look to see if the `PTE_D` "dirty" bit is set in the `uvpt` entry. (The `PTE_D` bit is set by the processor in response to a write to that page; see 5.2.4.3 in [chapter 5][3] of the 386 reference manual.) After writing the block to disk, `flush_block` should clear the `PTE_D` bit using `sys_page_map`.
+
+Use make grade to test your code. Your code should pass "check_bc", "check_super", and "check_bitmap".
+```
+
+The `fs_init` function in `fs/fs.c` is a prime example of how to use the block cache. After initializing the block cache, it simply stores pointers into the disk map region in the `super` global variable. After this point, we can simply read from the `super` structure as if it were in memory and our page fault handler will read it from disk as necessary.
+
+```
+Challenge! The block cache has no eviction policy. 
Once a block is faulted into the cache, it never gets removed and will remain in memory forevermore. Add eviction to the buffer cache. Using the `PTE_A` "accessed" bits in the page tables, which the hardware sets on any access to a page, you can track approximate usage of disk blocks without the need to modify every place in the code that accesses the disk map region. Be careful with dirty blocks.
+```
+
+### The Block Bitmap
+
+After `fs_init` sets the `bitmap` pointer, we can treat `bitmap` as a packed array of bits, one for each block on the disk. See, for example, `block_is_free`, which simply checks whether a given block is marked free in the bitmap.
+
+```
+Exercise 3. Use `free_block` as a model to implement `alloc_block` in `fs/fs.c`, which should find a free disk block in the bitmap, mark it used, and return the number of that block. When you allocate a block, you should immediately flush the changed bitmap block to disk with `flush_block`, to help file system consistency.
+
+Use make grade to test your code. Your code should now pass "alloc_block".
+```
+
+### File Operations
+
+We have provided a variety of functions in `fs/fs.c` to implement the basic facilities you will need to interpret and manage `File` structures, scan and manage the entries of directory-files, and walk the file system from the root to resolve an absolute pathname. Read through _all_ of the code in `fs/fs.c` and make sure you understand what each function does before proceeding.
+
+```
+Exercise 4. Implement `file_block_walk` and `file_get_block`. `file_block_walk` maps from a block offset within a file to the pointer for that block in the `struct File` or the indirect block, very much like what `pgdir_walk` did for page tables. `file_get_block` goes one step further and maps to the actual disk block, allocating a new one if necessary.
+
+Use make grade to test your code. Your code should pass "file_open", "file_get_block", "file_flush/file_truncated/file rewrite", and "testfile". 
+``` + +`file_block_walk` and `file_get_block` are the workhorses of the file system. For example, `file_read` and `file_write` are little more than the bookkeeping atop `file_get_block` necessary to copy bytes between scattered blocks and a sequential buffer. + +``` +Challenge! The file system is likely to be corrupted if it gets interrupted in the middle of an operation (for example, by a crash or a reboot). Implement soft updates or journalling to make the file system crash-resilient and demonstrate some situation where the old file system would get corrupted, but yours doesn't. +``` + +### The file system interface + +Now that we have the necessary functionality within the file system environment itself, we must make it accessible to other environments that wish to use the file system. Since other environments can't directly call functions in the file system environment, we'll expose access to the file system environment via a _remote procedure call_ , or RPC, abstraction, built atop JOS's IPC mechanism. Graphically, here's what a call to the file system server (say, read) looks like + +``` + Regular env FS env + +---------------+ +---------------+ + | read | | file_read | + | (lib/fd.c) | | (fs/fs.c) | +...|.......|.......|...|.......^.......|............... + | v | | | | RPC mechanism + | devfile_read | | serve_read | + | (lib/file.c) | | (fs/serv.c) | + | | | | ^ | + | v | | | | + | fsipc | | serve | + | (lib/file.c) | | (fs/serv.c) | + | | | | ^ | + | v | | | | + | ipc_send | | ipc_recv | + | | | | ^ | + +-------|-------+ +-------|-------+ + | | + +-------------------+ + +``` + +Everything below the dotted line is simply the mechanics of getting a read request from the regular environment to the file system environment. Starting at the beginning, `read` (which we provide) works on any file descriptor and simply dispatches to the appropriate device read function, in this case `devfile_read` (we can have more device types, like pipes). 
`devfile_read` implements `read` specifically for on-disk files. This and the other `devfile_*` functions in `lib/file.c` implement the client side of the FS operations and all work in roughly the same way, bundling up arguments in a request structure, calling `fsipc` to send the IPC request, and unpacking and returning the results. The `fsipc` function simply handles the common details of sending a request to the server and receiving the reply. + +The file system server code can be found in `fs/serv.c`. It loops in the `serve` function, endlessly receiving a request over IPC, dispatching that request to the appropriate handler function, and sending the result back via IPC. In the read example, `serve` will dispatch to `serve_read`, which will take care of the IPC details specific to read requests such as unpacking the request structure and finally call `file_read` to actually perform the file read. + +Recall that JOS's IPC mechanism lets an environment send a single 32-bit number and, optionally, share a page. To send a request from the client to the server, we use the 32-bit number for the request type (the file system server RPCs are numbered, just like how syscalls were numbered) and store the arguments to the request in a `union Fsipc` on the page shared via the IPC. On the client side, we always share the page at `fsipcbuf`; on the server side, we map the incoming request page at `fsreq` (`0x0ffff000`). + +The server also sends the response back via IPC. We use the 32-bit number for the function's return code. For most RPCs, this is all they return. `FSREQ_READ` and `FSREQ_STAT` also return data, which they simply write to the page that the client sent its request on. There's no need to send this page in the response IPC, since the client shared it with the file system server in the first place. Also, in its response, `FSREQ_OPEN` shares with the client a new "Fd page". We'll return to the file descriptor page shortly. + +``` +Exercise 5. 
Implement `serve_read` in `fs/serv.c`. + +`serve_read`'s heavy lifting will be done by the already-implemented `file_read` in `fs/fs.c` (which, in turn, is just a bunch of calls to `file_get_block`). `serve_read` just has to provide the RPC interface for file reading. Look at the comments and code in `serve_set_size` to get a general idea of how the server functions should be structured. + +Use make grade to test your code. Your code should pass "serve_open/file_stat/file_close" and "file_read" for a score of 70/150. +``` + +``` +Exercise 6. Implement `serve_write` in `fs/serv.c` and `devfile_write` in `lib/file.c`. + +Use make grade to test your code. Your code should pass "file_write", "file_read after file_write", "open", and "large file" for a score of 90/150. +``` + +### Spawning Processes + +We have given you the code for `spawn` (see `lib/spawn.c`) which creates a new environment, loads a program image from the file system into it, and then starts the child environment running this program. The parent process then continues running independently of the child. The `spawn` function effectively acts like a `fork` in UNIX followed by an immediate `exec` in the child process. + +We implemented `spawn` rather than a UNIX-style `exec` because `spawn` is easier to implement from user space in "exokernel fashion", without special help from the kernel. Think about what you would have to do in order to implement `exec` in user space, and be sure you understand why it is harder. + +``` +Exercise 7. `spawn` relies on the new syscall `sys_env_set_trapframe` to initialize the state of the newly created environment. Implement `sys_env_set_trapframe` in `kern/syscall.c` (don't forget to dispatch the new system call in `syscall()`). + +Test your code by running the `user/spawnhello` program from `kern/init.c`, which will attempt to spawn `/hello` from the file system. + +Use make grade to test your code. +``` + +``` +Challenge! Implement Unix-style `exec`. 
+``` + +``` +Challenge! Implement `mmap`-style memory-mapped files and modify `spawn` to map pages directly from the ELF image when possible. +``` + +### Sharing library state across fork and spawn + +The UNIX file descriptors are a general notion that also encompasses pipes, console I/O, etc. In JOS, each of these device types has a corresponding `struct Dev`, with pointers to the functions that implement read/write/etc. for that device type. `lib/fd.c` implements the general UNIX-like file descriptor interface on top of this. Each `struct Fd` indicates its device type, and most of the functions in `lib/fd.c` simply dispatch operations to functions in the appropriate `struct Dev`. + +`lib/fd.c` also maintains the _file descriptor table_ region in each application environment's address space, starting at `FDTABLE`. This area reserves a page's worth (4KB) of address space for each of the up to `MAXFD` (currently 32) file descriptors the application can have open at once. At any given time, a particular file descriptor table page is mapped if and only if the corresponding file descriptor is in use. Each file descriptor also has an optional "data page" in the region starting at `FILEDATA`, which devices can use if they choose. + +We would like to share file descriptor state across `fork` and `spawn`, but file descriptor state is kept in user-space memory. Right now, on `fork`, the memory will be marked copy-on-write, so the state will be duplicated rather than shared. (This means environments won't be able to seek in files they didn't open themselves and that pipes won't work across a fork.) On `spawn`, the memory will be left behind, not copied at all. (Effectively, the spawned environment starts with no open file descriptors.) + +We will change `fork` to know that certain regions of memory are used by the "library operating system" and should always be shared. 
Rather than hard-code a list of regions somewhere, we will set an otherwise-unused bit in the page table entries (just like we did with the `PTE_COW` bit in `fork`).
+
+We have defined a new `PTE_SHARE` bit in `inc/lib.h`. This bit is one of the three PTE bits that are marked "available for software use" in the Intel and AMD manuals. We will establish the convention that if a page table entry has this bit set, the PTE should be copied directly from parent to child in both `fork` and `spawn`. Note that this is different from marking it copy-on-write: as described in the first paragraph, we want to make sure to _share_ updates to the page.
+
+```
+Exercise 8. Change `duppage` in `lib/fork.c` to follow the new convention. If the page table entry has the `PTE_SHARE` bit set, just copy the mapping directly. (You should use `PTE_SYSCALL`, not `0xfff`, to mask out the relevant bits from the page table entry. `0xfff` picks up the accessed and dirty bits as well.)
+
+Likewise, implement `copy_shared_pages` in `lib/spawn.c`. It should loop through all page table entries in the current process (just like `fork` did), copying any page mappings that have the `PTE_SHARE` bit set into the child process.
+```
+
+Use make run-testpteshare to check that your code is behaving properly. You should see lines that say "`fork handles PTE_SHARE right`" and "`spawn handles PTE_SHARE right`".
+
+Use make run-testfdsharing to check that file descriptors are shared properly. You should see lines that say "`read in child succeeded`" and "`read in parent succeeded`".
+
+### The keyboard interface
+
+For the shell to work, we need a way to type at it. QEMU has been displaying output we write to the CGA display and the serial port, but so far we've only taken input while in the kernel monitor. In QEMU, input typed in the graphical window appears as input from the keyboard to JOS, while input typed to the console appears as characters on the serial port. 
`kern/console.c` already contains the keyboard and serial drivers that have been used by the kernel monitor since lab 1, but now you need to attach these to the rest of the system. + +``` +Exercise 9. In your `kern/trap.c`, call `kbd_intr` to handle trap `IRQ_OFFSET+IRQ_KBD` and `serial_intr` to handle trap `IRQ_OFFSET+IRQ_SERIAL`. +``` + +We implemented the console input/output file type for you, in `lib/console.c`. `kbd_intr` and `serial_intr` fill a buffer with the recently read input while the console file type drains the buffer (the console file type is used for stdin/stdout by default unless the user redirects them). + +Test your code by running make run-testkbd and type a few lines. The system should echo your lines back to you as you finish them. Try typing in both the console and the graphical window, if you have both available. + +### The Shell + +Run make run-icode or make run-icode-nox. This will run your kernel and start `user/icode`. `icode` execs `init`, which will set up the console as file descriptors 0 and 1 (standard input and standard output). It will then spawn `sh`, the shell. You should be able to run the following commands: + +``` + echo hello world | cat + cat lorem |cat + cat lorem |num + cat lorem |num |num |num |num |num + lsfd +``` + +Note that the user library routine `cprintf` prints straight to the console, without using the file descriptor code. This is great for debugging but not great for piping into other programs. To print output to a particular file descriptor (for example, 1, standard output), use `fprintf(1, "...", ...)`. `printf("...", ...)` is a short-cut for printing to FD 1. See `user/lsfd.c` for examples. + +``` +Exercise 10. + +The shell doesn't support I/O redirection. It would be nice to run sh +``` + +Only a small part of the message resembles JSON as we know it today. The message itself is actually an HTML document containing some JavaScript. 
The part that resembles JSON is just a JavaScript object literal being passed to a function called `receive()`. + +Crockford and Morningstar had decided that they could abuse an HTML frame to send themselves data. They could point a frame at a URL that would return an HTML document like the one above. When the HTML was received, the JavaScript would be run, passing the object literal back to the application. This worked as long as you were careful to sidestep browser protections preventing a sub-window from accessing its parent; you can see that Crockford and Morningstar did that by explicitly setting the document domain. (This frame-based technique, sometimes called the hidden frame technique, was commonly used in the late 90s before the widespread implementation of XMLHttpRequest.) + +The amazing thing about the first JSON message is that it’s not obviously the first usage of a new kind of data format at all. It’s just JavaScript! In fact the idea of using JavaScript this way is so straightforward that Crockford himself has said that he wasn’t the first person to do it—he claims that somebody at Netscape was using JavaScript array literals to communicate information as early as 1996. Since the message is just JavaScript, it doesn’t require any kind of special parsing. The JavaScript interpreter can do it all. + +The first ever JSON message actually ran afoul of the JavaScript interpreter. JavaScript reserves an enormous number of words—there are 64 reserved words as of ECMAScript 6—and Crockford and Morningstar had unwittingly used one in their message. They had used `do` as a key, but `do` is reserved. Since JavaScript has so many reserved words, Crockford decided that, rather than avoid using all those reserved words, he would just mandate that all JSON keys be quoted. A quoted key would be treated as a string by the JavaScript interpreter, meaning that reserved words could be used safely. This is why JSON keys are quoted to this day. 
+ +Crockford and Morningstar realized they had something that could be used in all sorts of applications. They wanted to name their format “JSML”, for JavaScript Markup Language, but found that the acronym was already being used for something called Java Speech Markup Language. So they decided to go with “JavaScript Object Notation”, or JSON. They began pitching it to clients but soon found that clients were unwilling to take a chance on an unknown technology that lacked an official specification. So Crockford decided he would write one. + +In 2002, Crockford bought the domain [JSON.org][2] and put up the JSON grammar and an example implementation of a parser. The website is still up, though it now includes a prominent link to the JSON ECMA standard ratified in 2013. After putting up the website, Crockford did little more to promote JSON, but soon found that lots of people were submitting JSON parser implementations in all sorts of different programming languages. JSON’s lineage clearly tied it to JavaScript, but it became apparent that JSON was well-suited to data interchange between arbitrary pairs of languages. + +### Doing AJAX Wrong + +JSON got a big boost in 2005. That year, a web designer and developer named Jesse James Garrett coined the term “AJAX” in a blog post. He was careful to stress that AJAX wasn’t any one new technology, but rather “several technologies, each flourishing in its own right, coming together in powerful new ways.” AJAX was the name that Garrett was giving to a new approach to web application development that he had noticed gaining favor. His blog post went on to describe how developers could leverage JavaScript and XMLHttpRequest to build new kinds of applications that were more responsive and stateful than the typical web page. He pointed to Gmail and Flickr as examples of websites already relying on AJAX techniques. + +The “X” in “AJAX” stood for XML, of course. 
But in a follow-up Q&A post, Garrett pointed to JSON as an entirely acceptable alternative to XML. He wrote that “XML is the most fully-developed means of getting data in and out of an AJAX client, but there’s no reason you couldn’t accomplish the same effects using a technology like JavaScript Object Notation or any similar means of structuring data.” + +Developers indeed found that they could easily use JSON to build AJAX applications and many came to prefer it to XML. And so, ironically, the interest in AJAX led to an explosion in JSON’s popularity. It was around this time that JSON drew the attention of the blogosphere. + +In 2006, Dave Winer, a prolific blogger and the engineer behind a number of XML-based technologies such as RSS and XML-RPC, complained that JSON was reinventing XML for no good reason. Though one might think that a contest between data interchange formats would be unlikely to engender death threats, Winer wrote: + +> No doubt I can write a routine to parse [JSON], but look at how deep they went to re-invent, XML itself wasn’t good enough for them, for some reason (I’d love to hear the reason). Who did this travesty? Let’s find a tree and string them up. Now. + +It’s easy to understand Winer’s frustration. XML has never been widely loved. Even Winer has said that he does not love XML. But XML was designed to be a system that could be used by everyone for almost anything imaginable. To that end, XML is actually a meta-language that allows you to define domain-specific languages for individual applications—RSS, the web feed technology, and SOAP (Simple Object Access Protocol) are examples. Winer felt that it was important to work toward consensus because of all the benefits a common interchange format could bring. He felt that XML’s flexibility should be able to accommodate everybody’s needs. And yet here was JSON, a format offering no benefits over XML except those enabled by throwing out the cruft that made XML so flexible. 
+ +Crockford saw Winer’s blog post and left a comment on it. In response to the charge that JSON was reinventing XML, Crockford wrote, “The good thing about reinventing the wheel is that you can get a round one.” + +### JSON vs XML + +By 2014, JSON had been officially specified by both an ECMA standard and an RFC. It had its own MIME type. JSON had made it to the big leagues. + +Why did JSON become so much more popular than XML? + +On [JSON.org][2], Crockford summarizes some of JSON’s advantages over XML. He writes that JSON is easier for both humans and machines to understand, since its syntax is minimal and its structure is predictable. Other bloggers have focused on XML’s verbosity and “the angle bracket tax.” Each opening tag in XML must be matched with a closing tag, meaning that an XML document contains a lot of redundant information. This can make an XML document much larger than an equivalent JSON document when uncompressed, but, perhaps more importantly, it also makes an XML document harder to read. + +Crockford has also claimed that another enormous advantage for JSON is that JSON was designed as a data interchange format. It was meant to carry structured information between programs from the very beginning. XML, though it has been used for the same purpose, was originally designed as a document markup language. It evolved from SGML (Standard Generalized Markup Language), which in turn evolved from a markup language called Scribe, intended as a word processing system similar to LaTeX. In XML, a tag can contain what is called “mixed content,” or text with inline tags surrounding words or phrases. This recalls the image of an editor marking up a manuscript with a red or blue pen, which is arguably the central metaphor of a markup language. JSON, on the other hand, does not support a clear analogue to mixed content, but that means that its structure can be simpler. 
A document is best modeled as a tree, but by throwing out the document idea Crockford could limit JSON to dictionaries and arrays, the basic and familiar elements all programmers use to build their programs. + +Finally, my own hunch is that people disliked XML because it was confusing, and it was confusing because it seemed to come in so many different flavors. At first blush, it’s not obvious where the line is between XML proper and its sub-languages like RSS, ATOM, SOAP, or SVG. The first lines of a typical XML document establish the XML version and then the particular sub-language the XML document should conform to. That is a lot of variation to account for already, especially when compared to JSON, which is so straightforward that no new version of the JSON specification is ever expected to be written. The designers of XML, in their attempt to make XML the one data interchange format to rule them all, fell victim to that classic programmer’s pitfall: over-engineering. XML was so generalized that it was hard to use for something simple. + +In 2000, a campaign was launched to get HTML to conform to the XML standard. A specification was published for XML-compliant HTML, thereafter known as XHTML. Some browser vendors immediately started supporting the new standard, but it quickly became obvious that the vast HTML-producing public were unwilling to revise their habits. The new standard called for stricter validation of XHTML than had been the norm for HTML, but too many websites depended on HTML’s forgiving rules. By 2009, an attempt to write a second version of the XHTML standard was aborted when it became clear that the future of HTML was going to be HTML5, a standard that did not insist on XML compliance. + +If the XHTML effort had succeeded, then maybe XML would have become the common data format that its designers hoped it would be. Imagine a world in which HTML documents and API responses had the exact same structure. 
In such a world, JSON might not have become as ubiquitous as it is today. But I read the failure of XHTML as a kind of moral defeat for the XML camp. If XML wasn’t the best tool for HTML, then maybe there were better tools out there for other applications also. In that world, our world, it is easy to see how a format as simple and narrowly tailored as JSON could find great success. + +If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][3] on Twitter or subscribe to the [RSS feed][4] to make sure you know when a new post is out. + +-------------------------------------------------------------------------------- + +via: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html + +作者:[Two-Bit History][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twobithistory.org +[b]: https://github.com/lujun9972 +[1]: https://twobithistory.org/images/json.svg +[2]: http://JSON.org +[3]: https://twitter.com/TwoBitHistory +[4]: https://twobithistory.org/feed.xml From 0b3dca8ac93c8cff7ba01f88ad8f3cf50a590b67 Mon Sep 17 00:00:00 2001 From: lctt-bot Date: Wed, 24 Oct 2018 17:00:34 +0000 Subject: [PATCH 666/736] Revert "Update 20180209 How writing can change your career for the better, even if you don-t identify as a writer.md" This reverts commit fb10ea9fff64158f48906d13a426f1499ebc0229. 
--- ...er for the better, even if you don-t identify as a writer.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md b/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md index 98d57bcca3..55618326c6 100644 --- a/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md +++ b/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md @@ -1,4 +1,4 @@ -How writing can change your career for the better, even if you don't identify as a writer Translating by FelixYFZ +How writing can change your career for the better, even if you don't identify as a writer ====== Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it and read a little bit and then add it to the pile of clutter next to your bed? From 0975aa41ce1f6a49845e6e34b1c193d67c25951b Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 25 Oct 2018 06:51:22 +0800 Subject: [PATCH 667/736] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Whatever=20Happen?= =?UTF-8?q?ed=20to=20the=20Semantic=20Web=3F?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...7 Whatever Happened to the Semantic Web.md | 106 ++++++++++++++++++ 1 file changed, 106 insertions(+) create mode 100644 sources/talk/20180527 Whatever Happened to the Semantic Web.md diff --git a/sources/talk/20180527 Whatever Happened to the Semantic Web.md b/sources/talk/20180527 Whatever Happened to the Semantic Web.md new file mode 100644 index 0000000000..22d48c150a --- /dev/null +++ b/sources/talk/20180527 Whatever Happened to the Semantic Web.md @@ -0,0 +1,106 @@ +Whatever Happened to the Semantic Web? 
+====== +In 2001, Tim Berners-Lee, inventor of the World Wide Web, published an article in Scientific American. Berners-Lee, along with two other researchers, Ora Lassila and James Hendler, wanted to give the world a preview of the revolutionary new changes they saw coming to the web. Since its introduction only a decade before, the web had fast become the world’s best means for sharing documents with other people. Now, the authors promised, the web would evolve to encompass not just documents but every kind of data one could imagine. + +They called this new web the Semantic Web. The great promise of the Semantic Web was that it would be readable not just by humans but also by machines. Pages on the web would be meaningful to software programs—they would have semantics—allowing programs to interact with the web the same way that people do. Programs could exchange data across the Semantic Web without having to be explicitly engineered to talk to each other. According to Berners-Lee, Lassila, and Hendler, a typical day living with the myriad conveniences of the Semantic Web might look something like this: + +> The entertainment system was belting out the Beatles’ “We Can Work It Out” when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctor’s office: “Mom needs to see a specialist and then has to have a series of physical therapy sessions. Biweekly or something. I’m going to have my agent set up the appointments.” Pete immediately agreed to share the chauffeuring. At the doctor’s office, Lucy instructed her Semantic Web agent through her handheld Web browser. The agent promptly retrieved the information about Mom’s prescribed treatment within a 20-mile radius of her home and with a rating of excellent or very good on trusted rating services. 
It then began trying to find a match between available appointment times (supplied by the agents of individual providers through their Web sites) and Pete’s and Lucy’s busy schedules. + +The vision was that the Semantic Web would become a playground for intelligent “agents.” These agents would automate much of the work that the world had only just learned to do on the web. + +![][1] + +For a while, this vision enticed a lot of people. After new technologies such as AJAX led to the rise of what Silicon Valley called Web 2.0, Berners-Lee began referring to the Semantic Web as Web 3.0. Many thought that the Semantic Web was indeed the inevitable next step. A New York Times article published in 2006 quotes a speech Berners-Lee gave at a conference in which he said that the extant web would, twenty years in the future, be seen as only the “embryonic” form of something far greater. A venture capitalist, also quoted in the article, claimed that the Semantic Web would be “profound,” and ultimately “as obvious as the web seems obvious to us today.” + +Of course, the Semantic Web we were promised has yet to be delivered. In 2018, we have “agents” like Siri that can do certain tasks for us. But Siri can only do what it can because engineers at Apple have manually hooked it up to a medley of web services each capable of answering only a narrow category of questions. An important consequence is that, without being large and important enough for Apple to care, you cannot advertise your services directly to Siri from your own website. Unlike the physical therapists that Berners-Lee and his co-authors imagined would be able to hang out their shingles on the web, today we are stuck with giant, centralized repositories of information. Today’s physical therapists must enter information about their practice into Google or Yelp, because those are the only services that the smartphone agents know how to use and the only ones human beings will bother to check. 
The key difference between our current reality and the promised Semantic future is best captured by this throwaway aside in the excerpt above: “…appointment times (supplied by the agents of individual providers through **their** Web sites)…” + +In fact, over the last decade, the web has not only failed to become the Semantic Web but also threatened to recede as an idea altogether. We now hardly ever talk about “the web” and instead talk about “the internet,” which as of 2016 has become such a common term that newspapers no longer capitalize it. (To be fair, they stopped capitalizing “web” too.) Some might still protest that the web and the internet are two different things, but the distinction gets less clear all the time. The web we have today is slowly becoming a glorified app store, just the easiest way among many to download software that communicates with distant servers using closed protocols and schemas, making it functionally identical to the software ecosystem that existed before the web. How did we get here? If the effort to build a Semantic Web had succeeded, would the web have looked different today? Or have there been so many forces working against a decentralized web for so long that the Semantic Web was always going to be stillborn? + +### Semweb Hucksters and Their Metacrap + +To some more practically minded engineers, the Semantic Web was, from the outset, a utopian dream. + +The basic idea behind the Semantic Web was that everyone would use a new set of standards to annotate their webpages with little bits of XML. These little bits of XML would have no effect on the presentation of the webpage, but they could be read by software programs to divine meaning that otherwise would only be available to humans. + +The bits of XML were a way of expressing metadata about the webpage. 
We are all familiar with metadata in the context of a file system: When we look at a file on our computers, we can see when it was created, when it was last updated, and whom it was originally created by. Likewise, webpages on the Semantic Web would be able to tell your browser who authored the page and perhaps even where that person went to school, or where that person is currently employed. In theory, this information would allow Semantic Web browsers to answer queries across a large collection of webpages. In their article for Scientific American, Berners-Lee and his co-authors explain that you could, for example, use the Semantic Web to look up a person you met at a conference whose name you only partially remember. + +Cory Doctorow, a blogger and digital rights activist, published an influential essay in 2001 that pointed out the many problems with depending on voluntarily supplied metadata. A world of “exhaustive, reliable” metadata would be wonderful, he argued, but such a world was “a pipe-dream, founded on self-delusion, nerd hubris, and hysterically inflated market opportunities.” Doctorow had found himself in a series of debates over the Semantic Web at tech conferences and wanted to catalog the serious issues that the Semantic Web enthusiasts (Doctorow calls them “semweb hucksters”) were overlooking. The essay, titled “Metacrap,” identifies seven problems, among them the obvious fact that most web users were likely to provide either no metadata at all or else lots of misleading metadata meant to draw clicks. Even if users were universally diligent and well-intentioned, in order for the metadata to be robust and reliable, users would all have to agree on a single representation for each important concept. Doctorow argued that in some cases a single representation might not be appropriate, desirable, or fair to all users. 
+ +Indeed, the web had already seen people abusing the HTML `<meta>` tag (introduced at least as early as HTML 4) in an attempt to improve the visibility of their webpages in search results. In a 2004 paper, Ben Munat, then an academic at Evergreen State College, explains how search engines once experimented with using keywords supplied via the `<meta>` tag to index results, but soon discovered that unscrupulous webpage authors were including tags unrelated to the actual content of their webpage. As a result, search engines came to ignore the `<meta>` tag in favor of using complex algorithms to analyze the actual content of a webpage. Munat concludes that a general-purpose Semantic Web is unworkable, and that the focus should be on specific domains within medicine and science. + +Others have also seen the Semantic Web project as tragically flawed, though they have located the flaw elsewhere. Aaron Swartz, the famous programmer and another digital rights activist, wrote in an unfinished book about the Semantic Web published after his death that Doctorow was “attacking a strawman.” Nobody expected that metadata on the web would be thoroughly accurate and reliable, but the Semantic Web, or at least a more realistically scoped version of it, remained possible. The problem, in Swartz’ view, was the “formalizing mindset of mathematics and the institutional structure of academics” that the “semantic Webheads” brought to bear on the challenge. In forums like the World Wide Web Consortium (W3C), a huge amount of effort and discussion went into creating standards before there were any applications out there to standardize. And the standards that emerged from these “Talmudic debates” were so abstract that few of them ever saw widespread adoption. 
The few that did, like XML, were “uniformly scourges on the planet, offenses against hardworking programmers that have pushed out sensible formats (like JSON) in favor of overly-complicated hairballs with no basis in reality.” The Semantic Web might have thrived if, like the original web, its standards were eagerly adopted by everyone. But that never happened because—as [has been discussed][2] on this blog before—the putative benefits of something like XML are not easy to sell to a programmer when the alternatives are both entirely sufficient and much easier to understand. + +### Building the Semantic Web + +If the Semantic Web was not an outright impossibility, it was always going to require the contributions of lots of clever people working in concert. + +The long effort to build the Semantic Web has been said to consist of four phases. The first phase, which lasted from 2001 to 2005, was the golden age of Semantic Web activity. Between 2001 and 2005, the W3C issued a slew of new standards laying out the foundational technologies of the Semantic future. + +The most important of these was the Resource Description Framework (RDF). The W3C issued the first version of the RDF standard in 2004, but RDF had been floating around since 1997, when a W3C working group introduced it in a draft specification. RDF was originally conceived of as a tool for modeling metadata and was partly based on earlier attempts by Ramanathan Guha, an Apple engineer, to develop a metadata system for files stored on Apple computers. The Semantic Web working groups at W3C repurposed RDF to represent arbitrary kinds of general knowledge. + +RDF would be the grammar in which Semantic webpages expressed information. The grammar is a simple one: Facts about the world are expressed in RDF as triplets of subject, predicate, and object. Tim Bray, who worked with Ramanathan Guha on an early version of RDF, gives the following example, describing TV shows and movies: + +``` +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix ex: <http://example.org/> . + +ex:vincent_donofrio ex:starred_in ex:law_and_order_ci . +ex:law_and_order_ci rdf:type ex:tv_show . +ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix . +``` + +The syntax is not important, especially since RDF can be represented in a number of formats, including XML and JSON. This example is in a format called Turtle, which expresses RDF triplets as straightforward sentences terminated by periods. The three essential sentences, which appear above after the `@prefix` preamble, state three facts: Vincent Donofrio starred in Law and Order, Law and Order is a type of TV Show, and the movie The Thirteenth Floor has a similar plot as The Matrix. (If you don’t know who Vincent Donofrio is and have never seen The Thirteenth Floor, I, too, was watching Nickelodeon and sipping Capri Suns in 1999.) + +Other specifications finalized and drafted during this first era of Semantic Web development describe all the ways in which RDF can be used. RDF in Attributes (RDFa) defines how RDF can be embedded in HTML so that browsers, search engines, and other programs can glean meaning from a webpage. RDF Schema and another standard called OWL allow RDF authors to demarcate the boundary between valid and invalid RDF statements in their RDF documents. RDF Schema and OWL, in other words, are tools for creating what are known as ontologies, explicit specifications of what can and cannot be said within a specific domain. An ontology might include a rule, for example, expressing that no person can be the mother of another person without also being a parent of that person. The hope was that these ontologies would be widely used not only to check the accuracy of RDF found in the wild but also to make inferences about omitted information. + +In 2006, Tim Berners-Lee posted a short article in which he argued that the existing work on Semantic Web standards needed to be supplemented by a concerted effort to make semantic data available on the web. 
Furthermore, once on the web, it was important that semantic data link to other kinds of semantic data, ensuring the rise of a data-based web as interconnected as the existing web. Berners-Lee used the term “linked data” to describe this ideal scenario. Though “linked data” was in one sense just a recapitulation of the original vision for the Semantic Web, it became a term that people could rally around and thus amounted to a rebranding of the Semantic Web project. + +Berners-Lee’s article launched the second phase of the Semantic Web’s development, where the focus shifted from setting standards and building toy examples to creating and popularizing large RDF datasets. Perhaps the most successful of these datasets was [DBpedia][3], a giant repository of RDF triplets extracted from Wikipedia articles. DBpedia, which made heavy use of the Semantic Web standards that had been developed in the first half of the 2000s, was a standout example of what could be accomplished using the W3C’s new formats. Today DBpedia describes 4.58 million entities and is used by organizations like the NY Times, BBC, and IBM, which employed DBpedia as a knowledge source for IBM Watson, the Jeopardy-winning artificial intelligence system. + +![][4] + +The third phase of the Semantic Web’s development involved adapting the W3C’s standards to fit the actual practices and preferences of web developers. By 2008, JSON had begun its meteoric rise to popularity. Whereas XML came packaged with a bunch of associated technologies of indeterminate purpose (XLST, XPath, XQuery, XLink), JSON was just JSON. It was less verbose and more readable. Manu Sporny, an entrepreneur and member of the W3C, had already started using JSON at his company and wanted to find an easy way for RDFa and JSON to work together. The result would be JSON-LD, which in essence was RDF reimagined for a world that had chosen JSON over XML. Sporny, together with his CTO, Dave Longley, issued a draft specification of JSON-LD in 2010. 
For the next few years, JSON-LD and an updated RDF specification would be the primary focus of Semantic Web work at the W3C. JSON-LD could be used on its own or it could be embedded within a `