published/20140114 Caffeinated 6.828:Lab 2 Memory Management.md
@ -0,0 +1,230 @@
|
||||
Caffeinated 6.828:实验 2:内存管理
|
||||
======
|
||||
|
||||
### 简介
|
||||
|
||||
在本实验中,你将为你的操作系统写内存管理方面的代码。内存管理由两部分组成。
|
||||
|
||||
第一部分是内核的物理内存分配器,内核通过它来分配内存,以及在不需要时释放所分配的内存。分配器以<ruby>页<rt>page</rt></ruby>为单位分配内存,每个页的大小为 4096 字节。你的任务是去维护那个数据结构,它负责记录物理页的分配和释放,以及每个分配的页有多少进程共享它。本实验中你将要写出分配和释放内存页的全套代码。
|
||||
|
||||
第二部分是虚拟内存的管理,它负责将内核和用户软件使用的虚拟地址映射到物理内存地址。在使用内存时,x86 硬件的内存管理单元(MMU)会查阅一组页表来执行这种映射。接下来你将要按照我们提供的规范去修改 JOS,以设置 MMU 的页表。
|
||||
|
||||
#### 预备知识
|
||||
|
||||
在本实验及后面的实验中,你将逐步构建你的内核。我们将会为你提供一些附加的资源。使用 Git 去获取这些资源、提交自[实验 1][1] 以来的改变(如有需要的话)、获取课程仓库的最新版本、以及在我们的实验 2 (`origin/lab2`)的基础上创建一个称为 `lab2` 的本地分支:
|
||||
|
||||
```c
|
||||
athena% cd ~/6.828/lab
|
||||
athena% add git
|
||||
athena% git pull
|
||||
Already up-to-date.
|
||||
athena% git checkout -b lab2 origin/lab2
|
||||
Branch lab2 set up to track remote branch refs/remotes/origin/lab2.
|
||||
Switched to a new branch "lab2"
|
||||
athena%
|
||||
```
|
||||
|
||||
上面的 `git checkout -b` 命令其实做了两件事情:首先它创建了一个本地分支 `lab2`,它跟踪给我们提供课程内容的远程分支 `origin/lab2` ,第二件事情是,它改变你的 `lab` 目录的内容以反映 `lab2` 分支上存储的文件的变化。Git 允许你在已存在的两个分支之间使用 `git checkout *branch-name*` 命令去切换,但是在你切换到另一个分支之前,你应该去提交那个分支上你做的任何有意义的变更。
|
||||
|
||||
现在,你需要将你在 `lab1` 分支中的改变合并到 `lab2` 分支中,命令如下:
|
||||
|
||||
```c
|
||||
athena% git merge lab1
|
||||
Merge made by recursive.
|
||||
kern/kdebug.c | 11 +++++++++--
|
||||
kern/monitor.c | 19 +++++++++++++++++++
|
||||
lib/printfmt.c | 7 +++----
|
||||
3 files changed, 31 insertions(+), 6 deletions(-)
|
||||
athena%
|
||||
```
|
||||
|
||||
在某些情况下,Git 可能不知道如何将你的更改与新的实验任务合并(例如,第二个实验任务修改了你在第一个实验中改动过的某些代码)。在这种情况下,`git merge` 会告诉你哪些文件发生了冲突,你必须先解决冲突(编辑发生冲突的文件),然后使用 `git commit -a` 重新提交。
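
一次典型的冲突处理过程大致如下(输出仅为示意,实际发生冲突的文件取决于你自己的改动):

```
athena% git merge lab1
Auto-merging kern/monitor.c
CONFLICT (content): Merge conflict in kern/monitor.c
Automatic merge failed; fix conflicts and then commit the result.
athena% nano kern/monitor.c      # 手动解决文件中的 <<<<<<< ======= >>>>>>> 冲突标记
athena% git commit -a
```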
|
||||
|
||||
实验 2 包含如下的新源代码,后面你将逐个了解它们:
|
||||
|
||||
- `inc/memlayout.h`
|
||||
- `kern/pmap.c`
|
||||
- `kern/pmap.h`
|
||||
- `kern/kclock.h`
|
||||
- `kern/kclock.c`
|
||||
|
||||
`memlayout.h` 描述了你必须通过修改 `pmap.c` 来实现的虚拟地址空间布局。`memlayout.h` 和 `pmap.h` 定义了 `PageInfo` 数据结构,你将用它来跟踪哪些物理内存页是空闲的。`kclock.c` 和 `kclock.h` 操作 PC 上由电池供电的时钟和 CMOS RAM 硬件,BIOS 在其中记录了 PC 上安装的物理内存数量以及其它一些信息。`pmap.c` 中的代码需要读取这个设备硬件,以算出这台机器上安装了多少物理内存,不过这部分代码已经为你写好了:你不需要了解 CMOS 硬件工作原理的细节。
|
||||
|
||||
特别需要注意的是 `memlayout.h` 和 `pmap.h`,因为本实验需要你去使用和理解的大部分内容都包含在这两个文件中。你或许还需要去看看 `inc/mmu.h` 这个文件,因为它也包含了本实验中用到的许多定义。
|
||||
|
||||
开始本实验之前,记得去添加 `exokernel` 以获取 QEMU 的 6.828 版本。
|
||||
|
||||
#### 提交流程
|
||||
|
||||
当你准备好提交你的实验时,将 `answers-lab2.txt` 文件添加到 Git 仓库,提交你的改变,然后运行 `make handin`。
|
||||
|
||||
```
|
||||
athena% git add answers-lab2.txt
|
||||
athena% git commit -am "my answer to lab2"
|
||||
[lab2 a823de9] my answer to lab2 4 files changed, 87 insertions(+), 10 deletions(-)
|
||||
athena% make handin
|
||||
```
|
||||
|
||||
正如前面所说的,我们将使用一个评级程序来分级你的解决方案,你可以在 `lab` 目录下运行 `make grade`,使用评级程序来测试你的内核。为了完成你的实验,你可以改变任何你需要的内核源代码和头文件。但毫无疑问的是,你不能以任何形式去改变或破坏评级代码。
|
||||
|
||||
### 第 1 部分:物理页面管理
|
||||
|
||||
操作系统必须跟踪物理内存页是否使用的状态。JOS 以“页”为最小粒度来管理 PC 的物理内存,以便于它使用 MMU 去映射和保护每个已分配的内存片段。
|
||||
|
||||
现在,你将要写内存的物理页分配器的代码。它将使用 `struct PageInfo` 对象的链表来保持对物理页的状态跟踪,每个对象都对应到一个物理内存页。在你能够编写剩下的虚拟内存实现代码之前,你需要先编写物理内存页面分配器,因为你的页表管理代码将需要去分配物理内存来存储页表。
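
作为参考,`inc/memlayout.h` 中的 `struct PageInfo` 大致是下面这个样子(此处仅为说明,请以课程仓库中的实际定义为准):

```c
// 每个 struct PageInfo 对应一个物理页(示意,非逐字摘录)
struct PageInfo {
    struct PageInfo *pp_link;   // 空闲链表中的下一个空闲页
    uint16_t pp_ref;            // 指向该物理页的引用(指针)计数
};
```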
|
||||
|
||||
> **练习 1**
|
||||
>
|
||||
> 在文件 `kern/pmap.c` 中,你需要去实现以下函数的代码(或许要按给定的顺序来实现)。
|
||||
>
|
||||
> - `boot_alloc()`
|
||||
> - `mem_init()`(只要能够调用 `check_page_free_list()` 即可)
|
||||
> - `page_init()`
|
||||
> - `page_alloc()`
|
||||
> - `page_free()`
|
||||
>
|
||||
> `check_page_free_list()` 和 `check_page_alloc()` 可以测试你的物理内存页分配器。你将需要引导 JOS 然后去看一下 `check_page_alloc()` 是否报告成功即可。如果没有报告成功,修复你的代码直到成功为止。你可以添加你自己的 `assert()` 以帮助你去验证是否符合你的预期。
|
||||
|
||||
本实验以及所有的 6.828 实验,都需要你做一些“侦探”工作,自己去弄清楚到底需要做什么。这份说明并不会描述你必须添加到 JOS 中的代码的所有细节。请在你需要修改的那部分 JOS 源代码中查找注释;这些注释中往往包含规范和提示。你也可能需要查阅 JOS 的其它相关部分、Intel 的技术手册、以及你的 6.004 或 6.033 课程笔记。
|
||||
|
||||
### 第 2 部分:虚拟内存
|
||||
|
||||
在你开始动手之前,需要先熟悉 x86 内存管理架构的保护模式:即分段和页面转换。
|
||||
|
||||
> **练习 2**
|
||||
>
|
||||
> 如果你对 x86 的保护模式内存管理还不熟悉,可以查看 [Intel 80386 参考手册][2]的第 5 章和第 6 章。请仔细阅读其中关于页面转换和基于页面的保护的小节(5.2 和 6.4)。我们建议你也浏览一下关于分段的章节;虽然 JOS 使用分页硬件来实现虚拟内存和保护,但段转换和基于段的保护在 x86 上是无法禁用的,因此你需要对它们有基本的了解。
|
||||
|
||||
#### 虚拟地址、线性地址和物理地址
|
||||
|
||||
在 x86 的专用术语中,一个<ruby>虚拟地址<rt>virtual address</rt></ruby>是由一个段选择器和在段中的偏移量组成。一个<ruby>线性地址<rt>linear address</rt></ruby>是在页面转换之前、段转换之后得到的一个地址。一个<ruby>物理地址<rt>physical address</rt></ruby>是段和页面转换之后得到的最终地址,它最终将进入你的物理内存中的硬件总线。
|
||||
|
||||
![](https://ws1.sinaimg.cn/large/0069RVTdly1fuxgrc398jj30gx04bgm1.jpg)
|
||||
|
||||
一个 C 指针是虚拟地址的“偏移量”部分。在 `boot/boot.S` 中我们安装了一个<ruby>全局描述符表<rt>Global Descriptor Table</rt></ruby>(GDT),它通过设置所有的段基址为 0,并且限制为 `0xffffffff` 来有效地禁用段转换。因此“段选择器”并不会生效,而线性地址总是等于虚拟地址的偏移量。在实验 3 中,为了设置权限级别,我们将与段有更多的交互。但是对于内存转换,我们将在整个 JOS 实验中忽略段,只专注于页转换。
|
||||
|
||||
回顾[实验 1][1] 中的第 3 部分,我们安装了一个简单的页表,使内核能够在它的链接地址 `0xf0100000` 上运行,尽管它实际上被加载在物理内存中紧挨着 ROM BIOS 之上的 `0x00100000` 处。这个页表仅映射了 4MB 的内存。在本实验中,你将要为 JOS 设置的虚拟地址空间布局会把它扩展为:从虚拟地址 `0xf0000000` 开始映射物理内存的前 256MB,并映射虚拟地址空间中的许多其它区域。
|
||||
|
||||
> **练习 3**
|
||||
>
|
||||
> 虽然 GDB 只能通过虚拟地址访问 QEMU 的内存,但在设置虚拟内存的过程中,能够检查物理内存往往非常有用。请在实验工具指南中复习 QEMU 的[监视器命令][3],尤其是 `xp` 命令,它可以让你检查物理内存。要进入 QEMU 监视器,可以在终端中按 `Ctrl-a c`(再按一次相同的组合键即可切换回串行控制台)。
|
||||
>
|
||||
> 使用 QEMU 监视器的 `xp` 命令和 GDB 的 `x` 命令去检查相应的物理内存和虚拟内存,以确保你看到的是相同的数据。
|
||||
>
|
||||
> 我们的打过补丁的 QEMU 版本提供一个非常有用的 `info pg` 命令:它可以展示当前页表的一个具体描述,包括所有已映射的内存范围、权限、以及标志。原本的 QEMU 也提供一个 `info mem` 命令用于去展示一个概要信息,这个信息包含了已映射的虚拟内存范围和使用了什么权限。
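
举一个练习 3 的具体例子:在内核的初始映射建立之后,下面两条命令应当显示出同一处内存的内容(地址仅为示意,实际取决于你的映射):

```
(qemu) xp /4xw 0x00100000
(gdb) x/4xw 0xf0100000
```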
|
||||
|
||||
在 CPU 上运行的代码,一旦处于保护模式(这是在 `boot/boot.S` 中所做的第一件事情)中,是没有办法去直接使用一个线性地址或物理地址的。所有的内存引用都被解释为虚拟地址,然后由 MMU 来转换,这意味着在 C 语言中的指针都是虚拟地址。
|
||||
|
||||
JOS 内核经常需要把地址当作不透明的值或整数来处理,而不解引用它们,比如在物理内存分配器中。有时它们是虚拟地址,而有时是物理地址。为了在代码中表明这种区别,JOS 源文件使用了两种类型:类型 `uintptr_t` 表示不透明的虚拟地址,而类型 `physaddr_t` 表示物理地址。这两种类型其实都只是 32 位整数(`uint32_t`)的同义词,因此编译器不会阻止你把一种类型的值赋给另一种类型!不过,因为它们都是整数(而不是指针)类型,如果你试图直接解引用它们,编译器会报错。
|
||||
|
||||
JOS 内核可以先把 `uintptr_t` 强制转换为指针类型,然后再解引用它。相反,内核没有办法合理地解引用一个物理地址,因为 MMU 会转换所有的内存引用。如果你把一个 `physaddr_t` 转换为指针并解引用它,你或许能够对最终得到的地址进行加载和存储(硬件会把它解释为一个虚拟地址),但你得到的很可能不是你想要的那个内存位置。
|
||||
|
||||
总结如下:
|
||||
|
||||
| C 类型 | 地址类型 |
|
||||
| ------------ | ------------ |
|
||||
| `T*` | 虚拟 |
|
||||
| `uintptr_t` | 虚拟 |
|
||||
| `physaddr_t` | 物理 |
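
下面用一个普通用户态小程序来演示这种区别(这只是示意,并非 JOS 的源码;`physaddr_t` 的定义方式与 JOS 中一致,只是 32 位整数的别名):

```c
#include <stdint.h>
#include <stdio.h>

typedef uint32_t physaddr_t;             /* 与 JOS 中一样,只是整数的别名 */

int main(void)
{
    int value = 42;
    uintptr_t va = (uintptr_t) &value;   /* 虚拟地址:可以转换回指针再解引用 */
    int *p = (int *) va;
    printf("%d\n", *p);                  /* 打印 42 */

    physaddr_t pa = 0x00100000;          /* 物理地址:编译器不会阻止下面这种转换, */
    (void) pa;                           /* 但对 (int *)pa 解引用得到的不是你想要的内存 */
    return 0;
}
```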
|
||||
|
||||
> 问题:
|
||||
>
|
||||
> 1. 假设下面的 JOS 内核代码是正确的,那么变量 `x` 应该是什么类型?`uintptr_t` 还是 `physaddr_t` ?
|
||||
>
|
||||
> ![](https://ws3.sinaimg.cn/large/0069RVTdly1fuxgrbkqd3j30m302bmxc.jpg)
|
||||
>
|
||||
|
||||
JOS 内核有时需要读取或修改一些它只知道物理地址的内存。例如,向页表添加一个映射时,可能需要先分配物理内存来存放一个页表,然后再初始化这块内存。然而,内核和其它软件一样,无法绕过虚拟地址转换,因此不能直接加载或存储物理地址。JOS 把从物理地址 0 开始的所有物理内存重映射到虚拟地址 `0xf0000000` 处,原因之一就是帮助内核读写那些它只知道物理地址的内存。为了把一个物理地址转换为内核可以真正读写的虚拟地址,内核必须给物理地址加上 `0xf0000000`,以得到它在重映射区域中对应的虚拟地址。你应该使用 `KADDR(pa)` 来完成这个加法。
|
||||
|
||||
反过来,JOS 内核有时也需要根据存放内核数据结构的虚拟地址找到对应的物理地址。内核的全局变量以及通过 `boot_alloc()` 分配的内存,都位于加载内核的那个区域中,也就是从 `0xf0000000` 开始、映射了所有物理内存的那个区域。因此,要把这些区域中的虚拟地址转换为物理地址,内核只需简单地减去 `0xf0000000` 即可。你应该使用 `PADDR(va)` 来完成这个减法。
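
下面是这两个宏在概念上所做事情的一个简化示意(省略了 JOS 真实实现中的合法性检查,也并非逐字摘自 `kern/pmap.h`):

```c
#include <stdint.h>

typedef uint32_t physaddr_t;
#define KERNBASE 0xF0000000          /* 内核重映射区域的起始虚拟地址 */

/* 物理地址 -> 内核虚拟地址:加上 KERNBASE(概念示意) */
static inline void *
kaddr_sketch(physaddr_t pa)
{
    return (void *) (pa + KERNBASE);
}

/* 内核虚拟地址 -> 物理地址:减去 KERNBASE(概念示意) */
static inline physaddr_t
paddr_sketch(void *kva)
{
    return (physaddr_t) ((uintptr_t) kva - KERNBASE);
}
```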
|
||||
|
||||
#### 引用计数
|
||||
|
||||
在以后的实验中,你将经常遇到同一个物理页面同时被多个虚拟地址(或多个环境的地址空间)映射的情况。你需要在 `struct PageInfo` 数据结构的 `pp_ref` 字段中,为每个物理页面维护一个引用计数。当某个物理页面的计数降为 0 时,这个页面就可以被释放了,因为它已不再被使用。一般情况下,这个计数应该等于该物理页面出现在所有页表中 `UTOP` 之下位置的次数(`UTOP` 之上的映射大多是在引导时由内核设置的,永远不会被释放,因此不需要引用计数)。我们还会用它来跟踪指向每个页目录页的指针数量,进而也用来跟踪页目录对页表页的引用数量。
|
||||
|
||||
使用 `page_alloc` 时要小心。它返回的页面引用计数总是 0,因此,一旦你对返回的页面做了某些操作(比如把它插入到页表中),就应该让 `pp_ref` 增加。有时这由其它函数(比如 `page_insert`)来处理,而有时必须由直接调用 `page_alloc` 的那个函数自己来完成。
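
下面这个示意片段说明了这条规则(这不是完整可编译的代码,也不是参考答案;`page_alloc()`、`page_insert()`、`ALLOC_ZERO`、`kern_pgdir`、`E_NO_MEM` 等名字都来自实验提供的代码,`va` 表示要建立映射的虚拟地址):

```c
// 示意:由调用者负责维护引用计数
struct PageInfo *pp = page_alloc(ALLOC_ZERO);
if (!pp)
    return -E_NO_MEM;

// page_insert() 成功时会自行增加 pp->pp_ref,所以这里不需要再手动加一;
// 但如果你是自己把 page2pa(pp) 直接写进某个页表项,就必须自己执行 pp->pp_ref++。
if (page_insert(kern_pgdir, pp, va, PTE_W) < 0) {
    page_free(pp);          // 插入失败:pp_ref 仍为 0,可以直接释放
    return -E_NO_MEM;
}
```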
|
||||
|
||||
#### 页表管理
|
||||
|
||||
现在,你将编写一组管理页表的例程:插入和删除线性地址到物理地址的映射,并在需要时创建页表页。
|
||||
|
||||
> **练习 4**
|
||||
>
|
||||
> 在文件 `kern/pmap.c` 中,你必须去实现下列函数的代码。
|
||||
>
|
||||
> - pgdir_walk()
|
||||
> - boot_map_region()
|
||||
> - page_lookup()
|
||||
> - page_remove()
|
||||
> - page_insert()
|
||||
>
|
||||
> 由 `mem_init()` 调用的 `check_page()` 会测试你的页表管理例程。在继续下一部分之前,你应该确保它报告成功。
|
||||
|
||||
### 第 3 部分:内核地址空间
|
||||
|
||||
JOS 把处理器的 32 位线性地址空间分为两部分:用户环境(进程,我们将在实验 3 中开始加载和运行它们)控制低位部分的布局和内容,而内核始终保持着对高位部分的完全控制。分界线由 `inc/memlayout.h` 中的符号 `ULIM` 定义,它为内核保留了大约 256MB 的虚拟地址空间。这就解释了为什么我们在实验 1 中要给内核一个这么高的链接地址:如果不这样做,内核的虚拟地址空间就没有足够的空间同时映射到它下方的用户环境。
|
||||
|
||||
你可以在 `inc/memlayout.h` 中找到一个图表,它有助于你去理解 JOS 内存布局,这在本实验和后面的实验中都会用到。
|
||||
|
||||
#### 权限和故障隔离
|
||||
|
||||
由于内核内存和用户内存都存在于每个环境的地址空间中,因此我们必须在 x86 的页表中使用权限位,让用户代码只能访问地址空间中属于用户的那部分。否则,用户代码中的 bug 可能会覆写内核数据,导致系统崩溃或者出现各种莫名其妙的故障;用户代码还可能窃取其它环境的私有数据。
|
||||
|
||||
对于 `ULIM` 以上的内存,用户环境没有任何权限,只有内核才可以读写这部分内存。对于 `[UTOP,ULIM]` 这段地址范围,内核和用户环境拥有相同的权限:可以读取但不能写入。这段地址范围用于向用户环境暴露某些只读的内核数据结构。最后,低于 `UTOP` 的地址空间供用户环境使用;由用户环境自己来设置访问这部分内存的权限。
|
||||
|
||||
#### 初始化内核地址空间
|
||||
|
||||
现在,你将配置 `UTOP` 以上的地址空间:也就是地址空间中的内核部分。`inc/memlayout.h` 展示了你应该使用的布局。你将使用你刚刚编写的那些函数来设置相应的线性地址到物理地址的映射。
|
||||
|
||||
> **练习 5**
|
||||
>
|
||||
> 完成调用 `check_page()` 之后在 `mem_init()` 中缺失的代码。
|
||||
|
||||
现在,你的代码应该通过了 `check_kern_pgdir()` 和 `check_page_installed_pgdir()` 的检查。
|
||||
|
||||
> 问题:
|
||||
>
|
||||
> 1、在这个时刻,页目录中哪些条目(行)已经被填充了?它们映射了哪些地址?又分别指向哪里?换句话说,请尽可能多地填写这个表:
|
||||
>
|
||||
> | 条目 | 虚拟地址基址 | 指向(逻辑上):|
|
||||
> | --- | ---------- | ------------- |
|
||||
> | 1023 | ? | 物理内存顶部 4MB 的页表 |
|
||||
> | 1022 | ? | ? |
|
||||
> | . | ? | ? |
|
||||
> | . | ? | ? |
|
||||
> | . | ? | ? |
|
||||
> | 2 | 0x00800000 | ? |
|
||||
> | 1 | 0x00400000 | ? |
|
||||
> | 0 | 0x00000000 | [参见下一问题] |
|
||||
>
|
||||
> 2、(来自课程 3) 我们将内核和用户环境放在相同的地址空间中。为什么用户程序不能去读取和写入内核的内存?有什么特殊机制保护内核内存?
|
||||
>
|
||||
> 3、这个操作系统能够支持的最大的物理内存数量是多少?为什么?
|
||||
>
|
||||
> 4、如果我们真的拥有最大数量的物理内存,有多少空间的开销用于管理内存?这个开销可以减少吗?
|
||||
>
|
||||
> 5、复习 `kern/entry.S` 和 `kern/entrypgdir.c` 中的页表设置。在我们刚打开分页的那一刻,EIP 仍然是一个很小的数字(略大于 1MB)。我们是在哪个时间点才转换到在 KERNBASE 之上的 EIP 处运行的?在启用分页之后、转换到 KERNBASE 之上的 EIP 之前,是什么让我们能够继续在一个很低的 EIP 上运行?为什么这种转换是必需的?
|
||||
|
||||
#### 地址空间布局的其它选择
|
||||
|
||||
我们在 JOS 中使用的地址空间布局并不是唯一的选择。操作系统也可以把内核映射到低位的线性地址,而把线性地址空间的高位部分留给用户进程。然而,x86 上的内核一般不这么做,因为 x86 的一种向后兼容模式(即“虚拟 8086 模式”)在处理器中被“硬性固定”为只能使用线性地址空间的最底部,所以如果把内核映射到那里,这种模式就根本无法使用了。
|
||||
|
||||
虽然很困难,但设计这样一种内核是可能的:不为内核自身保留任何固定部分的线性地址或虚拟地址空间,从而让用户级进程可以不受限制地使用整个 4GB 的虚拟地址空间 —— 同时仍然完全保护内核不受这些进程的影响,并保护各个进程彼此隔离!
|
||||
|
||||
将内核的内存分配系统进行概括类推,以支持二次幂为单位的各种页大小,从 4KB 到一些你选择的合理的最大值。你务必要有一些方法,将较大的分配单位按需分割为一些较小的单位,以及在需要时,将多个较小的分配单位合并为一个较大的分配单位。想一想在这样的一个系统中可能会出现些什么样的问题。
|
||||
|
||||
本实验到此结束。确保你通过了 `make grade` 的所有测试,并记得把上述问题的答案写到 `answers-lab2.txt` 中。提交你的改变(包括添加 `answers-lab2.txt` 文件),然后在 `lab` 目录下运行 `make handin` 来提交你的实验。
|
||||
|
||||
------
|
||||
|
||||
via: https://sipb.mit.edu/iap/6.828/lab/lab2/
|
||||
|
||||
作者:[Mit](https://sipb.mit.edu/iap/6.828/lab/lab2/)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]: https://linux.cn/article-9740-1.html
|
||||
[2]: https://sipb.mit.edu/iap/6.828/readings/i386/toc.htm
|
||||
[3]: https://sipb.mit.edu/iap/6.828/labguide/#qemu
|
@ -0,0 +1,80 @@
|
||||
让决策更透明的三步
|
||||
======
|
||||
|
||||
> 当你使用这种决策技巧时,可以让你作为开源领导人的决策过程更加透明。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_Transparency_A.png?itok=2r47nFJB)
|
||||
|
||||
要让你的领导工作更加透明,其中一个最有效的方法就是将一个现有的流程开放给你的团队进行反馈,然后根据反馈去改变流程。下面这些练习能让透明度更加切实,并且它有助于让你在持续评估并调整你的工作的透明度时形成“肌肉记忆”。
|
||||
|
||||
我想说,你可以通过任何流程来完成这项工作 —— 即使有些流程看起来像是“禁区”流程,比如晋升或者调薪。但是如果第一次它对于初步实践来说太大了,那么你可能需要从一个不那么敏感的流程开始,比如旅行批准流程或者为你的团队寻找空缺候选人的系统。(举个例子,我在我们的招聘和晋升流程中使用了这种方式)
|
||||
|
||||
开放流程并使其更加透明可以建立你的信誉并增强团队成员对你的信任。它会使你以一种可能超乎你设想和舒适程度的方式“走在透明的路上”。以这种方式工作确实会产生额外的工作,尤其是在过程的开始阶段 —— 但是,最终这种方法对于让管理者(比如我)对团队成员更具责任,而且它会更加相容。
|
||||
|
||||
### 阶段一:选择一个流程
|
||||
|
||||
**第一步** 想想你的团队使用的一个普通的或常规的流程,但是这个流程通常不需要仔细检查。下面有一些例子:
|
||||
|
||||
* 招聘:如何创建职位描述、如何挑选面试团队、如何筛选候选人以及如何做出最终的招聘决定。
|
||||
* 规划:你的团队或组织如何确定年度或季度目标。
|
||||
* 升职:你如何选择并考虑升职候选人,并决定谁升职。
|
||||
* 经理绩效评估:谁有机会就经理绩效提供反馈,以及他们是如何反馈。
|
||||
* 旅游:旅游预算如何分配,以及你如何决定是否批准旅行(或提名某人是否旅行)。
|
||||
|
||||
上面的某个例子可能会引起你的共鸣,或者你可能会发现一些你觉得更合适的流程。也许你已经收到了关于某个特定流程的问题,又或者你发现自己屡次解释某个特定决策的逻辑依据。选择一些你能够控制或影响的东西 —— 一些你认为你的成员所关心的东西。
|
||||
|
||||
**第二步** 现在回答以下关于这个流程的问题:
|
||||
|
||||
* 该流程目前是否记录在一个所有成员都知道并可以访问的地方?如果没有,现在就开始创建文档(不必太详细;只需要解释这个流程的不同步骤以及它是如何工作的)。你可能会发现这个过程不够清晰或一致,无法记录到文档。在这种情况下,用你*认为*理想情况下所应该的方式去记录它。
|
||||
* 完成流程的文档是否说明了在不同的点上是如何做出决定?例如,在旅行批准流程中,它是否解释了如何批准或拒绝请求。
|
||||
* 流程的*输入信息*是什么?例如,在确定部门年度目标时,哪些数据用于关键绩效指标,查找或者采纳谁的反馈,谁有机会审查或“签字”。
|
||||
* 这个过程会做出什么*假设*?例如,在升职决策中,你是否认为所有的晋升候选人都会在适当的时间被他们的经理提出。
|
||||
* 流程的*输出物*是什么?例如,在评估经理的绩效时,评估的结果是否会与经理共享,该审查报告的任何方面是否会与经理的直接报告更广泛地共享(例如,改进的领域)?
|
||||
|
||||
回答上述问题时,避免作出判断。如果这个流程不能清楚地解释一个决定是如何做出的,那也可以接受。这些问题只是评估现状的一个机会。
|
||||
|
||||
接下来,修改流程的文档,直到你对它充分说明了流程并预测潜在的问题感到满意。
|
||||
|
||||
### 阶段二:收集反馈
|
||||
|
||||
下一个阶段涉及到与你的成员分享这个流程并要求反馈。分享说起来容易做起来难。
|
||||
|
||||
**第一步** 鼓励人们提供反馈。考虑一下实现此目的的各种机制:
|
||||
|
||||
* 把这个流程公布在人们可以在内部找到的地方,并提示他们可以在哪里发表评论或提供反馈。谷歌文档可以很好地评论特定的文本或直接提议文本中的更改。
|
||||
* 通过电子邮件分享过程文档,邀请反馈。
|
||||
* 提及流程文档,在团队会议或一对一的谈话时要求反馈。
|
||||
* 给人们一个他们可以提供反馈的时间窗口,并在此窗口内定期发送提醒。
|
||||
|
||||
如果你得不到太多的反馈,不要认为沉默就等于认可。你可以试着直接询问人们,他们为什么没有反馈。是因为他们太忙了吗?这个过程对他们来说不像你想的那么重要吗?你清楚地表达了你的要求吗?
|
||||
|
||||
**第二步** 迭代。当你获得关于流程的反馈时,鼓励团队对流程进行修改和迭代。加入改进的想法和建议,并要求确认预期的反馈已经被应用。如果你不同意某个建议,那就接受讨论,问问自己为什么不同意,以及一种方法和另一种方法的优点是什么。
|
||||
|
||||
设置一个收集反馈和迭代的时间窗口有助于向前推进。一旦收集和审查了反馈,你应当讨论和应用它,并且发布最终的流程供团队审查。
|
||||
|
||||
### 阶段三:实现
|
||||
|
||||
实现一个流程通常是计划中最困难的阶段。但如果你在修改过程中考虑了反馈意见,人们应该已经预料到了,并且可能会更支持你。从上面迭代过程中获得的文档是一个很好的工具,可以让你对实现负责。
|
||||
|
||||
**第一步** 审查实施需求。许多可以从提高透明度中获益的流程只需要做一点不同的事情,但是你确实需要检查你是否需要其他支持(例如工具)。
|
||||
|
||||
**第二步** 设置实现的时间表。与成员一起回顾时间表,这样他们就知道会发生什么。如果新流程需要对其他流程进行更改,请确保为人们提供足够的时间去适应新方式,并提供沟通和提醒。
|
||||
|
||||
**第三步** 跟进。在使用该流程 3-6 个月后,与你的成员联系,看看进展如何。新流程是否更加透明、更有效、更可预测?你有什么经验教训可以用来进一步改进这个流程吗?
|
||||
|
||||
### 关于作者
|
||||
|
||||
Sam Knuth —— 我有幸在 Red Hat 领导客户内容服务团队;我们生成提供给我们的客户的所有文档。我们的目标是为客户提供他们在企业中使用开源技术取得成功所需要的洞察力。在 Twitter 上与我联系([@samfw][1])。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/17/9/exercise-in-transparent-decisions
|
||||
|
||||
作者:[Sam Knuth][a]
|
||||
译者:[MarineFish](https://github.com/MarineFish)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/samfw
|
||||
[1]: https://twitter.com/samfw
|
@ -0,0 +1,260 @@
|
||||
在 Ubuntu 和 Debian 上启用双因子身份验证的三种备选方案
|
||||
=====
|
||||
|
||||
> 如何为你的 SSH 服务器安装三种不同的双因子身份验证方案。
|
||||
|
||||
如今,安全比以往更加重要,保护 SSH 服务器是作为系统管理员可以做的最为重要的事情之一。传统地,这意味着禁用密码身份验证而改用 SSH 密钥。无疑这是你首先应该做的,但这并不意味着 SSH 无法变得更加安全。
|
||||
|
||||
双因子身份验证就是指需要两种身份验证才能登录。可以是密码和 SSH 密钥,也可以是密钥和第三方服务,比如 Google。这意味着单个验证方法的泄露不会危及服务器。
|
||||
|
||||
以下指南是为 SSH 启用双因子验证的三种方式。
|
||||
|
||||
当你修改 SSH 配置时,总是要确保有一个连接到服务器的第二终端。第二终端意味着你可以修复你在 SSH 配置中犯的任何错误。打开的终端将一直保持,即便 SSH 服务重启。
|
||||
|
||||
### SSH 密钥和密码
|
||||
|
||||
SSH 支持对登录要求不止一个身份验证方法。
|
||||
|
||||
身份验证方法是在 SSH 服务器配置文件 `/etc/ssh/sshd_config` 的 `AuthenticationMethods` 选项中设置的。
|
||||
|
||||
当在 `/etc/ssh/sshd_config` 中添加下一行时,SSH 需要提交一个 SSH 密钥,然后提示输入密码:
|
||||
|
||||
```
|
||||
AuthenticationMethods "publickey,password"
|
||||
```
|
||||
|
||||
如果你想要根据使用情况设置这些方法,那么请使用以下附加配置:
|
||||
|
||||
```
|
||||
Match User jsmith
|
||||
AuthenticationMethods "publickey,password"
|
||||
```
|
||||
|
||||
当你已经编辑或保存了新的 `sshd_config` 文件,你应该通过运行以下程序来确保你没有犯任何错误:
|
||||
|
||||
```
|
||||
sshd -t
|
||||
```
|
||||
|
||||
任何会导致 SSH 无法启动的语法或其他错误都会在这里被指出来。当 `sshd -t` 运行没有报错时,使用 `systemctl` 重新启动 SSH:
|
||||
|
||||
```
|
||||
systemctl restart sshd
|
||||
```
|
||||
|
||||
现在,你可以打开一个新终端登录,以核实登录时既需要 SSH 密钥又会提示输入密码。如果你使用 `ssh -v`,例如:
|
||||
|
||||
```
|
||||
ssh -v jsmith@example.com
|
||||
```
|
||||
|
||||
你将可以看到登录的每一步。
|
||||
|
||||
注意,如果你确实将密码设置成必需的身份验证方法,你要确保将 `PasswordAuthentication` 选项设置成 `yes`。
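
也就是说,启用“密钥 + 密码”这种组合时,`sshd_config` 中相关的几行大致如下(仅为示意,请结合你系统中已有的配置进行调整):

```
PubkeyAuthentication yes
PasswordAuthentication yes
AuthenticationMethods "publickey,password"
```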
|
||||
|
||||
### 使用 Google Authenticator 的 SSH
|
||||
|
||||
Google 在自家产品上使用的双因子身份验证系统可以集成到你的 SSH 服务器中。如果你已经在使用 Google Authenticator,那么这种方法会非常方便。
|
||||
|
||||
虽然 libpam-google-authenticator 是由 Google 编写的,但它是[开源][1]的。此外,Google Authenticator 应用并不需要 Google 帐户就能工作,这要感谢 [Sitaram Chamarty][2] 的提醒。
|
||||
|
||||
如果你还没有在手机上安装和配置 Google Authenticator,请参阅 [这里][3]的说明。
|
||||
|
||||
首先,我们需要在服务器上安装 Google Authenticator 软件包。以下命令将更新你的系统并安装所需的软件包:
|
||||
|
||||
```
|
||||
apt-get update
|
||||
apt-get upgrade
|
||||
apt-get install libpam-google-authenticator
|
||||
```
|
||||
|
||||
现在,我们需要把服务器注册到你手机上的 Google Authenticator 应用中。第一步是运行我们刚刚安装的程序:
|
||||
|
||||
```
|
||||
google-authenticator
|
||||
```
|
||||
|
||||
运行这个程序时,会问到几个问题。你应该以适合你的设置的方式回答,然而,最安全的选项是对每个问题回答 `y`。如果以后需要更改这些选项,您可以简单地重新运行 `google-authenticator` 并选择不同的选项。
|
||||
|
||||
当你运行 `google-authenticator` 时,终端上会打印出一个二维码,同时还会输出一些类似下面这样的代码:
|
||||
|
||||
```
|
||||
Your new secret key is: VMFY27TYDFRDNKFY
|
||||
Your verification code is 259652
|
||||
Your emergency scratch codes are:
|
||||
96915246
|
||||
70222983
|
||||
31822707
|
||||
25181286
|
||||
28919992
|
||||
```
|
||||
|
||||
你应该把所有这些代码记录到像密码管理器这样安全的地方。其中的 “scratch codes”(应急备用码)是一次性使用的代码,即使你的手机不可用,它们也总能让你登录。
|
||||
|
||||
要将服务器注册到 Authenticator APP 中,只需打开应用程序并点击右下角的红色加号即可。然后选择扫描条码选项,扫描打印到终端的二维码。你的服务器和应用程序现在连接。
|
||||
|
||||
回到服务器上,我们现在需要编辑用于 SSH 的 PAM (可插入身份验证模块),以便它使用我们刚刚安装的身份验证器安装包。PAM 是独立系统,负责 Linux 服务器上的大多数身份验证。
|
||||
|
||||
需要修改的 SSH PAM 文件位于 `/etc/pam.d/sshd` ,用以下命令编辑:
|
||||
|
||||
```
|
||||
nano /etc/pam.d/sshd
|
||||
```
|
||||
|
||||
在文件顶部添加以下行:
|
||||
|
||||
```
|
||||
auth required pam_google_authenticator.so
|
||||
```
|
||||
|
||||
此外,我们还需要注释掉一行,这样 PAM 就不会提示输入密码。改变这行:
|
||||
|
||||
```
|
||||
# Standard Un*x authentication.
|
||||
@include common-auth
|
||||
```
|
||||
|
||||
为如下:
|
||||
|
||||
```
|
||||
# Standard Un*x authentication.
|
||||
# @include common-auth
|
||||
```
|
||||
|
||||
接下来,我们需要编辑 SSH 服务器配置文件:
|
||||
|
||||
```
|
||||
nano /etc/ssh/sshd_config
|
||||
```
|
||||
|
||||
改变这一行:
|
||||
|
||||
```
|
||||
ChallengeResponseAuthentication no
|
||||
```
|
||||
|
||||
为:
|
||||
|
||||
```
|
||||
ChallengeResponseAuthentication yes
|
||||
```
|
||||
|
||||
接下来,添加以下代码行来启用两个身份验证方案:SSH 密钥和谷歌认证器(键盘交互):
|
||||
|
||||
```
|
||||
AuthenticationMethods "publickey,keyboard-interactive"
|
||||
```
|
||||
|
||||
在重新加载 SSH 服务器之前,最好检查一下在配置中没有出现任何错误。执行以下命令:
|
||||
|
||||
```
|
||||
sshd -t
|
||||
```
|
||||
|
||||
如果没有标识出任何错误,用新的配置重载 SSH:
|
||||
|
||||
```
|
||||
systemctl reload sshd.service
|
||||
```
|
||||
|
||||
现在一切都应该开始工作了。现在,当你登录到你的服务器时,你将需要使用 SSH 密钥,并且当你被提示输入:
|
||||
|
||||
```
|
||||
Verification code:
|
||||
```
|
||||
|
||||
打开 Authenticator APP 并输入为您的服务器显示的 6 位代码。
|
||||
|
||||
### Authy
|
||||
|
||||
[Authy][4] 是一个双重身份验证服务,与 Google 一样,它提供基于时间的代码。然而,Authy 不需要手机,因为它提供桌面和平板客户端。它们还支持离线身份验证,不需要 Google 帐户。
|
||||
|
||||
你需要从应用程序商店安装 Authy 应用程序,或 Authy [下载页面][5]所链接的桌面客户端。
|
||||
|
||||
安装完应用程序后,需要在服务器上使用 API 密钥。这个过程需要几个步骤:
|
||||
|
||||
1. 在[这里][6]注册一个账户。
|
||||
2. 向下滚动到 “Authy” 部分。
|
||||
3. 在帐户上启用双因子认证(2FA)。
|
||||
4. 回到 “Authy” 部分。
|
||||
5. 为你的服务器创建一个新的应用程序。
|
||||
6. 从新应用程序的 “General Settings” 页面顶部获取 API 密钥。你需要点击 “PRODUCTION API KEY” 旁边的眼睛符号来显示密钥。如图:
|
||||
|
||||
![](https://bash-prompt.net/images/guides/2FA/twilio-authy-api.png)
|
||||
|
||||
在某个安全的地方记下 API 密钥。
|
||||
|
||||
现在,回到服务器,以 root 身份运行以下命令:
|
||||
|
||||
```
|
||||
curl -O 'https://raw.githubusercontent.com/authy/authy-ssh/master/authy-ssh'
|
||||
bash authy-ssh install /usr/local/bin
|
||||
```
|
||||
|
||||
出现提示时输入 API 密钥。如果输入错误,你随时可以编辑 `/usr/local/bin/authy-ssh` 重新填写。
|
||||
|
||||
Authy 现已安装。但是,在为用户启用它之前,它不会开始工作。启用 Authy 的命令有以下形式:
|
||||
|
||||
```
|
||||
/usr/local/bin/authy-ssh enable <system-user> <your-email> <your-phone-country-code> <your-phone-number>
|
||||
```
|
||||
|
||||
root 登录的一些示例细节:
|
||||
|
||||
```
|
||||
/usr/local/bin/authy-ssh enable root john@example.com 44 20822536476
|
||||
```
|
||||
|
||||
如果一切顺利,你会看到:
|
||||
|
||||
```
|
||||
User was registered
|
||||
```
|
||||
|
||||
现在可以通过运行以下命令来测试 Authy:
|
||||
|
||||
```
|
||||
authy-ssh test
|
||||
```
|
||||
|
||||
最后,重载 SSH 实现新的配置:
|
||||
|
||||
```
|
||||
systemctl reload sshd.service
|
||||
```
|
||||
|
||||
Authy 现在正在工作,SSH 需要它才能登录。
|
||||
|
||||
现在,当你登录时,你将看到以下提示:
|
||||
|
||||
```
|
||||
Authy Token (type 'sms' to request a SMS token):
|
||||
```
|
||||
|
||||
你可以输入手机或桌面客户端的 Authy APP 上的代码。或者你可以输入 `sms`, Authy 会给你发送一条带有登录码的短信。
|
||||
|
||||
可以通过运行以下命令卸载 Authy:
|
||||
|
||||
```
|
||||
/usr/local/bin/authy-ssh uninstall
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://bash-prompt.net/guides/ssh-2fa/
|
||||
|
||||
作者:[Elliot Cooper][a]
|
||||
译者:[cielllll](https://github.com/cielllll)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://bash-prompt.net
|
||||
[1]:https://github.com/google/google-authenticator-libpam
|
||||
[2]:https://plus.google.com/115609618223925128756
|
||||
[3]:https://support.google.com/accounts/answer/1066447?hl=en
|
||||
[4]:https://authy.com/
|
||||
[5]:https://authy.com/download/
|
||||
[6]:https://www.authy.com/signup
|
||||
[7]:/images/guides/2FA/twilio-authy-api.png
|
||||
|
@ -1,67 +1,71 @@
|
||||
在 Linux 上使用 Lutries 管理你的游戏
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-720x340.jpg)
|
||||
|
||||
让我们用游戏开始 2018 的第一天吧!今天我们要讨论的是 **Lutris**,一个 Linux 上的开源游戏平台。你可以使用 Lutries 安装、移除、配置、启动和管理你的游戏。它可以以一个界面帮你管理你的 Linux 游戏、Windows 游戏、仿真控制台游戏和浏览器游戏。它还包含社区编写的安装脚本,使得游戏的安装过程更加简单。
|
||||
今天我们要讨论的是 **Lutris**,一个 Linux 上的开源游戏平台。你可以使用 Lutries 安装、移除、配置、启动和管理你的游戏。它可以在一个单一界面中帮你管理你的 Linux 游戏、Windows 游戏、仿真控制台游戏和浏览器游戏。它还包含社区编写的安装脚本,使得游戏的安装过程更加简单。
|
||||
|
||||
Lutris 可以自动安装(或者单击即可安装)超过 20 个模拟器,覆盖了从七十年代至今的大多数游戏系统。目前支持的游戏系统如下:
|
||||
|
||||
* Native Linux
|
||||
* Linux 原生
|
||||
* Windows
|
||||
* Steam (Linux and Windows)
|
||||
* Steam (Linux 和 Windows)
|
||||
* MS-DOS
|
||||
* 街机
|
||||
* Amiga 电脑
|
||||
* Atari 8 和 16 位计算机和控制器
|
||||
* 浏览器 (Flash 或者 HTML5 游戏)
|
||||
* Commmodore 8 位计算机
|
||||
* 基于 SCUMM 的游戏和其他点击冒险游戏
|
||||
* Magnavox Odyssey², Videopac+
|
||||
* 基于 SCUMM 的游戏和其他点击式冒险游戏
|
||||
* Magnavox Odyssey²、Videopac+
|
||||
* Mattel Intellivision
|
||||
* NEC PC-Engine Turbographx 16, Supergraphx, PC-FX
|
||||
* Nintendo NES, SNES, Game Boy, Game Boy Advance, DS
|
||||
* Game Cube and Wii
|
||||
* Sega Master Sytem, Game Gear, Genesis, Dreamcast
|
||||
* SNK Neo Geo, Neo Geo Pocket
|
||||
* NEC PC-Engine Turbographx 16、Supergraphx、PC-FX
|
||||
* Nintendo NES、SNES、Game Boy、Game Boy Advance、DS
|
||||
* Game Cube 和 Wii
|
||||
* Sega Master Sytem、Game Gear、Genesis、Dreamcast
|
||||
* SNK Neo Geo、Neo Geo Pocket
|
||||
* Sony PlayStation
|
||||
* Sony PlayStation 2
|
||||
* Sony PSP
|
||||
* 像 Zork 这样的 Z-Machine 游戏
|
||||
* 还有更多
|
||||
|
||||
|
||||
|
||||
### 安装 Lutris
|
||||
|
||||
就像 Steam 一样,Lutries 包含两部分:网站和客户端程序。从网站你可以浏览可用的游戏,添加最喜欢的游戏到个人库,以及使用安装链接安装他们。
|
||||
|
||||
首先,我们还是来安装客户端。它目前支持 Arch Linux、Debian、Fedroa、Gentoo、openSUSE 和 Ubuntu。
|
||||
|
||||
对于 Arch Linux 和它的衍生版本,像是 Antergos, Manjaro Linux,都可以在 [**AUR**][1] 中找到。因此,你可以使用 AUR 帮助程序安装它。
|
||||
对于 **Arch Linux** 和它的衍生版本,像是 Antergos, Manjaro Linux,都可以在 [AUR][1] 中找到。因此,你可以使用 AUR 帮助程序安装它。
|
||||
|
||||
使用 [Pacaur][2]:
|
||||
|
||||
使用 [**Pacaur**][2]:
|
||||
```
|
||||
pacaur -S lutris
|
||||
```
|
||||
|
||||
使用 **[Packer][3]** :
|
||||
使用 [Packer][3]:
|
||||
|
||||
```
|
||||
packer -S lutris
|
||||
```
|
||||
|
||||
使用 [**Yaourt**][4]:
|
||||
使用 [Yaourt][4]:
|
||||
|
||||
```
|
||||
yaourt -S lutris
|
||||
```
|
||||
|
||||
使用 [**Yay**][5]:
|
||||
使用 [Yay][5]:
|
||||
|
||||
```
|
||||
yay -S lutris
|
||||
```
|
||||
|
||||
**Debian:**
|
||||
|
||||
在 **Debian 9.0** 上以 **root** 身份运行以下命令:
|
||||
在 **Debian 9.0** 上以 **root** 身份运行以下命令:
|
||||
|
||||
```
|
||||
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_9.0/ /' > /etc/apt/sources.list.d/lutris.list
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_9.0/Release.key -O Release.key
|
||||
@ -71,6 +75,7 @@ apt-get install lutris
|
||||
```
|
||||
|
||||
在 **Debian 8.0** 上以 **root** 身份运行以下命令:
|
||||
|
||||
```
|
||||
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_8.0/ /' > /etc/apt/sources.list.d/lutris.list
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_8.0/Release.key -O Release.key
|
||||
@ -79,19 +84,22 @@ apt-get update
|
||||
apt-get install lutris
|
||||
```
|
||||
|
||||
在 **Fedora 27** 上以 **root** 身份运行以下命令: r
|
||||
在 **Fedora 27** 上以 **root** 身份运行以下命令:
|
||||
|
||||
```
|
||||
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_27/home:strycore.repo
|
||||
dnf install lutris
|
||||
```
|
||||
|
||||
在 **Fedora 26** 上以 **root** 身份运行以下命令:
|
||||
在 **Fedora 26** 上以 **root** 身份运行以下命令:
|
||||
|
||||
```
|
||||
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_26/home:strycore.repo
|
||||
dnf install lutris
|
||||
```
|
||||
|
||||
在 **openSUSE Tumbleweed** 上以 **root** 身份运行以下命令:
|
||||
|
||||
```
|
||||
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Tumbleweed/home:strycore.repo
|
||||
zypper refresh
|
||||
@ -99,13 +107,15 @@ zypper install lutris
|
||||
```
|
||||
|
||||
在 **openSUSE Leap 42.3** 上以 **root** 身份运行以下命令:
|
||||
|
||||
```
|
||||
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Leap_42.3/home:strycore.repo
|
||||
zypper refresh
|
||||
zypper install lutris
|
||||
```
|
||||
|
||||
**Ubuntu 17.10**:
|
||||
**Ubuntu 17.10**:
|
||||
|
||||
```
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.10/ /' > /etc/apt/sources.list.d/lutris.list"
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.10/Release.key -O Release.key
|
||||
@ -114,7 +124,8 @@ sudo apt-get update
|
||||
sudo apt-get install lutris
|
||||
```
|
||||
|
||||
**Ubuntu 17.04**:
|
||||
**Ubuntu 17.04**:
|
||||
|
||||
```
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.04/ /' > /etc/apt/sources.list.d/lutris.list"
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.04/Release.key -O Release.key
|
||||
@ -123,7 +134,8 @@ sudo apt-get update
|
||||
sudo apt-get install lutris
|
||||
```
|
||||
|
||||
**Ubuntu 16.10**:
|
||||
**Ubuntu 16.10**:
|
||||
|
||||
```
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.10/ /' > /etc/apt/sources.list.d/lutris.list"
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.10/Release.key -O Release.key
|
||||
@ -132,7 +144,8 @@ sudo apt-get update
|
||||
sudo apt-get install lutris
|
||||
```
|
||||
|
||||
**Ubuntu 16.04**:
|
||||
**Ubuntu 16.04**:
|
||||
|
||||
```
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/lutris.list"
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.04/Release.key -O Release.key
|
||||
@ -141,71 +154,75 @@ sudo apt-get update
|
||||
sudo apt-get install lutris
|
||||
```
|
||||
|
||||
对于其他平台,参考 [**Lutris 下载链接**][6].
|
||||
对于其他平台,参考 [Lutris 下载链接][6]。
|
||||
|
||||
### 使用 Lutris 管理你的游戏
|
||||
|
||||
安装完成后,从菜单或者应用启动器里打开 Lutries。首次启动时,Lutries 的默认界面像下面这样:
|
||||
|
||||
[![][7]][8]
|
||||
![][8]
|
||||
|
||||
**登录你的 Lutris.net 账号**
|
||||
#### 登录你的 Lutris.net 账号
|
||||
|
||||
为了能同步你个人库中的游戏,下一步你需要在客户端中登录你的 Lutris.net 账号。如果你没有,先 [**注册一个新的账号**][9]。然后点击 **"连接到你的 Lutirs.net 账号同步你的库 "** 连接到 Lutries 客户端。
|
||||
为了能同步你个人库中的游戏,下一步你需要在客户端中登录你的 Lutris.net 账号。如果你没有,先 [注册一个新的账号][9]。然后点击 “Connecting to your Lutirs.net account to sync your library” 连接到 Lutries 客户端。
|
||||
|
||||
输入你的账号信息然后点击 **继续**。
|
||||
输入你的账号信息然后点击 “Connect”。
|
||||
|
||||
[![][7]][10]
|
||||
![][10]
|
||||
|
||||
现在你已经连接到你的 Lutries.net 账号了。
|
||||
|
||||
[![][7]][11]**Browse Games**
|
||||
![][11]
|
||||
|
||||
#### 浏览游戏
|
||||
|
||||
点击工具栏里的浏览图标(游戏控制器图标)可以搜索任何游戏。它会自动定向到 Lutries 网站的游戏页。你可以以字母顺序查看所有可用的游戏。Lutries 现在已经有了很多游戏,而且还有更多的不断添加进来。
|
||||
|
||||
[![][7]][12]
|
||||
![][12]
|
||||
|
||||
任选一个游戏,添加到你的库中。
|
||||
|
||||
[![][7]][13]
|
||||
![][13]
|
||||
|
||||
然后返回到你的 Lutries 客户端,点击 **菜单 - > Lutris -> 同步库**。现在你可以在本地的 Lutries 客户端中看到所有在库中的游戏了。
|
||||
然后返回到你的 Lutries 客户端,点击 “Menu -> Lutris -> Synchronize library”。现在你可以在本地的 Lutries 客户端中看到所有在库中的游戏了。
|
||||
|
||||
[![][7]][14]
|
||||
![][14]
|
||||
|
||||
如果你没有看到游戏,只需要重启一次。
|
||||
|
||||
**安装游戏**
|
||||
#### 安装游戏
|
||||
|
||||
安装游戏,只需要点击游戏,然后点击 **安装** 按钮。例如,我想在我的系统安装 [**2048**][15],就像你在底下的截图中看到的,它要求我选择一个版本去安装。因为它只有一个版本(例如,在线),它就会自动选择这个版本。点击 **继续**。
|
||||
安装游戏,只需要点击游戏,然后点击 “Install” 按钮。例如,我想在我的系统安装 [2048][15],就像你在底下的截图中看到的,它要求我选择一个版本去安装。因为它只有一个版本(例如,在线),它就会自动选择这个版本。点击 “Continue”。
|
||||
|
||||
[![][7]][16]Click Install:
|
||||
![][16]
|
||||
|
||||
[![][7]][17]
|
||||
点击“Install”:
|
||||
|
||||
![][17]
|
||||
|
||||
安装完成之后,你可以启动新安装的游戏或是关闭这个窗口,继续从你的库中安装其他游戏。
|
||||
|
||||
**导入 Steam 库**
|
||||
#### 导入 Steam 库
|
||||
|
||||
你也可以导入你的 Steam 库。在你的头像处点击 **"通过 Steam 登录"** 按钮。接下来你将被重定向到 Steam,输入你的账号信息。填写正确后,你的 Steam 账号将被连接到 Lutries 账号。请注意,为了同步库中的游戏,这里你的 Steam 账号将被公开。你可以在同步完成之后将其重新设为私密状态。
|
||||
你也可以导入你的 Steam 库。在你的头像处点击 “Sign in through Steam” 按钮。接下来你将被重定向到 Steam,输入你的账号信息。填写正确后,你的 Steam 账号将被连接到 Lutries 账号。请注意,为了同步库中的游戏,这里你的 Steam 账号将被公开。你可以在同步完成之后将其重新设为私密状态。
|
||||
|
||||
**手动添加游戏**
|
||||
#### 手动添加游戏
|
||||
|
||||
Lutries 有手动添加游戏的选项。在工具栏中点击 + 号登录。
|
||||
Lutries 有手动添加游戏的选项。在工具栏中点击 “+” 号登录。
|
||||
|
||||
[![][7]][18]
|
||||
![][18]
|
||||
|
||||
在下一个窗口,输入游戏名,在游戏信息栏选择一个运行器。运行器是指 Linux 上类似 wine,Steam 之类的程序,它们可以帮助你启动这个游戏。你可以从 菜单 -> 管理运行器 中安装运行器。
|
||||
在下一个窗口,输入游戏名,在游戏信息栏选择一个运行器。运行器是指 Linux 上类似 wine、Steam 之类的程序,它们可以帮助你启动这个游戏。你可以从 “Menu -> Manage” 中安装运行器。
|
||||
|
||||
[![][7]][19]
|
||||
![][19]
|
||||
|
||||
然后在下一栏中选择可执行文件或者 ISO。最后点击保存。有一个好消息是,你可以添加一个游戏的多个版本。
|
||||
|
||||
**移除游戏**
|
||||
#### 移除游戏
|
||||
|
||||
移除任何已安装的游戏,只需在 Lutries 客户端的本地库中点击对应的游戏。选择 **移除** 然后 **应用**。
|
||||
移除任何已安装的游戏,只需在 Lutries 客户端的本地库中点击对应的游戏。选择 “Remove” 然后 “Apply”。
|
||||
|
||||
[![][7]][20]
|
||||
![][20]
|
||||
|
||||
Lutries 就像 Steam。只是从网站向你的库中添加游戏,并在客户端中为你安装它们。
|
||||
|
||||
@ -215,15 +232,13 @@ Lutries 就像 Steam。只是从网站向你的库中添加游戏,并在客户
|
||||
|
||||
:)
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/manage-games-using-lutris-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[dianbanjiu](https://github.com/dianbanjiu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -234,17 +249,16 @@ via: https://www.ostechnix.com/manage-games-using-lutris-linux/
|
||||
[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
|
||||
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[6]:https://lutris.net/downloads/
|
||||
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-1.png ()
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-1.png
|
||||
[9]:https://lutris.net/user/register/
|
||||
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-2.png ()
|
||||
[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-3.png ()
|
||||
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-15-1.png ()
|
||||
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-16.png ()
|
||||
[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-6.png ()
|
||||
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-2.png
|
||||
[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-3.png
|
||||
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-15-1.png
|
||||
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-16.png
|
||||
[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-6.png
|
||||
[15]:https://www.ostechnix.com/let-us-play-2048-game-terminal/
|
||||
[16]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-12.png ()
|
||||
[17]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-13.png ()
|
||||
[18]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-18-1.png ()
|
||||
[19]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-19.png ()
|
||||
[20]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-14-1.png ()
|
||||
[16]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-12.png
|
||||
[17]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-13.png
|
||||
[18]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-18-1.png
|
||||
[19]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-19.png
|
||||
[20]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-14-1.png
|
@ -1,118 +1,101 @@
|
||||
Python 数据科学入门
|
||||
======
|
||||
|
||||
> 不需要昂贵的工具即可领略数据科学的力量,从这些开源工具起步即可。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_open_data_520x292.jpg?itok=R8rBrlk7)
|
||||
|
||||
无论你是一个具有数学或计算机科学背景的数据科学爱好者,还是一个其它领域的专家,数据科学提供的可能性都在你力所能及的范围内,而且你不需要昂贵的,高度专业化的企业软件。本文中讨论的开源工具就是你入门时所需的全部内容。
|
||||
无论你是一个具有数学或计算机科学背景的资深数据科学爱好者,还是一个其它领域的专家,数据科学提供的可能性都在你力所能及的范围内,而且你不需要昂贵的,高度专业化的企业级软件。本文中讨论的开源工具就是你入门时所需的全部内容。
|
||||
|
||||
[Python][1],其机器学习和数据科学库([pandas][2], [Keras][3], [TensorFlow][4], [scikit-learn][5], [SciPy][6], [NumPy][7] 等),以及大量可视化库([Matplotlib][8], [pyplot][9], [Plotly][10] 等)对于初学者和专家来说都是优秀的 FOSS(译注:全称为 Free and Open Source Software)工具。它们易于学习,很受欢迎且受到社区支持,并拥有为数据科学开发的最新技术和算法。它们是你在开始学习时可以获得的最佳工具集之一。
|
||||
[Python][1],其机器学习和数据科学库([pandas][2]、 [Keras][3]、 [TensorFlow][4]、 [scikit-learn][5]、 [SciPy][6]、 [NumPy][7] 等),以及大量可视化库([Matplotlib][8]、[pyplot][9]、 [Plotly][10] 等)对于初学者和专家来说都是优秀的自由及开源软件工具。它们易于学习,很受欢迎且受到社区支持,并拥有为数据科学而开发的最新技术和算法。它们是你在开始学习时可以获得的最佳工具集之一。
|
||||
|
||||
许多 Python 库都是建立在彼此之上的(称为依赖项),其基础是 [NumPy][7] 库。NumPy 专门为数据科学设计,经常用于在其 ndarray 数据类型中存储数据集的相关部分。ndarray 是一种方便的数据类型,用于将关系表中的记录存储为 `cvs` 文件或其它任何格式,反之亦然。将 scikit 功能应用于多维数组时,它特别方便。SQL 非常适合查询数据库,但是对于执行复杂和资源密集型的数据科学操作,在 ndarray 中存储数据可以提高效率和速度(确保在处理大量数据集时有足够的 RAM)。当你使用 pandas 进行知识提取和分析时,pandas 中的 DataFrame 数据类型和 NumPy 中的 ndarray 之间的无缝转换分别为提取和计算密集型操作创建了一个强大的组合。
|
||||
许多 Python 库都是建立在彼此之上的(称为依赖项),其基础是 [NumPy][7] 库。NumPy 专门为数据科学设计,经常被用于在其 ndarray 数据类型中存储数据集的相关部分。ndarray 是一种方便的数据类型,用于将关系表中的记录存储为 `cvs` 文件或其它任何格式,反之亦然。将 scikit 函数应用于多维数组时,它特别方便。SQL 非常适合查询数据库,但是对于执行复杂和资源密集型的数据科学操作,在 ndarray 中存储数据可以提高效率和速度(但请确保在处理大量数据集时有足够的 RAM)。当你使用 pandas 进行知识提取和分析时,pandas 中的 DataFrame 数据类型和 NumPy 中的 ndarray 之间的无缝转换分别为提取和计算密集型操作创建了一个强大的组合。
|
||||
|
||||
作为快速演示,让我们启动 Python shell 并在 pandas DataFrame 变量中加载来自巴尔的摩的犯罪统计数据的开放数据集,并查看加载的一部分 DataFrame:
|
||||
|
||||
为了快速演示,让我们启动 Python shel 并在 pandas DataFrame 变量中加载来自巴尔的摩(Baltimore)的犯罪统计数据的开放数据集,并查看加载 frame 的一部分:
|
||||
```
|
||||
>>> import pandas as pd
|
||||
|
||||
>>> crime_stats = pd.read_csv('BPD_Arrests.csv')
|
||||
|
||||
>>> crime_stats.head()
|
||||
```
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/crime_stats_chart.jpg?itok=_rPXJYHz)
|
||||
|
||||
我们现在可以在这个 pandas DataFrame 上执行大多数查询就像我们可以在数据库中使用 SQL。例如,要获取 "Description"属性的所有唯一值,SQL 查询是:
|
||||
我们现在可以在这个 pandas DataFrame 上执行大多数查询,就像我们可以在数据库中使用 SQL 一样。例如,要获取 `Description` 属性的所有唯一值,SQL 查询是:
|
||||
|
||||
```
|
||||
$ SELECT unique(“Description”) from crime_stats;
|
||||
|
||||
```
|
||||
|
||||
利用 pandas DataFrame 编写相同的查询如下所示:
|
||||
|
||||
```
|
||||
>>> crime_stats['Description'].unique()
|
||||
|
||||
['COMMON ASSAULT' 'LARCENY' 'ROBBERY - STREET' 'AGG. ASSAULT'
|
||||
|
||||
'LARCENY FROM AUTO' 'HOMICIDE' 'BURGLARY' 'AUTO THEFT'
|
||||
|
||||
'ROBBERY - RESIDENCE' 'ROBBERY - COMMERCIAL' 'ROBBERY - CARJACKING'
|
||||
|
||||
'ASSAULT BY THREAT' 'SHOOTING' 'RAPE' 'ARSON']
|
||||
|
||||
>>> crime_stats['Description'].unique()
|
||||
['COMMON ASSAULT' 'LARCENY' 'ROBBERY - STREET' 'AGG. ASSAULT'
|
||||
'LARCENY FROM AUTO' 'HOMICIDE' 'BURGLARY' 'AUTO THEFT'
|
||||
'ROBBERY - RESIDENCE' 'ROBBERY - COMMERCIAL' 'ROBBERY - CARJACKING'
|
||||
'ASSAULT BY THREAT' 'SHOOTING' 'RAPE' 'ARSON']
|
||||
```
|
||||
|
||||
它返回的是一个 NumPy 数组(ndarray 类型):
|
||||
|
||||
```
|
||||
>>> type(crime_stats['Description'].unique())
|
||||
|
||||
<class 'numpy.ndarray'>
|
||||
|
||||
>>> type(crime_stats['Description'].unique())
|
||||
<class 'numpy.ndarray'>
|
||||
```
|
||||
|
||||
接下来,让我们把这些数据输入一个神经网络,看看在给定犯罪发生时间、犯罪类型以及发生地点的情况下,它能多准确地预测所使用的武器类型:
|
||||
|
||||
```
|
||||
>>> from sklearn.neural_network import MLPClassifier
|
||||
|
||||
>>> import numpy as np
|
||||
|
||||
>>> from sklearn.neural_network import MLPClassifier
|
||||
>>> import numpy as np
|
||||
>>>
|
||||
|
||||
>>> prediction = crime_stats[[‘Weapon’]]
|
||||
|
||||
>>> predictors = crime_stats['CrimeTime', ‘CrimeCode’, ‘Neighborhood’]
|
||||
|
||||
>>> prediction = crime_stats[[‘Weapon’]]
|
||||
>>> predictors = crime_stats['CrimeTime', ‘CrimeCode’, ‘Neighborhood’]
|
||||
>>>
|
||||
|
||||
>>> nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5,2), random_state=1)
|
||||
|
||||
>>> nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5,
|
||||
2), random_state=1)
|
||||
>>>
|
||||
|
||||
>>>predict_weapon = nn_model.fit(prediction, predictors)
|
||||
|
||||
>>>predict_weapon = nn_model.fit(prediction, predictors)
|
||||
```
|
||||
|
||||
现在学习模型已经准备就绪,我们可以执行一些测试来确定其质量和可靠性。首先,让我们输入训练集数据(即原始数据集中用于训练模型的那一部分):
|
||||
```
|
||||
>>> predict_weapon.predict(training_set_weapons)
|
||||
|
||||
array([4, 4, 4, ..., 0, 4, 4])
|
||||
|
||||
```
|
||||
|
||||
如你所见,它返回一个列表,每个数字预测训练集中每个记录的武器。我们之所以看到的是数字而不是武器名称,是因为大多数分类算法都是用数字优化的。对于分类数据,有一些技术可以将属性转换为数字表示。在这种情况下,使用的技术是 Label Encoder,使用 sklearn 预处理库中的 LabelEncoder 函数:`preprocessing.LabelEncoder()`。它能够对一个数据和其对应的数值表示来进行变换和逆变换。在这个例子中,我们可以使用 LabelEncoder() 的 `inverse_transform` 函数来查看武器 0 和 4 是什么:
|
||||
>>> predict_weapon.predict(training_set_weapons)
|
||||
array([4, 4, 4, ..., 0, 4, 4])
|
||||
```
|
||||
>>> preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
|
||||
|
||||
array(['HANDS', 'FIREARM', 'HANDS', ..., 'FIREARM', 'FIREARM', 'FIREARM']
|
||||
如你所见,它返回一个列表,每个数字预测训练集中每个记录的武器。我们之所以看到的是数字而不是武器名称,是因为大多数分类算法都是用数字优化的。对于分类数据,有一些技术可以将属性转换为数字表示。在这种情况下,使用的技术是标签编码,使用 sklearn 预处理库中的 `LabelEncoder` 函数:`preprocessing.LabelEncoder()`。它能够对一个数据和其对应的数值表示来进行变换和逆变换。在这个例子中,我们可以使用 `LabelEncoder()` 的 `inverse_transform` 函数来查看武器 0 和 4 是什么:
|
||||
|
||||
```
|
||||
>>> preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
|
||||
array(['HANDS', 'FIREARM', 'HANDS', ..., 'FIREARM', 'FIREARM', 'FIREARM']
|
||||
```
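
顺带一提,上面用到的 `encoded_weapons` 这类整数编码本身就可以用 `fit_transform()` 得到,大致如下(仅为示意;假设 `crime_stats` 已按前文方式加载,`le` 是拟合后的编码器):

```
>>> from sklearn import preprocessing
>>> le = preprocessing.LabelEncoder()
>>> encoded_weapons = le.fit_transform(crime_stats['Weapon'].astype(str))
>>> le.inverse_transform(encoded_weapons[:3])    # 还原回武器名称
```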
|
||||
|
||||
这很有趣,但为了了解这个模型的准确程度,我们将几个分数计算为百分比:
|
||||
```
|
||||
>>> nn_model.score(X, y)
|
||||
|
||||
```
|
||||
>>> nn_model.score(X, y)
|
||||
0.81999999999999995
|
||||
|
||||
```
|
||||
|
||||
这表明我们的神经网络模型准确度约为 82%。这个结果似乎令人印象深刻,但用于不同的犯罪数据集时,检查其有效性非常重要。还有其它测试来做这个,如相关性,混淆,矩阵等。尽管我们的模型有很高的准确率,但它对于一般犯罪数据集并不是非常有用,因为这个特定数据集具有不成比例的行数,其列出 ‘FIREARM’ 作为使用的武器。除非重新训练,否则我们的分类器最有可能预测 ‘FIREARM’,即使输入数据集有不同的分布。
|
||||
这表明我们的神经网络模型准确度约为 82%。这个结果看起来令人印象深刻,但在把它用于其它犯罪数据集之前,检查其有效性非常重要。还有其它一些测试可以做这件事,例如相关性分析、混淆矩阵等。尽管我们的模型准确率很高,但它对一般的犯罪数据集并不是很有用,因为在这个特定的数据集中,把 `FIREARM` 列为所用武器的行所占比例过高。除非重新训练,否则即使输入数据集的分布不同,我们的分类器也最有可能预测出 `FIREARM`。
|
||||
|
||||
在对数据进行分类之前清洗数据并删除异常值和畸形数据非常重要。预处理越好,我们的见解准确性就越高。此外,为模型或分类器提供过多数据(通常超过 90%)以获得更高的准确度是一个坏主意,因为它看起来准确但由于[过度拟合][11]而无效。
|
||||
|
||||
[Jupyter notebooks][12] 相对于命令行来说是一个很好的交互式替代品。虽然 CLI 对大多数事情都很好,但是当你想要运行代码片段以生成可视化时,Jupyter 会很出色。它比终端更好地格式化数据。
|
||||
[Jupyter notebooks][12] 相对于命令行来说是一个很好的交互式替代品。虽然 CLI 对于大多数事情都很好,但是当你想要运行代码片段以生成可视化时,Jupyter 会很出色。它比终端更好地格式化数据。
|
||||
|
||||
[这篇文章][13] 列出了一些最好的机器学习免费资源,但是还有很多其它的指导和教程。根据你的兴趣和爱好,你还会发现许多开放数据集可供使用。作为起点,由 [Kaggle][14] 维护的数据集,以及在州政府网站上提供的数据集是极好的资源。
|
||||
|
||||
|
||||
Payal Singh 将出席今年 3 月 8 日至 11 日在 California(加利福尼亚)的 Pasadena(帕萨迪纳)举行的 SCaLE16x。要参加并获得 50% 的门票优惠,[注册][15]使用优惠码**OSDC**。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/3/getting-started-data-science
|
||||
|
||||
作者:[Payal Singh][a]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,10 +1,9 @@
|
||||
9 个提升开发者与设计师协作的方法
|
||||
======
|
||||
> 抛开成见,设计师和开发者的命运永远交织在一起。以下是让双方达成共识、保持同步的一些方法。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab1.png?itok=ULQdGjlV)
|
||||
|
||||
本文由我与 [Jason Porter][1] 共同完成。
|
||||
|
||||
在任何软件项目中,设计至关重要。设计师不像开发团队那样熟悉其内部工作,但迟早都要知道开发人员写代码的意图。
|
||||
|
||||
两边都有自己的成见。工程师经常认为设计师们古怪不理性,而设计师也认为工程师们死板要求高。在一天的工作快要结束时,情况会变得更加微妙。设计师和开发者们的命运永远交织在一起。
|
||||
@ -53,7 +52,7 @@
|
||||
|
||||
via: https://opensource.com/article/18/5/9-ways-improve-collaboration-developers-designers
|
||||
|
||||
作者:[Jason Brock][a]
|
||||
作者:[Jason Brock][a], [Jason Porter][1]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[LuuMing](https://github.com/LuuMing)
|
||||
校对:[pityonline](https://github.com/pityonline)
|
@ -0,0 +1,159 @@
|
||||
对 C++ 的忧虑?C++创始人警告:关于 C++ 的某些未来计划十分危险
|
||||
======
|
||||
|
||||
![](https://regmedia.co.uk/2018/06/15/shutterstock_38621860.jpg?x=442&y=293&crop=1)
|
||||
|
||||
今年早些时候,我们对 Bjarne Stroustrup 进行了采访。他是 C++ 语言的创始人,摩根士丹利技术部门的董事总经理,美国哥伦比亚大学计算机科学的客座教授。他写了[一封信][1],请那些关注编程语言进展的人去“想想瓦萨号!”
|
||||
|
||||
这句话对于丹麦人来说,毫无疑问,很容易理解。而那些对于 17 世纪的斯堪的纳维亚历史了解不多的人,还需要详细说明一下。瓦萨号是一艘瑞典军舰,由国王 Gustavus Adolphus 定做。它是当时波罗的海国家中最强大的军舰,但在 1628 年 8 月 10 日首航没几分钟之后就沉没了。
|
||||
|
||||
巨大的瓦萨号有一个难以解决的设计缺陷:头重脚轻,以至于它被[一阵狂风刮翻了][2]。通过援引这艘沉船的历史,Stroustrup 警示了 C++ 所面临的风险 —— 现在越来越多的特性被添加到了 C++ 中。
|
||||
|
||||
我们现在已经发现了好些能导致头重脚轻的特性。Stroustrup 在他的信中引用了 43 个提议。他认为那些参与 C++ 语言 ISO 标准演进的人(即所谓的 [WG21 小组][3])正在努力推进语言发展,但成员们的努力方向却并不一致。
|
||||
|
||||
在他的信中,他写道:
|
||||
|
||||
> 分开来看,许多提议都很有道理。但将它们综合到一起,这些提议是很愚蠢的,将危害 C++ 的未来。
|
||||
|
||||
他明确表示,他用瓦萨号作为比喻并不是说他认为不断提升会带来毁灭。我们应该吸取瓦萨号的教训,构建一个坚实的基础,从错误中学习并对新版本做彻底的测试。
|
||||
|
||||
在瑞士<ruby>拉普斯威尔<rt>Rapperswill</rt></ruby>召开 C++ 标准化委员会会议之后,本月早些时候,Stroustrup 接受了 *The Register* 的采访,回答了有关 C++ 语言下一步发展方向的几个问题。(最新版是去年刚发布的 C++17;下一个版本是 C++20,预计于 2020 年发布。)
|
||||
|
||||
*Register:*在您的信件《想想瓦萨号!》中,您写道:
|
||||
|
||||
> 在 C++11 开始的基础建设尚未完成,而 C++17 基本没有在使基础更加稳固、规范和完整方面做出改善。相反,却增加了重要接口的复杂度(原文为 surface complexity,直译“表面复杂度”),让人们需要学习的特性数量越来越多。C++ 可能在这种不成熟的提议的重压之下崩溃。我们不应该花费大量的时间为专家级用户们(比如我们自己)去创建越来越复杂的东西。
|
||||
|
||||
**对新人来说,C++ 过难了吗?如果是这样,您认为怎样的特性让新人更易理解?**
|
||||
|
||||
*Stroustrup:*C++ 的有些东西对于新人来说确实很具有挑战性。
|
||||
|
||||
另一方面而言,C++ 中有些东西对于新人来说,比起 C 或上世纪九十年代的 C++ 更容易理解了。而难点是让大型社区专注于这些部分,并且帮助新手和非专业的 C++ 用户去规避那些对高级库实现提供支持的部分。
|
||||
|
||||
我建议使用 [C++ 核心准则][4]作为实现上述目标的一个辅助。
|
||||
|
||||
此外,我的“C++ 教程”也可以帮助人们在使用现代 C++ 时走上正确的方向,而不会迷失在自上世纪九十年代以来的复杂性中,或困惑于只有专家级用户才能理解的东西中。这本即将出版的第二版的“C++ 教程”涵盖了 C++17 和部分 C++20 的内容。
|
||||
|
||||
我和其他人给没有编程经验的大一新生教过 C++,只要你不去深入编程语言的每个晦涩难懂的角落,把注意力集中到 C++ 中最主流的部分,就可以在三个月内学会 C++。
|
||||
|
||||
“让简单的东西保持简单”是我长期追求的目标。比如 C++11 的 `range-for` 循环:
|
||||
|
||||
```
|
||||
for (int& x : v) ++x; // increment each element of the container v
|
||||
```
|
||||
|
||||
`v` 的位置可以是任何容器。在 C 和 C 风格的 C++ 中,它可能看起来是这样:
|
||||
|
||||
```
|
||||
for (int i=0; i<MAX; i++) ++v[i]; // increment each element of the array v
|
||||
```
|
||||
|
||||
一些人抱怨说添加了 `range-for` 循环让 C++ 变得更复杂了,很显然,他们是正确的,因为它添加了一个新特性。但它却让 C++ 用起来更简单,而且同时它还消除了使用传统 `for` 循环时会出现的一些常见错误。
|
||||
|
||||
另一个例子是 C++11 的<ruby>标准线程库<rt>standard thread library</rt></ruby>。它比起使用 POSIX 或直接使用 Windows 的 C API 来说更简单,并且更不易出错。
|
||||
|
||||
*Register:***您如何看待 C++ 现在的状况?**
|
||||
|
||||
*Stroustrup:*C++11 中作出了许多重大改进,并且我们在 C++14 上全面完成了改进工作。C++17 添加了相当多的新特性,但是没有提供对新技术的很多支持。C++20 目前看上去可能会成为一个重大改进版。编译器的状况非常好,标准库实现得也很优秀,非常接近最新的标准。C++17 现在已经可以使用,对于工具的支持正在逐步推进。已经有了许多第三方的库和好些新工具。然而,不幸的是,这些东西不太好找到。
|
||||
|
||||
我在《想想瓦萨号!》一文中所表达的担忧与标准化过程有关,对新东西的过度热情与完美主义的组合推迟了重大改进。“追求完美往往事与愿违”。在六月份拉普斯威尔的会议上有 160 人参与;在这样一个数量庞大且多样化的人群中很难取得一致意见。专家们也本来就有只为自己设计语言的倾向,这让他们不会时常在设计时考虑整个社区的需求。
|
||||
|
||||
*Register:***C++ 是否有一个理想的状态,或者与之相反,您只是为了程序员们的期望而努力,随时适应并且努力满足程序员们的需要?**
|
||||
|
||||
*Stroustrup:*二者都有。我很乐意看到 C++ 支持彻底保证<ruby>类型安全<rt>type-safe</rt></ruby>和<ruby>资源安全<rt>resource-safe</rt></ruby>的编程方式。这不应该通过限制适用性或增加性能损耗来实现,而是应该通过改进的表达能力和更好的性能来实现。通过让程序员使用更好的(和更易用的)语言工具可以达到这个目标,我们可以做到的。
|
||||
|
||||
终极目标不会马上实现,也不会单靠语言设计来实现。为了实现这一目标,我们需要改进语言特性、提供更好的库和静态分析,并且设立提升编程效率的规则。C++ 核心准则是我为了提升 C++ 代码质量而实行的广泛而长期的计划的一部分。
|
||||
|
||||
*Register:***目前 C++ 是否面临着可以预见的风险?如果有,它是以什么形式出现的?(如,迭代过于缓慢,新兴低级语言,等等……据您的观点来看,似乎是提出的提议过多。)**
|
||||
|
||||
*Stroustrup:*就是这样。今年我们已经收到了 400 篇文章。当然了,它们并不都是新提议。许多提议都与规范语言和标准库这一必需而乏味的工作相关,但是量大到难以管理。你可以在 [WG21 网站][6]上找到所有这些文章。
|
||||
|
||||
我写了《想想瓦萨号!》这封信作为一个呼吁,因为这种为了解决即刻需求(或者赶时髦)而不断增添语言特性,却对巩固语言基础(比如,改善<ruby>静态类型系统<rt>static type system</rt></ruby>)不管不问的倾向让我感到震惊。增加的任何新东西,无论它多小都会产生成本,比如实现、学习、工具升级。重大的特性改变能够改变我们对编程的想法,而它们才是我们必须关注的东西。
|
||||
|
||||
委员会已经设立了一个“指导小组”,这个小组由在语言、标准库、实现、以及工程实践领域中拥有不错履历的人组成。我是其中的成员之一。我们负责为重点领域写[一些关于发展方向、设计理念和建议重点发展领域的东西][7]。
|
||||
|
||||
对于 C++20,我们建议去关注:
|
||||
|
||||
* 概念
|
||||
* 模块(适度地模块化并带来编译时的显著改进)
|
||||
* Ranges(包括一些无限序列的扩展)
|
||||
* 标准库中的网络概念
|
||||
|
||||
在拉普斯威尔会议之后,这些都有了实现的机会,虽然模块和网络化都不是会议的重点讨论对象。我是一个乐观主义者,并且委员会的成员们都非常努力。
|
||||
|
||||
我并不担心其它语言或新语言会取代它。我喜欢编程语言。如果一门新的语言提供了独一无二的、非常有用的东西,那它就是我们的榜样,我们可以向它学习。当然,每门语言本身都有一些问题。C++ 的许多问题都与它广泛的应用领域、大量的使用人群和过度的热情有关。大多数语言的社区都会有这样的问题。
|
||||
|
||||
*Register:***关于 C++ 您是否重新考虑过任何架构方面的决策?**
|
||||
|
||||
*Stroustrup:*当我着手规划新版本时,我经常反思原来的决策和设计。关于这些,可以看我的《编程的历史》论文第 [1][8]、[2][9] 部分。
|
||||
|
||||
并没有让我觉得很后悔的重大决策。如果我必须重新做一次,我觉得和以前做的不会有太大的不同。
|
||||
|
||||
与以前一样,能够直接处理硬件加上零开销的抽象是设计的指导思想。使用<ruby>构造函数<rt>constructor</rt></ruby>和<ruby>析构函数<rt>destructor</rt></ruby>去处理资源是关键(<ruby>资源获取即初始化<rt>Resource Acquisition Is Initialization</rt></ruby>,RAII);<ruby>标准模板库<rt>Standard Template Library</rt></ruby>(STL) 就是解释 C++ 库能够做什么的一个很好的例子。
|
||||
|
||||
*Register:***在 2011 年被采纳的每三年发布一个新版本的节奏是否仍然有效?我之所以这样问是因为 Java 已经决定更快地迭代。**
|
||||
|
||||
*Stroustrup:*我认为 C++20 将会按时发布(就像 C++14 和 C++17 那样),并且主流的编译器也会立即采用它。我也希望 C++20 基于 C++17 能有重大的改进。
|
||||
|
||||
对于其它语言如何管理它们的版本,我并不十分关心。C++ 是由一个遵循 ISO 规则的委员会来管理的,而不是由某个大公司或某种“<ruby>终生的仁慈独裁者<rt>Beneficial Dictator Of Life</rt></ruby>(BDOL)”来管理。这一点不会改变。C++ 每三年发布一次的周期在 ISO 标准中是一个引人注目的创举。通常而言,周期应该是 5 或 10 年。
|
||||
|
||||
*Register:***在您的信中您写道:**
|
||||
|
||||
> 我们需要一个能够被“普通程序员”使用的,条理还算清楚的编程语言。他们主要关心的是,能否按时高质量地交付他们的应用程序。
|
||||
|
||||
改进语言能够解决这个问题吗?或者,我们还需要更容易获得的工具和教育支持?
|
||||
|
||||
*Stroustrup:*我尽力宣传我关于 C++ 的实质和使用方式的[理念][10],并且我鼓励其他人也和我采取相同的行动。
|
||||
|
||||
特别是,我鼓励讲师和作者们向 C++ 程序员们提出有用的建议,而不是去示范复杂的示例和技术来展示他们自己有多高明。我在 2017 年的 CppCon 大会上的演讲主题就是“学习和传授 C++”,并且也指出,我们需要更好的工具。
|
||||
|
||||
我在演讲中提到了构建技术支持和包管理器,这些历来都是 C++ 的弱项。标准化委员会现在有一个工具研究小组,或许不久的将来也会组建一个教育研究小组。
|
||||
|
||||
C++ 的社区以前是十分无组织性的,但是在过去的五年里,为了满足社区对新闻和技术支持的需要,出现了很多集会和博客。CppCon、isocpp.org、以及 Meeting++ 就是一些例子。
|
||||
|
||||
在一个庞大的委员会中做语言标准设计是很困难的。但是,对于所有的大型项目来说,委员会又是必不可少的。我很忧虑,但是关注它们并且面对问题是成功的必要条件。
|
||||
|
||||
*Register:***您如何看待 C++ 社区的流程?在沟通和决策方面你希望看到哪些变化?**
|
||||
|
||||
*Stroustrup:*C++ 并没有企业管理一般的“社区流程”;它所遵循的是 ISO 标准流程。我们不能对 ISO 的条例做大的改变。理想的情况是,我们设立一个小型的、全职的“秘书处”来做最终决策和方向管理,但这种理想情况是不会出现的。相反,我们有成百上千的人在线讨论,大约有 160 人在技术问题上进行投票,大约有 70 组织和 11 个国家的人在最终提议上正式投票。这样很混乱,但是有些时候它的确能发挥作用。
|
||||
|
||||
*Register:***在最后,您认为那些即将推出的 C++ 特性中,对 C++ 用户最有帮助的是哪些?**
|
||||
|
||||
*Stroustrup:*
|
||||
|
||||
* 那些能让编程显著变简单的概念。
|
||||
* <ruby>并行算法<rt>Parallel algorithms</rt></ruby> —— 如果要使用现代硬件的并发特性的话,这方法再简单不过了。
|
||||
* <ruby>协程<rt>Coroutines</rt></ruby>,如果委员会能够确定在 C++20 上推出。
|
||||
* 改进了组织源代码方式的,并且大幅改善了编译时间的模块。我希望能有这样的模块,但是还没办法确定我们能不能在 C++20 上推出。
|
||||
* 一个标准的网络库,但是还没办法确定我们能否在 C++20 上推出。
|
||||
|
||||
此外:
|
||||
|
||||
* Contracts(运行时检查的先决条件、后置条件、和断言)可能对许多人都非常重要。
|
||||
* date 和 time-zone 支持库可能对许多人(行业)非常重要。
|
||||
|
||||
*Register:***您还有想对读者们说的话吗?**
|
||||
|
||||
*Stroustrup:*如果 C++ 标准化委员会能够专注于重大问题,去解决重大问题,那么 C++20 将会非常优秀。但是在 C++20 推出之前,我们还有 C++17;无论如何,它仍然远超许多人对 C++ 的旧印象。®
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.theregister.co.uk/2018/06/18/bjarne_stroustrup_c_plus_plus/
|
||||
|
||||
作者:[Thomas Claburn][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[thecyanbird](https://github.com/thecyanbird)、[Northurland](https://github.com/Northurland)、[pityonline](https://github.com/pityonline)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.theregister.co.uk/Author/3190
|
||||
[1]: http://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0977r0.pdf
|
||||
[2]: https://www.vasamuseet.se/en/vasa-history/disaster
|
||||
[3]: http://open-std.org/JTC1/SC22/WG21/
|
||||
[4]: https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md
|
||||
[5]: https://go.theregister.co.uk/tl/1755/shttps://continuouslifecycle.london/
|
||||
[6]: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/
|
||||
[7]: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0939r0.pdf
|
||||
[8]: http://www.stroustrup.com/hopl-almost-final.pdf
|
||||
[9]: http://www.stroustrup.com/hopl2.pdf
|
||||
[10]: http://www.stroustrup.com/papers.html
|
@ -3,17 +3,18 @@
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/07/filesystem-720x340.png)
|
||||
|
||||
如你所知,Linux 支持非常多的文件系统,例如 Ext4、ext3、ext2、sysfs、securityfs、FAT16、FAT32、NTFS 等等,当前被使用最多的文件系统是 Ext4。你曾经疑惑过你的 Linux 系统使用的是什么类型的文件系统吗?没有疑惑过?不用担心!我们将帮助你。本指南将解释如何在类 Unix 的操作系统中查看已挂载的文件系统类型。
|
||||
如你所知,Linux 支持非常多的文件系统,例如 ext4、ext3、ext2、sysfs、securityfs、FAT16、FAT32、NTFS 等等,当前被使用最多的文件系统是 ext4。你曾经疑惑过你的 Linux 系统使用的是什么类型的文件系统吗?没有疑惑过?不用担心!我们将帮助你。本指南将解释如何在类 Unix 的操作系统中查看已挂载的文件系统类型。
|
||||
|
||||
### 在 Linux 中查看已挂载的文件系统类型
|
||||
|
||||
有很多种方法可以在 Linux 中查看已挂载的文件系统类型,下面我将给出 8 种不同的方法。那现在就让我们开始吧!
|
||||
|
||||
#### 方法 1 – 使用 `findmnt` 命令
|
||||
#### 方法 1 – 使用 findmnt 命令
|
||||
|
||||
这是查出文件系统类型最常使用的方法。**findmnt** 命令将列出所有已挂载的文件系统或者搜索出某个文件系统。`findmnt` 命令能够在 `/etc/fstab`、`/etc/mtab` 或 `/proc/self/mountinfo` 这几个文件中进行搜索。
|
||||
这是查出文件系统类型最常使用的方法。`findmnt` 命令将列出所有已挂载的文件系统或者搜索出某个文件系统。`findmnt` 命令能够在 `/etc/fstab`、`/etc/mtab` 或 `/proc/self/mountinfo` 这几个文件中进行搜索。
|
||||
|
||||
`findmnt` 预装在大多数的 Linux 发行版中,因为它是 `util-linux` 包的一部分。如果 `findmnt` 命令不可用,你可以安装这个软件包。例如,你可以使用下面的命令在基于 Debian 的系统中安装 `util-linux` 包:
|
||||
|
||||
`findmnt` 预装在大多数的 Linux 发行版中,因为它是 **util-linux** 包的一部分。为了防止 `findmnt` 命令不可用,你可以安装这个软件包。例如,你可以使用下面的命令在基于 Debian 的系统中安装 **util-linux** 包:
|
||||
```
|
||||
$ sudo apt install util-linux
|
||||
```
|
||||
@ -21,24 +22,27 @@ $ sudo apt install util-linux
|
||||
下面让我们继续看看如何使用 `findmnt` 来找出已挂载的文件系统。
|
||||
|
||||
假如你只敲 `findmnt` 命令而不带任何的参数或选项,它将像下面展示的那样以树状图形式列举出所有已挂载的文件系统。
|
||||
|
||||
```
|
||||
$ findmnt
|
||||
```
|
||||
|
||||
**示例输出:**
|
||||
示例输出:
|
||||
|
||||
![][2]
|
||||
|
||||
正如你看到的那样,`findmnt` 展示出了目标挂载点(TARGET)、源设备(SOURCE)、文件系统类型(FSTYPE)以及相关的挂载选项(OPTIONS),例如文件系统是否是可读可写或者只读的。以我的系统为例,我的根(`/`)文件系统的类型是 EXT4 。
|
||||
正如你看到的那样,`findmnt` 展示出了目标挂载点(`TARGET`)、源设备(`SOURCE`)、文件系统类型(`FSTYPE`)以及相关的挂载选项(`OPTIONS`),例如文件系统是否是可读可写或者只读的。以我的系统为例,我的根(`/`)文件系统的类型是 EXT4 。
|
||||
|
||||
假如你不想以树状图的形式来展示输出,可以使用 `-l` 选项来以简单平凡的形式来展示输出:
|
||||
|
||||
假如你不想以树状图的形式来展示输出,可以使用 **-l** 选项来以简单平凡的形式来展示输出:
|
||||
```
|
||||
$ findmnt -l
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
你还可以使用 **-t** 选项来列举出特定类型的文件系统,例如下面展示的 **ext4** 文件系统类型:
|
||||
你还可以使用 `-t` 选项来列举出特定类型的文件系统,例如下面展示的 `ext4` 文件系统类型:
|
||||
|
||||
```
|
||||
$ findmnt -t ext4
|
||||
TARGET SOURCE FSTYPE OPTIONS
|
||||
@ -47,15 +51,18 @@ TARGET SOURCE FSTYPE OPTIONS
|
||||
```
|
||||
|
||||
`findmnt` 还可以生成 `df` 类型的输出,使用命令
|
||||
|
||||
```
|
||||
$ findmnt --df
|
||||
```
|
||||
|
||||
或
|
||||
|
||||
```
|
||||
$ findmnt -D
|
||||
```
|
||||
|
||||
**示例输出:**
|
||||
示例输出:
|
||||
|
||||
```
|
||||
SOURCE FSTYPE SIZE USED AVAIL USE% TARGET
|
||||
@ -75,6 +82,7 @@ gvfsd-fuse fuse.gvfsd-fuse 0 0 0 - /run/user/1000/gvfs
|
||||
你还可以展示某个特定设备或者挂载点的文件系统类型。
|
||||
|
||||
查看某个特定的设备:
|
||||
|
||||
```
|
||||
$ findmnt /dev/sda1
|
||||
TARGET SOURCE FSTYPE OPTIONS
|
||||
@ -82,6 +90,7 @@ TARGET SOURCE FSTYPE OPTIONS
|
||||
```
|
||||
|
||||
查看某个特定的挂载点:
|
||||
|
||||
```
|
||||
$ findmnt /
|
||||
TARGET SOURCE FSTYPE OPTIONS
|
||||
@ -89,34 +98,38 @@ TARGET SOURCE FSTYPE OPTIONS
|
||||
```
|
||||
|
||||
你甚至还可以查看某个特定标签的文件系统的类型:
|
||||
|
||||
```
|
||||
$ findmnt LABEL=Storage
|
||||
```
|
||||
|
||||
更多详情,请参考其 man 手册。
|
||||
|
||||
```
|
||||
$ man findmnt
|
||||
```
|
||||
|
||||
`findmnt` 命令已足够完成在 Linux 中查看已挂载文件系统类型的任务,这个命令就是为了这个特定任务而生的。然而,还存在其他方法来查看文件系统的类型,假如你感兴趣的话,请接着让下看。
|
||||
`findmnt` 命令已足够完成在 Linux 中查看已挂载文件系统类型的任务,这个命令就是为了这个特定任务而生的。然而,还存在其他方法来查看文件系统的类型,假如你感兴趣的话,请接着往下看。
|
||||
|
||||
#### 方法 2 – 使用 `blkid` 命令
|
||||
#### 方法 2 – 使用 blkid 命令
|
||||
|
||||
**blkid** 命令被用来查找和打印块设备的属性。它也是 **util-linux** 包的一部分,所以你不必再安装它。
|
||||
`blkid` 命令被用来查找和打印块设备的属性。它也是 `util-linux` 包的一部分,所以你不必再安装它。
|
||||
|
||||
为了使用 `blkid` 命令来查看某个文件系统的类型,可以运行:
|
||||
|
||||
```
|
||||
$ blkid /dev/sda1
|
||||
```
|
||||
|
||||
#### 方法 3 – 使用 `df` 命令
|
||||
#### 方法 3 – 使用 df 命令
|
||||
|
||||
在类 Unix 的操作系统中,`df` 命令被用来报告文件系统的磁盘空间使用情况。为了查看所有已挂载文件系统的类型,只需要运行:
|
||||
|
||||
在类 Unix 的操作系统中, **df** 命令被用来报告文件系统的磁盘空间使用情况。为了查看所有已挂载文件系统的类型,只需要运行:
|
||||
```
|
||||
$ df -T
|
||||
```
|
||||
|
||||
**示例输出:**
|
||||
示例输出:
|
||||
|
||||
![][4]
|
||||
|
||||
@ -125,15 +138,17 @@ $ df -T
|
||||
- [针对新手的 df 命令教程](https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/)
|
||||
|
||||
同样也可以参考其 man 手册:
|
||||
|
||||
```
|
||||
$ man df
|
||||
```
|
||||
|
||||
#### 方法 4 – 使用 `file` 命令
|
||||
#### 方法 4 – 使用 file 命令
|
||||
|
||||
**file** 命令可以判读出某个特定文件的类型,即便该文件没有文件后缀名也同样适用。
|
||||
`file` 命令可以判读出某个特定文件的类型,即便该文件没有文件后缀名也同样适用。
|
||||
|
||||
运行下面的命令来找出某个特定分区的文件系统类型:
|
||||
|
||||
```
|
||||
$ sudo file -sL /dev/sda1
|
||||
[sudo] password for sk:
|
||||
@ -141,13 +156,14 @@ $ sudo file -sL /dev/sda1
|
||||
```
|
||||
|
||||
查看其 man 手册可以知晓更多细节:
|
||||
|
||||
```
|
||||
$ man file
|
||||
```
|
||||
|
||||
#### 方法 5 – 使用 `fsck` 命令
|
||||
#### 方法 5 – 使用 fsck 命令
|
||||
|
||||
**fsck** 命令被用来检查某个文件系统是否健全或者修复它。你可以像下面那样通过将分区名字作为 `fsck` 的参数来查看该分区的文件系统类型:
|
||||
`fsck` 命令被用来检查某个文件系统是否健全或者修复它。你可以像下面那样通过将分区名字作为 `fsck` 的参数来查看该分区的文件系统类型:
|
||||
|
||||
```
|
||||
$ fsck -N /dev/sda1
|
||||
@ -156,15 +172,17 @@ fsck from util-linux 2.32
|
||||
```
|
||||
|
||||
如果想知道更多的内容,请查看其 man 手册:
|
||||
|
||||
```
|
||||
$ man fsck
|
||||
```
|
||||
|
||||
#### 方法 6 – 使用 `fstab` 命令
|
||||
#### 方法 6 – 使用 fstab 命令
|
||||
|
||||
**fstab** 是一个包含文件系统静态信息的文件。这个文件通常包含了挂载点、文件系统类型和挂载选项等信息。
|
||||
`fstab` 是一个包含文件系统静态信息的文件。这个文件通常包含了挂载点、文件系统类型和挂载选项等信息。
|
||||
|
||||
要查看某个文件系统的类型,只需要运行:
|
||||
|
||||
```
|
||||
$ cat /etc/fstab
|
||||
```
|
||||
@ -172,15 +190,17 @@ $ cat /etc/fstab
|
||||
![][5]
|
||||
|
||||
更多详情,请查看其 man 手册:
|
||||
|
||||
```
|
||||
$ man fstab
|
||||
```
|
||||
|
||||
#### 方法 7 – 使用 `lsblk` 命令
|
||||
#### 方法 7 – 使用 lsblk 命令
|
||||
|
||||
**lsblk** 命令可以展示设备的信息。
|
||||
`lsblk` 命令可以展示设备的信息。
|
||||
|
||||
要展示已挂载文件系统的信息,只需运行:
|
||||
|
||||
```
|
||||
$ lsblk -f
|
||||
NAME FSTYPE LABEL UUID MOUNTPOINT
|
||||
@ -193,15 +213,17 @@ sr0
|
||||
```
|
||||
|
||||
更多细节,可以参考它的 man 手册:
|
||||
|
||||
```
|
||||
$ man lsblk
|
||||
```
|
||||
|
||||
#### 方法 8 – 使用 `mount` 命令
|
||||
#### 方法 8 – 使用 mount 命令
|
||||
|
||||
**mount** 被用来在类 Unix 系统中挂载本地或远程的文件系统。
|
||||
`mount` 被用来在类 Unix 系统中挂载本地或远程的文件系统。
|
||||
|
||||
要使用 `mount` 命令查看文件系统的类型,可以像下面这样做:
|
||||
|
||||
```
|
||||
$ mount | grep "^/dev"
|
||||
/dev/sda2 on / type ext4 (rw,relatime,commit=360)
|
||||
@ -209,6 +231,7 @@ $ mount | grep "^/dev"
|
||||
```
|
||||
|
||||
更多详情,请参考其 man 手册的内容:
|
||||
|
||||
```
|
||||
$ man mount
|
||||
```
|
||||
@ -224,7 +247,7 @@ via: https://www.ostechnix.com/how-to-find-the-mounted-filesystem-type-in-linux/
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,34 +1,32 @@
|
||||
如何在 Ubuntu 服务器中禁用终端欢迎消息中的广告
|
||||
如何禁用 Ubuntu 服务器中终端欢迎消息中的广告
|
||||
======
|
||||
|
||||
如果你正在使用最新的 Ubuntu 服务器版本,你可能已经注意到欢迎消息中有一些与 Ubuntu 服务器平台无关的促销链接。你可能已经知道 **MOTD**,即 **M**essage **O**f **T**he **D**ay 的开头首字母,在 Linux 系统每次登录时都会显示欢迎信息。通常,欢迎消息包含操作系统版本,基本系统信息,官方文档链接以及有关最新安全更新等的链接。这些是我们每次通过 SSH 或本地登录时通常会看到的内容。但是,最近在终端欢迎消息中出现了一些其他链接。我已经几次注意到这些链接,但我并在意,也从未点击过。以下是我的 Ubuntu 18.04 LTS 服务器上显示的终端欢迎消息。
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2018/08/Ubuntu-Terminal-welcome-message.png)
|
||||
|
||||
正如你在上面截图中所看到的,欢迎消息中有一个 bit.ly 链接和 Ubuntu wiki 链接。有些人可能会惊讶并想知道这是什么。其实欢迎信息中的链接无需担心。它可能看起来像广告,但并不是商业广告。链接实际上指的是 [**Ubuntu 官方博客**][1] 和 [**Ubuntu wiki**][2]。正如我之前所说,其中的一个链接是不相关的,没有任何与 Ubuntu 服务器相关的细节,这就是为什么我开头称它们为广告。
|
||||
(to 校正:这里是其中一个链接不相关还是两个链接都不相关)
|
||||
正如你在上面截图中所看到的,欢迎消息中有一个 bit.ly 链接和 Ubuntu wiki 链接。有些人可能会惊讶并想知道这是什么。其实欢迎信息中的链接无需担心。它可能看起来像广告,但并不是商业广告。链接实际上指向到了 [Ubuntu 官方博客][1] 和 [Ubuntu wiki][2]。正如我之前所说,其中的一个链接是不相关的,没有任何与 Ubuntu 服务器相关的细节,这就是为什么我开头称它们为广告。
|
||||
|
||||
虽然我们大多数人都不会访问 bit.ly 链接,但是有些人可能出于好奇去访问这些链接,结果失望地发现它只是指向一个外部链接。你可以使用任何 URL 短网址服务,例如 unshorten.it,在访问真正链接之前,查看它会指向哪里。或者,你只需在 bit.ly 链接的末尾输入加号(**+**)即可查看它们的实际位置以及有关链接的一些统计信息。
|
||||
虽然我们大多数人都不会访问 bit.ly 链接,但是有些人可能出于好奇去访问这些链接,结果失望地发现它只是指向一个外部链接。你可以使用任何短网址还原服务(例如 unshorten.it),在访问真正链接之前,查看它会指向哪里。或者,你只需在 bit.ly 链接的末尾输入加号(`+`)即可查看它们的实际位置以及有关链接的一些统计信息。
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2018/08/shortlink.png)
|
||||
|
||||
### 什么是 MOTD 以及它是如何工作的?
|
||||
|
||||
2009 年,来自 Canonical 的 **Dustin Kirkland** 在 Ubuntu 中引入了 MOTD 的概念。它是一个灵活的框架,使管理员或发行包能够在 /etc/update-motd.d/* 位置添加可执行脚本,目的是生成在登录时显示有益的,有趣的消息。它最初是为 Landscape(Canonical 的商业服务)实现的,但是其它发行版维护者发现它很有用,并且在他们自己的发行版中也采用了这个特性。
|
||||
2009 年,来自 Canonical 的 Dustin Kirkland 在 Ubuntu 中引入了 MOTD 的概念。它是一个灵活的框架,使管理员或发行包能够在 `/etc/update-motd.d/` 位置添加可执行脚本,目的是生成在登录时显示有益的、有趣的消息。它最初是为 Landscape(Canonical 的商业服务)实现的,但是其它发行版维护者发现它很有用,并且在他们自己的发行版中也采用了这个特性。
|
||||
|
||||
如果你在 Ubuntu 系统中查看 **/etc/update-motd.d/**,你会看到一组脚本。一个是打印通用的 “ Welcome” 横幅。下一个打印 3 个链接,显示在哪里可以找到操作系统的帮助。另一个计算并显示本地系统包可以更新的数量。另一个脚本告诉你是否需要重新启动等等。
|
||||
如果你在 Ubuntu 系统中查看 `/etc/update-motd.d/`,你会看到一组脚本。一个是打印通用的 “欢迎” 横幅。下一个打印 3 个链接,显示在哪里可以找到操作系统的帮助。另一个计算并显示本地系统包可以更新的数量。另一个脚本告诉你是否需要重新启动等等。
|
||||
|
||||
从 Ubuntu 17.04 起,开发人员添加了 **/etc/update-motd.d/50-motd-news**,这是一个脚本用来在欢迎消息中包含一些附加信息。这些附加信息是:
|
||||
|
||||
1. 重要的关键信息,例如 ShellShock, Heartbleed 等
|
||||
从 Ubuntu 17.04 起,开发人员添加了 `/etc/update-motd.d/50-motd-news`,这是一个脚本用来在欢迎消息中包含一些附加信息。这些附加信息是:
|
||||
|
||||
1. 重要的关键信息,例如 ShellShock、Heartbleed 等
|
||||
2. 生命周期(EOL)消息,新功能可用性等
|
||||
|
||||
3. 在 Ubuntu 官方博客和其他有关 Ubuntu 的新闻中发布的一些有趣且有益的帖子
|
||||
|
||||
另一个特点是异步,启动后约 60 秒,systemd 计时器运行 “/etc/update-motd.d/50-motd-news –force” 脚本。它提供了 /etc/default/motd-news 脚本中定义的 3 个配置变量。默认值为:ENABLED=1, URLS=”<https://motd.ubuntu.com”>, WAIT=”5″。
|
||||
另一个特点是它是异步执行的:启动后约 60 秒,systemd 计时器会运行 `/etc/update-motd.d/50-motd-news --force` 脚本。它会读取 `/etc/default/motd-news` 文件中定义的 3 个配置变量。默认值为:`ENABLED=1, URLS="https://motd.ubuntu.com", WAIT="5"`。
|
||||
|
||||
以下是 `/etc/default/motd-news` 文件的内容:
|
||||
|
||||
以下是 /etc/default/motd-news 文件的内容:
|
||||
```
|
||||
$ cat /etc/default/motd-news
|
||||
# Enable/disable the dynamic MOTD news service
|
||||
@ -50,20 +48,20 @@ URLS="https://motd.ubuntu.com"
|
||||
# Note that news messages are fetched in the background by
|
||||
# a systemd timer, so this should never block boot or login
|
||||
WAIT=5
|
||||
|
||||
```
|
||||
|
||||
好事情是 MOTD 是完全可定制的,所以你可以彻底禁用它(ENABLED=0),根据你的意愿更改或添加脚本,并以秒为单位更改等待时间。
|
||||
好事情是 MOTD 是完全可定制的,所以你可以彻底禁用它(`ENABLED=0`)、根据你的意愿更改或添加脚本、以秒为单位更改等待时间等等。
|
||||
|
||||
如果启用了 MOTD,那么 systemd 计时器作业将循环遍历每个 URL,将它们缩减到每行 80 个字符,最多 10 行,并将它们连接(to 校正:也可能是链接?)到 /var/cache/motd-news 中的缓存文件。此 systemd 计时器作业将每隔 12 小时运行并更新 /var/cache/motd-news。用户登录后,/var/cache/motd-news 的内容会打印到屏幕上。这就是 MOTD 的工作原理。
|
||||
如果启用了 MOTD,那么 systemd 计时器作业将循环遍历每个 URL,将它们的内容缩减到每行 80 个字符、最多 10 行,并将它们连接到 `/var/cache/motd-news` 中的缓存文件。此 systemd 计时器作业将每隔 12 小时运行并更新 `/var/cache/motd-news`。用户登录后,`/var/cache/motd-news` 的内容会打印到屏幕上。这就是 MOTD 的工作原理。
|
||||
|
||||
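如果想自己验证这套机制,可以手动触发一次该脚本,再查看缓存文件和对应的 systemd 计时器(以下命令仅为示意,脚本与缓存路径以你系统上的实际文件为准):

```
$ sudo /etc/update-motd.d/50-motd-news --force
$ cat /var/cache/motd-news
$ systemctl list-timers | grep -i motd
```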
此外,`/etc/update-motd.d/50-motd-news` 文件中包含自定义的用户代理字符串,以报告有关计算机的信息。如果你查看 `/etc/update-motd.d/50-motd-news` 文件,你会看到:
|
||||
|
||||
此外,**/etc/update-motd.d/50-motd-news** 文件中包含自定义用户代理字符串,以报告有关计算机的信息。如果你查看 **/etc/update-motd.d/50-motd-news** 文件,你会看到
|
||||
```
|
||||
# Piece together the user agent
|
||||
USER_AGENT="curl/$curl_ver $lsb $platform $cpu $uptime"
|
||||
```
|
||||
|
||||
这意味着,MOTD 检索器将向 Canonical 报告你的**操作系统版本**,**硬件平台**,**CPU 类型**和**正常运行时间**。
|
||||
这意味着,MOTD 检索器将向 Canonical 报告你的操作系统版本、硬件平台、CPU 类型和正常运行时间。
|
||||
|
||||
到这里,希望你对 MOTD 有了一个基本的了解。
|
||||
|
||||
@ -72,11 +70,13 @@ USER_AGENT="curl/$curl_ver $lsb $platform $cpu $uptime"
|
||||
### 在 Ubuntu 服务器中禁用终端欢迎消息中的广告
|
||||
|
||||
要禁用这些广告,编辑文件:
|
||||
|
||||
```
|
||||
$ sudo vi /etc/default/motd-news
|
||||
```
|
||||
|
||||
找到以下行并将其值设置为 0(零)。
|
||||
找到以下行并将其值设置为 `0`(零)。
|
||||
|
||||
```
|
||||
[...]
|
||||
ENABLED=0
|
||||
@ -101,7 +101,7 @@ via: https://www.ostechnix.com/how-to-disable-ads-in-terminal-welcome-message-in
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,12 +1,13 @@
|
||||
推动 DevOps 变革的三个方面
|
||||
======
|
||||
推动大规模的组织变革是一个痛苦的过程。对于 DevOps 来说,尽管也有阵痛,但变革带来的价值则相当可观。
|
||||
|
||||
> 推动大规模的组织变革是一个痛苦的过程。对于 DevOps 来说,尽管也有阵痛,但变革带来的价值则相当可观。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ)
|
||||
|
||||
避免痛苦是一种强大的动力。一些研究表明,[植物也会通过遭受疼痛的过程][1]以采取措施来保护自己。我们人类有时也会刻意让自己受苦——在剧烈运动之后,身体可能会发生酸痛,但我们仍然坚持运动。那是因为当人认为整个过程利大于弊时,几乎可以忍受任何事情。
|
||||
|
||||
推动大规模的组织变革得过程确实是痛苦的。有人可能会因难以改变价值观和行为而感到痛苦,有人可能会因难以带领团队而感到痛苦,也有人可能会因难以开展工作而感到痛苦。但就 DevOps 而言,我可以说这些痛苦都是值得的。
|
||||
推动大规模的组织变革的过程确实是痛苦的。有人可能会因难以改变价值观和行为而感到痛苦,有人可能会因难以带领团队而感到痛苦,也有人可能会因难以开展工作而感到痛苦。但就 DevOps 而言,我可以说这些痛苦都是值得的。
|
||||
|
||||
我也曾经关注过一个团队耗费大量时间优化技术流程的过程,在这个过程中,团队逐渐将流程进行自动化改造,并最终获得了成功。
|
||||
|
||||
@ -14,60 +15,64 @@
|
||||
|
||||
图片来源:Lee Eason. CC BY-SA 4.0
|
||||
|
||||
这张图表充分表明了变革的价值。一家公司在我主导实行了 DevOps 转型之后,60 多个团队每月提交了超过 900 个发布请求。这些工作量的原耗时高达每个月 350 天,而这么多的工作量对于任何公司来说都是不可忽视的。除此以外,他们每月的部署次数从 100 次增加到了 9000 次,高危 bug 减少了 24%,工程师们更轻松了,<ruby>净推荐值<rt>Net Promoter Score</rt></ruby>(NPS)也提高了,而 NPS 提高反过来也让团队的 DevOps 转型更加顺利。正如 [Puppet 发布的 DevOps 报告][4]所预测的,用在技术流程改进上的投资可以在业务成果上明显地体现出来。
|
||||
这张图表充分表明了变革的价值。一家公司在我主导实行了 DevOps 转型之后,60 多个团队每月提交了超过 900 个发布请求。这些工作量的原耗时高达每个月 350 人/天,而这么多的工作量对于任何公司来说都是不可忽视的。除此以外,他们每月的部署次数从 100 次增加到了 9000 次,高危 bug 减少了 24%,工程师们更轻松了,<ruby>净推荐值<rt>Net Promoter Score</rt></ruby>(NPS)也提高了,而 NPS 提高反过来也让团队的 DevOps 转型更加顺利。正如 [Puppet 发布的 DevOps 报告][4]所预测的,用在技术流程改进上的投入可以在业务成果上明显地体现出来。
|
||||
|
||||
而 DevOps 主导者在推动变革是必须关注这三个方面:团队管理,团队文化和团队活力。
|
||||
而 DevOps 主导者在推动变革时必须关注这三个方面:团队管理,团队文化和团队活力。
|
||||
|
||||
### 团队管理
|
||||
|
||||
最重要的是,改进对技术流程的投入可以转化为更好的业务成果。
|
||||
|
||||
组织架构越大,业务领导与一线员工之间的距离就会越大,当然发生误解的可能性也会越大。而且各种技术工具和实际应用都在以日新月异的速度变化,这就导致业务领导几乎不可能对 DevOps 或敏捷开发的转型方向有一个亲身的了解。
|
||||
|
||||
DevOps 主导者必须和管理层密切合作,在进行决策的时候给出相关的意见,以帮助他们做出正确的决策。
|
||||
|
||||
公司的管理层只是知道 DevOps 会对产品部署的方式进行改进,而并不了解其中的具体过程。当管理层发现你在和软件团队执行自动化部署失败时,就会想要了解这件事情的细节。如果管理层了解到进行部署的是软件团队而不是专门的发布管理团队,就可能会坚持使用传统的变更流程来保证业务的正常运作。你可能会失去团队的信任,团队也可能不愿意作出进一步的改变。
|
||||
公司的管理层只是知道 DevOps 会对产品部署的方式进行改进,而并不了解其中的具体过程。假设你正在帮助一个软件开发团队实现自动化部署,当管理层得知某次部署失败时(这种情况是有的),就会想要了解这件事情的细节。如果管理层了解到进行部署的是软件团队而不是专门的发布管理团队,就可能会坚持使用传统的变更流程来保证业务的正常运作。你可能会失去团队的信任,团队也可能不愿意做出进一步的改变。
|
||||
|
||||
如果没有和管理层做好心理上的预期,一旦发生意外的生产事件,都会对你和管理层之间的信任造成难以消除的影响。所以,最好事先和管理层之间在各方面协调好,这会让你在后续的工作中避免很多麻烦。
|
||||
如果没有和管理层做好心理上的预期,一旦发生意外的生产事件,重建管理层的信任并得到他们的支持比事先对他们进行教育需要更长的时间。所以,最好事先和管理层在各方面协调好,这会让你在后续的工作中避免很多麻烦。
|
||||
|
||||
对于和管理层之间的协调,这里有两条建议:
|
||||
|
||||
* 一是**重视所有规章制度**。如果管理层对合同、安全等各方面有任何疑问,你都可以向法务或安全负责人咨询,这样做可以避免犯下后果严重的错误。
|
||||
* 二是**将管理层的重点关注的方面输出为量化指标**。举个例子,如果公司的目标是减少客户流失,而你调查得出计划外的停机是造成客户流失的主要原因,那么就可以让团队对故障的<ruby>平均检测时间<rt>Mean Time To Detection</rt></ruby>(MTTD)和<ruby>平均解决时间<rt>Mean Time To Resolution</rt></ruby>(MTTR)实行重点优化。你可以使用这些关键指标来量化团队的工作成果,而管理层对此也可以有一个直观的了解。
|
||||
|
||||
|
||||
* 一是**重视所有规章制度**。如果管理层对合同、安全等各方面有任何疑问,你都可以向法务或安全负责人咨询,这样做可以避免犯下后果严重的错误。
|
||||
* 二是**将管理层重点关注的方面输出为量化指标**。举个例子,如果公司的目标是减少客户流失,而你调查得出计划外的服务宕机是造成客户流失的主要原因,那么就可以让团队对故障的<ruby>平均排查时间<rt>Mean Time To Detection</rt></ruby>(MTTD)和<ruby>平均解决时间<rt>Mean Time To Resolution</rt></ruby>(MTTR)实行重点优化。你可以使用这些关键指标来量化团队的工作成果,而管理层对此也可以有一个直观的了解。
|
||||
|
||||
### 团队文化
|
||||
|
||||
DevOps 是一种专注于持续改进代码、构建、部署和操作流程的文化,而团队文化代表了团队的价值观和行为。从本质上说,团队文化是要塑造团队成员的行为方式,而这并不是一件容易的事。
|
||||
|
||||
我推荐一本叫做《[披着狼皮的 CIO][5]》的书。另外,研究心理学、阅读《[Drive][6]》、观看 Daniel Pink 的 [TED 演讲][7]、阅读《[千面英雄][7]》、了解每个人的心路历程,以上这些都是你推动公司技术变革所应该尝试去做的事情。
|
||||
我推荐一本叫做《[披着狼皮的 CIO][5]》的书。另外,研究心理学、阅读《[Drive][6]》、观看 Daniel Pink 的 [TED 演讲][7]、阅读《[千面英雄][7]》、了解每个人的心路历程,以上这些都是你推动公司技术变革所应该尝试去做的事情。如果这些你都没兴趣,说明你不是那个推动公司变革的人。如果你想成为那个人,那就开始学习吧!
|
||||
|
||||
理性的人大多都按照自己的价值观工作,然而团队通常没有让每个人都能达成共识的明确价值观。因此,你需要明确团队目前的价值观,包括价值观的形成过程和价值观的目标导向。也不能将这些价值观强加到团队成员身上,只需要让团队成员在目前的硬件条件下力所能及地做到最好就可以了
|
||||
从本质上说,改变一个人真不是件容易的事。
|
||||
|
||||
同时需要向团队成员阐明,公司正在发生组织上的变化,团队的价值观也随之改变,最好也厘清整个过程中将会作出什么变化。例如,公司以往或许是由于资金有限,一直将节约成本的原则放在首位,在研发新产品的时候,基础架构团队不得不通过共享数据库集群或服务器,从而导致了服务之间的紧密耦合。然而随着时间的推移,这种做法会产生难以维护的混乱,即使是一个小小的变化也可能造成无法预料的后果。这就导致交付团队难以执行变更控制流程,进而令变更停滞不前。
|
||||
理性的人大多都按照自己的价值观工作,然而团队通常没有让每个人都能达成共识的明确价值观。因此,你需要明确团队目前的价值观,包括价值观的形成过程和价值观的目标导向。但不能将这些价值观强加到团队成员身上,只需要让团队成员在现有条件下力所能及地做到最好就可以了。
|
||||
|
||||
如果这种状况持续多年,最终的结果将会是毫无创新、技术老旧、问题繁多以及产品品质低下,公司的发展到达了瓶颈,原本的价值观已经不再适用。所以,工作效率的优先级必须高于节约成本。
|
||||
同时需要向团队成员阐明,公司正在发生组织和团队目标的变化,团队的价值观也随之改变,最好也厘清整个过程中将会作出什么变化。例如,公司以往或许是由于资金有限,一直将节约成本的原则放在首位,在研发新产品的时候,基础架构团队不得不共享数据库集群或服务器,从而导致了服务之间的紧密耦合。然而随着时间的推移,这种做法会产生难以维护的混乱,即使是一个小小的变化也可能造成无法预料的后果。这就导致交付团队难以执行变更控制流程,进而令变更停滞不前。
|
||||
|
||||
你必须强调团队的价值观。每当团队按照价值观取得了一定的工作进展,都应该对团队作出激励。在团队部署出现失败时,鼓励他们承担风险、继续学习,同时指导团队如何改进他们的工作并表示支持。长此下来,团队成员就会对你产生信任,并逐渐切合团队的价值观。
|
||||
如果这种状况持续几年,最终的结果将会是毫无创新、技术老旧、问题繁多以及产品品质低下,公司的发展到达了瓶颈,原本的价值观已经不再适用。所以,工作效率的优先级必须高于节约成本。如果一个选择能让团队运作更好,另一个选择只是短期来看成本便宜,那你应该选择前者。
|
||||
|
||||
你必须反复强调团队的价值观。每当团队取得了一定的工作进展(即使探索创新时出现一些小的失误),都应该对团队作出激励。在团队部署出现失败时,鼓励他们承担风险、吸取教训,同时指导团队如何改进他们的工作并表示支持。长此下来,团队成员就会对你产生信任,不再顾虑为切合团队的价值观而做出改变。
|
||||
|
||||
### 团队活力
|
||||
|
||||
你有没有在会议上听过类似这样的话?“在张三度假回来之前,我们无法对这件事情做出评估。他是唯一一个了解代码的人”,或者是“我们完成不了这项任务,它在网络上需要跨团队合作,而防火墙管理员刚好请病假了”,又或者是“张三最清楚这个系统最好,他说是怎么样,通常就是怎么样”。那么如果团队在处理工作时,谁才是主力?就是张三。而且也一直会是他。
|
||||
你有没有在会议上听过类似这样的话?“在张三度假回来之前,我们无法对这件事情做出评估。他是唯一一个了解代码的人”,或者是“我们完成不了这项任务,它在网络上需要跨团队合作,而防火墙管理员刚好请病假了”,又或者是“张三最清楚这个系统,他说是怎么样,通常就是怎么样”。那么如果团队在处理工作时,谁才是主力?就是张三。而且也一直会是他。
|
||||
|
||||
我们一直都认为这就是软件开发的本质。但是如果我们不作出改变,这种循环就会一直保持下去。
|
||||
我们一直都认为这就是软件开发的自带属性。但是如果我们不作出改变,这种循环就会一直持续下去。
|
||||
|
||||
熵的存在会让团队自发地变得混乱和缺乏活力,团队的成员和主导者的都有责任控制这个熵并保持团队的活力。DevOps、敏捷开发、上云、代码重构这些行为都会令熵增加速,这是因为转型让团队需要学习更多新技能和专业知识以开展新工作。
|
||||
熵的存在会让团队自发地变得混乱和缺乏活力,团队的成员和主导者的都有责任控制这个熵并保持团队的活力。DevOps、敏捷开发、上云、代码重构这些行为都会令熵加速增长,这是因为转型让团队需要学习更多新技能和专业知识以开展新工作。
|
||||
|
||||
我们来看一个产品团队重构遗留代码的例子。像往常一样,他们在 AWS 上构建新的服务。而传统的系统则在数据中心部署,并由 IT 部门进行监控和备份。IT 部门会确保在基础架构的层面上满足应用的安全需求、进行灾难恢复测试、系统补丁、安装配置了入侵检测和防病毒代理,而且 IT 部门还保留了年度审计流程所需的变更控制记录。
|
||||
我们来看一个产品团队重构历史代码的例子。像往常一样,他们在 AWS 上构建新的服务。而传统的系统则在数据中心部署,并由 IT 部门进行监控和备份。IT 部门会确保在基础架构的层面上满足应用的安全需求、进行灾难恢复测试、系统补丁、安装配置了入侵检测和防病毒代理,而且 IT 部门还保留了年度审计流程所需的变更控制记录。
|
||||
|
||||
产品团队经常会犯一个致命的错误,就是认为 IT 部门是需要突破的瓶颈。他们希望脱离已有的 IT 部门并使用公有云,但实际上是他们忽视了 IT 部门提供的关键服务。迁移到云上只是以不同的方式实现这些关键服务,因为 AWS 也是一个数据中心,团队即使使用 AWS 也需要完成 IT 运维任务。
|
||||
产品团队经常会犯一个致命的错误,就是认为 IT 是消耗资源的部门,是需要突破的瓶颈。他们希望脱离已有的 IT 部门并使用公有云,但实际上是他们忽视了 IT 部门提供的关键服务。迁移到云上只是以不同的方式实现这些关键服务,因为 AWS 也是一个数据中心,团队即使使用 AWS 也需要完成 IT 运维任务。
|
||||
|
||||
实际上,产品团队在迁移到云时候也必须学习如何使用这些 IT 服务。因此,当产品团队开始重构遗留的代码并部署到云上时,也需要学习大量的技能才能正常运作。这些技能不会无师自通,必须自行学习或者聘用相关的人员,团队的主导者也必须积极进行管理。
|
||||
实际上,产品团队在向云迁移的时候也必须学习如何使用这些 IT 服务。因此,当产品团队开始重构历史代码并部署到云上时,也需要学习大量的技能才能正常运作。这些技能不会无师自通,必须自行学习或者聘用相关的人员,团队的主导者也必须积极进行管理。
|
||||
|
||||
在带领团队时,我找不到任何适合我的工具,因此我建立了 [Tekita.io][9] 这个项目。Tekata 免费而且容易使用。但相比起来,把注意力集中在人员和流程上更为重要,你需要不断学习,持续关注团队的弱项,因为它们会影响团队的交付能力,而修补这些弱项往往需要学习大量的新知识,这就需要团队成员之间有一个很好的协作。因此 76% 的年轻人都认为个人发展机会是公司文化[最重要的的一环][10]。
|
||||
在带领团队时,我找不到任何适合我的工具,因此我建立了 [Tekita.io][9] 这个项目。Tekata 免费而且容易使用。但相比起来,把注意力集中在人员和流程上更为重要,你需要不断学习,持续关注团队的短板,因为它们会影响团队的交付能力,而弥补这些短板往往需要学习大量的新知识,这就需要团队成员之间有一个很好的协作。因此 76% 的年轻人都认为个人发展机会是公司文化[最重要的的一环][10]。
|
||||
|
||||
### 效果就是最好的证明
|
||||
|
||||
DevOps 转型会改变团队的工作方式和文化,这需要得到管理层的支持和理解。同时,工作方式的改变意味着新技术的引入,所以在管理上也必须谨慎。但转型的最终结果是团队变得更高效、成员变得更积极、产品变得更优质,客户也变得更快乐。
|
||||
DevOps 转型会改变团队的工作方式和文化,这需要得到管理层的支持和理解。同时,工作方式的改变意味着新技术的引入,所以在管理上也必须谨慎。但转型的最终结果是团队变得更高效、成员变得更积极、产品变得更优质,客户也变得更满意。
|
||||
|
||||
Lee Eason 将于 10 月 21-23 日在北卡罗来纳州 Raleigh 举行的 [All Things Open][12] 上讲述 [DevOps 转型的故事][11]。
|
||||
|
||||
免责声明:本文中的内容仅为 Lee Eason 的个人立场,不代表 Ipreo 或 IHS Markit。
|
||||
|
||||
@ -78,7 +83,7 @@ via: https://opensource.com/article/18/10/tales-devops-transformation
|
||||
作者:[Lee Eason][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[pityonline](https://github.com/pityonline)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -96,4 +101,3 @@ via: https://opensource.com/article/18/10/tales-devops-transformation
|
||||
[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook
|
||||
[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/
|
||||
[12]: https://allthingsopen.org/
|
||||
|
@ -1,41 +1,41 @@
|
||||
Linux 命令行中使用 tcpdump 抓包
|
||||
在 Linux 命令行中使用 tcpdump 抓包
|
||||
======
|
||||
|
||||
Tcpdump 是一款灵活、功能强大的抓包工具,能有效地帮助排查网络故障问题。
|
||||
> `tcpdump` 是一款灵活、功能强大的抓包工具,能有效地帮助排查网络故障问题。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE)
|
||||
|
||||
根据我作为管理员的经验,在网络连接中经常遇到十分难以排查的故障问题。对于这类情况,tcpdump 便能派上用场。
|
||||
以我作为管理员的经验,在网络连接中经常遇到十分难以排查的故障问题。对于这类情况,`tcpdump` 便能派上用场。
|
||||
|
||||
Tcpdump 是一个命令行实用工具,允许你抓取和分析经过系统的流量数据包。它通常被用作于网络故障分析工具以及安全工具。
|
||||
`tcpdump` 是一个命令行实用工具,允许你抓取和分析经过系统的流量数据包。它通常被用作于网络故障分析工具以及安全工具。
|
||||
|
||||
Tcpdump 是一款强大的工具,支持多种选项和过滤规则,适用场景十分广泛。由于它是命令行工具,因此适用于在远程服务器或者没有图形界面的设备中收集数据包以便于事后分析。它可以在后台启动,也可以用 cron 等定时工具创建定时任务启用它。
|
||||
`tcpdump` 是一款强大的工具,支持多种选项和过滤规则,适用场景十分广泛。由于它是命令行工具,因此适用于在远程服务器或者没有图形界面的设备中收集数据包以便于事后分析。它可以在后台启动,也可以用 cron 等定时工具创建定时任务启用它。
|
||||
|
||||
本文中,我们将讨论 tcpdump 最常用的一些功能。
|
||||
本文中,我们将讨论 `tcpdump` 最常用的一些功能。
|
||||
|
||||
### 1\. 在 Linux 中安装 tcpdump
|
||||
### 1、在 Linux 中安装 tcpdump
|
||||
|
||||
Tcpdump 支持多种 Linux 发行版,所以你的系统中很有可能已经安装了它。用下面的命令检查一下是否已经安装了 tcpdump:
|
||||
`tcpdump` 支持多种 Linux 发行版,所以你的系统中很有可能已经安装了它。用下面的命令检查一下是否已经安装了 `tcpdump`:
|
||||
|
||||
```
|
||||
$ which tcpdump
|
||||
/usr/sbin/tcpdump
|
||||
```
|
||||
|
||||
如果还没有安装 tcpdump,你可以用软件包管理器安装它。
|
||||
例如,在 CentOS 或者 Red Hat Enterprise 系统中,用如下命令安装 tcpdump:
|
||||
如果还没有安装 `tcpdump`,你可以用软件包管理器安装它。
|
||||
例如,在 CentOS 或者 Red Hat Enterprise 系统中,用如下命令安装 `tcpdump`:
|
||||
|
||||
```
|
||||
$ sudo yum install -y tcpdump
|
||||
```
|
||||
|
||||
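在 Debian、Ubuntu 及其衍生发行版上,则可以用 `apt` 来安装(示例):

```
$ sudo apt-get install -y tcpdump
```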
Tcpdump 依赖于 `libpcap`,该库文件用于捕获网络数据包。如果该库文件也没有安装,系统会根据依赖关系自动安装它。
|
||||
`tcpdump` 依赖于 `libpcap`,该库文件用于捕获网络数据包。如果该库文件也没有安装,系统会根据依赖关系自动安装它。
|
||||
|
||||
现在你可以开始抓包了。
|
||||
|
||||
### 2\. 用 tcpdump 抓包
|
||||
### 2、用 tcpdump 抓包
|
||||
|
||||
使用 tcpdump 抓包,需要管理员权限,因此下面的示例中绝大多数命令都是以 `sudo` 开头。
|
||||
使用 `tcpdump` 抓包,需要管理员权限,因此下面的示例中绝大多数命令都是以 `sudo` 开头。
|
||||
|
||||
首先,先用 `tcpdump -D` 命令列出可以抓包的网络接口:
|
||||
|
||||
@ -80,7 +80,7 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
|
||||
$
|
||||
```
|
||||
|
||||
Tcpdump 会持续抓包直到收到中断信号。你可以按 `Ctrl+C` 来停止抓包。正如上面示例所示,`tcpdump` 抓取了超过 9000 个数据包。在这个示例中,由于我是通过 `ssh` 连接到服务器,所以 tcpdump 也捕获了所有这类数据包。`-c` 选项可以用于限制 tcpdump 抓包的数量:
|
||||
`tcpdump` 会持续抓包直到收到中断信号。你可以按 `Ctrl+C` 来停止抓包。正如上面示例所示,`tcpdump` 抓取了超过 9000 个数据包。在这个示例中,由于我是通过 `ssh` 连接到服务器,所以 `tcpdump` 也捕获了所有这类数据包。`-c` 选项可以用于限制 `tcpdump` 抓包的数量:
|
||||
|
||||
```
|
||||
$ sudo tcpdump -i any -c 5
|
||||
@ -97,9 +97,9 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
|
||||
$
|
||||
```
|
||||
|
||||
如上所示,`tcpdump` 在抓取 5 个数据包后自动停止了抓包。这在有些场景中十分有用——比如你只需要抓取少量的数据包用于分析。当我们需要使用过滤规则抓取特定的数据包(如下所示)时,`-c` 的作用就十分突出了。
|
||||
如上所示,`tcpdump` 在抓取 5 个数据包后自动停止了抓包。这在有些场景中十分有用 —— 比如你只需要抓取少量的数据包用于分析。当我们需要使用过滤规则抓取特定的数据包(如下所示)时,`-c` 的作用就十分突出了。
|
||||
|
||||
在上面示例中,tcpdump 默认是将 IP 地址和端口号解析为对应的接口名以及服务协议名称。而通常在网络故障排查中,使用 IP 地址和端口号更便于分析问题;用 `-n` 选项显示 IP 地址,`-nn` 选项显示端口号:
|
||||
在上面示例中,`tcpdump` 默认是将 IP 地址和端口号解析为对应的接口名以及服务协议名称。而通常在网络故障排查中,使用 IP 地址和端口号更便于分析问题;用 `-n` 选项显示 IP 地址,`-nn` 选项显示端口号:
|
||||
|
||||
```
|
||||
$ sudo tcpdump -i any -c5 -nn
|
||||
@ -115,13 +115,13 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
|
||||
0 packets dropped by kernel
|
||||
```
|
||||
|
||||
如上所示,抓取的数据包中显示 IP 地址和端口号。这样还可以阻止 tcpdump 发出 DNS 查找,有助于在网络故障排查中减少数据流量。
|
||||
如上所示,抓取的数据包中显示 IP 地址和端口号。这样还可以阻止 `tcpdump` 发出 DNS 查找,有助于在网络故障排查中减少数据流量。
|
||||
|
||||
现在你已经会抓包了,让我们来分析一下这些抓包输出的含义吧。
|
||||
|
||||
### 3\. 理解抓取的报文
|
||||
### 3、理解抓取的报文
|
||||
|
||||
Tcpdump 能够抓取并解码多种协议类型的数据报文,如 TCP,UDP,ICMP 等等。虽然这里我们不可能介绍所有的数据报文类型,但可以分析下 TCP 类型的数据报文,来帮助你入门。更多有关 tcpdump 的详细介绍可以参考其 [帮助手册][1]。Tcpdump 抓取的 TCP 报文看起来如下:
|
||||
`tcpdump` 能够抓取并解码多种协议类型的数据报文,如 TCP、UDP、ICMP 等等。虽然这里我们不可能介绍所有的数据报文类型,但可以分析下 TCP 类型的数据报文,来帮助你入门。更多有关 `tcpdump` 的详细介绍可以参考其 [帮助手册][1]。`tcpdump` 抓取的 TCP 报文看起来如下:
|
||||
|
||||
```
|
||||
08:41:13.729687 IP 192.168.64.28.22 > 192.168.64.1.41916: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 117964079 ecr 816509256], length 372
|
||||
@ -137,7 +137,7 @@ Tcpdump 能够抓取并解码多种协议类型的数据报文,如 TCP,UDP
|
||||
|
||||
在源 IP 和目的 IP 之后,可以看到是 TCP 报文标记段 `Flags [P.]`。该字段通常取值如下:
|
||||
|
||||
| Value | Flag Type | Description |
|
||||
| 值 | 标志类型 | 描述 |
|
||||
| ----- | --------- | ----------------- |
|
||||
| S | SYN | Connection Start |
|
||||
| F | FIN | Connection Finish |
|
||||
@ -149,19 +149,19 @@ Tcpdump 能够抓取并解码多种协议类型的数据报文,如 TCP,UDP
|
||||
|
||||
接下来是该数据包中数据的序列号。对于抓取的第一个数据包,该字段值是一个绝对数字,后续包使用相对数值,以便更容易查询跟踪。例如此处 `seq 196:568` 代表该数据包包含该数据流的第 196 到 568 字节。
|
||||
|
||||
接下来是 ack 值:`ack 1`。该数据包是数据发送方,ack 值为1。在数据接收方,该字段代表数据流上的下一个预期字节数据,例如,该数据流中下一个数据包的 ack 值应该是 568。
|
||||
接下来是 ack 值:`ack 1`。该数据包是数据发送方,ack 值为 1。在数据接收方,该字段代表数据流上的下一个预期字节数据,例如,该数据流中下一个数据包的 ack 值应该是 568。
|
||||
|
||||
接下来字段是接收窗口大小 `win 309`,它表示接收缓冲区中可用的字节数,后跟 TCP 选项如 MSS(最大段大小)或者窗口比例值。更详尽的 TCP 协议内容请参考 [Transmission Control Protocol(TCP) Parameters][2]。
|
||||
|
||||
最后,`length 372`代表数据包有效载荷字节长度。这个长度和 seq 序列号中字节数值长度是不一样的。
|
||||
最后,`length 372` 代表数据包有效载荷字节长度。这个长度和 seq 序列号中字节数值长度是不一样的。
|
||||
|
||||
现在让我们学习如何过滤数据报文以便更容易的分析定位问题。
|
||||
|
||||
### 4\. 过滤数据包
|
||||
### 4、过滤数据包
|
||||
|
||||
正如上面所提,tcpdump 可以抓取很多种类型的数据报文,其中很多可能和我们需要查找的问题并没有关系。举个例子,假设你正在定位一个与 web 服务器连接的网络问题,就不必关系 SSH 数据报文,因此在抓包结果中过滤掉 SSH 报文可能更便于你分析问题。
|
||||
正如上面所提,`tcpdump` 可以抓取很多种类型的数据报文,其中很多可能和我们需要查找的问题并没有关系。举个例子,假设你正在定位一个与 web 服务器连接的网络问题,就不必关系 SSH 数据报文,因此在抓包结果中过滤掉 SSH 报文可能更便于你分析问题。
|
||||
|
||||
Tcpdump 有很多参数选项可以设置数据包过滤规则,例如根据源 IP 以及目的 IP 地址,端口号,协议等等规则来过滤数据包。下面就介绍一些最常用的过滤方法。
|
||||
`tcpdump` 有很多参数选项可以设置数据包过滤规则,例如根据源 IP 以及目的 IP 地址,端口号,协议等等规则来过滤数据包。下面就介绍一些最常用的过滤方法。
|
||||
|
||||
#### 协议
|
||||
|
||||
@ -181,7 +181,7 @@ PING opensource.com (54.204.39.132) 56(84) bytes of data.
|
||||
64 bytes from ec2-54-204-39-132.compute-1.amazonaws.com (54.204.39.132): icmp_seq=1 ttl=47 time=39.6 ms
|
||||
```
|
||||
|
||||
回到运行 tcpdump 命令的终端中,可以看到它筛选出了 ICMP 报文。这里 tcpdump 并没有显示有关 `opensource.com`的域名解析数据包:
|
||||
回到运行 `tcpdump` 命令的终端中,可以看到它筛选出了 ICMP 报文。这里 `tcpdump` 并没有显示有关 `opensource.com` 的域名解析数据包:
|
||||
|
||||
```
|
||||
09:34:20.136766 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 1, length 64
|
||||
@ -215,7 +215,7 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
|
||||
|
||||
#### 端口号
|
||||
|
||||
Tcpdump 可以根据服务类型或者端口号来筛选数据包。例如,抓取和 HTTP 服务相关的数据包:
|
||||
`tcpdump` 可以根据服务类型或者端口号来筛选数据包。例如,抓取和 HTTP 服务相关的数据包:
|
||||
|
||||
```
|
||||
$ sudo tcpdump -i any -c5 -nn port 80
|
||||
@ -303,11 +303,11 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
|
||||
|
||||
该例子中我们只抓取了来自源 IP 为 `192.168.122.98` 或者 `54.204.39.132` 的 HTTP (端口号80)的数据包。使用该方法就很容易抓取到数据流中交互双方的数据包了。
|
||||
|
||||
### 5\. 检查数据包内容
|
||||
### 5、检查数据包内容
|
||||
|
||||
在以上的示例中,我们只按数据包头部的信息来建立规则筛选数据包,例如源地址、目的地址、端口号等等。有时我们需要分析网络连接问题,可能需要分析数据包中的内容来判断什么内容需要被发送、什么内容需要被接收等。Tcpdump 提供了两个选项可以查看数据包内容,`-X` 以十六进制打印出数据报文内容,`-A` 打印数据报文的 ASCII 值。
|
||||
在以上的示例中,我们只按数据包头部的信息来建立规则筛选数据包,例如源地址、目的地址、端口号等等。有时我们需要分析网络连接问题,可能需要分析数据包中的内容来判断什么内容需要被发送、什么内容需要被接收等。`tcpdump` 提供了两个选项可以查看数据包内容,`-X` 以十六进制打印出数据报文内容,`-A` 打印数据报文的 ASCII 值。
|
||||
|
||||
例如,HTTP request 报文内容如下:
|
||||
例如,HTTP 请求报文内容如下:
|
||||
|
||||
```
|
||||
$ sudo tcpdump -i any -c10 -nn -A port 80
|
||||
@ -379,9 +379,9 @@ E..4..@.@.....zb6.'....P....o..............
|
||||
|
||||
这对定位一些普通 HTTP 调用 API 接口的问题很有用。当然如果是加密报文,这个输出也就没多大用了。
|
||||
|
||||
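顺带一提,如果除了 ASCII 内容还想同时看到十六进制形式,可以把 `-A` 换成 `-X`(示例,其余选项与前文相同):

```
$ sudo tcpdump -i any -c10 -nn -X port 80
```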
### 6\. 保存抓包数据
|
||||
### 6、保存抓包数据
|
||||
|
||||
Tcpdump 提供了保存抓包数据的功能以便后续分析数据包。例如,你可以夜里让它在那里抓包,然后早上起来再去分析它。同样当有很多数据包时,显示过快也不利于分析,将数据包保存下来,更有利于分析问题。
|
||||
`tcpdump` 提供了保存抓包数据的功能以便后续分析数据包。例如,你可以夜里让它在那里抓包,然后早上起来再去分析它。同样当有很多数据包时,显示过快也不利于分析,将数据包保存下来,更有利于分析问题。
|
||||
|
||||
使用 `-w` 选项来保存数据包而不是在屏幕上显示出抓取的数据包:
|
||||
|
||||
@ -398,7 +398,7 @@ tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 2621
|
||||
|
||||
正如示例中所示,保存数据包到文件中时屏幕上就没有任何有关数据报文的输出,其中 `-c10` 表示抓取到 10 个数据包后就停止抓包。如果想有一些反馈来提示确实抓取到了数据包,可以使用 `-v` 选项。
|
||||
|
||||
Tcpdump 将数据包保存在二进制文件中,所以不能简单的用文本编辑器去打开它。使用 `-r` 选项参数来阅读该文件中的报文内容:
|
||||
`tcpdump` 将数据包保存在二进制文件中,所以不能简单的用文本编辑器去打开它。使用 `-r` 选项参数来阅读该文件中的报文内容:
|
||||
|
||||
```
|
||||
$ tcpdump -nn -r webserver.pcap
|
||||
@ -418,7 +418,7 @@ $
|
||||
|
||||
这里不需要管理员权限 `sudo` 了,因为此刻并不是在网络接口处抓包。
|
||||
|
||||
你还可以使用我们讨论过的任何过滤规则来过滤文件中的内容,就像使用实时数据一样。 例如,通过执行以下命令从源 IP 地址`54.204.39.132` 检查文件中的数据包:
|
||||
你还可以使用我们讨论过的任何过滤规则来过滤文件中的内容,就像使用实时数据一样。 例如,通过执行以下命令从源 IP 地址 `54.204.39.132` 检查文件中的数据包:
|
||||
|
||||
```
|
||||
$ tcpdump -nn -r webserver.pcap src 54.204.39.132
|
||||
@ -431,11 +431,11 @@ reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked)
|
||||
|
||||
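另外,长时间抓包时可以让 `tcpdump` 按文件大小自动轮转保存,避免单个文件无限增长(以下仅为示意:`-C 10` 表示每写满约 10 MB 就换一个新文件,`-W 5` 表示最多保留 5 个文件,数值可按需调整):

```
$ sudo tcpdump -i any -nn -C 10 -W 5 -w webserver.pcap port 80
```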
### 下一步做什么?
|
||||
|
||||
以上的基本功能已经可以帮助你使用强大的 tcpdump 抓包工具了。更多的内容请参考 [tcpdump网页][3] 以及它的 [帮助文件][4]。
|
||||
以上的基本功能已经可以帮助你使用强大的 `tcpdump` 抓包工具了。更多的内容请参考 [tcpdump 网站][3] 以及它的 [帮助文件][4]。
|
||||
|
||||
Tcpdump 命令行工具为分析网络流量数据包提供了强大的灵活性。如果需要使用图形工具来抓包请参考 [Wireshark][5]。
|
||||
`tcpdump` 命令行工具为分析网络流量数据包提供了强大的灵活性。如果需要使用图形工具来抓包请参考 [Wireshark][5]。
|
||||
|
||||
Wireshark 还可以用来读取 tcpdump 保存的 `pcap` 文件。你可以使用 tcpdump 命令行在没有 GUI 界面的远程机器上抓包然后在 Wireshark 中分析数据包。
|
||||
Wireshark 还可以用来读取 `tcpdump` 保存的 pcap 文件。你可以使用 `tcpdump` 命令行在没有 GUI 界面的远程机器上抓包然后在 Wireshark 中分析数据包。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -444,7 +444,7 @@ via: https://opensource.com/article/18/10/introduction-tcpdump
|
||||
作者:[Ricardo Gerardi][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[jrg](https://github.com/jrglinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,71 +1,74 @@
|
||||
Kali Linux:在开始使用之前你必须知道的 – FOSS Post
|
||||
在你开始使用 Kali Linux 之前必须知道的事情
|
||||
======
|
||||
|
||||
![](https://i1.wp.com/fosspost.org/wp-content/uploads/2018/10/kali-linux.png?fit=1237%2C527&ssl=1)
|
||||
|
||||
Kali Linux 在渗透测试和白帽子方面,是业界领先的 Linux 发行版。默认情况下,该发行版附带了大量黑客和渗透工具和软件,并且在全世界都得到了广泛认可。即使在那些甚至可能不知道 Linux 是什么的 Windows 用户中也是如此。
|
||||
Kali Linux 在渗透测试和白帽子方面是业界领先的 Linux 发行版。默认情况下,该发行版附带了大量入侵和渗透的工具和软件,并且在全世界都得到了广泛认可。即使在那些甚至可能不知道 Linux 是什么的 Windows 用户中也是如此。
|
||||
|
||||
由于后者的原因,许多人都试图单独使用 Kali Linux,尽管他们甚至不了解 Linux 系统的基础知识。原因可能各不相同,有的为了玩乐,有的是为了取悦女友而伪装成黑客,有的仅仅是试图破解邻居的 WiFi 网络以免费上网。如果你打算使用 Kali Linux,所有的这些都是不好的事情。
|
||||
由于后者的原因(LCTT 译注:Windows 用户),许多人都试图单独使用 Kali Linux,尽管他们甚至不了解 Linux 系统的基础知识。原因可能各不相同,有的为了玩乐,有的是为了取悦女友而伪装成黑客,有的仅仅是试图破解邻居的 WiFi 网络以免费上网。如果你打算使用 Kali Linux,记住,所有的这些都是不好的事情。
|
||||
|
||||
在计划使用 Kali Linux 之前,你应该了解一些提示。
|
||||
|
||||
### Kali Linux 不适合初学者
|
||||
|
||||
![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-000.png?resize=850%2C478&ssl=1)
|
||||
Kali Linux 默认 GNOME 桌面
|
||||
|
||||
如果你是几个月前刚开始使用 Linux 的人,或者你认为自己的知识水平低于平均水平,那么 Kali Linux 就不适合你。如果你打算问“如何在 Kali 上安装 Stream?如何让我的打印机在 Kali 上工作?如何解决 Kali 上的 APT 源错误?”这些东西,那么 Kali Linux 并不适合你。
|
||||
*Kali Linux 默认 GNOME 桌面*
|
||||
|
||||
Kali Linux 主要面向想要运行渗透测试的专家或想要学习成为白帽子和数字取证的人。但即使你来自后者,普通的 Kali Linux 用户在日常使用时也会遇到很多麻烦。他还被要求以非常谨慎的方式使用工具和软件,而不仅仅是“让我们安装并运行一切”。每一个工具必须小心使用,你安装的每一个软件都必须仔细检查。
|
||||
如果你是几个月前刚开始使用 Linux 的人,或者你认为自己的知识水平低于平均水平,那么 Kali Linux 就不适合你。如果你打算问“如何在 Kali 上安装 Steam?如何让我的打印机在 Kali 上工作?如何解决 Kali 上的 APT 源错误?”这些东西,那么 Kali Linux 并不适合你。
|
||||
|
||||
**建议阅读:** [Linux 系统的组件是什么?][1]
|
||||
Kali Linux 主要面向想要运行渗透测试套件的专家或想要学习成为白帽子和数字取证的人。但即使你属于后者,普通的 Kali Linux 用户在日常使用时也会遇到很多麻烦。他还被要求以非常谨慎的方式使用工具和软件,而不仅仅是“让我们安装并运行一切”。每一个工具必须小心使用,你安装的每一个软件都必须仔细检查。
|
||||
|
||||
普通 Linux 用户无法做正常的事情。(to 校正:这里什么意思呢?)一个更好的方法是花几周时间学习 Linux 及其守护进程,服务,软件,发行版及其工作方式,然后观看几十个关于白帽子攻击的视频和课程,然后再尝试使用 Kali 来应用你学习到的东西。
|
||||
**建议阅读:** [Linux 系统的组件有什么?][1]
|
||||
|
||||
普通 Linux 用户都无法自如地使用它。一个更好的方法是花几周时间学习 Linux 及其守护进程、服务、软件、发行版及其工作方式,然后观看几十个关于白帽子攻击的视频和课程,然后再尝试使用 Kali 来应用你学习到的东西。
|
||||
|
||||
### 它会让你被黑客攻击
|
||||
|
||||
![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-001.png?resize=850%2C478&ssl=1)
|
||||
Kali Linux 入侵和测试工具
|
||||
|
||||
*Kali Linux 入侵和测试工具*
|
||||
|
||||
在普通的 Linux 系统中,普通用户有一个账户,而 root 用户也有一个单独的账号。但在 Kali Linux 中并非如此。Kali Linux 默认使用 root 账户,不提供普通用户账户。这是因为 Kali 中几乎所有可用的安全工具都需要 root 权限,并且为了避免每分钟要求你输入 root 密码,所以这样设计。
|
||||
|
||||
当然,你可以简单地创建一个普通用户账户并开始使用它。但是,这种方式仍然不推荐,因为这不是 Kali Linux 系统设计的工作方式。然后,在使用程序,打开端口,调试软件时,你会遇到很多问题,你会发现为什么这个东西不起作用,最终却发现它是一个奇怪的权限错误。另外每次在系统上做任何事情时,你会被每次运行工具都要求输入密码而烦恼。
|
||||
当然,你可以简单地创建一个普通用户账户并开始使用它。但是,这种方式仍然不推荐,因为这不是 Kali Linux 系统设计的工作方式。使用普通用户在使用程序,打开端口,调试软件时,你会遇到很多问题,你会发现为什么这个东西不起作用,最终却发现它是一个奇怪的权限错误。另外每次在系统上做任何事情时,你会被每次运行工具都要求输入密码而烦恼。
|
||||
|
||||
现在,由于你被迫以 root 用户身份使用它,因此你在系统上运行的所有软件也将以 root 权限运行。如果你不知道自己在做什么,那么这很糟糕,因为如果 Firefox 中存在漏洞,并且你访问了一个受感染的网站,那么黑客能够在你的 PC 上获得全部 root 权限并入侵你。如果你使用的是普通用户账户,则会收到限制。此外,你安装和使用的某些工具可能会在你不知情的情况下打开端口并泄露信息,因此如果你不是非常小心,人们可能会以你尝试入侵他们的方式入侵你。
|
||||
现在,由于你被迫以 root 用户身份使用它,因此你在系统上运行的所有软件也将以 root 权限运行。如果你不知道自己在做什么,那么这很糟糕,因为如果 Firefox 中存在漏洞,并且你访问了一个受感染的网站,那么黑客能够在你的 PC 上获得全部 root 权限并入侵你。如果你使用的是普通用户账户,则会受到限制。此外,你安装和使用的某些工具可能会在你不知情的情况下打开端口并泄露信息,因此如果你不是非常小心,人们可能会以你尝试入侵他们的方式入侵你。
|
||||
|
||||
如果你在一些情况下访问于与 Kali Linux 相关的 Facebook 群组,你会发现这些群组中几乎有四分之一的帖子是人们在寻求帮助,因为有人入侵了他们。
|
||||
如果你曾经访问过与 Kali Linux 相关的 Facebook 群组,你会发现这些群组中几乎有四分之一的帖子是人们在寻求帮助,因为有人入侵了他们。
|
||||
|
||||
### 它可以让你入狱
|
||||
|
||||
Kali Linux 仅提供软件。那么,如何使用它们完全是你自己的责任。
|
||||
Kali Linux 只是提供了软件。那么,如何使用它们完全是你自己的责任。
|
||||
|
||||
在世界上大多数发达国家,使用针对公共 WiFi 网络或其他设备的渗透测试工具很容易让你入狱。现在不要以为你使用了 Kali 就无法被跟踪,许多系统都配置了复杂的日志记录设备来简单地跟踪试图监听或入侵其网络的人,你可能无意间成为其中的一个,那么它会毁掉你的生活。
|
||||
|
||||
永远不要对不属于你的设备或网络使用 Kali Linux 系统,也不要明确允许对它们进行入侵。如果你说你不知道你在做什么,在法庭上它不会被当作借口来接受。
|
||||
|
||||
### 修改了内核和软件
|
||||
### 修改了的内核和软件
|
||||
|
||||
Kali [基于][2] Debian(测试分支,这意味着 Kali Linux 使用滚动发布模型),因此它使用了 Debian 的大部分软件体系结构,你会发现 Kali Linux 中的大部分软件跟 Debian 中的没什么区别。
|
||||
Kali [基于][2] Debian(“测试”分支,这意味着 Kali Linux 使用滚动发布模型),因此它使用了 Debian 的大部分软件体系结构,你会发现 Kali Linux 中的大部分软件跟 Debian 中的没什么区别。
|
||||
|
||||
但是,Kali 修改了一些包来加强安全性并修复了一些可能的漏洞。例如,Kali 使用的 Linux 内核被打了补丁,允许在各种设备上进行无线注入。这些补丁通常在普通内核中不可用。此外,Kali Linux 不依赖于 Debian 服务器和镜像,而是通过自己的服务器构建软件包。以下是最新版本中的默认软件源:
|
||||
|
||||
```
|
||||
deb http://http.kali.org/kali kali-rolling main contrib non-free
|
||||
deb-src http://http.kali.org/kali kali-rolling main contrib non-free
|
||||
deb http://http.kali.org/kali kali-rolling main contrib non-free
|
||||
deb-src http://http.kali.org/kali kali-rolling main contrib non-free
|
||||
```
|
||||
|
||||
这就是为什么,对于某些特定的软件,当你在 Kali Linux 和 Fedora 中使用相同的程序时,你会发现不同的行为。你可以从 [git.kali.org][3] 中查看 Kali Linux 软件的完整列表。你还可以在 Kali Linux(GNOME)上找到我们[自己生成的已安装包列表][4]。
|
||||
|
||||
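想确认某个软件包实际来自哪个仓库、安装的候选版本是什么,可以用 `apt-cache policy` 查询(示例,包名 `tcpdump` 仅作演示):

```
$ apt-cache policy tcpdump
```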
更重要的是,Kali Linux 官方文档极力建议不要添加任何其他第三方软件仓库,因为 Kali Linux 是一个滚动发行版,并且依赖于 Debian 测试,由于依赖关系冲突和包钩子,所以你很可能只是添加一个新的仓库源就会破坏系统。
|
||||
更重要的是,Kali Linux 官方文档极力建议不要添加任何其他第三方软件仓库,因为 Kali Linux 是一个滚动发行版,并且依赖于 Debian 测试分支,由于依赖关系冲突和包钩子,所以你很可能只是添加一个新的仓库源就会破坏系统。
|
||||
|
||||
### 不要安装 Kali Linux
|
||||
|
||||
![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-002.png?resize=750%2C504&ssl=1)
|
||||
|
||||
使用 Kali Linux 在 fosspost.org 上运行 wpscan
|
||||
*使用 Kali Linux 在 fosspost.org 上运行 wpscan*
|
||||
|
||||
我在极少数情况下使用 Kali Linux 来测试我部署的软件和服务器。但是,我永远不敢安装它并将其用作主系统。
|
||||
|
||||
如果你要将其用作主系统,那么你必须保留自己的个人文件,密码,数据以及系统上的所有内容。你还需要安装大量日常使用的软件,以解放你的生活。但正如我们上面提到的,使用 Kali Linux 是非常危险的,应该非常小心地进行,如果你被入侵了,你将丢失所有数据,并且可能会暴露给更多的人。如果你在做一些不合法的事情,你的个人信息也可用于跟踪你。如果你不小心使用这些工具,那么你甚至可能会毁掉自己的数据。
|
||||
如果你要将其用作主系统,那么你必须保留自己的个人文件、密码、数据以及系统上的所有内容。你还需要安装大量日常使用的软件,以解放你的生活。但正如我们上面提到的,使用 Kali Linux 是非常危险的,应该非常小心地进行,如果你被入侵了,你将丢失所有数据,并且可能会暴露给更多的人。如果你在做一些不合法的事情,你的个人信息也可用于跟踪你。如果你不小心使用这些工具,那么你甚至可能会毁掉自己的数据。
|
||||
|
||||
即使是专业的白帽子也不建议将其作为主系统安装,而是通过 USB 使用它来进行渗透测试工作,然后再回到普通的 Linux 发行版。
|
||||
|
||||
@ -83,7 +86,7 @@ via: https://fosspost.org/articles/must-know-before-using-kali-linux
|
||||
作者:[M.Hanny Sabbagh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,9 +1,11 @@
|
||||
使用极简浏览器 Min 浏览网页
|
||||
======
|
||||
|
||||
> 并非所有 web 浏览器都要做到无所不能,Min 就是一个极简主义风格的浏览器。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG)
|
||||
|
||||
现在还有开发新的网络浏览器的需要吗?即使现在浏览器领域已经成为了寡头市场,但仍然不断涌现出各种前所未有的浏览器产品。
|
||||
现在还有开发新的 Web 浏览器的需要吗?即使现在浏览器领域已经成为了寡头市场,但仍然不断涌现出各种前所未有的浏览器产品。
|
||||
|
||||
[Min][1] 就是其中一个。顾名思义,Min 是一个小的浏览器,也是一个极简主义的浏览器。但它麻雀虽小五脏俱全,而且还是一个开源的浏览器,它的 Apache 2.0 许可证引起了我的注意。
|
||||
|
||||
@ -29,7 +31,7 @@ Min 号称是更智能、更快速的浏览器。经过尝试以后,我觉得
|
||||
|
||||
Min 和其它浏览器一样,支持页面选项卡。它还有一个称为 Tasks 的功能,可以对打开的选项卡进行分组。
|
||||
|
||||
[DuckDuckGo][6]是我最喜欢的搜索引擎,而 Min 的默认搜索引擎恰好就是它,这正合我意。当然,如果你喜欢另一个搜索引擎,也可以在 Min 的偏好设置中配置你喜欢的搜索引擎作为默认搜索引擎。
|
||||
[DuckDuckGo][6] 是我最喜欢的搜索引擎,而 Min 的默认搜索引擎恰好就是它,这正合我意。当然,如果你喜欢另一个搜索引擎,也可以在 Min 的偏好设置中配置你喜欢的搜索引擎作为默认搜索引擎。
|
||||
|
||||
Min 没有使用类似 AdBlock 这样的插件来过滤你不想看到的内容,而是使用了一个名为 [EasyList][7] 的内置的广告拦截器,你可以使用它来屏蔽脚本和图片。另外 Min 还带有一个内置的防跟踪软件。
|
||||
|
||||
@ -54,7 +56,7 @@ Min 确实也有自己的缺点,例如它无法将网站添加为书签。替
|
||||
### 总结
|
||||
|
||||
Min 算是一个中规中矩的浏览器,它可以凭借轻量、快速的优点吸引很多极简主义的用户。但是对于追求多功能的用户来说,Min 就显得相当捉襟见肘了。
|
||||
|
||||
|
||||
所以,如果你想摆脱当今多功能浏览器的束缚,我觉得可以试用一下 Min。
|
||||
|
||||
|
||||
@ -65,7 +67,7 @@ via: https://opensource.com/article/18/10/min-web-browser
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,33 +1,42 @@
|
||||
如何分析并探索 Docker 容器镜像的内容
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dive-tool-720x340.png)
|
||||
|
||||
或许你已经了解到 Docker 容器镜像是一个轻量、独立、含有运行某个应用所需全部软件的可执行包,这也是为什么容器镜像会经常被开发者用于构建和分发应用。假如你很好奇一个 Docker 镜像里面包含了什么东西,那么这篇简要的指南或许会帮助到你。今天,我们将学会使用一个名为 **Dive** 的工具来分析和探索 Docker 镜像每层的内容。通过分析 Docker 镜像,我们可以发现在各个层之间可能重复的文件并通过移除它们来减小 Docker 镜像的大小。Dive 工具不仅仅是一个 Docker 镜像分析工具,它还可以帮助我们来构建镜像。Dive 是一个用 Go 编程语言编写的免费开源工具。
|
||||
或许你已经了解到 Docker 容器镜像是一个轻量、独立、含有运行某个应用所需全部软件的可执行包,这也是为什么容器镜像会经常被开发者用于构建和分发应用。假如你很好奇一个 Docker 镜像里面包含了什么东西,那么这篇简要的指南或许会帮助到你。今天,我们将学会使用一个名为 **Dive** 的工具来分析和探索 Docker 镜像每层的内容。
|
||||
|
||||
通过分析 Docker 镜像,我们可以发现在各个层之间可能重复的文件并通过移除它们来减小 Docker 镜像的大小。Dive 工具不仅仅是一个 Docker 镜像分析工具,它还可以帮助我们来构建镜像。Dive 是一个用 Go 编程语言编写的自由开源工具。
|
||||
|
||||
### 安装 Dive
|
||||
|
||||
首先从该项目的 [**发布页**][1] 下载最新版本,然后像下面展示的那样根据你所使用的发行版来安装它。
|
||||
首先从该项目的 [发布页][1] 下载最新版本,然后像下面展示的那样根据你所使用的发行版来安装它。
|
||||
|
||||
假如你正在使用 **Debian** 或者 **Ubuntu**,那么可以运行下面的命令来下载并安装它。
|
||||
|
||||
```
|
||||
$ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.deb
|
||||
```
|
||||
|
||||
```
|
||||
$ sudo apt install ./dive_0.0.8_linux_amd64.deb
|
||||
```
|
||||
|
||||
**在 RHEL 或 CentOS 系统中**
|
||||
|
||||
```
|
||||
$ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.rpm
|
||||
```
|
||||
|
||||
```
|
||||
$ sudo rpm -i dive_0.0.8_linux_amd64.rpm
|
||||
```
|
||||
|
||||
Dive 也可以使用 [**Linuxbrew**][2] 包管理器来安装。
|
||||
Dive 也可以使用 [Linuxbrew][2] 包管理器来安装。
|
||||
|
||||
```
|
||||
$ brew tap wagoodman/dive
|
||||
```
|
||||
|
||||
```
|
||||
$ brew install dive
|
||||
```
|
||||
@ -36,34 +45,37 @@ $ brew install dive
|
||||
|
||||
### 分析并探索 Docker 镜像的内容
|
||||
|
||||
要分析一个 Docker 镜像,只需要运行加上 Docker 镜像 ID的 dive 命令就可以了。你可以使用 `sudo docker images` 来得到 Docker 镜像的 ID。
|
||||
要分析一个 Docker 镜像,只需要运行加上 Docker 镜像 ID 的 `dive` 命令就可以了。你可以使用 `sudo docker images` 来得到 Docker 镜像的 ID。
|
||||
|
||||
```
|
||||
$ sudo dive ea4c82dcd15a
|
||||
```
|
||||
|
||||
上面命令中的 **ea4c82dcd15a** 是某个镜像的 id。
|
||||
上面命令中的 `ea4c82dcd15a` 是某个镜像的 ID。
|
||||
|
||||
然后 Dive 命令将快速地分析给定 Docker 镜像的内容并将它在终端中展示出来。
|
||||
然后 `dive` 命令将快速地分析给定 Docker 镜像的内容并将它在终端中展示出来。
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dive-1.png)
|
||||
|
||||
正如你在上面的截图中看到的那样,在终端的左边一栏列出了给定 Docker 镜像的各个层及其详细内容,浪费的空间大小等信息。右边一栏则给出了给定 Docker 镜像每一层的内容。你可以使用 **Ctrl+SPACEBAR** 来在左右栏之间切换,使用 **UP/DOWN** 上下键来在目录树中进行浏览。
|
||||
正如你在上面的截图中看到的那样,在终端的左边一栏列出了给定 Docker 镜像的各个层及其详细内容,浪费的空间大小等信息。右边一栏则给出了给定 Docker 镜像每一层的内容。你可以使用 `Ctrl+空格` 来在左右栏之间切换,使用 `UP`/`DOWN` 光标键来在目录树中进行浏览。
|
||||
|
||||
下面是 `Dive` 的快捷键列表:
|
||||
* **Ctrl+Spacebar** – 在左右栏之间切换
|
||||
* **Spacebar** – 展开或收起目录树
|
||||
* **Ctrl+A** – 文件树视图:展示或隐藏增加的文件
|
||||
* **Ctrl+R** – 文件树视图:展示或隐藏被移除的文件
|
||||
* **Ctrl+M** – 文件树视图:展示或隐藏被修改的文件
|
||||
* **Ctrl+U** – 文件树视图:展示或隐藏未修改的文件
|
||||
* **Ctrl+L** – 层视图:展示当前层的变化
|
||||
* **Ctrl+A** – 层视图:展示总的变化
|
||||
* **Ctrl+/** – 筛选文件
|
||||
* **Ctrl+C** – 退出
|
||||
下面是 `dive` 的快捷键列表:
|
||||
|
||||
在上面的例子中,我使用了 `sudo` 权限,这是因为我的 Docker 镜像存储在 **/var/lib/docker/** 目录中。假如你的镜像保存在你的家目录 `$HOME`或者在其他不属于 `root` 用户的目录,你就没有必要使用 `sudo` 命令。
|
||||
* `Ctrl+空格` —— 在左右栏之间切换
|
||||
* `空格` —— 展开或收起目录树
|
||||
* `Ctrl+A` —— 文件树视图:展示或隐藏增加的文件
|
||||
* `Ctrl+R` —— 文件树视图:展示或隐藏被移除的文件
|
||||
* `Ctrl+M` —— 文件树视图:展示或隐藏被修改的文件
|
||||
* `Ctrl+U` —— 文件树视图:展示或隐藏未修改的文件
|
||||
* `Ctrl+L` —— 层视图:展示当前层的变化
|
||||
* `Ctrl+A` —— 层视图:展示总的变化
|
||||
* `Ctrl+/` —— 筛选文件
|
||||
* `Ctrl+C` —— 退出
|
||||
|
||||
在上面的例子中,我使用了 `sudo` 权限,这是因为我的 Docker 镜像存储在 `/var/lib/docker/` 目录中。假如你的镜像保存在你的家目录 (`$HOME`)或者在其他不属于 `root` 用户的目录,你就没有必要使用 `sudo` 命令。
|
||||
|
||||
你还可以使用下面的单个命令来构建一个 Docker 镜像并立刻分析该镜像:
|
||||
|
||||
```
|
||||
$ dive build -t <some-tag>
|
||||
```
|
||||
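例如,在包含 Dockerfile 的目录下,可以像下面这样一边构建镜像一边分析(示意用法,镜像标签 `myapp:latest` 请自行替换):

```
$ dive build -t myapp:latest .
```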
@ -83,7 +95,7 @@ via: https://www.ostechnix.com/how-to-analyze-and-explore-the-contents-of-docker
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -91,4 +103,4 @@ via: https://www.ostechnix.com/how-to-analyze-and-explore-the-contents-of-docker
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/wagoodman/dive/releases
|
||||
[2]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
|
||||
[3]: https://github.com/wagoodman/dive
|
||||
[3]: https://github.com/wagoodman/dive
|
@ -1,61 +0,0 @@
|
||||
Translating by MjSeven
|
||||
|
||||
|
||||
What developers need to know about security
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/locks_keys_bridge_paris.png?itok=Bp0dsEc9)
|
||||
|
||||
DevOps doesn't mean that everyone needs to be an expert in both development and operations. This is especially true in larger organizations in which roles tend to be more specialized. Rather, DevOps thinking has evolved in a way that makes it more about the separation of concerns. To the degree that operations teams can deploy platforms for developers (whether on-premises or in a public cloud) and get out of the way, that's good news for both teams. Developers get a productive development environment and self-service. Operations can focus on keeping the underlying plumbing running and maintaining the platform.
|
||||
|
||||
It's a contract of sorts. Devs expect a stable and functional platform from ops. Ops expects that devs will be able to handle most of the tasks associated with developing apps on their own.
|
||||
|
||||
|
||||
|
||||
That said, DevOps is also about better communication, collaboration, and transparency. It works better if it's not about merely a new type of wall between dev and ops. Ops needs to be sensitive to the type of tools developers want and need and the visibility they require, through monitoring and logging, to write better applications. Conversely, developers need some awareness of how the underlying infrastructure can be used most effectively and what can keep operations up at night (literally).
|
||||
|
||||
|
||||
|
||||
The same principle applies more broadly to DevSecOps, a term that serves to explicitly remind us that security needs to be embedded throughout the entire DevOps pipeline from sourcing content to writing apps, building them, testing them, and running them in production. Developers (and operations) don't suddenly need to become security specialists in addition to the other hats they already wear. But they can often benefit from a greater awareness of security best practices (which may be different from what they've become accustomed to) and shifting away from a mindset that views security as some unfortunate obstacle.
|
||||
|
||||
Here are a few observations.
|
||||
|
||||
The Open Web Application Security Project ([OWASP][1]) [Top 10 list][2] provides a window into the top vulnerabilities in web applications. Many entries on the list will be familiar to web programmers. Cross-site scripting (XSS) and injection flaws are among the most common. What's striking though is that many of the flaws on the original 2007 list are still on 2017's list ([PDF][3]). Whether it's training or tooling that's most at fault, many of the same coding flaws keep popping up.
|
||||
|
||||
The situation is exacerbated by new platform technologies. For example, while containers don't necessarily require applications to be written differently, they dovetail with new patterns (such as [microservices][4] ) and can amplify the effects of certain security practices. For example, as my colleague [Dan Walsh][5] [@rhatdan][6] ) writes, "The biggest misconception in computing [is] you need root to run applications. The problem is not so much that devs think they need root. It is that they build this assumption into the services that they build, i.e., the services cannot run without root, making us all less secure."
|
||||
|
||||
Was defaulting to root access ever a good practice? Not really. But it was arguably (maybe) a defensible one with applications and systems that were otherwise sufficiently isolated by other means. But with everything connected, no real perimeter, multi-tenant workloads, users with many different levels of access rights—to say nothing of an ever more dangerous threat environment—there's far less leeway for shortcuts.
|
||||
|
||||
[Automation][7] should be an integral part of DevOps anyway. That automation needs to include security and compliance testing throughout the process. Where did the code come from? Are third-party technologies, products, or container images involved? Are there known security errata? Are there known common code flaws? Are secrets and personally identifiable information kept isolated? How do we authenticate? Who is authorized to deploy services and applications?
|
||||
|
||||
You're not writing your own crypto, are you?
|
||||
|
||||
Automate penetration testing where possible. Did I mention automation? It's an essential part of making security continuous rather than a checklist item that's done once in a while.
|
||||
|
||||
Does this sound hard? It probably is a bit. At least it may be different. But as a participant in a [DevOpsDays OpenSpaces][8] London said to me: "It's just technical testing. It's not magical or mysterious." He went on to say that it's not even that hard to get involved with security as a way to gain a broader understanding of the whole software lifecycle (which is not a bad skill to have). He also suggested taking part in incident response exercises or [capture the flag exercises][9]. You might even find they're fun.
|
||||
|
||||
This article is based on [a talk][10] the author will be giving at [Red Hat Summit 2018][11], which will be held May 8-10 in San Francisco. _[Register by May 7][11] to save US$ 500 off of registration. Use discount code **OPEN18** on the payment page to apply the discount._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/what-developers-need-know-about-security
|
||||
|
||||
作者:[Gordon Haff][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/ghaff
|
||||
[1]:https://www.owasp.org/index.php/Main_Page
|
||||
[2]:https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
|
||||
[3]:https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf
|
||||
[4]:https://opensource.com/tags/microservices
|
||||
[5]:https://opensource.com/users/rhatdan
|
||||
[6]:https://twitter.com/rhatdan
|
||||
[7]:https://opensource.com/tags/automation
|
||||
[8]:https://www.devopsdays.org/open-space-format/
|
||||
[9]:https://dev.to/_theycallmetoni/capture-the-flag-its-a-game-for-hacki-mean-security-professionals
|
||||
[10]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154677
|
||||
[11]:https://www.redhat.com/en/summit/2018
|
@ -1,3 +1,4 @@
|
||||
hkurj translating
|
||||
Why schools of the future are open
|
||||
======
|
||||
|
||||
|
@ -1,81 +0,0 @@
|
||||
A 3-step process for making more transparent decisions
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_Transparency_A.png?itok=2r47nFJB)
|
||||
|
||||
One of the most powerful ways to make your work as a leader more transparent is to take an existing process, open it up for feedback from your team, and then change the process to account for this feedback. The following exercise makes transparency more tangible, and it helps develop the "muscle memory" needed for continually evaluating and adjusting your work with transparency in mind.
|
||||
|
||||
I would argue that you can undertake this activity this with any process--even processes that might seem "off limits," like the promotion or salary adjustment processes. But if that's too big for a first bite, then you might consider beginning with a less sensitive process, such as the travel approval process or your system for searching for candidates to fill open positions on your team. (I've done this with our hiring process and promotion processes, for example.)
|
||||
|
||||
Opening up processes and making them more transparent builds your credibility and enhances trust with team members. It forces you to "walk the transparency walk" in ways that might challenge your assumptions or comfort level. Working this way does create additional work, particularly at the beginning of the process--but, ultimately, this works well for holding managers (like me) accountable to team members, and it creates more consistency.
|
||||
|
||||
### Phase 1: Pick a process
|
||||
|
||||
**Step 1.** Think of a common or routine process your team uses, but one that is not generally open for scrutiny. Some examples might include:
|
||||
|
||||
* Hiring: How are job descriptions created, interview teams selected, candidates screened and final hiring decisions made?
|
||||
* Planning: How are your team or organizational goals determined for the year or quarter?
|
||||
* Promotions: How do you select candidates for promotion, consider them, and decide who gets promoted?
|
||||
* Manager performance appraisals: Who receives the opportunity to provide feedback on manager performance, and how are they able to do it?
|
||||
* Travel: How is the travel budget apportioned, and how do you make decisions about whether to approval travel (or whether to nominate someone for travel)?
|
||||
|
||||
|
||||
|
||||
One of the above examples may resonate with you, or you may identify something else that you feel is more appropriate. Perhaps you've received questions about a particular process, or you find yourself explaining the rationale for a particular kind of decision frequently. Choose something that you are able to control or influence--and something you believe your constituents care about.
|
||||
|
||||
**Step 2.** Now answer the following questions about the process:
|
||||
|
||||
* Is the process currently documented in a place that all constituents know about and can access? If not, go ahead and create that documentation now (it doesn't have to be too detailed; just explain the different steps of the process and how it works). You may find that the process isn't clear or consistent enough to document. In that case, document it the way you think it should work in the ideal case.
|
||||
* Does the completed process documentation explain how decisions are made at various points? For example, in a travel approval process, does it explain how a decision to approve or deny a request is made?
|
||||
* What are the inputs of the process? For example, when determining departmental goals for the year, what data is used for key performance indicators? Whose feedback is sought and incorporated? Who has the opportunity to review or "sign off"?
|
||||
* What assumptions does this process make? For example, in promotion decisions, do you assume that all candidates for promotion will be put forward by their managers at the appropriate time?
|
||||
* What are the outputs of the process? For example, in assessing the performance of the managers, is the result shared with the manager being evaluated? Are any aspects of the review shared more broadly with the manager's direct reports (areas for improvement, for example)?
|
||||
|
||||
|
||||
|
||||
Avoid making judgements when answering the above questions. If the process doesn't clearly explain how a decision is made, that might be fine. The questions are simply an opportunity to assess the current state.
|
||||
|
||||
Next, revise the documentation of the process until you are satisfied that it adequately explains the process and anticipates the potential questions.
|
||||
|
||||
### Phase 2: Gather feedback
|
||||
|
||||
The next phase involves sharing the process with your constituents and asking for feedback. Sharing is easier said than done.
|
||||
|
||||
**Step 1.** Encourage people to provide feedback. Consider a variety of mechanisms for doing this:
|
||||
|
||||
* Post the process somewhere people can find it internally and note where they can make comments or provide feedback. A Google document works great with the ability to comment on specific text or suggest changes directly in the text.
|
||||
* Share the process document via email, inviting feedback
|
||||
* Mention the process document and ask for feedback during team meetings or one-on-one conversations
|
||||
* Give people a time window within which to provide feedback, and send periodic reminders during that window.
|
||||
|
||||
|
||||
|
||||
If you don't get much feedback, don't assume that silence is equal to endorsement. Try asking people directly if they have any idea why feedback is not coming in. Are people too busy? Is the process not as important to people as you thought? Have you effectively articulated what you're asking for?
|
||||
|
||||
**Step 2.** Iterate. As you get feedback about the process, engage the team in revising and iterating on the process. Incorporate ideas and suggestions for improvement, and ask for confirmation that the intended feedback has been applied. If you don't agree with a suggestion, be open to the discussion and ask yourself why you don't agree and what the merits are of one method versus another.
|
||||
|
||||
Setting a timebox for collecting feedback and iterating is helpful to move things forward. Once feedback has been collected and reviewed, discussed and applied, post the final process for the team to review.
|
||||
|
||||
### Phase 3: Implement
|
||||
|
||||
Implementing a process is often the hardest phase of the initiative. But if you've taken account of feedback when revising your process, people should already been anticipating it and will likely be more supportive. The documentation you have from the iterative process above is a great tool to keep you accountable on the implementation.
|
||||
|
||||
**Step 1.** Review requirements for implementation. Many processes that can benefit from increased transparency simply require doing things a little differently, but you do want to review whether you need any other support (tooling, for example).
|
||||
|
||||
**Step 2.** Set a timeline for implementation. Review the timeline with constituents so they know what to expect. If the new process requires a process change for others, be sure to provide enough time for people to adapt to the new behavior, and provide communication and reminders.
|
||||
|
||||
**Step 3.** Follow up. After using the process for 3-6 months, check in with your constituents to see how it's going. Is the new process more transparent? More effective? More predictable? Do you have any lessons learned that could be used to improve the process further?
|
||||
|
||||
### About The Author
|
||||
Sam Knuth - I have the privilege to lead the Customer Content Services team at Red Hat, which produces all of the documentation we provide for our customers. Our goal is to provide customers with the insights they need to be successful with open source technology in the enterprise. Connect with me on Twitter.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/17/9/exercise-in-transparent-decisions
|
||||
|
||||
作者:[a][Sam Knuth]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/samfw
|
@ -1,260 +0,0 @@
|
||||
Translating by cielllll
|
||||
|
||||
Three Alternatives for Enabling Two Factor Authentication For SSH On Ubuntu 16.04 And Debian Jessie
|
||||
======
|
||||
Security is now more important than ever and securing your SSH server is one of the most important things that you can do as a systems administrator. Traditionally this has meant disabling password authentication and instead using SSH keys. Whilst this is absolutely the first thing you should do that doesn't mean that SSH can't be made even more secure.
|
||||
|
||||
Two-factor authentication simply means that two means of identification are required to log in. These could be a password and an SSH key, or a key and a 3rd party service like Google. It means that the compromise of a single authentication method does not compromise the server.
|
||||
|
||||
The following guides are three ways to enable two-factor authentication for SSH.
|
||||
|
||||
Whenever you are modifying the configuration of SSH always ensure that you have a second terminal open to the server. The second terminal means that you will be able to fix any mistakes you make with the SSH configuration. Open terminals will stay open even through SSH restarts.
|
||||
|
||||
### SSH Key and Password
|
||||
|
||||
SSH supports the ability to require more than a single authentication method for logins.
|
||||
|
||||
The authentication methods are set with the `AuthenticationMethods` option in the SSH server's configuration file at `/etc/ssh/sshd_config`.
|
||||
|
||||
When the following line is added into `/etc/ssh/sshd_config` SSH requires an SSH key to be submitted and then a password is prompted for:
|
||||
```
|
||||
AuthenticationMethods "publickey,password"
|
||||
|
||||
```
|
||||
|
||||
If you want to set these methods on a per use basis then use the following additional configuration:
|
||||
```
|
||||
Match User jsmith
|
||||
AuthenticationMethods "publickey,password"
|
||||
|
||||
```
|
||||
|
||||
When you have edited and saved the new `sshd_config` file you should check that you did not make any errors by running this command:
|
||||
```
|
||||
sshd -t
|
||||
|
||||
```
|
||||
|
||||
Any syntax or other errors that would stop SSH from starting will be flagged here. When `ssh -t` runs without error use `systemctl` to restart SSH"
|
||||
```
|
||||
systemctl restart sshd
|
||||
|
||||
```
|
||||
|
||||
Now you can log in with a new terminal to check that you are prompted for a password and your SSH key is required. If you use `ssh -v` e.g.:
|
||||
```
|
||||
ssh -v jsmith@example.com
|
||||
|
||||
```
|
||||
|
||||
you will be able to see every step of the login.
|
||||
|
||||
Note, if you do set `password` as a required authentication method then you will need to ensure that `PasswordAuthentication` option is set to `yes`.
|
||||
|
||||
### SSH With Google Authenticator
|
||||
|
||||
Google's two-factor authentication system that is used on Google's own products can be integrated into your SSH server. This makes this method very convenient if you already have use the Google Authenticator app.
|
||||
|
||||
Although the `libpam-google-authenticator` is written by Google it is [open source][1]. Also, the Google Authenticator app is written by Google but does not require a Google account to work. Thanks to [Sitaram Chamarty][2] for the heads up on that.
|
||||
|
||||
If you don't already have the Google Authenticator app installed and configured on your phone please see the instructions [here][3].
|
||||
|
||||
First, we need to install the Google Authenticator package on the server. The following commands will update your system and install the needed packages:
|
||||
```
|
||||
apt-get update
|
||||
apt-get upgrade
|
||||
apt-get install libpam-google-authenticator
|
||||
|
||||
```
|
||||
|
||||
Now, we need to register the server with the Google Authenticator app on your phone. This is done by first running the program we just installed:
|
||||
```
|
||||
google-authenticator
|
||||
|
||||
```
|
||||
|
||||
You will be asked a few questions when you run this. You should answer in the way that suits your setup, however, the most secure options are to answer `y` to every question. If you need to change these later you can simply re-run `google-authenticator` and select different options.
|
||||
|
||||
When you run `google-authenticator` a QR code will be printed to the terminal and some codes that look like:
|
||||
```
|
||||
Your new secret key is: VMFY27TYDFRDNKFY
|
||||
Your verification code is 259652
|
||||
Your emergency scratch codes are:
|
||||
96915246
|
||||
70222983
|
||||
31822707
|
||||
25181286
|
||||
28919992
|
||||
|
||||
```
|
||||
|
||||
You should record all of these codes to a secure location like a password manager. The scratch codes are single use codes that will always allow you access even if your phone is unavailable.
|
||||
|
||||
All you need to do to register your server with the Authenticator app is to open the app and hit the red plus symbol on the bottom right. Then select the **Scan a barcode** option and scan the QR code that was printed to the terminal. Your server and the app are now linked.
|
||||
|
||||
Back on the server, we now need to edit the PAM (Pluggable Authentication Module) for SSH so that it uses the authenticator package we just installed. PAM is the standalone system that takes care of most authentication on a Linux server.
|
||||
|
||||
The PAM file for SSH that needs modifying is located at `/etc/pam.d/sshd` and edited with the following command:
|
||||
```
|
||||
nano /etc/pam.d/sshd
|
||||
|
||||
```
|
||||
|
||||
Add the following line to the top of the file:
|
||||
```
|
||||
auth required pam_google_authenticator.so
|
||||
|
||||
```
|
||||
|
||||
In addition, we also need to comment out a line so that PAM will not prompt for a password. Change this line:
|
||||
```
|
||||
# Standard Un*x authentication.
|
||||
@include common-auth
|
||||
|
||||
```
|
||||
|
||||
To this:
|
||||
```
|
||||
# Standard Un*x authentication.
|
||||
# @include common-auth
|
||||
|
||||
```
|
||||
|
||||
Next, we need to edit the SSH server configuration file:
|
||||
```
|
||||
nano /etc/ssh/sshd_config
|
||||
|
||||
```
|
||||
|
||||
And change this line:
|
||||
```
|
||||
ChallengeResponseAuthentication no
|
||||
|
||||
```
|
||||
|
||||
To:
|
||||
```
|
||||
ChallengeResponseAuthentication yes
|
||||
|
||||
```
|
||||
|
||||
Next, add the following line to enable two authentication schemes: SSH keys and Google Authenticator (keyboard-interactive):
|
||||
```
|
||||
AuthenticationMethods "publickey,keyboard-interactive"
|
||||
|
||||
```
|
||||
|
||||
Before we reload the SSH server it is a good idea to check that we did not make any errors in the configuration. This is done with the following command:
|
||||
```
|
||||
sshd -t
|
||||
|
||||
```
|
||||
|
||||
If this does not flag any errors, reload SSH with the new configuration:
|
||||
```
|
||||
systemctl reload sshd.service
|
||||
|
||||
```
|
||||
|
||||
Everything should now be working. Now, when you log in to your server you will need to use your SSH key, and when you are prompted for the:
|
||||
```
|
||||
Verification code:
|
||||
|
||||
```
|
||||
|
||||
open the Authenticator app and enter the 6-digit code that is displayed for your server.
|
||||
|
||||
### Authy
|
||||
|
||||
[Authy][4] is a two-factor authentication service that, like Google, offers time-based codes. However, Authy does not require a phone, as it provides desktop and tablet clients. It also supports offline authentication and does not require a Google account.
|
||||
|
||||
You will need to install the Authy app from your app store, or the desktop client; both are linked from the Authy [download page][5].
|
||||
|
||||
After you have installed the app you will need an API key that will be used on the server. This process requires a few steps:
|
||||
|
||||
1. Sign up for an account [here][6].
|
||||
2. Scroll down to the **Authy** section.
|
||||
3. Enable 2FA on the account.
|
||||
4. Return to the **Authy** section.
|
||||
5. Create a new Application for your server.
|
||||
6. Obtain the API key from the top of the `General Settings` page for the new Application. You need to click the eye symbol next to the `PRODUCTION API KEY` line to reveal the key. Shown here:
|
||||
|
||||
|
||||
|
||||
![][7]
|
||||
|
||||
Take a note of the API key somewhere secure.
|
||||
|
||||
Now, go back to your server and run the following commands as root:
|
||||
```
|
||||
curl -O 'https://raw.githubusercontent.com/authy/authy-ssh/master/authy-ssh'
|
||||
bash authy-ssh install /usr/local/bin
|
||||
|
||||
```
|
||||
|
||||
Enter the API key when prompted. If you input it incorrectly you can always edit `/usr/local/bin/authy-ssh.conf` and add it again.
|
||||
|
||||
Authy is now installed. However, it will not start working until it is enabled for a user. The command to enable Authy has the form:
|
||||
```
|
||||
/usr/local/bin/authy-ssh enable <system-user> <your-email> <your-phone-country-code> <your-phone-number>
|
||||
|
||||
```
|
||||
|
||||
With some example details for **root** logins:
|
||||
```
|
||||
/usr/local/bin/authy-ssh enable root john@example.com 44 20822536476
|
||||
|
||||
```
|
||||
|
||||
If everything was successful you will see:
|
||||
```
|
||||
User was registered
|
||||
|
||||
```
|
||||
|
||||
You can test Authy now by running the command:
|
||||
```
|
||||
authy-ssh test
|
||||
|
||||
```
|
||||
|
||||
Finally, reload SSH to implement the new configuration:
|
||||
```
|
||||
systemctl reload sshd.service
|
||||
|
||||
```
|
||||
|
||||
Authy is now working and will be required for SSH logins.
|
||||
|
||||
Now, when you log in you will see the following prompt:
|
||||
```
|
||||
Authy Token (type 'sms' to request a SMS token):
|
||||
|
||||
```
|
||||
|
||||
You can either enter the code from the Authy app on your phone or desktop client, or you can type `sms` and Authy will send you an SMS message with a login code.
|
||||
|
||||
Authy is uninstalled by running the following:
|
||||
```
|
||||
/usr/local/bin/authy-ssh uninstall
|
||||
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://bash-prompt.net/guides/ssh-2fa/
|
||||
|
||||
作者:[Elliot Cooper][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://bash-prompt.net
|
||||
[1]:https://github.com/google/google-authenticator-libpam
|
||||
[2]:https://plus.google.com/115609618223925128756
|
||||
[3]:https://support.google.com/accounts/answer/1066447?hl=en
|
||||
[4]:https://authy.com/
|
||||
[5]:https://authy.com/download/
|
||||
[6]:https://www.authy.com/signup
|
||||
[7]:/images/guides/2FA/twilio-authy-api.png
|
@ -1,3 +1,5 @@
|
||||
Translating by Jamkr
|
||||
|
||||
Scout out code problems with SonarQube
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration_0.png?itok=YEl_GXbv)
|
||||
|
@ -1,3 +1,6 @@
|
||||
Translating by MjSeven
|
||||
|
||||
|
||||
For your first HTML code, let’s help Batman write a love letter
|
||||
============================================================
|
||||
|
||||
@ -553,360 +556,4 @@ We want to apply our styles to the specific div and img that we are using right
|
||||
<div id="letter-container">
|
||||
```
|
||||
|
||||
and here’s how to use this id in our embedded style as a selector:
|
||||
|
||||
```
|
||||
#letter-container{
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
Notice the “#” symbol. It indicates that it is an id, and the styles inside {…} should apply to the element with that specific id only.
|
||||
|
||||
Let’s apply this to our code:
|
||||
|
||||
```
|
||||
<style>
|
||||
#letter-container{
|
||||
width:550px;
|
||||
}
|
||||
#header-bat-logo{
|
||||
width:100%;
|
||||
}
|
||||
</style>
|
||||
```
|
||||
|
||||
```
|
||||
<div id="letter-container">
|
||||
<h1>Bat Letter</h1>
|
||||
<img id="header-bat-logo" src="bat-logo.jpeg">
|
||||
<p>
|
||||
After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
|
||||
</p>
|
||||
```
|
||||
|
||||
```
|
||||
<h2>You are the light of my life</h2>
|
||||
<p>
|
||||
You complete my darkness with your light. I love:
|
||||
</p>
|
||||
<ul>
|
||||
<li>the way you see good in the worse</li>
|
||||
<li>the way you handle emotionally difficult situations</li>
|
||||
<li>the way you look at Justice</li>
|
||||
</ul>
|
||||
<p>
|
||||
I have learned a lot from you. You have occupied a special place in my heart over time.
|
||||
</p>
|
||||
<h2>I have a confession to make</h2>
|
||||
<p>
|
||||
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
|
||||
</p>
|
||||
<p>
|
||||
I don't show my emotions, but I think this man behind the mask is falling for you.
|
||||
</p>
|
||||
<p><strong>I love you Superman.</strong></p>
|
||||
<p>
|
||||
Your not-so-secret-lover, <br>
|
||||
Batman
|
||||
</p>
|
||||
</div>
|
||||
```
|
||||
|
||||
Our HTML is ready with embedded styling.
|
||||
|
||||
However, you can see that as we include more styles, the `<style></style>` block will get bigger. This can quickly clutter our main HTML file. So let's go one step further and use linked styling by copying the content inside our style tag to a new file.
|
||||
|
||||
Create a new file in the project root directory and save it as style.css:
|
||||
|
||||
```
|
||||
#letter-container{
|
||||
width:550px;
|
||||
}
|
||||
#header-bat-logo{
|
||||
width:100%;
|
||||
}
|
||||
```
|
||||
|
||||
We don’t need to write `<style>` and `</style>` in our CSS file.
|
||||
|
||||
We need to link our newly created CSS file to our HTML file using the `<link>` tag. Here's how we can do that:
|
||||
|
||||
```
|
||||
<link rel="stylesheet" type="text/css" href="style.css">
|
||||
```
|
||||
|
||||
We use the link element to include external resources inside our HTML document. It is mostly used to link stylesheets. The three attributes that we are using are:
|
||||
|
||||
* rel: Relation. What relationship the linked file has to the document. The file with the .css extension is called a stylesheet, and so we keep rel=“stylesheet”.
|
||||
|
||||
* type: the Type of the linked file; it’s “text/css” for a CSS file.
|
||||
|
||||
* href: Hypertext Reference. Location of the linked file.
|
||||
|
||||
There is no </link> at the end of the link element. So, <link> is also a self-closing tag.
|
||||
|
||||
```
|
||||
<link rel="gf" type="cute" href="girl.next.door">
|
||||
```
|
||||
|
||||
If only getting a Girlfriend was so easy :D
|
||||
|
||||
Nah, that’s not gonna happen, let’s move on.
|
||||
|
||||
Here’s the content of our loveletter.html:
|
||||
|
||||
```
|
||||
<link rel="stylesheet" type="text/css" href="style.css">
|
||||
<div id="letter-container">
|
||||
<h1>Bat Letter</h1>
|
||||
<img id="header-bat-logo" src="bat-logo.jpeg">
|
||||
<p>
|
||||
After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
|
||||
</p>
|
||||
<h2>You are the light of my life</h2>
|
||||
<p>
|
||||
You complete my darkness with your light. I love:
|
||||
</p>
|
||||
<ul>
|
||||
<li>the way you see good in the worse</li>
|
||||
<li>the way you handle emotionally difficult situations</li>
|
||||
<li>the way you look at Justice</li>
|
||||
</ul>
|
||||
<p>
|
||||
I have learned a lot from you. You have occupied a special place in my heart over time.
|
||||
</p>
|
||||
<h2>I have a confession to make</h2>
|
||||
<p>
|
||||
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
|
||||
</p>
|
||||
<p>
|
||||
I don't show my emotions, but I think this man behind the mask is falling for you.
|
||||
</p>
|
||||
<p><strong>I love you Superman.</strong></p>
|
||||
<p>
|
||||
Your not-so-secret-lover, <br>
|
||||
Batman
|
||||
</p>
|
||||
</div>
|
||||
```
|
||||
|
||||
and our style.css:
|
||||
|
||||
```
|
||||
#letter-container{
|
||||
width:550px;
|
||||
}
|
||||
#header-bat-logo{
|
||||
width:100%;
|
||||
}
|
||||
```
|
||||
|
||||
Save both the files and refresh, and your output in the browser should remain the same.
|
||||
|
||||
### A Few Formalities
|
||||
|
||||
Our love letter is almost ready for Batman to deliver, but there are a few formal pieces remaining.
|
||||
|
||||
Like any other language, HTML has gone through many versions since its birth year (1990). The current version of HTML is HTML5.
|
||||
|
||||
So, how would the browser know which version of HTML you are using to code your page? To tell the browser that you are using HTML5, you need to include `<!DOCTYPE html>` at top of the page. For older versions of HTML, this line used to be different, but you don’t need to learn that because we don’t use them anymore.
|
||||
|
||||
Also, in previous HTML versions, we used to encapsulate the entire document inside the `<html></html>` tag. The entire file was divided into two major sections: Head, inside `<head></head>`, and Body, inside `<body></body>`. This is not required in HTML5, but we still do this for compatibility reasons. Let's update our code with `<!DOCTYPE html>`, `<html>`, `<head>` and `<body>`:
|
||||
|
||||
```
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<link rel="stylesheet" type="text/css" href="style.css">
|
||||
</head>
|
||||
<body>
|
||||
<div id="letter-container">
|
||||
<h1>Bat Letter</h1>
|
||||
<img id="header-bat-logo" src="bat-logo.jpeg">
|
||||
<p>
|
||||
After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
|
||||
</p>
|
||||
<h2>You are the light of my life</h2>
|
||||
<p>
|
||||
You complete my darkness with your light. I love:
|
||||
</p>
|
||||
<ul>
|
||||
<li>the way you see good in the worse</li>
|
||||
<li>the way you handle emotionally difficult situations</li>
|
||||
<li>the way you look at Justice</li>
|
||||
</ul>
|
||||
<p>
|
||||
I have learned a lot from you. You have occupied a special place in my heart over time.
|
||||
</p>
|
||||
<h2>I have a confession to make</h2>
|
||||
<p>
|
||||
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
|
||||
</p>
|
||||
<p>
|
||||
I don't show my emotions, but I think this man behind the mask is falling for you.
|
||||
</p>
|
||||
<p><strong>I love you Superman.</strong></p>
|
||||
<p>
|
||||
Your not-so-secret-lover, <br>
|
||||
Batman
|
||||
</p>
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
The main content goes inside `<body>` and meta information goes inside `<head>`. So we keep the div inside `<body>` and load the stylesheets inside `<head>`.
|
||||
|
||||
Save and refresh, and your HTML page should display the same as earlier.
|
||||
|
||||
### Title in HTML
|
||||
|
||||
This is the last change. I promise.
|
||||
|
||||
You might have noticed that the title of the tab is displaying the path of the HTML file:
|
||||
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1000/1*PASKm4ji29hbcZXVSP8afg.jpeg)
|
||||
|
||||
We can use the `<title>` tag to define a title for our HTML file. The title tag, like the link tag, also goes inside `<head>`. Let's put "Bat Letter" in our title:
|
||||
|
||||
```
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Bat Letter</title>
|
||||
<link rel="stylesheet" type="text/css" href="style.css">
|
||||
</head>
|
||||
<body>
|
||||
<div id="letter-container">
|
||||
<h1>Bat Letter</h1>
|
||||
<img id="header-bat-logo" src="bat-logo.jpeg">
|
||||
<p>
|
||||
After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
|
||||
</p>
|
||||
<h2>You are the light of my life</h2>
|
||||
<p>
|
||||
You complete my darkness with your light. I love:
|
||||
</p>
|
||||
<ul>
|
||||
<li>the way you see good in the worse</li>
|
||||
<li>the way you handle emotionally difficult situations</li>
|
||||
<li>the way you look at Justice</li>
|
||||
</ul>
|
||||
<p>
|
||||
I have learned a lot from you. You have occupied a special place in my heart over time.
|
||||
</p>
|
||||
<h2>I have a confession to make</h2>
|
||||
<p>
|
||||
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
|
||||
</p>
|
||||
<p>
|
||||
I don't show my emotions, but I think this man behind the mask is falling for you.
|
||||
</p>
|
||||
<p><strong>I love you Superman.</strong></p>
|
||||
<p>
|
||||
Your not-so-secret-lover, <br>
|
||||
Batman
|
||||
</p>
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
Save and refresh, and you will see that instead of the file path, “Bat Letter” is now displayed on the tab.
|
||||
|
||||
Batman’s Love Letter is now complete.
|
||||
|
||||
Congratulations! You made Batman’s Love Letter in HTML.
|
||||
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1000/1*qC8qtrYtxAB6cJfm9aVOOQ.jpeg)
|
||||
|
||||
### What we learned
|
||||
|
||||
We learned the following new concepts:
|
||||
|
||||
* The structure of an HTML document
|
||||
|
||||
* How to write elements in HTML (<p></p>)
|
||||
|
||||
* How to write styles inside the element using the style attribute (this is called inline styling, avoid this as much as you can)
|
||||
|
||||
* How to write styles of an element inside <style>…</style> (this is called embedded styling)
|
||||
|
||||
* How to write styles in a separate file and link to it in HTML using <link> (this is called a linked stylesheet)
|
||||
|
||||
* What is a tag name, attribute, opening tag, and closing tag
|
||||
|
||||
* How to give an id to an element using id attribute
|
||||
|
||||
* Tag selectors and id selectors in CSS
|
||||
|
||||
We learned the following HTML tags:
|
||||
|
||||
* <p>: for paragraphs
|
||||
|
||||
* <br>: for line breaks
|
||||
|
||||
* <ul>, <li>: to display lists
|
||||
|
||||
* <div>: for grouping elements of our letter
|
||||
|
||||
* <h1>, <h2>: for heading and sub heading
|
||||
|
||||
* <img>: to insert an image
|
||||
|
||||
* <strong>, <em>: for bold and italic text styling
|
||||
|
||||
* <style>: for embedded styling
|
||||
|
||||
* <link>: for including an external stylesheet
|
||||
|
||||
* <html>: to wrap the entire HTML document
|
||||
|
||||
* <!DOCTYPE html>: to let the browser know that we are using HTML5
|
||||
|
||||
* <head>: to wrap meta info, like <link> and <title>
|
||||
|
||||
* <body>: for the body of the HTML page that is actually displayed
|
||||
|
||||
* <title>: for the title of the HTML page
|
||||
|
||||
We learned the following CSS properties:
|
||||
|
||||
* width: to define the width of an element
|
||||
|
||||
* CSS units: “px” and “%”
|
||||
|
||||
That’s it for the day friends, see you in the next tutorial.
|
||||
|
||||
* * *
|
||||
|
||||
Want to learn Web Development with fun and engaging tutorials?
|
||||
|
||||
[Click here to get new Web Development tutorials every week.][4]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
作者简介:
|
||||
|
||||
Developer + Writer | supersarkar.com | twitter.com/supersarkar
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: https://medium.freecodecamp.org/for-your-first-html-code-lets-help-batman-write-a-love-letter-64c203b9360b
|
||||
|
||||
作者:[Kunal Sarkar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://medium.freecodecamp.org/@supersarkar
|
||||
[1]:https://www.pexels.com/photo/batman-black-and-white-logo-93596/
|
||||
[2]:https://code.visualstudio.com/
|
||||
[3]:https://www.pexels.com/photo/batman-black-and-white-logo-93596/
|
||||
[4]:http://supersarkar.com/
|
||||
and here’s how to use th
|
||||
|
@ -1,258 +0,0 @@
|
||||
translating by Flowsnow
|
||||
|
||||
Build a bikesharing app with Redis and Python
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/google-bikes-yearbook.png?itok=BnmInwea)
|
||||
|
||||
I travel a lot on business. I'm not much of a car guy, so when I have some free time, I prefer to walk or bike around a city. Many of the cities I've visited on business have bikeshare systems, which let you rent a bike for a few hours. Most of these systems have an app to help users locate and rent their bikes, but it would be more helpful for users like me to have a single place to get information on all the bikes in a city that are available to rent.
|
||||
|
||||
To solve this problem and demonstrate the power of open source to add location-aware features to a web application, I combined publicly available bikeshare data, the [Python][1] programming language, and the open source [Redis][2] in-memory data structure server to index and query geospatial data.
|
||||
|
||||
The resulting bikeshare application incorporates data from many different sharing systems, including the [Citi Bike][3] bikeshare in New York City. It takes advantage of the General Bikeshare Feed provided by the Citi Bike system and uses its data to demonstrate some of the features that can be built using Redis to index geospatial data. The Citi Bike data is provided under the [Citi Bike data license agreement][4].
|
||||
|
||||
### General Bikeshare Feed Specification
|
||||
|
||||
The General Bikeshare Feed Specification (GBFS) is an [open data specification][5] developed by the [North American Bikeshare Association][6] to make it easier for map and transportation applications to add bikeshare systems into their platforms. The specification is currently in use by over 60 different sharing systems in the world.
|
||||
|
||||
The feed consists of several simple [JSON][7] data files containing information about the state of the system. The feed starts with a top-level JSON file referencing the URLs of the sub-feed data:
|
||||
```
|
||||
{
|
||||
|
||||
"data": {
|
||||
|
||||
"en": {
|
||||
|
||||
"feeds": [
|
||||
|
||||
{
|
||||
|
||||
"name": "system_information",
|
||||
|
||||
"url": "https://gbfs.citibikenyc.com/gbfs/en/system_information.json"
|
||||
|
||||
},
|
||||
|
||||
{
|
||||
|
||||
"name": "station_information",
|
||||
|
||||
"url": "https://gbfs.citibikenyc.com/gbfs/en/station_information.json"
|
||||
|
||||
},
|
||||
|
||||
. . .
|
||||
|
||||
]
|
||||
|
||||
}
|
||||
|
||||
},
|
||||
|
||||
"last_updated": 1506370010,
|
||||
|
||||
"ttl": 10
|
||||
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
The first step is loading information about the bikesharing stations into Redis using data from the `system_information` and `station_information` feeds.
|
||||
|
||||
The `system_information` feed provides the system ID, which is a short code that can be used to create namespaces for Redis keys. The GBFS spec doesn't specify the format of the system ID, but does guarantee it is globally unique. Many of the bikeshare feeds use short names like coast_bike_share, boise_greenbike, or topeka_metro_bikes for system IDs. Others use familiar geographic abbreviations such as NYC or BA, and one uses a universally unique identifier (UUID). The bikesharing application uses the identifier as a prefix to construct unique keys for the given system.
|
||||
|
||||
The `station_information` feed provides static information about the sharing stations that comprise the system. Stations are represented by JSON objects with several fields. There are several mandatory fields in the station object that provide the ID, name, and location of the physical bike stations. There are also several optional fields that provide helpful information such as the nearest cross street or accepted payment methods. This is the primary source of information for this part of the bikesharing application.
|
||||
|
||||
### Building the database
|
||||
|
||||
I've written a sample application, [load_station_data.py][8], that mimics what would happen in a backend process for loading data from external sources.
|
||||
|
||||
### Finding the bikeshare stations
|
||||
|
||||
Loading the bikeshare data starts with the [systems.csv][9] file from the [GBFS repository on GitHub][5].
|
||||
|
||||
The repository's [systems.csv][9] file provides the discovery URL for registered bikeshare systems with an available GBFS feed. The discovery URL is the starting point for processing bikeshare information.
|
||||
|
||||
The `load_station_data` application takes each discovery URL found in the systems file and uses it to find the URL for two sub-feeds: system information and station information. The system information feed provides a key piece of information: the unique ID of the system. (Note: the system ID is also provided in the systems.csv file, but some of the identifiers in that file do not match the identifiers in the feeds, so I always fetch the identifier from the feed.) Details on the system, like bikeshare URLs, phone numbers, and emails, could be added in future versions of the application, so the data is stored in a Redis hash using the key `${system_id}:system_info`.
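
For illustration, a minimal Python sketch of this step (not the author's [load_station_data.py][8] itself) might look like the following, assuming the `requests` and `redis` packages and the English (`en`) feed list shown in the JSON sample above:

```python
import requests
import redis

r = redis.Redis(decode_responses=True)

def load_system_info(discovery_url):
    # Follow the discovery URL to the system_information sub-feed
    feeds = requests.get(discovery_url).json()["data"]["en"]["feeds"]
    info_url = next(f["url"] for f in feeds if f["name"] == "system_information")
    info = requests.get(info_url).json()["data"]

    # Store the scalar fields in a hash keyed by the system ID
    system_id = info["system_id"]
    fields = {k: v for k, v in info.items() if isinstance(v, (str, int, float))}
    r.hset(f"{system_id}:system_info", mapping=fields)
    return system_id
```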
|
||||
|
||||
### Loading the station data
|
||||
|
||||
The station information provides data about every station in the system, including each station's location. The `load_station_data` application iterates over every station in the station feed and stores the data about each into a Redis hash using a key of the form `${system_id}:station:${station_id}`. The location of each station is added to a geospatial index for the bikeshare using the `GEOADD` command.
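
A sketch of that loading loop, again assuming `requests` and redis-py 4.x (where `geoadd` takes a flat list of longitude, latitude, and member values):

```python
import requests
import redis

r = redis.Redis(decode_responses=True)

def load_stations(system_id, station_info_url):
    stations = requests.get(station_info_url).json()["data"]["stations"]
    geo_key = f"{system_id}:stations:location"

    for s in stations:
        station_key = f"{system_id}:station:{s['station_id']}"
        # Store the station's scalar fields in its own hash
        fields = {k: v for k, v in s.items() if isinstance(v, (str, int, float))}
        r.hset(station_key, mapping=fields)
        # GEOADD takes longitude before latitude
        r.geoadd(geo_key, [s["lon"], s["lat"], station_key])
```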
|
||||
|
||||
### Updating data
|
||||
|
||||
On subsequent runs, I don't want the code to remove all the feed data from Redis and reload it into an empty Redis database, so I carefully considered how to handle in-place updates of the data.
|
||||
|
||||
The code starts by loading into memory the set of all bikesharing stations currently stored for the system being processed. When information is loaded for a station from the feed, that station (by key) is removed from the in-memory set. Once all station data is loaded, we're left with a set containing exactly the stations that must be removed for that system.
|
||||
|
||||
The application iterates over this set of stations and creates a transaction to delete the station information, remove the station key from the geospatial indexes, and remove the station from the list of stations for the system.
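
With redis-py, that cleanup could look roughly like this; the `${system_id}:stations` set name is an assumption, since the exact key used for the per-system station list isn't shown in this article:

```python
import redis

r = redis.Redis(decode_responses=True)

def remove_stale_stations(system_id, stale_station_keys):
    pipe = r.pipeline(transaction=True)
    for station_key in stale_station_keys:
        pipe.delete(station_key)                                   # station hash
        pipe.zrem(f"{system_id}:stations:location", station_key)   # geospatial index
        pipe.srem(f"{system_id}:stations", station_key)            # assumed per-system set
    pipe.execute()
```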
|
||||
|
||||
### Notes on the code
|
||||
|
||||
There are a few interesting things to note in [the sample code][8]. First, items are added to the geospatial indexes using the `GEOADD` command but removed with the `ZREM` command. As the underlying implementation of the geospatial type uses sorted sets, items are removed using `ZREM`. A word of caution: For simplicity, the sample code demonstrates working with a single Redis node; the transaction blocks would need to be restructured to run in a cluster environment.
|
||||
|
||||
If you are using Redis 4.0 (or later), you have some alternatives to the `DEL` and `HMSET` commands in the code. Redis 4.0 provides the [`UNLINK`][10] command as an asynchronous alternative to the `DEL` command. `UNLINK` will remove the key from the keyspace, but it reclaims the memory in a separate thread. The [`HMSET`][11] command is [deprecated in Redis 4.0 and the `HSET` command is now variadic][12] (that is, it accepts an indefinite number of arguments).
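
For example, with a recent redis-py those alternatives look roughly like this (two independent snippets; the key and coordinates are taken from the `GEOPOS` example later in this article):

```python
import redis

r = redis.Redis(decode_responses=True)

# UNLINK removes the key right away but frees its memory in a background thread
r.unlink("NYC:station:523")

# With Redis >= 4.0, HSET is variadic, so HMSET is no longer needed;
# redis-py exposes this through the mapping argument
r.hset("NYC:station:523", mapping={"lat": "40.75466", "lon": "-73.99138"})
```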
|
||||
|
||||
### Notifying clients
|
||||
|
||||
At the end of the process, a notification is sent to the clients relying on our data. Using the Redis pub/sub mechanism, the notification goes out over the `geobike:station_changed` channel with the ID of the system.
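
The publish side of that notification is a one-liner with redis-py (a sketch; subscribers would listen on the same channel with a pubsub object):

```python
import redis

r = redis.Redis()

def notify_station_changed(system_id):
    # Tell interested clients which system's station data just changed
    r.publish("geobike:station_changed", system_id)
```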
|
||||
|
||||
### Data model
|
||||
|
||||
When structuring data in Redis, the most important thing to think about is how you will query the information. The two main queries the bikeshare application needs to support are:
|
||||
|
||||
* Find stations near us
|
||||
* Display information about stations
|
||||
|
||||
|
||||
|
||||
Redis provides two main data types that will be useful for storing our data: hashes and sorted sets. The [hash type][13] maps well to the JSON objects that represent stations; since Redis hashes don't enforce a schema, they can be used to store the variable station information.
|
||||
|
||||
Of course, finding stations geographically requires a geospatial index to search for stations relative to some coordinates. Redis provides [several commands][14] to build up a geospatial index using the [sorted set][15] data structure.
|
||||
|
||||
We construct keys using the format `${system_id}:station:${station_id}` for the hashes containing information about the stations and keys using the format `${system_id}:stations:location` for the geospatial index used to find stations.
|
||||
|
||||
### Getting the user's location
|
||||
|
||||
The next step in building out the application is to determine the user's current location. Most applications accomplish this through built-in services provided by the operating system. The OS can provide applications with a location based on GPS hardware built into the device or approximated from the device's available WiFi networks.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/rediscli_map.png?itok=icqk5543)
|
||||
|
||||
### Finding stations
|
||||
|
||||
After the user's location is found, the next step is locating nearby bikesharing stations. Redis' geospatial functions can return information on stations within a given distance of the user's current coordinates. Here's an example of this using the Redis command-line interface.
|
||||
|
||||
Imagine I'm at the Apple Store on Fifth Avenue in New York City, and I want to head downtown to Mood on West 37th to catch up with my buddy [Swatch][16]. I could take a taxi or the subway, but I'd rather bike. Are there any nearby sharing stations where I could get a bike for my trip?
|
||||
|
||||
The Apple store is located at 40.76384, -73.97297. According to the map, two bikeshare stations—Grand Army Plaza & Central Park South and East 58th St. & Madison—fall within a 500-foot radius (in blue on the map above) of the store.
|
||||
|
||||
I can use Redis' `GEORADIUS` command to query the NYC system index for stations within a 500-foot radius:
|
||||
```
|
||||
127.0.0.1:6379> GEORADIUS NYC:stations:location -73.97297 40.76384 500 ft
|
||||
|
||||
1) "NYC:station:3457"
|
||||
|
||||
2) "NYC:station:281"
|
||||
|
||||
```
|
||||
|
||||
Redis returns the two bikeshare locations found within that radius, using the elements in our geospatial index as the keys for the metadata about a particular station. The next step is looking up the names for the two stations:
|
||||
```
|
||||
127.0.0.1:6379> hget NYC:station:281 name
|
||||
|
||||
"Grand Army Plaza & Central Park S"
|
||||
|
||||
|
||||
|
||||
127.0.0.1:6379> hget NYC:station:3457 name
|
||||
|
||||
"E 58 St & Madison Ave"
|
||||
|
||||
```
|
||||
|
||||
Those keys correspond to the stations identified on the map above. If I want, I can add more flags to the `GEORADIUS` command to get a list of elements, their coordinates, and their distance from our current point:
|
||||
```
|
||||
127.0.0.1:6379> GEORADIUS NYC:stations:location -73.97297 40.76384 500 ft WITHDIST WITHCOORD ASC
|
||||
|
||||
1) 1) "NYC:station:281"
|
||||
|
||||
2) "289.1995"
|
||||
|
||||
3) 1) "-73.97371262311935425"
|
||||
|
||||
2) "40.76439830559216659"
|
||||
|
||||
2) 1) "NYC:station:3457"
|
||||
|
||||
2) "383.1782"
|
||||
|
||||
3) 1) "-73.97209256887435913"
|
||||
|
||||
2) "40.76302702144496237"
|
||||
|
||||
```
|
||||
|
||||
Looking up the names associated with those keys generates an ordered list of stations I can choose from. Redis doesn't provide directions or routing capability, so I use the routing features of my device's OS to plot a course from my current location to the selected bike station.
|
||||
|
||||
The `GEORADIUS` function can be easily implemented inside an API in your favorite development framework to add location functionality to an app.
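
As a hedged sketch of what such an endpoint's core logic might look like in Python (the function name and return shape are illustrative, not part of the sample code):

```python
import redis

r = redis.Redis(decode_responses=True)

def stations_near(system_id, lon, lat, radius=500, unit="ft"):
    geo_key = f"{system_id}:stations:location"
    # Members of the geospatial index within the radius, sorted by distance
    nearby = r.georadius(geo_key, lon, lat, radius, unit=unit,
                         withdist=True, sort="ASC")
    # Look up the human-readable name stored in each station hash
    return [{"station": member, "name": r.hget(member, "name"), "distance": dist}
            for member, dist in nearby]
```

Calling `stations_near("NYC", -73.97297, 40.76384)` would reproduce the command-line query above from application code.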
|
||||
|
||||
### Other query commands
|
||||
|
||||
In addition to the `GEORADIUS` command, Redis provides three other commands for querying data from the index: `GEOPOS`, `GEODIST`, and `GEORADIUSBYMEMBER`.
|
||||
|
||||
The `GEOPOS` command can provide the coordinates for a given element from the geohash. For example, if I know there is a bikesharing station at West 38th and 8th and its ID is 523, then the element name for that station is `NYC:station:523`. Using Redis, I can find the station's longitude and latitude:
|
||||
```
|
||||
127.0.0.1:6379> geopos NYC:stations:location NYC:station:523
|
||||
|
||||
1) 1) "-73.99138301610946655"
|
||||
|
||||
2) "40.75466497634030105"
|
||||
|
||||
```
|
||||
|
||||
The `GEODIST` command provides the distance between two elements of the index. If I wanted to find the distance between the station at Grand Army Plaza & Central Park South and the station at East 58th St. & Madison, I would issue the following command:
|
||||
```
|
||||
127.0.0.1:6379> GEODIST NYC:stations:location NYC:station:281 NYC:station:3457 ft
|
||||
|
||||
"671.4900"
|
||||
|
||||
```
|
||||
|
||||
Finally, the `GEORADIUSBYMEMBER` command is similar to the `GEORADIUS` command, but instead of taking a set of coordinates, the command takes the name of another member of the index and returns all the members within a given radius centered on that member. To find all the stations within 1,000 feet of the Grand Army Plaza & Central Park South, enter the following:
|
||||
```
|
||||
127.0.0.1:6379> GEORADIUSBYMEMBER NYC:stations:location NYC:station:281 1000 ft WITHDIST
|
||||
|
||||
1) 1) "NYC:station:281"
|
||||
|
||||
2) "0.0000"
|
||||
|
||||
2) 1) "NYC:station:3132"
|
||||
|
||||
2) "793.4223"
|
||||
|
||||
3) 1) "NYC:station:2006"
|
||||
|
||||
2) "911.9752"
|
||||
|
||||
4) 1) "NYC:station:3136"
|
||||
|
||||
2) "940.3399"
|
||||
|
||||
5) 1) "NYC:station:3457"
|
||||
|
||||
2) "671.4900"
|
||||
|
||||
```
|
||||
|
||||
While this example focused on using Python and Redis to parse data and build an index of bikesharing system locations, it can easily be generalized to locate restaurants, public transit, or any other type of place developers want to help users find.
|
||||
|
||||
This article is based on [my presentation][17] at Open Source 101 in Raleigh this year.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/building-bikesharing-application-open-source-tools
|
||||
|
||||
作者:[Tague Griffith][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/tague
|
||||
[1]:https://www.python.org/
|
||||
[2]:https://redis.io/
|
||||
[3]:https://www.citibikenyc.com/
|
||||
[4]:https://www.citibikenyc.com/data-sharing-policy
|
||||
[5]:https://github.com/NABSA/gbfs
|
||||
[6]:http://nabsa.net/
|
||||
[7]:https://www.json.org/
|
||||
[8]:https://gist.github.com/tague/5a82d96bcb09ce2a79943ad4c87f6e15
|
||||
[9]:https://github.com/NABSA/gbfs/blob/master/systems.csv
|
||||
[10]:https://redis.io/commands/unlink
|
||||
[11]:https://redis.io/commands/hmset
|
||||
[12]:https://raw.githubusercontent.com/antirez/redis/4.0/00-RELEASENOTES
|
||||
[13]:https://redis.io/topics/data-types#Hashes
|
||||
[14]:https://redis.io/commands#geo
|
||||
[15]:https://redis.io/topics/data-types-intro#redis-sorted-sets
|
||||
[16]:https://twitter.com/swatchthedog
|
||||
[17]:http://opensource101.com/raleigh/talks/building-location-aware-apps-open-source-tools/
|
@ -1,3 +1,7 @@
|
||||
**translating by [erlinux](https://github.com/erlinux)**
|
||||
**PROJECT MANAGEMENT TOOL called [gn2.sh](https://github.com/lctt/lctt-cli)**
|
||||
|
||||
|
||||
How to analyze your system with perf and Python
|
||||
======
|
||||
|
||||
|
@ -1,165 +0,0 @@
|
||||
HackChow translating
|
||||
|
||||
5 alerting and visualization tools for sysadmins
|
||||
======
|
||||
These open source tools help users understand system behavior and output, and provide alerts for potential problems.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-)
|
||||
|
||||
You probably know (or can guess) what alerting and visualization tools are used for. Why would we discuss them as observability tools, especially since some systems include visualization as a feature?
|
||||
|
||||
Observability comes from control theory and describes our ability to understand a system based on its inputs and outputs. This article focuses on the output component of observability.
|
||||
|
||||
Alerting and visualization tools analyze the outputs of other systems and provide structured representations of these outputs. Alerts are basically a synthesized understanding of negative system outputs, and visualizations are disambiguated structured representations that facilitate user comprehension.
|
||||
|
||||
### Common types of alerts and visualizations
|
||||
|
||||
#### Alerts
|
||||
|
||||
Let’s first cover what alerts are _not_. Alerts should not be sent if the human responder can’t do anything about the problem. This includes alerts that are sent to multiple individuals with only a few who can respond, or situations where every anomaly in the system triggers an alert. This leads to alert fatigue and receivers ignoring all alerts within a specific medium until the system escalates to a medium that isn’t already saturated.
|
||||
|
||||
For example, if an operator receives hundreds of emails a day from the alerting system, that operator will soon ignore all emails from the alerting system. The operator will respond to a real incident only when he or she is experiencing the problem, emailed by a customer, or called by the boss. In this case, alerts have lost their meaning and usefulness.
|
||||
|
||||
Alerts are not a constant stream of information or a status update. They are meant to convey a problem from which the system can’t automatically recover, and they are sent only to the individual most likely to be able to recover the system. Everything that falls outside this definition isn’t an alert and will only damage your employees and company culture.
|
||||
|
||||
Everyone has a different set of alert types, so I won't discuss things like priority levels (P1-P5) or models that use words like "Informational," "Warning," and "Critical." Instead, I’ll describe the generic categories emergent in complex systems’ incident response.
|
||||
|
||||
You might have noticed I mentioned an “Informational” alert type right after I wrote that alerts shouldn’t be informational. Well, not everyone agrees, but I don’t consider something an alert if it isn’t sent to anyone. It is a data point that many systems refer to as an alert. It represents some event that should be known but not responded to. It is generally part of the visualization system of the alerting tool and not an event that triggers actual notifications. Mike Julian covers this and other aspects of alerting in his book [Practical Monitoring][1]. It's a must read for work in this area.
|
||||
|
||||
Non-informational alerts consist of types that can be responded to or require action. I group these into two categories: internal outage and external outage. (Most companies have more than two levels for prioritizing their response efforts.) Degraded system performance is considered an outage in this model, as the impact to each user is usually unknown.
|
||||
|
||||
Internal outages are a lower priority than external outages, but they still need to be responded to quickly. They often include internal systems that company employees use or components of applications that are visible only to company employees.
|
||||
|
||||
External outages consist of any system outage that would immediately impact a customer. These don’t include a system outage that prevents releasing updates to the system. They do include customer-facing application failures, database outages, and networking partitions that hurt availability or consistency if either can impact a user. They also include outages of tools that may not have a direct impact on users, as the application continues to run but this transparent dependency impacts performance. This is common when the system uses some external service or data source that isn’t necessary for full functionality but may cause delays as the application performs retries or handles errors from this external dependency.
|
||||
|
||||
### Visualizations
|
||||
|
||||
There are many visualization types, and I won’t cover them all here. It’s a fascinating area of research. On the data analytics side of my career, learning and applying that knowledge is a constant challenge. We need to provide simple representations of complex system outputs for the widest dissemination of information. [Google Charts][2] and [Tableau][3] have a wide selection of visualization types. We’ll cover the most common visualizations and some innovative solutions for quickly understanding systems.
|
||||
|
||||
#### Line chart
|
||||
|
||||
The line chart is probably the most common visualization. It does a pretty good job of producing an understanding of a system over time. A line chart in a metrics system would have a line for each unique metric or some aggregation of metrics. This can get confusing when there are a lot of metrics in the same dashboard (as shown below), but most systems can select specific metrics to view rather than having all of them visible. Also, anomalous behavior is easy to spot if it’s significant enough to escape the noise of normal operations. Below we can see purple, yellow, and light blue lines that might indicate anomalous behavior.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart.png)
|
||||
|
||||
Another feature of a line chart is that you can often stack them to show relationships. For example, you might want to look at requests on each server individually, but also in aggregate. This allows you to understand the overall system as well as each instance in the same graph.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart_aggregate.png)
|
||||
|
||||
#### Heatmaps
|
||||
|
||||
Another common visualization is the heatmap. It is useful when looking at histograms. This type of visualization is similar to a bar chart but can show gradients within the bars representing the different percentiles of the overall metric. For example, suppose you’re looking at request latencies and you want to quickly understand the overall trend as well as the distribution of all requests. A heatmap is great for this, and it can use color to disambiguate the quantity of each section with a quick glance.
|
||||
|
||||
The heatmap below shows the higher concentration around the centerline of the graph with an easy-to-understand visualization of the distribution vertically for each time bucket. We might want to review a couple of points in time where the distribution gets wide while the others are fairly tight like at 14:00. This distribution might be a negative performance indicator.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_histogram.png)
|
||||
|
||||
#### Gauges
|
||||
|
||||
The last common visualization I’ll cover here is the gauge, which helps users understand a single metric quickly. Gauges can represent a single metric, like your speedometer represents your driving speed or your gas gauge represents the amount of gas in your car. Similar to the gas gauge, most monitoring gauges clearly indicate what is good and what isn’t. Often (as is shown below), good is represented by green, getting worse by orange, and “everything is breaking” by red. The middle row below shows traditional gauges.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_gauges.png)
|
||||
|
||||
This image shows more than just traditional gauges. The other gauges are single stat representations that are similar to the function of the classic gauge. They all use the same color scheme to quickly indicate system health with just a glance. Arguably, the bottom row is probably the best example of a gauge that allows you to glance at a dashboard and know that everything is healthy (or not). This type of visualization is usually what I put on a top-level dashboard. It offers a full, high-level understanding of system health in seconds.
|
||||
|
||||
#### Flame graphs
|
||||
|
||||
A less common visualization is the flame graph, introduced by [Netflix’s Brendan Gregg][4] in 2011. It’s not ideal for dashboarding or quickly observing high-level system concerns; it’s normally seen when trying to understand a specific application problem. This visualization focuses on CPU and memory and the associated frames. The X-axis lists the frames alphabetically, and the Y-axis shows stack depth. Each rectangle is a stack frame and includes the function being called. The wider the rectangle, the more it appears in the stack. This method is invaluable when trying to diagnose system performance at the application level and I urge everyone to give it a try.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_flame_graph_0.png)
|
||||
|
||||
### Tool options
|
||||
|
||||
There are several commercial options for alerting, but since this is Opensource.com, I’ll cover only systems that are being used at scale by real companies that you can use at no cost. Hopefully, you’ll be able to contribute new and innovative features to make these systems even better.
|
||||
|
||||
### Alerting tools
|
||||
|
||||
#### Bosun
|
||||
|
||||
If you’ve ever done anything with computers and gotten stuck, the help you received was probably thanks to a Stack Exchange system. Stack Exchange runs many different websites around a crowdsourced question-and-answer model. [Stack Overflow][5] is very popular with developers, and [Super User][6] is popular with operations. However, there are now hundreds of sites ranging from parenting to sci-fi and philosophy to bicycles.
|
||||
|
||||
Stack Exchange open-sourced its alert management system, [Bosun][7], around the same time Prometheus and its [AlertManager][8] system were released. There were many similarities in the two systems, and that’s a really good thing. Like Prometheus, Bosun is written in Golang. Bosun’s scope is more extensive than Prometheus’ as it can interact with systems beyond metrics aggregation. It can also ingest data from log and event aggregation systems. It supports Graphite, InfluxDB, OpenTSDB, and Elasticsearch.
|
||||
|
||||
Bosun’s architecture consists of a single server binary, a backend like OpenTSDB, Redis, and [scollector agents][9]. The scollector agents automatically detect services on a host and report metrics for those processes and other system resources. This data is sent to a metrics backend. The Bosun server binary then queries the backends to determine if any alerts need to be fired. Bosun can also be used by tools like [Grafana][10] to query the underlying backends through one common interface. Redis is used to store state and metadata for Bosun.
|
||||
|
||||
A really neat feature of Bosun is that it lets you test your alerts against historical data. This was something I missed in Prometheus several years ago, when I had data for an issue I wanted alerts on but no easy way to test it. To make sure my alerts were working, I had to create and insert dummy data. This system alleviates that very time-consuming process.
|
||||
|
||||
Bosun also has the usual features like showing simple graphs and creating alerts. It has a powerful expression language for writing alerting rules. However, it only has email and HTTP notification configurations, which means connecting to Slack and other tools requires a bit more customization ([which its documentation covers][11]). Similar to Prometheus, Bosun can use templates for these notifications, which means they can look as awesome as you want them to. You can use all your HTML and CSS skills to create the baddest email alert anyone has ever seen.
|
||||
|
||||
#### Cabot
|
||||
|
||||
[Cabot][12] was created by a company called [Arachnys][13]. You may not know who Arachnys is or what it does, but you have probably felt its impact: It built the leading cloud-based solution for fighting financial crimes. That sounds pretty cool, right? At a previous company, I was involved in similar functions around [“know your customer"][14] laws. Most companies would consider it a very bad thing to be linked to a terrorist group, for example, funneling money through their systems. These solutions also help defend against less-atrocious offenders like fraudsters who could also pose a risk to the institution.
|
||||
|
||||
So why did Arachnys create Cabot? Well, it is kind of a Christmas present to everyone, as it was a Christmas project built because its developers couldn’t wrap their heads around [Nagios][15]. And really, who can blame them? Cabot was written with Django and Bootstrap, so it should be easy for most to contribute to the project. (Another interesting factoid: The name comes from the creator’s dog.)
|
||||
|
||||
The Cabot architecture is similar to Bosun in that it doesn’t collect any data. Instead, it accesses data through the APIs of the tools it is alerting for. Therefore, Cabot uses a pull (rather than a push) model for alerting. It reaches out into each system’s API and retrieves the information it needs to make a decision based on a specific check. Cabot stores the alerting data in a Postgres database and also has a cache using Redis.
|
||||
|
||||
Cabot natively supports [Graphite][16], but it also supports [Jenkins][17], which is rare in this area. [Arachnys][13] uses Jenkins like a centralized cron, but I like this idea of treating build failures like outages. Obviously, a build failure isn’t as critical as a production outage, but it could still alert the team and escalate if the failure isn’t resolved. Who actually checks Jenkins every time an email comes in about a build failure? Yeah, me too!
|
||||
|
||||
Another interesting feature is that Cabot can integrate with Google Calendar for on-call rotations. Cabot calls this feature Rota, which is a British term for a roster or rotation. This makes a lot of sense, and I wish other systems would take this idea further. Cabot doesn’t support anything more complex than primary and backup personnel, but there is certainly room for additional features. The docs say if you want something more advanced, you should look at a commercial option.
|
||||
|
||||
#### StatsAgg
|
||||
|
||||
[StatsAgg][18]? How did that make the list? Well, it’s not every day you come across a publishing company that has created an alerting platform. I think that deserves recognition. Of course, [Pearson][19] isn’t just a publishing company anymore; it has several web presences and a joint venture with [O’Reilly Media][20]. However, I still think of it as the company that published my schoolbooks and tests.
|
||||
|
||||
StatsAgg isn’t just an alerting platform; it’s also a metrics aggregation platform. And it’s kind of like a proxy for other systems. It supports Graphite, StatsD, InfluxDB, and OpenTSDB as inputs, but it can also forward those metrics to their respective platforms. This is an interesting concept, but potentially risky as loads increase on a central service. However, if the StatsAgg infrastructure is robust enough, it can still produce alerts even when a backend storage platform has an outage.
|
||||
|
||||
StatsAgg is written in Java and consists only of the main server and UI, which keeps complexity to a minimum. It can send alerts based on regular expression matching and is focused on alerting by service rather than host or instance. Its goal is to fill a void in the open source observability stack, and I think it does that quite well.
|
||||
|
||||
### Visualization tools
|
||||
|
||||
#### Grafana
|
||||
|
||||
Almost everyone knows about [Grafana][10], and many have used it. I have used it for years whenever I need a simple dashboard. The tool I used before was deprecated, and I was fairly distraught about that until Grafana made it okay. Grafana was gifted to us by Torkel Ödegaard. Like Cabot, Grafana was also created around Christmastime, and released in January 2014. It has come a long way in just a few years. It started life as a Kibana dashboarding system, and Torkel forked it into what became Grafana.
|
||||
|
||||
Grafana’s sole focus is presenting monitoring data in a more usable and pleasing way. It can natively gather data from Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. There’s an Enterprise version that uses plugins for more data sources, but there’s no reason those other data source plugins couldn’t be created as open source, as the Grafana plugin ecosystem already offers many other data sources.
|
||||
|
||||
What does Grafana do for me? It provides a central location for understanding my system. It is web-based, so anyone can access the information, although it can be restricted using different authentication methods. Grafana can provide knowledge at a glance using many different types of visualizations. However, it has started integrating alerting and other features that aren’t traditionally combined with visualizations.
|
||||
|
||||
Now you can set alerts visually. That means you can look at a graph, maybe even one showing where an alert should have triggered due to some degradation of the system, click on the graph where you want the alert to trigger, and then tell Grafana where to send the alert. That’s a pretty powerful addition that won’t necessarily replace an alerting platform, but it can certainly help augment it by providing a different perspective on alerting criteria.
|
||||
|
||||
Grafana has also introduced more collaboration features. Users have been able to share dashboards for a long time, meaning you don’t have to create your own dashboard for your [Kubernetes][21] cluster because there are several already available—with some maintained by Kubernetes developers and others by Grafana developers.
|
||||
|
||||
The most significant addition around collaboration is annotations. Annotations allow a user to add context to part of a graph. Other users can then use this context to understand the system better. This is an invaluable tool when a team is in the middle of an incident and communication and common understanding are critical. Having all the information right where you’re already looking makes it much more likely that knowledge will be shared across the team quickly. It’s also a nice feature to use during blameless postmortems when the team is trying to understand how the failure occurred and learn more about their system.
|
||||
|
||||
#### Vizceral
|
||||
|
||||
Netflix created [Vizceral][22] to understand its traffic patterns better when performing a traffic failover. Unlike Grafana, which is a more general tool, Vizceral serves a very specific use case. Netflix no longer uses this tool internally and says it is no longer actively maintained, but it still updates the tool periodically. I highlight it here primarily to point out an interesting visualization mechanism and how it can help solve a problem. It’s worth running it in a demo environment just to better grasp the concepts and witness what’s possible with these systems.
|
||||
|
||||
### What to read next
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/alerting-and-visualization-tools-sysadmins
|
||||
|
||||
作者:[Dan Barker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/barkerd427
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.practicalmonitoring.com/
|
||||
[2]: https://developers.google.com/chart/interactive/docs/gallery
|
||||
[3]: https://libguides.libraries.claremont.edu/c.php?g=474417&p=3286401
|
||||
[4]: http://www.brendangregg.com/flamegraphs.html
|
||||
[5]: https://stackoverflow.com/
|
||||
[6]: https://superuser.com/
|
||||
[7]: http://bosun.org/
|
||||
[8]: https://prometheus.io/docs/alerting/alertmanager/
|
||||
[9]: https://bosun.org/scollector/
|
||||
[10]: https://grafana.com/
|
||||
[11]: https://bosun.org/notifications
|
||||
[12]: https://cabotapp.com/
|
||||
[13]: https://www.arachnys.com/
|
||||
[14]: https://en.wikipedia.org/wiki/Know_your_customer
|
||||
[15]: https://www.nagios.org/
|
||||
[16]: https://graphiteapp.org/
|
||||
[17]: https://jenkins.io/
|
||||
[18]: https://github.com/PearsonEducation/StatsAgg
|
||||
[19]: https://www.pearson.com/us/
|
||||
[20]: https://www.oreilly.com/
|
||||
[21]: https://opensource.com/resources/what-is-kubernetes
|
||||
[22]: https://github.com/Netflix/vizceral
|
@ -1,346 +0,0 @@
|
||||
Translating by qhwdw
|
||||
Lab 5: File system, Spawn and Shell
|
||||
======
|
||||
|
||||
**Due Thursday, November 15, 2018**
|
||||
|
||||
### Introduction
|
||||
|
||||
In this lab, you will implement `spawn`, a library call that loads and runs on-disk executables. You will then flesh out your kernel and library operating system enough to run a shell on the console. These features need a file system, and this lab introduces a simple read/write file system.
|
||||
|
||||
#### Getting Started
|
||||
|
||||
Use Git to fetch the latest version of the course repository, and then create a local branch called `lab5` based on our lab5 branch, `origin/lab5`:
|
||||
|
||||
```
athena% cd ~/6.828/lab
athena% add git
athena% git pull
Already up-to-date.
athena% git checkout -b lab5 origin/lab5
Branch lab5 set up to track remote branch refs/remotes/origin/lab5.
Switched to a new branch "lab5"
athena% git merge lab4
Merge made by recursive.
.....
athena%
```
|
||||
|
||||
The main new component for this part of the lab is the file system environment, located in the new `fs` directory. Scan through all the files in this directory to get a feel for what is new. There are also some new file system-related source files in the `user` and `lib` directories:
|
||||
|
||||
| File | Description |
| --- | --- |
| fs/fs.c | Code that manipulates the file system's on-disk structure. |
| fs/bc.c | A simple block cache built on top of our user-level page fault handling facility. |
| fs/ide.c | Minimal PIO-based (non-interrupt-driven) IDE driver code. |
| fs/serv.c | The file system server that interacts with client environments using file system IPCs. |
| lib/fd.c | Code that implements the general UNIX-like file descriptor interface. |
| lib/file.c | The driver for the on-disk file type, implemented as a file system IPC client. |
| lib/console.c | The driver for the console input/output file type. |
| lib/spawn.c | Code skeleton of the spawn library call. |
|
||||
|
||||
You should run the pingpong, primes, and forktree test cases from lab 4 again after merging in the new lab 5 code. You will need to comment out the `ENV_CREATE(fs_fs)` line in `kern/init.c` because `fs/fs.c` tries to do some I/O, which JOS does not allow yet. Similarly, temporarily comment out the call to `close_all()` in `lib/exit.c`; this function calls subroutines that you will implement later in the lab, and therefore will panic if called. If your lab 4 code doesn't contain any bugs, the test cases should run fine. Don't proceed until they work. Don't forget to un-comment these lines when you start Exercise 1.
|
||||
|
||||
If they don't work, use `git diff lab4` to review all the changes, making sure there isn't any code you wrote for lab 4 (or before) missing from lab 5. Make sure that lab 4 still works.
|
||||
|
||||
#### Lab Requirements
|
||||
|
||||
As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. Additionally, you will need to write up brief answers to the questions posed in the lab and a short (e.g., one or two paragraph) description of what you did to solve your chosen challenge problem. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab5.txt` in the top level of your `lab5` directory before handing in your work.
|
||||
|
||||
### File system preliminaries
|
||||
|
||||
The file system you will work with is much simpler than most "real" file systems, including that of xv6 UNIX, but it is powerful enough to provide the basic features: creating, reading, writing, and deleting files organized in a hierarchical directory structure.
|
||||
|
||||
We are (for the moment anyway) developing only a single-user operating system, which provides protection sufficient to catch bugs but not to protect multiple mutually suspicious users from each other. Our file system therefore does not support the UNIX notions of file ownership or permissions. Our file system also currently does not support hard links, symbolic links, time stamps, or special device files like most UNIX file systems do.
|
||||
|
||||
### On-Disk File System Structure
|
||||
|
||||
Most UNIX file systems divide available disk space into two main types of regions: _inode_ regions and _data_ regions. UNIX file systems assign one _inode_ to each file in the file system; a file's inode holds critical meta-data about the file such as its `stat` attributes and pointers to its data blocks. The data regions are divided into much larger (typically 8KB or more) _data blocks_ , within which the file system stores file data and directory meta-data. Directory entries contain file names and pointers to inodes; a file is said to be _hard-linked_ if multiple directory entries in the file system refer to that file's inode. Since our file system will not support hard links, we do not need this level of indirection and therefore can make a convenient simplification: our file system will not use inodes at all and instead will simply store all of a file's (or sub-directory's) meta-data within the (one and only) directory entry describing that file.
|
||||
|
||||
Both files and directories logically consist of a series of data blocks, which may be scattered throughout the disk much like the pages of an environment's virtual address space can be scattered throughout physical memory. The file system environment hides the details of block layout, presenting interfaces for reading and writing sequences of bytes at arbitrary offsets within files. The file system environment handles all modifications to directories internally as a part of performing actions such as file creation and deletion. Our file system does allow user environments to _read_ directory meta-data directly (e.g., with `read`), which means that user environments can perform directory scanning operations themselves (e.g., to implement the `ls` program) rather than having to rely on additional special calls to the file system. The disadvantage of this approach to directory scanning, and the reason most modern UNIX variants discourage it, is that it makes application programs dependent on the format of directory meta-data, making it difficult to change the file system's internal layout without changing or at least recompiling application programs as well.
|
||||
|
||||
#### Sectors and Blocks
|
||||
|
||||
Most disks cannot perform reads and writes at byte granularity and instead perform reads and writes in units of _sectors_. In JOS, sectors are 512 bytes each. File systems actually allocate and use disk storage in units of _blocks_. Be wary of the distinction between the two terms: _sector size_ is a property of the disk hardware, whereas _block size_ is an aspect of the operating system using the disk. A file system's block size must be a multiple of the sector size of the underlying disk.
|
||||
|
||||
The UNIX xv6 file system uses a block size of 512 bytes, the same as the sector size of the underlying disk. Most modern file systems use a larger block size, however, because storage space has gotten much cheaper and it is more efficient to manage storage at larger granularities. Our file system will use a block size of 4096 bytes, conveniently matching the processor's page size.
|
||||
|
||||
#### Superblocks
|
||||
|
||||
![Disk layout][1]
|
||||
|
||||
File systems typically reserve certain disk blocks at "easy-to-find" locations on the disk (such as the very start or the very end) to hold meta-data describing properties of the file system as a whole, such as the block size, disk size, any meta-data required to find the root directory, the time the file system was last mounted, the time the file system was last checked for errors, and so on. These special blocks are called _superblocks_.
|
||||
|
||||
Our file system will have exactly one superblock, which will always be at block 1 on the disk. Its layout is defined by `struct Super` in `inc/fs.h`. Block 0 is typically reserved to hold boot loaders and partition tables, so file systems generally do not use the very first disk block. Many "real" file systems maintain multiple superblocks, replicated throughout several widely-spaced regions of the disk, so that if one of them is corrupted or the disk develops a media error in that region, the other superblocks can still be found and used to access the file system.
|
||||
|
||||
#### File Meta-data
|
||||
|
||||
![File structure][2]
|
||||
The layout of the meta-data describing a file in our file system is described by `struct File` in `inc/fs.h`. This meta-data includes the file's name, size, type (regular file or directory), and pointers to the blocks comprising the file. As mentioned above, we do not have inodes, so this meta-data is stored in a directory entry on disk. Unlike in most "real" file systems, for simplicity we will use this one `File` structure to represent file meta-data as it appears _both on disk and in memory_.
|
||||
|
||||
The `f_direct` array in `struct File` contains space to store the block numbers of the first 10 (`NDIRECT`) blocks of the file, which we call the file's _direct_ blocks. For small files up to 10*4096 = 40KB in size, this means that the block numbers of all of the file's blocks will fit directly within the `File` structure itself. For larger files, however, we need a place to hold the rest of the file's block numbers. For any file greater than 40KB in size, therefore, we allocate an additional disk block, called the file's _indirect block_ , to hold up to 4096/4 = 1024 additional block numbers. Our file system therefore allows files to be up to 1034 blocks, or just over four megabytes, in size. To support larger files, "real" file systems typically support _double-_ and _triple-indirect blocks_ as well.
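To make the layout concrete, here is a rough sketch of how `struct File` in `inc/fs.h` captures this design; the field names below are approximate, so check the actual header before relying on them:

```c
// Approximate shape of struct File (see inc/fs.h for the authoritative version).
// All of a file's meta-data, including its block pointers, lives in the one
// directory entry that names it.
struct File {
	char f_name[MAXNAMELEN];	// filename
	off_t f_size;			// file size in bytes
	uint32_t f_type;		// regular file or directory
	uint32_t f_direct[NDIRECT];	// block numbers of the first 10 blocks
	uint32_t f_indirect;		// block number of the indirect block (0 if none)
	// (padded out to a fixed size so directory blocks hold a whole number of entries)
};
```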
|
||||
|
||||
#### Directories versus Regular Files
|
||||
|
||||
A `File` structure in our file system can represent either a _regular_ file or a directory; these two types of "files" are distinguished by the `type` field in the `File` structure. The file system manages regular files and directory-files in exactly the same way, except that it does not interpret the contents of the data blocks associated with regular files at all, whereas the file system interprets the contents of a directory-file as a series of `File` structures describing the files and subdirectories within the directory.
|
||||
|
||||
The superblock in our file system contains a `File` structure (the `root` field in `struct Super`) that holds the meta-data for the file system's root directory. The contents of this directory-file are a sequence of `File` structures describing the files and directories located within the root directory of the file system. Any subdirectories in the root directory may in turn contain more `File` structures representing sub-subdirectories, and so on.
|
||||
|
||||
### The File System
|
||||
|
||||
The goal for this lab is not to have you implement the entire file system, but for you to implement only certain key components. In particular, you will be responsible for reading blocks into the block cache and flushing them back to disk; allocating disk blocks; mapping file offsets to disk blocks; and implementing read, write, and open in the IPC interface. Because you will not be implementing all of the file system yourself, it is very important that you familiarize yourself with the provided code and the various file system interfaces.
|
||||
|
||||
### Disk Access
|
||||
|
||||
The file system environment in our operating system needs to be able to access the disk, but we have not yet implemented any disk access functionality in our kernel. Instead of taking the conventional "monolithic" operating system strategy of adding an IDE disk driver to the kernel along with the necessary system calls to allow the file system to access it, we instead implement the IDE disk driver as part of the user-level file system environment. We will still need to modify the kernel slightly, in order to set things up so that the file system environment has the privileges it needs to implement disk access itself.
|
||||
|
||||
It is easy to implement disk access in user space this way as long as we rely on polling, "programmed I/O" (PIO)-based disk access and do not use disk interrupts. It is possible to implement interrupt-driven device drivers in user mode as well (the L3 and L4 kernels do this, for example), but it is more difficult since the kernel must field device interrupts and dispatch them to the correct user-mode environment.
|
||||
|
||||
The x86 processor uses the IOPL bits in the EFLAGS register to determine whether protected-mode code is allowed to perform special device I/O instructions such as the IN and OUT instructions. Since all of the IDE disk registers we need to access are located in the x86's I/O space rather than being memory-mapped, giving "I/O privilege" to the file system environment is the only thing we need to do in order to allow the file system to access these registers. In effect, the IOPL bits in the EFLAGS register provide the kernel with a simple "all-or-nothing" method of controlling whether user-mode code can access I/O space. In our case, we want the file system environment to be able to access I/O space, but we do not want any other environments to be able to access I/O space at all.
|
||||
|
||||
```
Exercise 1. `i386_init` identifies the file system environment by passing the type `ENV_TYPE_FS` to your environment creation function, `env_create`. Modify `env_create` in `env.c`, so that it gives the file system environment I/O privilege, but never gives that privilege to any other environment.

Make sure you can start the file environment without causing a General Protection fault. You should pass the "fs i/o" test in make grade.
```
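One minimal way to satisfy this exercise, sketched here under the assumption that `FL_IOPL_3` from `inc/mmu.h` names the IOPL=3 bits of EFLAGS, is a single conditional in `env_create`:

```c
// Sketch for env_create() in kern/env.c: grant I/O privilege only to the
// file system environment. e is the freshly allocated struct Env.
if (type == ENV_TYPE_FS)
	e->env_tf.tf_eflags |= FL_IOPL_3;	// allow IN/OUT instructions at CPL 3
```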
|
||||
|
||||
```
Question

1. Do you have to do anything else to ensure that this I/O privilege setting is saved and restored properly when you subsequently switch from one environment to another? Why?
```
|
||||
|
||||
|
||||
Note that the `GNUmakefile` file in this lab sets up QEMU to use the file `obj/kern/kernel.img` as the image for disk 0 (typically "Drive C" under DOS/Windows) as before, and to use the (new) file `obj/fs/fs.img` as the image for disk 1 ("Drive D"). In this lab our file system should only ever touch disk 1; disk 0 is used only to boot the kernel. If you manage to corrupt either disk image in some way, you can reset both of them to their original, "pristine" versions simply by typing:
|
||||
|
||||
```
$ rm obj/kern/kernel.img obj/fs/fs.img
$ make
```
|
||||
|
||||
or by doing:
|
||||
|
||||
```
$ make clean
$ make
```
|
||||
|
||||
Challenge! Implement interrupt-driven IDE disk access, with or without DMA. You can decide whether to move the device driver into the kernel, keep it in user space along with the file system, or even (if you really want to get into the micro-kernel spirit) move it into a separate environment of its own.
|
||||
|
||||
### The Block Cache
|
||||
|
||||
In our file system, we will implement a simple "buffer cache" (really just a block cache) with the help of the processor's virtual memory system. The code for the block cache is in `fs/bc.c`.
|
||||
|
||||
Our file system will be limited to handling disks of size 3GB or less. We reserve a large, fixed 3GB region of the file system environment's address space, from 0x10000000 (`DISKMAP`) up to 0xD0000000 (`DISKMAP+DISKMAX`), as a "memory mapped" version of the disk. For example, disk block 0 is mapped at virtual address 0x10000000, disk block 1 is mapped at virtual address 0x10001000, and so on. The `diskaddr` function in `fs/bc.c` implements this translation from disk block numbers to virtual addresses (along with some sanity checking).
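The mapping is simple arithmetic; a minimal sketch of the translation (the real `diskaddr` in `fs/bc.c` also checks `blockno` against the superblock before computing the address) looks like this:

```c
#include <stdint.h>

#define DISKMAP  0x10000000u	// start of the "memory mapped" disk
#define BLKSIZE  4096u		// one file system block

// Translate a disk block number into a virtual address in the disk map region.
void *
diskaddr(uint32_t blockno)
{
	return (void *) (uintptr_t) (DISKMAP + blockno * BLKSIZE);
}
```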
|
||||
|
||||
Since our file system environment has its own virtual address space independent of the virtual address spaces of all other environments in the system, and the only thing the file system environment needs to do is to implement file access, it is reasonable to reserve most of the file system environment's address space in this way. It would be awkward for a real file system implementation on a 32-bit machine to do this since modern disks are larger than 3GB. Such a buffer cache management approach may still be reasonable on a machine with a 64-bit address space.
|
||||
|
||||
Of course, it would take a long time to read the entire disk into memory, so instead we'll implement a form of _demand paging_ , wherein we only allocate pages in the disk map region and read the corresponding block from the disk in response to a page fault in this region. This way, we can pretend that the entire disk is in memory.
|
||||
|
||||
```
Exercise 2. Implement the `bc_pgfault` and `flush_block` functions in `fs/bc.c`. `bc_pgfault` is a page fault handler, just like the one you wrote in the previous lab for copy-on-write fork, except that its job is to load pages in from the disk in response to a page fault. When writing this, keep in mind that (1) `addr` may not be aligned to a block boundary and (2) `ide_read` operates in sectors, not blocks.

The `flush_block` function should write a block out to disk _if necessary_. `flush_block` shouldn't do anything if the block isn't even in the block cache (that is, the page isn't mapped) or if it's not dirty. We will use the VM hardware to keep track of whether a disk block has been modified since it was last read from or written to disk. To see whether a block needs writing, we can just look to see if the `PTE_D` "dirty" bit is set in the `uvpt` entry. (The `PTE_D` bit is set by the processor in response to a write to that page; see 5.2.4.3 in [chapter 5][3] of the 386 reference manual.) After writing the block to disk, `flush_block` should clear the `PTE_D` bit using `sys_page_map`.

Use make grade to test your code. Your code should pass "check_bc", "check_super", and "check_bitmap".
```
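A sketch of the shape such a handler might take (omitting the sanity checks and the block-bitmap consistency check a real solution should keep); the constants and helpers `BLKSIZE`, `BLKSECTS`, `ROUNDDOWN`, `sys_page_alloc`, and `ide_read` are the ones already used elsewhere in `fs/bc.c`:

```c
static void
bc_pgfault(struct UTrapframe *utf)
{
	void *addr = (void *) utf->utf_fault_va;
	uint32_t blockno = ((uint32_t) addr - DISKMAP) / BLKSIZE;

	// Allocate a fresh page at the block-aligned address, then read the
	// whole block from disk into it (ide_read works in 512-byte sectors,
	// so one 4096-byte block is BLKSECTS = 8 sectors).
	addr = ROUNDDOWN(addr, BLKSIZE);
	if (sys_page_alloc(0, addr, PTE_P | PTE_U | PTE_W) < 0)
		panic("bc_pgfault: sys_page_alloc failed");
	if (ide_read(blockno * BLKSECTS, addr, BLKSECTS) < 0)
		panic("bc_pgfault: ide_read failed");
}
```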
|
||||
|
||||
The `fs_init` function in `fs/fs.c` is a prime example of how to use the block cache. After initializing the block cache, it simply stores pointers into the disk map region in the `super` global variable. After this point, we can simply read from the `super` structure as if it were in memory, and our page fault handler will read it from disk as necessary.
|
||||
|
||||
```
Challenge! The block cache has no eviction policy. Once a block gets faulted into it, it never gets removed and will remain in memory forevermore. Add eviction to the buffer cache. Using the `PTE_A` "accessed" bits in the page tables, which the hardware sets on any access to a page, you can track approximate usage of disk blocks without the need to modify every place in the code that accesses the disk map region. Be careful with dirty blocks.
```
|
||||
|
||||
### The Block Bitmap
|
||||
|
||||
After `fs_init` sets the `bitmap` pointer, we can treat `bitmap` as a packed array of bits, one for each block on the disk. See, for example, `block_is_free`, which simply checks whether a given block is marked free in the bitmap.
|
||||
|
||||
```
Exercise 3. Use `free_block` as a model to implement `alloc_block` in `fs/fs.c`, which should find a free disk block in the bitmap, mark it used, and return the number of that block. When you allocate a block, you should immediately flush the changed bitmap block to disk with `flush_block`, to help file system consistency.

Use make grade to test your code. Your code should now pass "alloc_block".
```
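One plausible shape for `alloc_block`, sketched under the assumption that `super`, `bitmap`, `block_is_free`, and `flush_block` behave as described in this lab:

```c
// Search the bitmap for a free block, mark it in use, and flush the bitmap
// block we modified so the allocation is recorded on disk.
int
alloc_block(void)
{
	uint32_t blockno;

	for (blockno = 1; blockno < super->s_nblocks; blockno++) {
		if (block_is_free(blockno)) {
			bitmap[blockno / 32] &= ~(1U << (blockno % 32));	// mark used
			flush_block(&bitmap[blockno / 32]);			// persist the bitmap
			return blockno;
		}
	}
	return -E_NO_DISK;
}
```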
|
||||
|
||||
### File Operations
|
||||
|
||||
We have provided a variety of functions in `fs/fs.c` to implement the basic facilities you will need to interpret and manage `File` structures, scan and manage the entries of directory-files, and walk the file system from the root to resolve an absolute pathname. Read through _all_ of the code in `fs/fs.c` and make sure you understand what each function does before proceeding.
|
||||
|
||||
```
Exercise 4. Implement `file_block_walk` and `file_get_block`. `file_block_walk` maps from a block offset within a file to the pointer for that block in the `struct File` or the indirect block, very much like what `pgdir_walk` did for page tables. `file_get_block` goes one step further and maps to the actual disk block, allocating a new one if necessary.

Use make grade to test your code. Your code should pass "file_open", "file_get_block", "file_flush/file_truncated/file rewrite", and "testfile".
```
|
||||
|
||||
`file_block_walk` and `file_get_block` are the workhorses of the file system. For example, `file_read` and `file_write` are little more than the bookkeeping atop `file_get_block` necessary to copy bytes between scattered blocks and a sequential buffer.
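For concreteness, here is one way `file_block_walk` might look; this is a sketch, not the only correct answer, and it assumes the `NDIRECT`/`NINDIRECT` constants and the error codes from `inc/fs.h` and `inc/error.h`:

```c
// Find the slot (in f_direct or in the indirect block) that holds the disk
// block number for the filebno'th block of file f. If alloc is set and the
// indirect block does not exist yet, allocate and zero it first.
static int
file_block_walk(struct File *f, uint32_t filebno, uint32_t **ppdiskbno, bool alloc)
{
	if (filebno >= NDIRECT + NINDIRECT)
		return -E_INVAL;

	if (filebno < NDIRECT) {
		*ppdiskbno = &f->f_direct[filebno];
		return 0;
	}

	if (f->f_indirect == 0) {
		int bno;
		if (!alloc)
			return -E_NOT_FOUND;
		if ((bno = alloc_block()) < 0)
			return bno;			// out of disk space
		f->f_indirect = bno;
		memset(diskaddr(bno), 0, BLKSIZE);	// fresh, empty indirect block
		flush_block(diskaddr(bno));
	}

	*ppdiskbno = &((uint32_t *) diskaddr(f->f_indirect))[filebno - NDIRECT];
	return 0;
}
```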
|
||||
|
||||
```
Challenge! The file system is likely to be corrupted if it gets interrupted in the middle of an operation (for example, by a crash or a reboot). Implement soft updates or journalling to make the file system crash-resilient and demonstrate some situation where the old file system would get corrupted, but yours doesn't.
```
|
||||
|
||||
### The file system interface
|
||||
|
||||
Now that we have the necessary functionality within the file system environment itself, we must make it accessible to other environments that wish to use the file system. Since other environments can't directly call functions in the file system environment, we'll expose access to the file system environment via a _remote procedure call_ , or RPC, abstraction, built atop JOS's IPC mechanism. Graphically, here's what a call to the file system server (say, read) looks like:
|
||||
|
||||
```
      Regular env           FS env
   +---------------+   +---------------+
   |      read     |   |   file_read   |
   |   (lib/fd.c)  |   |   (fs/fs.c)   |
...|.......|.......|...|.......^.......|...............
   |       v       |   |       |       | RPC mechanism
   |  devfile_read |   |  serve_read   |
   |  (lib/file.c) |   |  (fs/serv.c)  |
   |       |       |   |       ^       |
   |       v       |   |       |       |
   |     fsipc     |   |     serve     |
   |  (lib/file.c) |   |  (fs/serv.c)  |
   |       |       |   |       ^       |
   |       v       |   |       |       |
   |   ipc_send    |   |   ipc_recv    |
   |       |       |   |       ^       |
   +-------|-------+   +-------|-------+
           |                   |
           +-------------------+
```
|
||||
|
||||
Everything below the dotted line is simply the mechanics of getting a read request from the regular environment to the file system environment. Starting at the beginning, `read` (which we provide) works on any file descriptor and simply dispatches to the appropriate device read function, in this case `devfile_read` (we can have more device types, like pipes). `devfile_read` implements `read` specifically for on-disk files. This and the other `devfile_*` functions in `lib/file.c` implement the client side of the FS operations and all work in roughly the same way, bundling up arguments in a request structure, calling `fsipc` to send the IPC request, and unpacking and returning the results. The `fsipc` function simply handles the common details of sending a request to the server and receiving the reply.
|
||||
|
||||
The file system server code can be found in `fs/serv.c`. It loops in the `serve` function, endlessly receiving a request over IPC, dispatching that request to the appropriate handler function, and sending the result back via IPC. In the read example, `serve` will dispatch to `serve_read`, which will take care of the IPC details specific to read requests such as unpacking the request structure and finally call `file_read` to actually perform the file read.
|
||||
|
||||
Recall that JOS's IPC mechanism lets an environment send a single 32-bit number and, optionally, share a page. To send a request from the client to the server, we use the 32-bit number for the request type (the file system server RPCs are numbered, just like how syscalls were numbered) and store the arguments to the request in a `union Fsipc` on the page shared via the IPC. On the client side, we always share the page at `fsipcbuf`; on the server side, we map the incoming request page at `fsreq` (`0x0ffff000`).
|
||||
|
||||
The server also sends the response back via IPC. We use the 32-bit number for the function's return code. For most RPCs, this is all they return. `FSREQ_READ` and `FSREQ_STAT` also return data, which they simply write to the page that the client sent its request on. There's no need to send this page in the response IPC, since the client shared it with the file system server in the first place. Also, in its response, `FSREQ_OPEN` shares with the client a new "Fd page". We'll return to the file descriptor page shortly.
|
||||
|
||||
```
Exercise 5. Implement `serve_read` in `fs/serv.c`.

`serve_read`'s heavy lifting will be done by the already-implemented `file_read` in `fs/fs.c` (which, in turn, is just a bunch of calls to `file_get_block`). `serve_read` just has to provide the RPC interface for file reading. Look at the comments and code in `serve_set_size` to get a general idea of how the server functions should be structured.

Use make grade to test your code. Your code should pass "serve_open/file_stat/file_close" and "file_read" for a score of 70/150.
```
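A sketch of what `serve_read` might look like, assuming the `Fsreq_read`/`Fsret_read` field names below match those in `inc/fs.h` (check the header; the real names may differ slightly):

```c
int
serve_read(envid_t envid, union Fsipc *ipc)
{
	struct Fsreq_read *req = &ipc->read;
	struct Fsret_read *ret = &ipc->readRet;
	struct OpenFile *o;
	size_t n;
	int r;

	// Look up the open file this client is talking about, read at the file
	// descriptor's current offset, and advance the offset by the number of
	// bytes actually read. Never read more than the reply buffer can hold.
	if ((r = openfile_lookup(envid, req->req_fileid, &o)) < 0)
		return r;
	n = MIN(req->req_n, sizeof(ret->ret_buf));
	if ((r = file_read(o->o_file, ret->ret_buf, n, o->o_fd->fd_offset)) < 0)
		return r;
	o->o_fd->fd_offset += r;
	return r;
}
```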
|
||||
|
||||
```
Exercise 6. Implement `serve_write` in `fs/serv.c` and `devfile_write` in `lib/file.c`.

Use make grade to test your code. Your code should pass "file_write", "file_read after file_write", "open", and "large file" for a score of 90/150.
```
|
||||
|
||||
### Spawning Processes
|
||||
|
||||
We have given you the code for `spawn` (see `lib/spawn.c`) which creates a new environment, loads a program image from the file system into it, and then starts the child environment running this program. The parent process then continues running independently of the child. The `spawn` function effectively acts like a `fork` in UNIX followed by an immediate `exec` in the child process.
|
||||
|
||||
We implemented `spawn` rather than a UNIX-style `exec` because `spawn` is easier to implement from user space in "exokernel fashion", without special help from the kernel. Think about what you would have to do in order to implement `exec` in user space, and be sure you understand why it is harder.
|
||||
|
||||
```
Exercise 7. `spawn` relies on the new syscall `sys_env_set_trapframe` to initialize the state of the newly created environment. Implement `sys_env_set_trapframe` in `kern/syscall.c` (don't forget to dispatch the new system call in `syscall()`).

Test your code by running the `user/spawnhello` program from `kern/init.c`, which will attempt to spawn `/hello` from the file system.

Use make grade to test your code.
```
|
||||
|
||||
```
Challenge! Implement Unix-style `exec`.
```
|
||||
|
||||
```
Challenge! Implement `mmap`-style memory-mapped files and modify `spawn` to map pages directly from the ELF image when possible.
```
|
||||
|
||||
### Sharing library state across fork and spawn
|
||||
|
||||
UNIX file descriptors are a general notion that also encompasses pipes, console I/O, etc. In JOS, each of these device types has a corresponding `struct Dev`, with pointers to the functions that implement read/write/etc. for that device type. `lib/fd.c` implements the general UNIX-like file descriptor interface on top of this. Each `struct Fd` indicates its device type, and most of the functions in `lib/fd.c` simply dispatch operations to functions in the appropriate `struct Dev`.
|
||||
|
||||
`lib/fd.c` also maintains the _file descriptor table_ region in each application environment's address space, starting at `FDTABLE`. This area reserves a page's worth (4KB) of address space for each of the up to `MAXFD` (currently 32) file descriptors the application can have open at once. At any given time, a particular file descriptor table page is mapped if and only if the corresponding file descriptor is in use. Each file descriptor also has an optional "data page" in the region starting at `FILEDATA`, which devices can use if they choose.
|
||||
|
||||
We would like to share file descriptor state across `fork` and `spawn`, but file descriptor state is kept in user-space memory. Right now, on `fork`, the memory will be marked copy-on-write, so the state will be duplicated rather than shared. (This means environments won't be able to seek in files they didn't open themselves and that pipes won't work across a fork.) On `spawn`, the memory will be left behind, not copied at all. (Effectively, the spawned environment starts with no open file descriptors.)
|
||||
|
||||
We will change `fork` to know that certain regions of memory are used by the "library operating system" and should always be shared. Rather than hard-code a list of regions somewhere, we will set an otherwise-unused bit in the page table entries (just like we did with the `PTE_COW` bit in `fork`).
|
||||
|
||||
We have defined a new `PTE_SHARE` bit in `inc/lib.h`. This bit is one of the three PTE bits that are marked "available for software use" in the Intel and AMD manuals. We will establish the convention that if a page table entry has this bit set, the PTE should be copied directly from parent to child in both `fork` and `spawn`. Note that this is different from marking it copy-on-write: as described in the first paragraph, we want to make sure to _share_ updates to the page.
|
||||
|
||||
```
Exercise 8. Change `duppage` in `lib/fork.c` to follow the new convention. If the page table entry has the `PTE_SHARE` bit set, just copy the mapping directly. (You should use `PTE_SYSCALL`, not `0xfff`, to mask out the relevant bits from the page table entry. `0xfff` picks up the accessed and dirty bits as well.)

Likewise, implement `copy_shared_pages` in `lib/spawn.c`. It should loop through all page table entries in the current process (just like `fork` did), copying any page mappings that have the `PTE_SHARE` bit set into the child process.
```
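As a sketch of the convention in `duppage` (only the new `PTE_SHARE` case is shown; the copy-on-write logic you already have stays as it was):

```c
// In lib/fork.c: pages marked PTE_SHARE are mapped into the child with
// exactly the same writable mapping, so parent and child see each other's
// updates. pn is a page number; uvpt is the read-only page-table window.
static int
duppage(envid_t envid, unsigned pn)
{
	void *addr = (void *) (pn * PGSIZE);
	pte_t pte = uvpt[pn];

	if (pte & PTE_SHARE)
		return sys_page_map(0, addr, envid, addr, pte & PTE_SYSCALL);

	/* ... existing copy-on-write handling for other pages ... */
	return 0;
}
```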
|
||||
|
||||
Use make run-testpteshare to check that your code is behaving properly. You should see lines that say "`fork handles PTE_SHARE right`" and "`spawn handles PTE_SHARE right`".
|
||||
|
||||
Use make run-testfdsharing to check that file descriptors are shared properly. You should see lines that say "`read in child succeeded`" and "`read in parent succeeded`".
|
||||
|
||||
### The keyboard interface
|
||||
|
||||
For the shell to work, we need a way to type at it. QEMU has been displaying output we write to the CGA display and the serial port, but so far we've only taken input while in the kernel monitor. In QEMU, input typed in the graphical window appears as input from the keyboard to JOS, while input typed to the console appears as characters on the serial port. `kern/console.c` already contains the keyboard and serial drivers that have been used by the kernel monitor since lab 1, but now you need to attach these to the rest of the system.
|
||||
|
||||
```
Exercise 9. In your `kern/trap.c`, call `kbd_intr` to handle trap `IRQ_OFFSET+IRQ_KBD` and `serial_intr` to handle trap `IRQ_OFFSET+IRQ_SERIAL`.
```
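In `trap_dispatch`, this can be as small as two extra cases; a sketch (you may also need to acknowledge the interrupt the same way your clock-interrupt handler does):

```c
// kern/trap.c, inside trap_dispatch():
if (tf->tf_trapno == IRQ_OFFSET + IRQ_KBD) {
	kbd_intr();		// drain the keyboard controller into the console buffer
	return;
}
if (tf->tf_trapno == IRQ_OFFSET + IRQ_SERIAL) {
	serial_intr();		// same for the serial port
	return;
}
```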
|
||||
|
||||
We implemented the console input/output file type for you, in `lib/console.c`. `kbd_intr` and `serial_intr` fill a buffer with the recently read input while the console file type drains the buffer (the console file type is used for stdin/stdout by default unless the user redirects them).
|
||||
|
||||
Test your code by running make run-testkbd and typing a few lines. The system should echo your lines back to you as you finish them. Try typing in both the console and the graphical window, if you have both available.
|
||||
|
||||
### The Shell
|
||||
|
||||
Run make run-icode or make run-icode-nox. This will run your kernel and start `user/icode`. `icode` execs `init`, which will set up the console as file descriptors 0 and 1 (standard input and standard output). It will then spawn `sh`, the shell. You should be able to run the following commands:
|
||||
|
||||
```
echo hello world | cat
cat lorem |cat
cat lorem |num
cat lorem |num |num |num |num |num
lsfd
```
|
||||
|
||||
Note that the user library routine `cprintf` prints straight to the console, without using the file descriptor code. This is great for debugging but not great for piping into other programs. To print output to a particular file descriptor (for example, 1, standard output), use `fprintf(1, "...", ...)`. `printf("...", ...)` is a short-cut for printing to FD 1. See `user/lsfd.c` for examples.
|
||||
|
||||
```
Exercise 10.

The shell doesn't support I/O redirection. It would be nice to run sh <script instead of having to type in all the commands in the script by hand, as you did above. Add I/O redirection for < to `user/sh.c`.

Test your implementation by typing sh <script into your shell.

Run make run-testshell to test your shell. `testshell` simply feeds the above commands (also found in `fs/testshell.sh`) into the shell and then checks that the output matches `fs/testshell.key`.
```
|
||||
|
||||
```
Challenge! Add more features to the shell. Possibilities include (a few require changes to the file system too):

  * backgrounding commands (`ls &`)
  * multiple commands per line (`ls; echo hi`)
  * command grouping (`(ls; echo hi) | cat > out`)
  * environment variable expansion (`echo $hello`)
  * quoting (`echo "a | b"`)
  * command-line history and/or editing
  * tab completion
  * directories, cd, and a PATH for command-lookup
  * file creation
  * ctl-c to kill the running environment

but feel free to do something not on this list.
```
|
||||
|
||||
Your code should pass all tests at this point. As usual, you can grade your submission with make grade and hand it in with make handin.
|
||||
|
||||
**This completes the lab.** As usual, don't forget to run make grade and to write up your answers and a description of your challenge exercise solution. Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab5.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 5', then make handin to submit your solution.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab5/
|
||||
|
||||
作者:[csail.mit][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://pdos.csail.mit.edu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab5/disk.png
|
||||
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/lab5/file.png
|
||||
[3]: http://pdos.csail.mit.edu/6.828/2011/readings/i386/s05_02.htm
|
@ -1,512 +0,0 @@
|
||||
Translating by qhwdw
|
||||
Lab 6: Network Driver
|
||||
======
|
||||
### Lab 6: Network Driver (default final project)
|
||||
|
||||
**Due on Thursday, December 6, 2018**
|
||||
|
||||
### Introduction
|
||||
|
||||
This lab is the default final project that you can do on your own.
|
||||
|
||||
Now that you have a file system, no self-respecting OS should go without a network stack. In this lab you are going to write a driver for a network interface card. The card will be based on the Intel 82540EM chip, also known as the E1000.
|
||||
|
||||
##### Getting Started
|
||||
|
||||
Use Git to commit your Lab 5 source (if you haven't already), fetch the latest version of the course repository, and then create a local branch called `lab6` based on our lab6 branch, `origin/lab6`:
|
||||
|
||||
```
athena% cd ~/6.828/lab
athena% add git
athena% git commit -am 'my solution to lab5'
nothing to commit (working directory clean)
athena% git pull
Already up-to-date.
athena% git checkout -b lab6 origin/lab6
Branch lab6 set up to track remote branch refs/remotes/origin/lab6.
Switched to a new branch "lab6"
athena% git merge lab5
Merge made by recursive.
 fs/fs.c | 42 +++++++++++++++++++
 1 files changed, 42 insertions(+), 0 deletions(-)
athena%
```
|
||||
|
||||
The network card driver, however, will not be enough to get your OS hooked up to the Internet. In the new lab6 code, we have provided you with a network stack and a network server. As in previous labs, use git to grab the code for this lab, merge in your own code, and explore the contents of the new `net/` directory, as well as the new files in `kern/`.
|
||||
|
||||
In addition to writing the driver, you will need to create a system call interface to give access to your driver. You will implement missing network server code to transfer packets between the network stack and your driver. You will also tie everything together by finishing a web server. With the new web server you will be able to serve files from your file system.
|
||||
|
||||
You will have to write much of the kernel device driver code yourself from scratch. This lab provides much less guidance than previous labs: there are no skeleton files, no system call interfaces written in stone, and many design decisions are left up to you. For this reason, we recommend that you read the entire assignment write-up before starting any individual exercises. Many students find this lab more difficult than previous labs, so please plan your time accordingly.
|
||||
|
||||
##### Lab Requirements
|
||||
|
||||
As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. Write up brief answers to the questions posed in the lab and a description of your challenge exercise in `answers-lab6.txt`.
|
||||
|
||||
#### QEMU's virtual network
|
||||
|
||||
We will be using QEMU's user mode network stack since it requires no administrative privileges to run. QEMU's documentation has more about user-net [here][1]. We've updated the makefile to enable QEMU's user-mode network stack and the virtual E1000 network card.
|
||||
|
||||
By default, QEMU provides a virtual router running on IP 10.0.2.2 and will assign JOS the IP address 10.0.2.15. To keep things simple, we hard-code these defaults into the network server in `net/ns.h`.
|
||||
|
||||
While QEMU's virtual network allows JOS to make arbitrary connections out to the Internet, JOS's 10.0.2.15 address has no meaning outside the virtual network running inside QEMU (that is, QEMU acts as a NAT), so we can't connect directly to servers running inside JOS, even from the host running QEMU. To address this, we configure QEMU to run a server on some port on the _host_ machine that simply connects through to some port in JOS and shuttles data back and forth between your real host and the virtual network.
|
||||
|
||||
You will run JOS servers on ports 7 (echo) and 80 (http). To avoid collisions on shared Athena machines, the makefile generates forwarding ports for these based on your user ID. To find out what ports QEMU is forwarding to on your development host, run make which-ports. For convenience, the makefile also provides make nc-7 and make nc-80, which allow you to interact directly with servers running on these ports in your terminal. (These targets only connect to a running QEMU instance; you must start QEMU itself separately.)
|
||||
|
||||
##### Packet Inspection
|
||||
|
||||
The makefile also configures QEMU's network stack to record all incoming and outgoing packets to `qemu.pcap` in your lab directory.
|
||||
|
||||
To get a hex/ASCII dump of captured packets use `tcpdump` like this:
|
||||
|
||||
```
tcpdump -XXnr qemu.pcap
```
|
||||
|
||||
Alternatively, you can use [Wireshark][2] to graphically inspect the pcap file. Wireshark also knows how to decode and inspect hundreds of network protocols. If you're on Athena, you'll have to use Wireshark's predecessor, ethereal, which is in the sipbnet locker.
|
||||
|
||||
##### Debugging the E1000
|
||||
|
||||
We are very lucky to be using emulated hardware. Since the E1000 is running in software, the emulated E1000 can report to us, in a user-readable format, its internal state and any problems it encounters. Normally, such a luxury would not be available to a driver developer writing for bare metal.
|
||||
|
||||
The E1000 can produce a lot of debug output, so you have to enable specific logging channels. Some channels you might find useful are:
|
||||
|
||||
| Flag | Meaning |
| --------- | --------------------------------------------------- |
| tx | Log packet transmit operations |
| txerr | Log transmit ring errors |
| rx | Log changes to RCTL |
| rxfilter | Log filtering of incoming packets |
| rxerr | Log receive ring errors |
| unknown | Log reads and writes of unknown registers |
| eeprom | Log reads from the EEPROM |
| interrupt | Log interrupts and changes to interrupt registers. |
|
||||
|
||||
To enable "tx" and "txerr" logging, for example, use make E1000_DEBUG=tx,txerr ....
|
||||
|
||||
Note: `E1000_DEBUG` flags only work in the 6.828 version of QEMU.
|
||||
|
||||
You can take debugging using software emulated hardware one step further. If you are ever stuck and do not understand why the E1000 is not responding the way you would expect, you can look at QEMU's E1000 implementation in `hw/e1000.c`.
|
||||
|
||||
#### The Network Server
|
||||
|
||||
Writing a network stack from scratch is hard work. Instead, we will be using lwIP, an open source lightweight TCP/IP protocol suite that among many things includes a network stack. You can find more information on lwIP [here][3]. In this assignment, as far as we are concerned, lwIP is a black box that implements a BSD socket interface and has a packet input port and packet output port.
|
||||
|
||||
The network server is actually a combination of four environments:
|
||||
|
||||
* core network server environment (includes socket call dispatcher and lwIP)
|
||||
* input environment
|
||||
* output environment
|
||||
* timer environment
|
||||
|
||||
|
||||
|
||||
The following diagram shows the different environments and their relationships. The diagram shows the entire system including the device driver, which will be covered later. In this lab, you will implement the parts highlighted in green.
|
||||
|
||||
![Network server architecture][4]
|
||||
|
||||
##### The Core Network Server Environment
|
||||
|
||||
The core network server environment is composed of the socket call dispatcher and lwIP itself. The socket call dispatcher works exactly like the file server. User environments use stubs (found in `lib/nsipc.c`) to send IPC messages to the core network environment. If you look at `lib/nsipc.c` you will see that we find the core network server the same way we found the file server: `i386_init` created the NS environment with NS_TYPE_NS, so we scan `envs`, looking for this special environment type. For each user environment IPC, the dispatcher in the network server calls the appropriate BSD socket interface function provided by lwIP on behalf of the user.
|
||||
|
||||
Regular user environments do not use the `nsipc_*` calls directly. Instead, they use the functions in `lib/sockets.c`, which provides a file descriptor-based sockets API. Thus, user environments refer to sockets via file descriptors, just like how they referred to on-disk files. A number of operations (`connect`, `accept`, etc.) are specific to sockets, but `read`, `write`, and `close` go through the normal file descriptor device-dispatch code in `lib/fd.c`. Much like how the file server maintained internal unique ID's for all open files, lwIP also generates unique ID's for all open sockets. In both the file server and the network server, we use information stored in `struct Fd` to map per-environment file descriptors to these unique ID spaces.
|
||||
|
||||
Even though it may seem that the IPC dispatchers of the file server and network server act the same, there is a key difference. BSD socket calls like `accept` and `recv` can block indefinitely. If the dispatcher were to let lwIP execute one of these blocking calls, the dispatcher would also block and there could only be one outstanding network call at a time for the whole system. Since this is unacceptable, the network server uses user-level threading to avoid blocking the entire server environment. For every incoming IPC message, the dispatcher creates a thread and processes the request in the newly created thread. If the thread blocks, then only that thread is put to sleep while other threads continue to run.
|
||||
|
||||
In addition to the core network environment there are three helper environments. Besides accepting messages from user applications, the core network environment's dispatcher also accepts messages from the input and timer environments.
|
||||
|
||||
##### The Output Environment
|
||||
|
||||
When servicing user environment socket calls, lwIP will generate packets for the network card to transmit. LwIP will send each packet to be transmitted to the output helper environment using the `NSREQ_OUTPUT` IPC message with the packet attached in the page argument of the IPC message. The output environment is responsible for accepting these messages and forwarding the packet on to the device driver via the system call interface that you will soon create.
|
||||
|
||||
##### The Input Environment
|
||||
|
||||
Packets received by the network card need to be injected into lwIP. For every packet received by the device driver, the input environment pulls the packet out of kernel space (using kernel system calls that you will implement) and sends the packet to the core server environment using the `NSREQ_INPUT` IPC message.
|
||||
|
||||
The packet input functionality is separated from the core network environment because JOS makes it hard to simultaneously accept IPC messages and poll or wait for a packet from the device driver. We do not have a `select` system call in JOS that would allow environments to monitor multiple input sources to identify which input is ready to be processed.
|
||||
|
||||
If you take a look at `net/input.c` and `net/output.c` you will see that both need to be implemented. This is mainly because the implementation depends on your system call interface. You will write the code for the two helper environments after you implement the driver and system call interface.
|
||||
|
||||
##### The Timer Environment
|
||||
|
||||
The timer environment periodically sends messages of type `NSREQ_TIMER` to the core network server notifying it that a timer has expired. The timer messages from this thread are used by lwIP to implement various network timeouts.
|
||||
|
||||
### Part A: Initialization and transmitting packets
|
||||
|
||||
Your kernel does not have a notion of time, so we need to add it. There is currently a clock interrupt that is generated by the hardware every 10ms. On every clock interrupt we can increment a variable to indicate that time has advanced by 10ms. This is implemented in `kern/time.c`, but is not yet fully integrated into your kernel.
|
||||
|
||||
```
Exercise 1. Add a call to `time_tick` for every clock interrupt in `kern/trap.c`. Implement `sys_time_msec` and add it to `syscall` in `kern/syscall.c` so that user space has access to the time.
```
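A sketch of the two pieces, assuming `time_tick` and `time_msec` from `kern/time.c` keep the millisecond counter described above:

```c
// kern/trap.c, in the clock-interrupt case of trap_dispatch():
//	time_tick();		// advance kernel time by 10ms
//	lapic_eoi();
//	sched_yield();

// kern/syscall.c: expose the counter to user space (remember to add a
// SYS_time_msec case to syscall() as well).
static int
sys_time_msec(void)
{
	return time_msec();
}
```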
|
||||
|
||||
Use make INIT_CFLAGS=-DTEST_NO_NS run-testtime to test your time code. You should see the environment count down from 5 in 1 second intervals. The "-DTEST_NO_NS" disables starting the network server environment because it will panic at this point in the lab.
|
||||
|
||||
#### The Network Interface Card
|
||||
|
||||
Writing a driver requires in-depth knowledge of the hardware and of the interface it presents to software. The lab text will provide a high-level overview of how to interface with the E1000, but you'll need to make extensive use of Intel's manual while writing your driver.
|
||||
|
||||
```
Exercise 2. Browse Intel's [Software Developer's Manual][5] for the E1000. This manual covers several closely related Ethernet controllers. QEMU emulates the 82540EM.

You should skim over chapter 2 now to get a feel for the device. To write your driver, you'll need to be familiar with chapters 3 and 14, as well as 4.1 (though not 4.1's subsections). You'll also need to use chapter 13 as reference. The other chapters mostly cover components of the E1000 that your driver won't have to interact with. Don't worry about the details right now; just get a feel for how the document is structured so you can find things later.

While reading the manual, keep in mind that the E1000 is a sophisticated device with many advanced features. A working E1000 driver only needs a fraction of the features and interfaces that the NIC provides. Think carefully about the easiest way to interface with the card. We strongly recommend that you get a basic driver working before taking advantage of the advanced features.
```
|
||||
|
||||
##### PCI Interface
|
||||
|
||||
The E1000 is a PCI device, which means it plugs into the PCI bus on the motherboard. The PCI bus has address, data, and interrupt lines, and allows the CPU to communicate with PCI devices and PCI devices to read and write memory. A PCI device needs to be discovered and initialized before it can be used. Discovery is the process of walking the PCI bus looking for attached devices. Initialization is the process of allocating I/O and memory space as well as negotiating the IRQ line for the device to use.
|
||||
|
||||
We have provided you with PCI code in `kern/pci.c`. To perform PCI initialization during boot, the PCI code walks the PCI bus looking for devices. When it finds a device, it reads its vendor ID and device ID and uses these two values as a key to search the `pci_attach_vendor` array. The array is composed of `struct pci_driver` entries like this:
|
||||
|
||||
```
struct pci_driver {
    uint32_t key1, key2;
    int (*attachfn) (struct pci_func *pcif);
};
```
|
||||
|
||||
If the discovered device's vendor ID and device ID match an entry in the array, the PCI code calls that entry's `attachfn` to perform device initialization. (Devices can also be identified by class, which is what the other driver table in `kern/pci.c` is for.)
|
||||
|
||||
The attach function is passed a _PCI function_ to initialize. A PCI card can expose multiple functions, though the E1000 exposes only one. Here is how we represent a PCI function in JOS:
|
||||
|
||||
```
struct pci_func {
    struct pci_bus *bus;

    uint32_t dev;
    uint32_t func;

    uint32_t dev_id;
    uint32_t dev_class;

    uint32_t reg_base[6];
    uint32_t reg_size[6];
    uint8_t irq_line;
};
```
|
||||
|
||||
The above structure reflects some of the entries found in Table 4-1 of Section 4.1 of the developer manual. The last three entries of `struct pci_func` are of particular interest to us, as they record the negotiated memory, I/O, and interrupt resources for the device. The `reg_base` and `reg_size` arrays contain information for up to six Base Address Registers or BARs. `reg_base` stores the base memory addresses for memory-mapped I/O regions (or base I/O ports for I/O port resources), `reg_size` contains the size in bytes or number of I/O ports for the corresponding base values from `reg_base`, and `irq_line` contains the IRQ line assigned to the device for interrupts. The specific meanings of the E1000 BARs are given in the second half of table 4-2.
|
||||
|
||||
When the attach function of a device is called, the device has been found but not yet _enabled_. This means that the PCI code has not yet determined the resources allocated to the device, such as address space and an IRQ line, and, thus, the last three elements of the `struct pci_func` structure are not yet filled in. The attach function should call `pci_func_enable`, which will enable the device, negotiate these resources, and fill in the `struct pci_func`.
|
||||
|
||||
```
Exercise 3. Implement an attach function to initialize the E1000. Add an entry to the `pci_attach_vendor` array in `kern/pci.c` to trigger your function if a matching PCI device is found (be sure to put it before the `{0, 0, 0}` entry that marks the end of the table). You can find the vendor ID and device ID of the 82540EM that QEMU emulates in section 5.2. You should also see these listed when JOS scans the PCI bus while booting.

For now, just enable the E1000 device via `pci_func_enable`. We'll add more initialization throughout the lab.

We have provided the `kern/e1000.c` and `kern/e1000.h` files for you so that you do not need to mess with the build system. They are currently blank; you need to fill them in for this exercise. You may also need to include the `e1000.h` file in other places in the kernel.

When you boot your kernel, you should see it print that the PCI function of the E1000 card was enabled. Your code should now pass the `pci attach` test of make grade.
```
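A minimal sketch of what this exercise asks for; `E1000_VENDOR_ID`, `E1000_DEVICE_ID`, and `e1000_attach` are names chosen here for illustration, and the 0x8086/0x100E values should be double-checked against section 5.2 of the manual:

```c
// kern/e1000.h (illustrative names)
#define E1000_VENDOR_ID	0x8086
#define E1000_DEVICE_ID	0x100E		// 82540EM

int e1000_attach(struct pci_func *pcif);

// kern/e1000.c
int
e1000_attach(struct pci_func *pcif)
{
	pci_func_enable(pcif);	// negotiate BARs and the IRQ line, fill in pcif
	return 0;
}

// kern/pci.c: entry added before the terminating {0, 0, 0} entry.
struct pci_driver pci_attach_vendor[] = {
	{ E1000_VENDOR_ID, E1000_DEVICE_ID, &e1000_attach },
	{ 0, 0, 0 },
};
```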
|
||||
|
||||
##### Memory-mapped I/O
|
||||
|
||||
Software communicates with the E1000 via _memory-mapped I/O_ (MMIO). You've seen this twice before in JOS: both the CGA console and the LAPIC are devices that you control and query by writing to and reading from "memory". But these reads and writes don't go to DRAM; they go directly to these devices.
|
||||
|
||||
`pci_func_enable` negotiates an MMIO region with the E1000 and stores its base and size in BAR 0 (that is, `reg_base[0]` and `reg_size[0]`). This is a range of _physical memory addresses_ assigned to the device, which means you'll have to do something to access it via virtual addresses. Since MMIO regions are assigned very high physical addresses (typically above 3GB), you can't use `KADDR` to access it because of JOS's 256MB limit. Thus, you'll have to create a new memory mapping. We'll use the area above MMIOBASE (your `mmio_map_region` from lab 4 will make sure we don't overwrite the mapping used by the LAPIC). Since PCI device initialization happens before JOS creates user environments, you can create the mapping in `kern_pgdir` and it will always be available.
|
||||
|
||||
```
Exercise 4. In your attach function, create a virtual memory mapping for the E1000's BAR 0 by calling `mmio_map_region` (which you wrote in lab 4 to support memory-mapping the LAPIC).

You'll want to record the location of this mapping in a variable so you can later access the registers you just mapped. Take a look at the `lapic` variable in `kern/lapic.c` for an example of one way to do this. If you do use a pointer to the device register mapping, be sure to declare it `volatile`; otherwise, the compiler is allowed to cache values and reorder accesses to this memory.

To test your mapping, try printing out the device status register (section 13.4.2). This is a 4 byte register that starts at byte 8 of the register space. You should get `0x80080783`, which indicates a full duplex link is up at 1000 Mb/s, among other things.
```
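Building on the attach sketch above, the BAR 0 mapping might look like this (again a sketch; the `e1000` pointer name is arbitrary):

```c
// kern/e1000.c
volatile uint32_t *e1000;	// MMIO base of the E1000's register space

int
e1000_attach(struct pci_func *pcif)
{
	pci_func_enable(pcif);
	e1000 = mmio_map_region(pcif->reg_base[0], pcif->reg_size[0]);

	// Device status register at byte offset 8 (index 2 of a uint32_t array);
	// expect 0x80080783 when the virtual link is up.
	cprintf("E1000 status: 0x%08x\n", e1000[2]);
	return 0;
}
```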
|
||||
|
||||
Hint: You'll need a lot of constants, like the locations of registers and values of bit masks. Trying to copy these out of the developer's manual is error-prone and mistakes can lead to painful debugging sessions. We recommend instead using QEMU's [`e1000_hw.h`][6] header as a guideline. We don't recommend copying it in verbatim, because it defines far more than you actually need and may not define things in the way you need, but it's a good starting point.
|
||||
|
||||
##### DMA
|
||||
|
||||
You could imagine transmitting and receiving packets by writing and reading from the E1000's registers, but this would be slow and would require the E1000 to buffer packet data internally. Instead, the E1000 uses _Direct Memory Access_ or DMA to read and write packet data directly from memory without involving the CPU. The driver is responsible for allocating memory for the transmit and receive queues, setting up DMA descriptors, and configuring the E1000 with the location of these queues, but everything after that is asynchronous. To transmit a packet, the driver copies it into the next DMA descriptor in the transmit queue and informs the E1000 that another packet is available; the E1000 will copy the data out of the descriptor when there is time to send the packet. Likewise, when the E1000 receives a packet, it copies it into the next DMA descriptor in the receive queue, which the driver can read from at its next opportunity.
|
||||
|
||||
The receive and transmit queues are very similar at a high level. Both consist of a sequence of _descriptors_. While the exact structure of these descriptors varies, each descriptor contains some flags and the physical address of a buffer containing packet data (either packet data for the card to send, or a buffer allocated by the OS for the card to write a received packet to).
|
||||
|
||||
The queues are implemented as circular arrays, meaning that when the card or the driver reaches the end of the array, it wraps back around to the beginning. Both have a _head pointer_ and a _tail pointer_, and the contents of the queue are the descriptors between these two pointers. The hardware always consumes descriptors from the head and moves the head pointer, while the driver always adds descriptors to the tail and moves the tail pointer. The descriptors in the transmit queue represent packets waiting to be sent (hence, in the steady state, the transmit queue is empty). For the receive queue, the descriptors in the queue are free descriptors that the card can receive packets into (hence, in the steady state, the receive queue consists of all available receive descriptors). Correctly updating the tail register without confusing the E1000 is tricky; be careful!
|
||||
|
||||
The pointers to these arrays as well as the addresses of the packet buffers in the descriptors must all be _physical addresses_ because hardware performs DMA directly to and from physical RAM without going through the MMU.
|
||||
|
||||
#### Transmitting Packets
|
||||
|
||||
The transmit and receive functions of the E1000 are basically independent of each other, so we can work on one at a time. We'll attack transmitting packets first simply because we can't test receive without transmitting an "I'm here!" packet first.
|
||||
|
||||
First, you'll have to initialize the card to transmit, following the steps described in section 14.5 (you don't have to worry about the subsections). The first step of transmit initialization is setting up the transmit queue. The precise structure of the queue is described in section 3.4 and the structure of the descriptors is described in section 3.3.3. We won't be using the TCP offload features of the E1000, so you can focus on the "legacy transmit descriptor format." You should read those sections now and familiarize yourself with these structures.
|
||||
|
||||
##### C Structures
|
||||
|
||||
You'll find it convenient to use C `struct`s to describe the E1000's structures. As you've seen with structures like the `struct Trapframe`, C `struct`s let you precisely lay out data in memory. C can insert padding between fields, but the E1000's structures are laid out such that this shouldn't be a problem. If you do encounter field alignment problems, look into GCC's "packed" attribute.
|
||||
|
||||
As an example, consider the legacy transmit descriptor given in table 3-8 of the manual and reproduced here:
|
||||
|
||||
```
|
||||
63 48 47 40 39 32 31 24 23 16 15 0
|
||||
+---------------------------------------------------------------+
|
||||
| Buffer address |
|
||||
+---------------|-------|-------|-------|-------|---------------+
|
||||
| Special | CSS | Status| Cmd | CSO | Length |
|
||||
+---------------|-------|-------|-------|-------|---------------+
|
||||
```
|
||||
|
||||
The first byte of the structure starts at the top right, so to convert this into a C struct, read from right to left, top to bottom. If you squint at it right, you'll see that all of the fields fit nicely into standard-sized types:
|
||||
|
||||
```
|
||||
struct tx_desc
|
||||
{
|
||||
uint64_t addr;
|
||||
uint16_t length;
|
||||
uint8_t cso;
|
||||
uint8_t cmd;
|
||||
uint8_t status;
|
||||
uint8_t css;
|
||||
uint16_t special;
|
||||
};
|
||||
```
|
||||
|
||||
Your driver will have to reserve memory for the transmit descriptor array and the packet buffers pointed to by the transmit descriptors. There are several ways to do this, ranging from dynamically allocating pages to simply declaring them in global variables. Whatever you choose, keep in mind that the E1000 accesses physical memory directly, which means any buffer it accesses must be contiguous in physical memory.
|
||||
|
||||
There are also multiple ways to handle the packet buffers. The simplest, which we recommend starting with, is to reserve space for a packet buffer for each descriptor during driver initialization and simply copy packet data into and out of these pre-allocated buffers. The maximum size of an Ethernet packet is 1518 bytes, which bounds how big these buffers need to be. More sophisticated drivers could dynamically allocate packet buffers (e.g., to reduce memory overhead when network usage is low) or even pass buffers directly provided by user space (a technique known as "zero copy"), but it's good to start simple.
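As a concrete illustration of that simple approach, the driver can declare the ring and its packet buffers statically; a global array in the kernel image is automatically physically contiguous. All of the names and sizes below are our own choices:

```
#define NTXDESC     64		// a multiple of 8, and at most 64 for the grade tests
#define TX_BUFSIZE  1518	// maximum Ethernet frame size

// Section 3.4 wants the descriptor base address 16-byte aligned and the ring
// length a multiple of 128 bytes; 64 descriptors of 16 bytes each satisfy both.
struct tx_desc tx_descs[NTXDESC] __attribute__((aligned(16)));
char tx_bufs[NTXDESC][TX_BUFSIZE];
```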
|
||||
|
||||
```
|
||||
Exercise 5. Perform the initialization steps described in section 14.5 (but not its subsections). Use section 13 as a reference for the registers the initialization process refers to and sections 3.3.3 and 3.4 for reference to the transmit descriptors and transmit descriptor array.
|
||||
|
||||
Be mindful of the alignment requirements on the transmit descriptor array and the restrictions on length of this array. Since TDLEN must be 128-byte aligned and each transmit descriptor is 16 bytes, your transmit descriptor array will need some multiple of 8 transmit descriptors. However, don't use more than 64 descriptors or our tests won't be able to test transmit ring overflow.
|
||||
|
||||
For the TCTL.COLD, you can assume full-duplex operation. For TIPG, refer to the default values described in table 13-77 of section 13.4.34 for the IEEE 802.3 standard IPG (don't use the values in the table in section 14.5).
|
||||
```
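A sketch of the ring-related part of that initialization follows. The register offsets are our own transcription from section 13 (in practice they should come from your `e1000_hw.h`-style header), and TCTL/TIPG still have to be programmed as the exercise describes:

```
// Byte offsets from section 13, expressed as indices into the uint32_t MMIO array.
#define E1000_TDBAL  (0x03800 / 4)
#define E1000_TDBAH  (0x03804 / 4)
#define E1000_TDLEN  (0x03808 / 4)
#define E1000_TDH    (0x03810 / 4)
#define E1000_TDT    (0x03818 / 4)

#define TXD_STAT_DD  0x01	// Descriptor Done bit in the status field

static void
e1000_init_transmit(void)
{
	int i;

	for (i = 0; i < NTXDESC; i++) {
		tx_descs[i].addr = PADDR(tx_bufs[i]);
		tx_descs[i].status = TXD_STAT_DD;	// every slot starts out free
	}
	e1000[E1000_TDBAL] = PADDR(tx_descs);	// the card needs a physical address
	e1000[E1000_TDBAH] = 0;
	e1000[E1000_TDLEN] = sizeof(tx_descs);
	e1000[E1000_TDH] = 0;
	e1000[E1000_TDT] = 0;
	// ...then set TCTL (EN, PSP, CT, COLD) and TIPG per sections 13.4.33-13.4.34.
}
```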
|
||||
|
||||
Try running `make E1000_DEBUG=TXERR,TX qemu`. If you are using the course QEMU, you should see an "e1000: tx disabled" message when you set the TDT register (since this happens before you set TCTL.EN) and no further "e1000" messages.
|
||||
|
||||
Now that transmit is initialized, you'll have to write the code to transmit a packet and make it accessible to user space via a system call. To transmit a packet, you have to add it to the tail of the transmit queue, which means copying the packet data into the next packet buffer and then updating the TDT (transmit descriptor tail) register to inform the card that there's another packet in the transmit queue. (Note that TDT is an _index_ into the transmit descriptor array, not a byte offset; the documentation isn't very clear about this.)
|
||||
|
||||
However, the transmit queue is only so big. What happens if the card has fallen behind transmitting packets and the transmit queue is full? In order to detect this condition, you'll need some feedback from the E1000. Unfortunately, you can't just use the TDH (transmit descriptor head) register; the documentation explicitly states that reading this register from software is unreliable. However, if you set the RS bit in the command field of a transmit descriptor, then, when the card has transmitted the packet in that descriptor, the card will set the DD bit in the status field of the descriptor. If a descriptor's DD bit is set, you know it's safe to recycle that descriptor and use it to transmit another packet.
|
||||
|
||||
What if the user calls your transmit system call, but the DD bit of the next descriptor isn't set, indicating that the transmit queue is full? You'll have to decide what to do in this situation. You could simply drop the packet. Network protocols are resilient to this, but if you drop a large burst of packets, the protocol may not recover. You could instead tell the user environment that it has to retry, much like you did for `sys_ipc_try_send`. This has the advantage of pushing back on the environment generating the data.
|
||||
|
||||
```
|
||||
Exercise 6. Write a function to transmit a packet by checking that the next descriptor is free, copying the packet data into the next descriptor, and updating TDT. Make sure you handle the transmit queue being full.
|
||||
```
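One possible shape for that function, continuing the static-ring sketch above. The bit values are our reading of the legacy descriptor format in section 3.3.3 and the error convention is ours; check both against the manual and your own definitions:

```
#define TXD_CMD_EOP  0x01	// End Of Packet (cmd field)
#define TXD_CMD_RS   0x08	// Report Status: ask the card to set DD when done

int
e1000_transmit(const char *data, size_t len)
{
	uint32_t tail = e1000[E1000_TDT];
	struct tx_desc *d = &tx_descs[tail];

	if (len > TX_BUFSIZE)
		return -1;	// packet too large
	if (!(d->status & TXD_STAT_DD))
		return -1;	// ring full: caller must retry (or drop the packet)

	memmove(tx_bufs[tail], data, len);
	d->length = len;
	d->status &= ~TXD_STAT_DD;
	d->cmd = TXD_CMD_RS | TXD_CMD_EOP;

	e1000[E1000_TDT] = (tail + 1) % NTXDESC;	// hand the descriptor to the card
	return 0;
}
```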
|
||||
|
||||
Now would be a good time to test your packet transmit code. Try transmitting just a few packets by directly calling your transmit function from the kernel. You don't have to create packets that conform to any particular network protocol in order to test this. Run `make E1000_DEBUG=TXERR,TX qemu` to run your test. You should see something like
|
||||
|
||||
```
|
||||
e1000: index 0: 0x271f00 : 9000002a 0
|
||||
...
|
||||
```
|
||||
|
||||
as you transmit packets. Each line gives the index in the transmit array, the buffer address of that transmit descriptor, the cmd/CSO/length fields, and the special/CSS/status fields. If QEMU doesn't print the values you expected from your transmit descriptor, check that you're filling in the right descriptor and that you configured TDBAL and TDBAH correctly. If you get "e1000: TDH wraparound @0, TDT x, TDLEN y" messages, that means the E1000 ran all the way through the transmit queue without stopping (if QEMU didn't check this, it would enter an infinite loop), which probably means you aren't manipulating TDT correctly. If you get lots of "e1000: tx disabled" messages, then you didn't set the transmit control register right.
|
||||
|
||||
Once QEMU runs, you can then run `tcpdump -XXnr qemu.pcap` to see the packet data that you transmitted. If you saw the expected "e1000: index" messages from QEMU, but your packet capture is empty, double check that you filled in every necessary field and bit in your transmit descriptors (the E1000 probably went through your transmit descriptors, but didn't think it had to send anything).
|
||||
|
||||
```
|
||||
Exercise 7. Add a system call that lets you transmit packets from user space. The exact interface is up to you. Don't forget to check any pointers passed to the kernel from user space.
|
||||
```
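For example, a thin wrapper in `kern/syscall.c` could look like the sketch below. The name `sys_net_transmit` and the choice of permission check are our own; the exercise deliberately leaves the interface to you:

```
static int
sys_net_transmit(const char *buf, size_t len)
{
	// Destroys the environment if it passes a bad or unmapped pointer.
	user_mem_assert(curenv, buf, len, PTE_U);
	return e1000_transmit(buf, len);
}
```

Remember to dispatch the new call from `syscall()` and add a matching user-space stub in `lib/syscall.c`, mirroring the existing system calls.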
|
||||
|
||||
#### Transmitting Packets: Network Server
|
||||
|
||||
Now that you have a system call interface to the transmit side of your device driver, it's time to send packets. The output helper environment's goal is to do the following in a loop: accept `NSREQ_OUTPUT` IPC messages from the core network server and send the packets accompanying these IPC message to the network device driver using the system call you added above. The `NSREQ_OUTPUT` IPC's are sent by the `low_level_output` function in `net/lwip/jos/jif/jif.c`, which glues the lwIP stack to JOS's network system. Each IPC will include a page consisting of a `union Nsipc` with the packet in its `struct jif_pkt pkt` field (see `inc/ns.h`). `struct jif_pkt` looks like
|
||||
|
||||
```
|
||||
struct jif_pkt {
|
||||
int jp_len;
|
||||
char jp_data[0];
|
||||
};
|
||||
```
|
||||
|
||||
`jp_len` represents the length of the packet. All subsequent bytes on the IPC page are dedicated to the packet contents. Using a zero-length array like `jp_data` at the end of a struct is a common C trick (some would say abomination) for representing buffers without pre-determined lengths. Since C doesn't do array bounds checking, as long as you ensure there's enough unused memory following the struct, you can use `jp_data` as if it were an array of any size.
|
||||
|
||||
Be aware of the interaction between the device driver, the output environment and the core network server when there is no more space in the device driver's transmit queue. The core network server sends packets to the output environment using IPC. If the output environment is suspended due to a send packet system call because the driver has no more buffer space for new packets, the core network server will block waiting for the output server to accept the IPC call.
|
||||
|
||||
```
|
||||
Exercise 8. Implement `net/output.c`.
|
||||
```
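In outline, `output()` is a small receive-and-forward loop. The sketch below assumes the page-aligned `union Nsipc nsipcbuf` buffer that the other network-server pieces use and the hypothetical `sys_net_transmit` call from earlier:

```
void
output(envid_t ns_envid)
{
	binaryname = "ns_output";

	while (1) {
		envid_t whom;
		int perm;
		int32_t req = ipc_recv(&whom, &nsipcbuf, &perm);

		if (req != NSREQ_OUTPUT || whom != ns_envid)
			continue;	// ignore anything that isn't an output request

		// Keep retrying while the driver reports a full transmit ring.
		while (sys_net_transmit(nsipcbuf.pkt.jp_data,
		                        nsipcbuf.pkt.jp_len) < 0)
			sys_yield();
	}
}
```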
|
||||
|
||||
You can use `net/testoutput.c` to test your output code without involving the whole network server. Try running `make E1000_DEBUG=TXERR,TX run-net_testoutput`. You should see something like
|
||||
|
||||
```
|
||||
Transmitting packet 0
|
||||
e1000: index 0: 0x271f00 : 9000009 0
|
||||
Transmitting packet 1
|
||||
e1000: index 1: 0x2724ee : 9000009 0
|
||||
...
|
||||
```
|
||||
|
||||
and `tcpdump -XXnr qemu.pcap` should output
|
||||
|
||||
|
||||
```
|
||||
reading from file qemu.pcap, link-type EN10MB (Ethernet)
|
||||
-5:00:00.600186 [|ether]
|
||||
0x0000: 5061 636b 6574 2030 30 Packet.00
|
||||
-5:00:00.610080 [|ether]
|
||||
0x0000: 5061 636b 6574 2030 31 Packet.01
|
||||
...
|
||||
```
|
||||
|
||||
To test with a larger packet count, try `make E1000_DEBUG=TXERR,TX NET_CFLAGS=-DTESTOUTPUT_COUNT=100 run-net_testoutput`. If this overflows your transmit ring, double check that you're handling the DD status bit correctly and that you've told the hardware to set the DD status bit (using the RS command bit).
|
||||
|
||||
Your code should pass the `testoutput` tests of `make grade`.
|
||||
|
||||
```
|
||||
Question
|
||||
|
||||
1. How did you structure your transmit implementation? In particular, what do you do if the transmit ring is full?
|
||||
```
|
||||
|
||||
|
||||
### Part B: Receiving packets and the web server
|
||||
|
||||
#### Receiving Packets
|
||||
|
||||
Just like you did for transmitting packets, you'll have to configure the E1000 to receive packets and provide a receive descriptor queue and receive descriptors. Section 3.2 describes how packet reception works, including the receive queue structure and receive descriptors, and the initialization process is detailed in section 14.4.
|
||||
|
||||
```
|
||||
Exercise 9. Read section 3.2. You can ignore anything about interrupts and checksum offloading (you can return to these sections if you decide to use these features later), and you don't have to be concerned with the details of thresholds and how the card's internal caches work.
|
||||
```
|
||||
|
||||
The receive queue is very similar to the transmit queue, except that it consists of empty packet buffers waiting to be filled with incoming packets. Hence, when the network is idle, the transmit queue is empty (because all packets have been sent), but the receive queue is full (of empty packet buffers).
|
||||
|
||||
When the E1000 receives a packet, it first checks if it matches the card's configured filters (for example, to see if the packet is addressed to this E1000's MAC address) and ignores the packet if it doesn't match any filters. Otherwise, the E1000 tries to retrieve the next receive descriptor from the head of the receive queue. If the head (RDH) has caught up with the tail (RDT), then the receive queue is out of free descriptors, so the card drops the packet. If there is a free receive descriptor, it copies the packet data into the buffer pointed to by the descriptor, sets the descriptor's DD (Descriptor Done) and EOP (End of Packet) status bits, and increments the RDH.
|
||||
|
||||
If the E1000 receives a packet that is larger than the packet buffer in one receive descriptor, it will retrieve as many descriptors as necessary from the receive queue to store the entire contents of the packet. To indicate that this has happened, it will set the DD status bit on all of these descriptors, but only set the EOP status bit on the last of these descriptors. You can either deal with this possibility in your driver, or simply configure the card to not accept "long packets" (also known as _jumbo frames_ ) and make sure your receive buffers are large enough to store the largest possible standard Ethernet packet (1518 bytes).
|
||||
|
||||
```
|
||||
Exercise 10. Set up the receive queue and configure the E1000 by following the process in section 14.4. You don't have to support "long packets" or multicast. For now, don't configure the card to use interrupts; you can change that later if you decide to use receive interrupts. Also, configure the E1000 to strip the Ethernet CRC, since the grade script expects it to be stripped.
|
||||
|
||||
By default, the card will filter out _all_ packets. You have to configure the Receive Address Registers (RAL and RAH) with the card's own MAC address in order to accept packets addressed to that card. You can simply hard-code QEMU's default MAC address of 52:54:00:12:34:56 (we already hard-code this in lwIP, so doing it here too doesn't make things any worse). Be very careful with the byte order; MAC addresses are written from lowest-order byte to highest-order byte, so 52:54:00:12 are the low-order 32 bits of the MAC address and 34:56 are the high-order 16 bits.
|
||||
|
||||
The E1000 only supports a specific set of receive buffer sizes (given in the description of RCTL.BSIZE in 13.4.22). If you make your receive packet buffers large enough and disable long packets, you won't have to worry about packets spanning multiple receive buffers. Also, remember that, just like for transmit, the receive queue and the packet buffers must be contiguous in physical memory.
|
||||
|
||||
You should use at least 128 receive descriptors.
|
||||
```
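To make the byte-order point about RAL/RAH concrete, hard-coding QEMU's default MAC address could look like the lines below. They belong inside your receive-initialization function; the register indices and the Address Valid bit are our own transcription from the manual, so verify them:

```
#define E1000_RAL0   (0x05400 / 4)
#define E1000_RAH0   (0x05404 / 4)
#define RAH_AV       0x80000000	// Address Valid

// 52:54:00:12:34:56, stored lowest-order byte first.
e1000[E1000_RAL0] = 0x12005452;
e1000[E1000_RAH0] = 0x00005634 | RAH_AV;
```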
|
||||
|
||||
You can do a basic test of receive functionality now, even without writing the code to receive packets. Run `make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-net_testinput`. `testinput` will transmit an ARP (Address Resolution Protocol) announcement packet (using your packet transmitting system call), which QEMU will automatically reply to. Even though your driver can't receive this reply yet, you should see an "e1000: unicast match[0]: 52:54:00:12:34:56" message, indicating that a packet was received by the E1000 and matched the configured receive filter. If you see an "e1000: unicast mismatch: 52:54:00:12:34:56" message instead, the E1000 filtered out the packet, which means you probably didn't configure RAL and RAH correctly. Make sure you got the byte ordering right and didn't forget to set the "Address Valid" bit in RAH. If you don't get any "e1000" messages, you probably didn't enable receive correctly.
|
||||
|
||||
Now you're ready to implement receiving packets. To receive a packet, your driver will have to keep track of which descriptor it expects to hold the next received packet (hint: depending on your design, there's probably already a register in the E1000 keeping track of this). Similar to transmit, the documentation states that the RDH register cannot be reliably read from software, so in order to determine if a packet has been delivered to this descriptor's packet buffer, you'll have to read the DD status bit in the descriptor. If the DD bit is set, you can copy the packet data out of that descriptor's packet buffer and then tell the card that the descriptor is free by updating the queue's tail index, RDT.
|
||||
|
||||
If the DD bit isn't set, then no packet has been received. This is the receive-side equivalent of when the transmit queue was full, and there are several things you can do in this situation. You can simply return a "try again" error and require the caller to retry. While this approach works well for full transmit queues because that's a transient condition, it is less justifiable for empty receive queues because the receive queue may remain empty for long stretches of time. A second approach is to suspend the calling environment until there are packets in the receive queue to process. This tactic is very similar to `sys_ipc_recv`. Just like in the IPC case, since we have only one kernel stack per CPU, as soon as we leave the kernel the state on the stack will be lost. We need to set a flag indicating that an environment has been suspended by receive queue underflow and record the system call arguments. The drawback of this approach is complexity: the E1000 must be instructed to generate receive interrupts and the driver must handle them in order to resume the environment blocked waiting for a packet.
|
||||
|
||||
```
|
||||
Exercise 11. Write a function to receive a packet from the E1000 and expose it to user space by adding a system call. Make sure you handle the receive queue being empty.
|
||||
```
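A minimal polling version might look like the following, mirroring the transmit sketch. The receive descriptor struct, ring arrays, register index, and status bit are all our own assumptions; adapt them to your definitions and to however you expose this through your system call:

```
#define E1000_RDT    (0x02818 / 4)
#define RXD_STAT_DD  0x01	// Descriptor Done bit in the status field

int
e1000_receive(char *buf, size_t bufsize)
{
	static uint32_t next;	// descriptor we expect the card to fill next
	struct rx_desc *d = &rx_descs[next];
	size_t len;

	if (!(d->status & RXD_STAT_DD))
		return -1;	// queue empty: caller retries, or you block the environment

	len = d->length;
	if (len > bufsize)
		len = bufsize;
	memmove(buf, rx_bufs[next], len);

	d->status = 0;	// recycle the descriptor
	e1000[E1000_RDT] = next;	// give it back to the card
	next = (next + 1) % NRXDESC;
	return len;
}
```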
|
||||
|
||||
```
|
||||
Challenge! If the transmit queue is full or the receive queue is empty, the environment and your driver may spend a significant number of CPU cycles polling, waiting for a descriptor. The E1000 can generate an interrupt once it is finished with a transmit or receive descriptor, avoiding the need for polling. Modify your driver so that processing of both the transmit and receive queues is interrupt-driven instead of polled.
|
||||
|
||||
Note that, once an interrupt is asserted, it will remain asserted until the driver clears the interrupt. In your interrupt handler make sure to clear the interrupt as soon as you handle it. If you don't, after returning from your interrupt handler, the CPU will jump back into it again. In addition to clearing the interrupts on the E1000 card, interrupts also need to be cleared on the LAPIC. Use `lapic_eoi` to do so.
|
||||
```
|
||||
|
||||
#### Receiving Packets: Network Server
|
||||
|
||||
In the network server input environment, you will need to use your new receive system call to receive packets and pass them to the core network server environment using the `NSREQ_INPUT` IPC message. These IPC input messages should have a page attached with a `union Nsipc` whose `struct jif_pkt pkt` field is filled in with the packet received from the network.
|
||||
|
||||
```
|
||||
Exercise 12. Implement `net/input.c`.
|
||||
```
|
||||
|
||||
Run `testinput` again with `make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-net_testinput`. You should see
|
||||
|
||||
```
|
||||
Sending ARP announcement...
|
||||
Waiting for packets...
|
||||
e1000: index 0: 0x26dea0 : 900002a 0
|
||||
e1000: unicast match[0]: 52:54:00:12:34:56
|
||||
input: 0000 5254 0012 3456 5255 0a00 0202 0806 0001
|
||||
input: 0010 0800 0604 0002 5255 0a00 0202 0a00 0202
|
||||
input: 0020 5254 0012 3456 0a00 020f 0000 0000 0000
|
||||
input: 0030 0000 0000 0000 0000 0000 0000 0000 0000
|
||||
```
|
||||
|
||||
The lines beginning with "input:" are a hexdump of QEMU's ARP reply.
|
||||
|
||||
Your code should pass the `testinput` tests of `make grade`. Note that there's no way to test packet receiving without sending at least one ARP packet to inform QEMU of JOS's IP address, so bugs in your transmitting code can cause this test to fail.
|
||||
|
||||
To more thoroughly test your networking code, we have provided a daemon called `echosrv` that sets up an echo server running on port 7 that will echo back anything sent over a TCP connection. Use `make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-echosrv` to start the echo server in one terminal and `make nc-7` in another to connect to it. Every line you type should be echoed back by the server. Every time the emulated E1000 receives a packet, QEMU should print something like the following to the console:
|
||||
|
||||
```
|
||||
e1000: unicast match[0]: 52:54:00:12:34:56
|
||||
e1000: index 2: 0x26ea7c : 9000036 0
|
||||
e1000: index 3: 0x26f06a : 9000039 0
|
||||
e1000: unicast match[0]: 52:54:00:12:34:56
|
||||
```
|
||||
|
||||
At this point, you should also be able to pass the `echosrv` test.
|
||||
|
||||
```
|
||||
Question
|
||||
|
||||
2. How did you structure your receive implementation? In particular, what do you do if the receive queue is empty and a user environment requests the next incoming packet?
|
||||
```
|
||||
|
||||
|
||||
```
|
||||
Challenge! Read about the EEPROM in the developer's manual and write the code to load the E1000's MAC address out of the EEPROM. Currently, QEMU's default MAC address is hard-coded into both your receive initialization and lwIP. Fix your initialization to use the MAC address you read from the EEPROM, add a system call to pass the MAC address to lwIP, and modify lwIP to use the MAC address read from the card. Test your change by configuring QEMU to use a different MAC address.
|
||||
```
|
||||
|
||||
```
|
||||
Challenge! Modify your E1000 driver to be "zero copy." Currently, packet data has to be copied from user-space buffers to transmit packet buffers and from receive packet buffers back to user-space buffers. A zero copy driver avoids this by having user space and the E1000 share packet buffer memory directly. There are many different approaches to this, including mapping the kernel-allocated structures into user space or passing user-provided buffers directly to the E1000. Regardless of your approach, be careful how you reuse buffers so that you don't introduce races between user-space code and the E1000.
|
||||
```
|
||||
|
||||
```
|
||||
Challenge! Take the zero copy concept all the way into lwIP.
|
||||
|
||||
A typical packet is composed of many headers. The user sends data to be transmitted to lwIP in one buffer. The TCP layer wants to add a TCP header, the IP layer an IP header and the MAC layer an Ethernet header. Even though there are many parts to a packet, right now the parts need to be joined together so that the device driver can send the final packet.
|
||||
|
||||
The E1000's transmit descriptor design is well-suited to collecting pieces of a packet scattered throughout memory, like the packet fragments created inside lwIP. If you enqueue multiple transmit descriptors, but only set the EOP command bit on the last one, then the E1000 will internally concatenate the packet buffers from these descriptors and only transmit the concatenated buffer when it reaches the EOP-marked descriptor. As a result, the individual packet pieces never need to be joined together in memory.
|
||||
|
||||
Change your driver to be able to send packets composed of many buffers without copying and modify lwIP to avoid merging the packet pieces as it does right now.
|
||||
```
|
||||
|
||||
```
|
||||
Challenge! Augment your system call interface to service more than one user environment. This will prove useful if there are multiple network stacks (and multiple network servers) each with their own IP address running in user mode. The receive system call will need to decide to which environment it needs to forward each incoming packet.
|
||||
|
||||
Note that the current interface cannot tell the difference between two packets: if multiple environments call the packet receive system call, each environment will get a subset of the incoming packets, and that subset may include packets that are not destined for the calling environment.
|
||||
|
||||
Sections 2.2 and 3 in [this][7] Exokernel paper give an in-depth explanation of the problem and a method of addressing it in a kernel like JOS. Use the paper to help you get a grip on the problem; chances are you do not need a solution as complex as the one presented in the paper.
|
||||
```
|
||||
|
||||
#### The Web Server
|
||||
|
||||
A web server in its simplest form sends the contents of a file to the requesting client. We have provided skeleton code for a very simple web server in `user/httpd.c`. The skeleton code deals with incoming connections and parses the headers.
|
||||
|
||||
```
|
||||
Exercise 13. The web server is missing the code that deals with sending the contents of a file back to the client. Finish the web server by implementing `send_file` and `send_data`.
|
||||
```
|
||||
|
||||
Once you've finished the web server, start it (`make run-httpd-nox`) and point your favorite browser at http://_host_:_port_/index.html, where _host_ is the name of the computer running QEMU (if you're running QEMU on athena, use `hostname.mit.edu`, where hostname is the output of the `hostname` command on athena; use `localhost` if you're running the web browser and QEMU on the same computer) and _port_ is the port number reported for the web server by `make which-ports`. You should see a web page served by the HTTP server running inside JOS.
|
||||
|
||||
At this point, you should score 105/105 on `make grade`.
|
||||
|
||||
```
|
||||
Challenge! Add a simple chat server to JOS, where multiple people can connect to the server and anything that any user types is transmitted to the other users. To do this, you will have to find a way to communicate with multiple sockets at once _and_ to send and receive on the same socket at the same time. There are multiple ways to go about this. lwIP provides a MSG_DONTWAIT flag for recv (see `lwip_recvfrom` in `net/lwip/api/sockets.c`), so you could constantly loop through all open sockets, polling them for data. Note that, while `recv` flags are supported by the network server IPC, they aren't accessible via the regular `read` function, so you'll need a way to pass the flags. A more efficient approach is to start one or more environments for each connection and to use IPC to coordinate them. Conveniently, the lwIP socket ID found in the struct Fd for a socket is global (not per-environment), so, for example, the child of a `fork` inherits its parent's sockets. Or, an environment can even send on another environment's socket simply by constructing an Fd containing the right socket ID.
|
||||
```
|
||||
|
||||
```
|
||||
Question
|
||||
|
||||
3. What does the web page served by JOS's web server say?
|
||||
4. How long approximately did it take you to do this lab?
|
||||
```
|
||||
|
||||
|
||||
**This completes the lab.** As usual, don't forget to run `make grade` and to write up your answers and a description of your challenge exercise solution. Before handing in, use `git status` and `git diff` to examine your changes and don't forget to `git add answers-lab6.txt`. When you're ready, commit your changes with `git commit -am 'my solutions to lab 6'`, then `make handin` and follow the directions.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/
|
||||
|
||||
作者:[csail.mit][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://pdos.csail.mit.edu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://wiki.qemu.org/download/qemu-doc.html#Using-the-user-mode-network-stack
|
||||
[2]: http://www.wireshark.org/
|
||||
[3]: http://www.sics.se/~adam/lwip/
|
||||
[4]: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/ns.png
|
||||
[5]: https://pdos.csail.mit.edu/6.828/2018/readings/hardware/8254x_GBe_SDM.pdf
|
||||
[6]: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/e1000_hw.h
|
||||
[7]: http://pdos.csail.mit.edu/papers/exo:tocs.pdf
|
@ -1,177 +0,0 @@
|
||||
(translating by runningwater)
|
||||
How To Determine Which System Manager Is Running On Linux System
|
||||
======
|
||||
We have all heard this term many times, but only a few of us know what it actually is. We will show you how to identify the system manager.
|
||||
|
||||
I will try my best to explain this. Most of us know about the System V and systemd system managers. System V (SysV) is the old, traditional init system and system manager used on older systems.
|
||||
|
||||
systemd is a new init system and system manager which has been adopted by most of the major distributions.
|
||||
|
||||
There are three major init systems available in Linux that are well known and still in use. Most Linux distributions use one of the init systems below.
|
||||
|
||||
### What is init System Manager?
|
||||
|
||||
In Linux/Unix based operating systems, init (short for initialization) is the first process that started during the system boot up by the kernel.
|
||||
|
||||
It’s holding a process id (PID) of 1. It will be running in the background continuously until the system is shut down.
|
||||
|
||||
Init looks at the `/etc/inittab` file to decide the Linux run level then it starts all other processes & applications in the background as per the run level.
|
||||
|
||||
BIOS, MBR, GRUB and Kernel processes were kicked up before hitting init process as part of Linux booting process.
|
||||
|
||||
Below are the available run levels for Linux (There are seven runlevels exist, from zero to six).
|
||||
|
||||
* **`0:`** halt
|
||||
* **`1:`** Single user mode
|
||||
* **`2:`** Multiuser, without NFS
|
||||
* **`3:`** Full multiuser mode
|
||||
* **`4:`** Unused
|
||||
* **`5:`** X11 (GUI – Graphical User Interface)
|
||||
* **`6:`** reboot
|
||||
|
||||
|
||||
|
||||
Below three init systems are widely used in Linux.
|
||||
|
||||
* **`System V (Sys V):`** System V (Sys V) is one of the first and traditional init system for Unix like operating system.
|
||||
* **`Upstart:`** Upstart is an event-based replacement for the /sbin/init daemon.
|
||||
* **`systemd:`** Systemd is a new init system and system manager which was implemented/adapted into all the major Linux distributions over the traditional SysV init systems.
|
||||
|
||||
|
||||
|
||||
### What is System V (Sys V)?
|
||||
|
||||
System V (Sys V) is one of the first and traditional init system for Unix like operating system. init is the first process that started during the system boot up by the kernel and it’s a parent process for everything.
|
||||
|
||||
Most of the Linux distributions started using traditional init system called System V (Sys V) first. Over the years, several replacement init systems were released to address design limitations in the standard versions such as launchd, the Service Management Facility, systemd and Upstart.
|
||||
|
||||
But systemd has been adopted by several major Linux distributions over the traditional SysV init systems.
|
||||
|
||||
### How to identify the System V (Sys V) system manager on Linux
|
||||
|
||||
Run the following commands to identify that your system is running with System V (Sys V) system manager.
|
||||
|
||||
### Method-1: Using ps command
|
||||
|
||||
ps – report a snapshot of the current processes. ps displays information about a selection of the active processes.
|
||||
This output does not tell you exactly whether it is System V (SysV) or Upstart, so I would suggest you use another method to confirm it.
|
||||
|
||||
```
|
||||
# ps -p1 | grep "init\|upstart\|systemd"
|
||||
1 ? 00:00:00 init
|
||||
```
|
||||
|
||||
### Method-2: Using rpm command
|
||||
|
||||
RPM (`Red Hat Package Manager`) is a powerful command-line [Package Management][1] utility for Red Hat based systems such as RHEL, CentOS, Fedora, openSUSE, and Mageia. It allows you to install, upgrade, remove, query, and verify software on your Linux system/server. RPM files come with the `.rpm` extension.
|
||||
RPM packages are built with the required libraries and dependencies and will not conflict with other packages installed on your system.
|
||||
|
||||
```
|
||||
# rpm -qf /sbin/init
|
||||
SysVinit-2.86-17.el5
|
||||
```
|
||||
|
||||
### What is Upstart?
|
||||
|
||||
Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.
|
||||
|
||||
It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.
|
||||
|
||||
It was used in Ubuntu from 9.10 to 14.10 and in RHEL 6 based systems; after that, it was replaced with systemd.
|
||||
|
||||
### How to identify the Upstart system manager on Linux
|
||||
|
||||
Run the following commands to identify that your system is running with Upstart system manager.
|
||||
|
||||
### Method-1: Using ps command
|
||||
|
||||
ps – report a snapshot of the current processes. ps displays information about a selection of the active processes.
|
||||
This output does not tell you exactly whether it is System V (SysV) or Upstart, so I would suggest you use another method to confirm it.
|
||||
|
||||
```
|
||||
# ps -p1 | grep "init\|upstart\|systemd"
|
||||
1 ? 00:00:00 init
|
||||
```
|
||||
|
||||
### Method-2: Using rpm command
|
||||
|
||||
RPM (`Red Hat Package Manager`) is a powerful command-line Package Management utility for Red Hat based systems such as RHEL, CentOS, Fedora, openSUSE, and Mageia. The [RPM Command][2] allows you to install, upgrade, remove, query, and verify software on your Linux system/server. RPM files come with the `.rpm` extension.
|
||||
RPM packages are built with the required libraries and dependencies and will not conflict with other packages installed on your system.
|
||||
|
||||
```
|
||||
# rpm -qf /sbin/init
|
||||
upstart-0.6.5-16.el6.x86_64
|
||||
```
|
||||
|
||||
### Method-3: Using /sbin/init file
|
||||
|
||||
The `/sbin/init` program will load or switch the root file system from memory to the hard disk.
|
||||
This is the main part of the boot process. The runlevel at the start of this process is “N” (none). The /sbin/init program initializes the system following the description in the /etc/inittab configuration file.
|
||||
|
||||
```
|
||||
# /sbin/init --version
|
||||
init (upstart 0.6.5)
|
||||
Copyright (C) 2010 Canonical Ltd.
|
||||
|
||||
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
|
||||
```
|
||||
|
||||
### What is systemd?
|
||||
|
||||
Systemd is a new init system and system manager which was implemented/adapted into all the major Linux distributions over the traditional SysV init systems.
|
||||
|
||||
systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for the sysvinit system. systemd is the first process started by the kernel and holds PID 1.
|
||||
|
||||
It is the parent process of everything, and Fedora 15 was the first distribution to adopt systemd instead of Upstart. [systemctl][3] is a command-line utility and the primary tool to manage systemd daemons/services (start, restart, stop, enable, disable, reload, and status).
|
||||
|
||||
systemd uses `.service` files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups, and you can see the system hierarchy by exploring `/cgroup/systemd`.
|
||||
|
||||
### How to identify the systemd system manager on Linux
|
||||
|
||||
Run the following commands to identify that your system is running with systemd system manager.
|
||||
|
||||
### Method-1: Using ps command
|
||||
|
||||
ps – report a snapshot of the current processes. ps displays information about a selection of the active processes.
|
||||
|
||||
```
|
||||
# ps -p1 | grep "init\|upstart\|systemd"
|
||||
1 ? 00:18:09 systemd
|
||||
```
|
||||
|
||||
### Method-2: Using rpm command
|
||||
|
||||
RPM (`Red Hat Package Manager`) is a powerful command-line Package Management utility for Red Hat based systems such as RHEL, CentOS, Fedora, openSUSE, and Mageia. It allows you to install, upgrade, remove, query, and verify software on your Linux system/server. RPM files come with the `.rpm` extension.
|
||||
RPM packages are built with the required libraries and dependencies and will not conflict with other packages installed on your system.
|
||||
|
||||
```
|
||||
# rpm -qf /sbin/init
|
||||
systemd-219-30.el7_3.9.x86_64
|
||||
```
|
||||
|
||||
### Method-3: Using /sbin/init file
|
||||
|
||||
The `/sbin/init` program will load or switch the root file system from memory to the hard disk.
|
||||
This is the main part of the boot process. The runlevel at the start of this process is “N” (none). The /sbin/init program initializes the system following the description in the /etc/inittab configuration file.
|
||||
|
||||
```
|
||||
# file /sbin/init
|
||||
/sbin/init: symbolic link to `../lib/systemd/systemd'
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-determine-which-init-system-manager-is-running-on-linux-system/
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[runningwater](https://github.com/runningwater)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/prakash/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/category/package-management/
|
||||
[2]: https://www.2daygeek.com/rpm-command-examples/
|
||||
[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
|
@ -1,98 +0,0 @@
|
||||
Understanding Linux Links: Part 2
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/links-fikri-rasyid-7853.jpg?itok=0jBT_1M2)
|
||||
|
||||
In the [first part of this series][1], we looked at hard links and soft links and discussed some of the various ways that linking can be useful. Linking may seem straightforward, but there are some non-obvious quirks you have to be aware of. That’s what we’ll be looking at here. Consider, for example, at the way we created the link to _libblah_ in the previous article. Notice how we linked from within the destination folder:
|
||||
|
||||
```
|
||||
cd /usr/local/lib
|
||||
|
||||
ln -s /usr/lib/libblah
|
||||
```
|
||||
|
||||
That will work. But this:
|
||||
|
||||
```
|
||||
cd /usr/lib
|
||||
|
||||
ln -s libblah /usr/local/lib
|
||||
```
|
||||
|
||||
That is, linking from within the original folder to the destination folder, will not work.
|
||||
|
||||
The reason for that is that _ln_ will think you are linking from inside _/usr/local/lib_ to _/usr/local/lib_ and will create a linked file from _libblah_ in _/usr/local/lib_ to _libblah_ also in _/usr/local/lib_. This is because all the link file gets is the name of the file ( _libblah_ ) but not the path to the file. The end result is a very broken link.
|
||||
|
||||
However, this:
|
||||
|
||||
```
|
||||
cd /usr/lib
|
||||
|
||||
ln -s /usr/lib/libblah /usr/local/lib
|
||||
```
|
||||
|
||||
will work. Then again, it would work regardless of where in the filesystem you executed the instruction. Using absolute paths, that is, spelling out the whole path, from root (/) down to the file or directory itself, is just best practice.
|
||||
|
||||
Another thing to note is that, as long as both _/usr/lib_ and _/usr/local/lib_ are on the same partition, making a hard link like this:
|
||||
|
||||
```
|
||||
cd /usr/lib
|
||||
|
||||
ln libblah /usr/local/lib
|
||||
```
|
||||
|
||||
will also work, because hard links do not rely on a path to the file; they point directly to the file's data.
|
||||
|
||||
Where hard links will not work is if you want to link across partitions. Say you have _fileA_ on partition A and the partition is mounted at _/path/to/partitionA/directory_. If you want to link _fileA_ to _/path/to/partitionB/directory_ that is on partition B, this will not work:
|
||||
|
||||
```
|
||||
ln /path/to/partitionA/directory/file /path/to/partitionB/directory
|
||||
```
|
||||
|
||||
As we saw previously, hard links are entries in a partition table that point to data on the *same partition*. You can't have an entry in the table of one partition pointing to data on another partition. Your only choice here would be to use a soft link:
|
||||
|
||||
```
|
||||
ln -s /path/to/partitionA/directory/file /path/to/partitionB/directory
|
||||
```
|
||||
|
||||
Another thing that soft links can do and hard links cannot is link to whole directories:
|
||||
|
||||
```
|
||||
ln -s /path/to/some/directory /path/to/some/other/directory
|
||||
```
|
||||
|
||||
will create a link to _/path/to/some/directory_ within _/path/to/some/other/directory_ without a hitch.
|
||||
|
||||
Trying to do the same by hard linking will show you an error saying that you are not allowed to do that. And the reason for that is unending recursiveness: if you have directory B inside directory A, and then you link A inside B, you have a problem, because then A contains B, which contains A, which contains B, and so on ad infinitum.
|
||||
|
||||
You can create recursive structures using soft links, but why would you do that to yourself?
|
||||
|
||||
### Should I use a hard or a soft link?
|
||||
|
||||
In general you can use soft links everywhere and for everything. In fact, there are situations in which you can only use soft links. That said, hard links are slightly more efficient: they take up less space on disk and are faster to access. On most machines you will not notice the difference, though: the difference in space and speed will be negligible given today's massive and speedy hard disks. However, if you are using Linux on an embedded system with a small storage and a low-powered processor, you may want to give hard links some consideration.
|
||||
|
||||
Another reason to use hard links is that a hard link is much more difficult to break. If you have a soft link and you accidentally move or delete the file it is pointing to, your soft link will be broken and point to... nothing. There is no danger of this happening with a hard link, since the hard link points directly to the data on the disk. Indeed, the space on the disk will not be flagged as free until the last hard link pointing to it is erased from the file system.
|
||||
|
||||
Soft links, on the other hand can do more than hard links and point to anything, be it file or directory. They can also point to items that are on different partitions. These two things alone often make them the only choice.
|
||||
|
||||
### Next Time
|
||||
|
||||
Now we have covered files and directories and the basic tools to manipulate them, you are ready to move onto the tools that let you explore the directory hierarchy, find data within files, and examine the contents. That's what we'll be dealing with in the next installment. See you then!
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux" ][2]course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/2018/10/understanding-linux-links-part-2
|
||||
|
||||
作者:[Paul Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/bro66
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/blog/intro-to-linux/2018/10/linux-links-part-1
|
||||
[2]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -1,84 +0,0 @@
|
||||
Ultimate Plumber – Writing Linux Pipes With Instant Live Preview
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber-720x340.jpg)
|
||||
|
||||
As you may already know, **Pipe** command is used to send the output of one command/program/process to another command/program/process for further processing in Unix-like operating systems. Using the Pipe command, we can combine two or more commands and redirect the standard input or output of one command to another easily and quickly. A pipe is represented by a vertical bar character ( **|** ) between two or more Linux commands. The general syntax of a pipe command is given below.
|
||||
|
||||
```
|
||||
Command-1 | Command-2 | Command-3 | …| Command-N
|
||||
```
|
||||
|
||||
If you use the Pipe command often, I have good news for you. Now, you can preview the results of Linux pipes instantly while writing them. Say hello to **“Ultimate Plumber”**, or **UP** for short, a command line tool for writing Linux pipes with instant live preview. It is used to build complex pipelines quickly and easily, with an instant, scrollable preview of the command results. The UP tool is quite handy if you often need to rerun piped commands to get the desired result.
|
||||
|
||||
In this brief guide, I will show you how to install UP and build complex Linux pipelines easily.
|
||||
|
||||
**Important warning:**
|
||||
|
||||
Please be careful when using this tool in production! It could be dangerous and you might inadvertently delete any important data. You must particularly be careful when using “rm” or “dd” commands with UP tool. You have been warned!
|
||||
|
||||
### Writing Linux Pipes With Instant Live Preview Using Ultimate Plumber
|
||||
|
||||
Here is a simple example to understand the underlying concept of UP. For example, let us pipe the output of **lshw** command into UP. To do so, type the following command in your Terminal and press ENTER:
|
||||
|
||||
```
|
||||
$ lshw |& up
|
||||
```
|
||||
|
||||
You will see an input box at the top of the screen as shown in the below screenshot.
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber.png)
|
||||
In the input box, start typing any pipeline and press ENTER to execute the command you just typed. Now, the Ultimate Plumber utility will immediately show you the output of the pipeline in the **scrollable window** below. You can browse through the results using the **PgUp/PgDn** or **Ctrl+&lt;left arrow&gt;/Ctrl+&lt;right arrow&gt;** keys.
|
||||
|
||||
Once you’re satisfied with the result, press **Ctrl-X** to exit UP. The Linux pipe command you just built will be saved in a file named **up1.sh** in the current working directory. If this file already exists, an additional file named **up2.sh** will be created to save the result, and so on, up to 1000 files. If you don’t want to save the output, just press **Ctrl-C**.
|
||||
|
||||
You can view the contents of the upX.sh file with cat command. Here is the output of my **up2.sh** file:
|
||||
|
||||
```
|
||||
$ cat up2.sh
|
||||
#!/bin/bash
|
||||
grep network -A5 | grep : | cut -d: -f2- | paste - -
|
||||
```
|
||||
|
||||
If the command you piped into UP is long running, you will see a **~** (tilde) character in the top-left corner of the window. It means that UP is still waiting for the inputs. In such cases, you may need to freeze the Up’s input buffer size temporarily by pressing **Ctrl-S**. To unfreeze UP back, simply press **Ctrl-Q**. The current input buffer size of Ultimate Plumber is **40 MB**. Once you reached this limit, you will see a **+** (plus) sign on the top-left corner of the screen.
|
||||
|
||||
Here is the short demo of UP tool in action:
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/up.gif)
|
||||
|
||||
### Installing Ultimate Plumber
|
||||
|
||||
Liked it? Great! Go ahead and install it on your Linux system and start using it. Installing UP is quite easy! All you have to do is open your Terminal and run the following two commands to install UP.
|
||||
|
||||
Download the latest Ultimate Plumber binary file from the [**releases page**][1] and put it in your path, for example **/usr/local/bin/**.
|
||||
|
||||
```
|
||||
$ sudo wget -O /usr/local/bin/up https://github.com/akavel/up/releases/download/v0.2.1/up
|
||||
```
|
||||
|
||||
Then, make the UP binary executable with the following command:
|
||||
|
||||
```
|
||||
$ sudo chmod a+x /usr/local/bin/up
|
||||
```
|
||||
|
||||
Done! Start building Linux pipelines as described above!!
|
||||
|
||||
And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/ultimate-plumber-writing-linux-pipes-with-instant-live-preview/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/akavel/up/releases
|
@ -1,75 +0,0 @@
|
||||
Translating by StdioA
|
||||
|
||||
Design faster web pages, part 3: Font and CSS tweaks
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/10/designfaster3-816x345.jpg)
|
||||
|
||||
Welcome back to this series of articles on designing faster web pages. [Part 1][1] and [part 2][2] of this series covered how to lose browser fat through optimizing and replacing images. This part looks at how to lose additional fat in CSS ([Cascading Style Sheets][3]) and fonts.
|
||||
|
||||
### Tweaking CSS
|
||||
|
||||
First things first: let’s look at where the problem originates. CSS was once a huge step forward. You can use it to style several pages from a central style sheet. Nowadays, many web developers use frameworks like Bootstrap.
|
||||
|
||||
While these frameworks are certainly helpful, many people simply copy and paste the whole framework. Bootstrap is huge; the “minimal” version of 4.0 is currently 144.9 KB. Perhaps in the era of terabytes of data, this isn’t much. But as they say, even small cattle makes a mess.
|
||||
|
||||
Look back at the [getfedora.org][4] example. Recall in [part 1][1], the first analysis showed the CSS files used nearly ten times more space than the HTML itself. Here’s a display of the stylesheets used:
|
||||
|
||||
![][5]
|
||||
|
||||
That’s nine different stylesheets, and many of the styles in them are unused on the page.
|
||||
|
||||
#### Remove, merge, and compress/minify
|
||||
|
||||
The font-awesome CSS inhabits the extreme end of included, unused styles. There are only three glyphs of the font used on the page. To make that up in KB, the font-awesome CSS used at getfedora.org is originally 25.2 KB. After cleaning out all unused styles, it’s only 1.3 KB. This is only about 4% of its original size! For Bootstrap CSS, the difference is 118.3 KB original, and 13.2 KB after removing unused styles.
|
||||
|
||||
The next question is, must there be a bootstrap.css and a font-awesome.css? Or can they be combined? Yes, they can. That doesn’t save much file space, but the browser now requests fewer files to successfully render the page.
|
||||
|
||||
Finally, after merging the CSS files, try to remove unused styles and minify them. In this way, you save 10.1 KB for a final size of 4.3 KB.
|
||||
|
||||
Unfortunately, there’s no packaged “minifier” tool in Fedora’s repositories yet. However, there are hundreds of online services to do that for you. Or you can use [CSS-HTML-JS Minify][6], which is written in Python and therefore easy to install. There’s no available tool to purify CSS, but there are web services like [UnCSS][7].
|
||||
|
||||
### Font improvement
|
||||
|
||||
[CSS3][8] came with something a lot of web developers like. They could define fonts that the browser downloads in the background to render the page. Since then, a lot of web designers have been very happy, especially after they discovered the use of icon fonts for web design. Font sets like [Font Awesome][9] are quite popular today and widely used. Here’s the size of that content:
|
||||
|
||||
```
|
||||
current free version 912 glyphs/icons, smallest set ttf 30.9KB, woff 14.7KB, woff2 12.2KB, svg 107.2KB, eot 31.2
|
||||
```
|
||||
|
||||
So the question is, do you need all the glyphs? In all probability, no. You can get rid of them with [FontForge][10], but that’s a lot of work. You could also use [Fontello][11]. Use the public instance, or set up your own, as it’s free software and available on [Github][12].
|
||||
|
||||
The downside of such customized font sets is you must host the font by yourself. You can’t use other online font services to provide updates. But this may not really be a downside, compared to faster performance.
|
||||
|
||||
### Conclusion
|
||||
|
||||
Now you’ve done everything you can to the content itself, to minimize what the browser loads and interprets. From now on, only tricks with the administration of the server can help.
|
||||
|
||||
One easy step, which many people nevertheless get wrong, is deciding on some intelligent caching. For instance, a CSS or picture file can be cached for a week. Whatever you do, if you use a proxy service like Cloudflare or build your own proxy, minimize the pages first. Users like fast-loading pages. They’ll (silently) thank you for it, and the server will have a smaller load, too.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/design-faster-web-pages-part-3-font-css-tweaks/
|
||||
|
||||
作者:[Sirko Kemter][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/gnokii/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression/
|
||||
[2]: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/
|
||||
[3]: https://en.wikipedia.org/wiki/Cascading_Style_Sheets
|
||||
[4]: https://getfedora.org
|
||||
[5]: https://fedoramagazine.org/wp-content/uploads/2018/02/CSS_delivery_tool_-_Examine_how_a_page_uses_CSS_-_2018-02-24_15.00.46.png
|
||||
[6]: https://github.com/juancarlospaco/css-html-js-minify
|
||||
[7]: https://uncss-online.com/
|
||||
[8]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS3
|
||||
[9]: https://fontawesome.com/
|
||||
[10]: https://fontforge.github.io/en-US/
|
||||
[11]: http://fontello.com/
|
||||
[12]: https://github.com/fontello/fontello
|
@ -1,112 +0,0 @@
|
||||
Machine learning with Python: Essential hacks and tricks
|
||||
======
|
||||
Master machine learning, AI, and deep learning with Python.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S)
|
||||
|
||||
It's never been easier to get started with machine learning. In addition to structured massive open online courses (MOOCs), there are a huge number of incredible, free resources available around the web. Here are a few that have helped me.
|
||||
|
||||
2. Learn to clearly differentiate between the buzzwords—for example, machine learning, artificial intelligence, deep learning, data science, computer vision, and robotics. Read or listen to talks by experts on each of them. Watch this [amazing video by Brandon Rohrer][1], an influential data scientist. Or this video about the [clear differences between various roles][2] associated with data science.
|
||||
|
||||
|
||||
3. Clearly set a goal for what you want to learn. Then go and take [that Coursera course][3]. Or take the one [from the University of Washington][4], which is pretty good too.
|
||||
|
||||
|
||||
5. If you are enthusiastic about taking online courses, check out this article for guidance on [choosing the right MOOC][5].
|
||||
|
||||
|
||||
6. Most of all, develop a feel for it. Join some good social forums, but resist the temptation to latch onto sensationalized headlines and news. Do your own reading to understand what it is and what it is not, where it might go, and what possibilities it can open up. Then sit back and think about how you can apply machine learning or imbue data science principles into your daily work. Build a simple regression model to predict the cost of your next lunch or download your electricity usage data from your energy provider and do a simple time-series plot in Excel to discover some pattern of usage. And after you are thoroughly enamored with machine learning, you can watch this video.
|
||||
|
||||
<https://www.youtube.com/embed/IpGxLWOIZy4>
|
||||
|
||||
### Is Python a good language for machine learning/AI?
|
||||
|
||||
Familiarity and moderate expertise in at least one high-level programming language is useful for beginners in machine learning. Unless you are a Ph.D. researcher working on a purely theoretical proof of some complex algorithm, you are expected to mostly use the existing machine learning algorithms and apply them in solving novel problems. This requires you to put on a programming hat.
|
||||
|
||||
There's a lot of talk about the best language for data science. While the debate rages, grab a coffee and read this insightful FreeCodeCamp article to learn about [data science languages][6]. Or check out this post on KDnuggets to dive directly into the [Python vs. R debate][7].
|
||||
|
||||
For now, it's widely believed that Python helps developers be more productive from development to deployment and maintenance. Python's syntax is simpler and at a higher level when compared to Java, C, and C++. It has a vibrant community, open source culture, hundreds of high-quality libraries focused on machine learning, and a huge support base from big names in the industry (e.g., Google, Dropbox, Airbnb, etc.).
|
||||
|
||||
### Fundamental Python libraries
|
||||
|
||||
Assuming you go with the widespread opinion that Python is the best language for machine learning, there are a few core Python packages and libraries you need to master.
|
||||
|
||||
#### NumPy
|
||||
|
||||
Short for [Numerical Python][8], NumPy is the fundamental package required for high-performance scientific computing and data analysis in the Python ecosystem. It's the foundation on which nearly all of the higher-level tools, such as [Pandas][9] and [scikit-learn][10], are built. [TensorFlow][11] uses NumPy arrays as the fundamental building blocks underpinning Tensor objects and graphflow for deep learning tasks. Many NumPy operations are implemented in C, making them super fast. For data science and modern machine learning tasks, this is an invaluable advantage.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/machine-learning-python_numpy-cheat-sheet.jpeg)
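As a small illustrative sketch (not part of the original article), the following lines show the kind of fast, vectorized operations NumPy is built for:

```
import numpy as np

# Generate a million random samples and compute statistics without Python loops
data = np.random.normal(loc=0.0, scale=1.0, size=1_000_000)
print(data.mean(), data.std())

# Element-wise (vectorized) arithmetic over the whole array
squared = data ** 2
print(squared[:5])
```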
|
||||
|
||||
#### Pandas
|
||||
|
||||
Pandas is the most popular library in the scientific Python ecosystem for doing general-purpose data analysis. Pandas is built upon a NumPy array, thereby preserving fast execution speed and offering many data engineering features, including:
|
||||
|
||||
* Reading/writing many different data formats
|
||||
* Selecting subsets of data
|
||||
* Calculating across rows and down columns
|
||||
* Finding and filling missing data
|
||||
* Applying operations to independent groups within the data
|
||||
* Reshaping data into different forms
|
||||
* Combining multiple datasets together
|
||||
* Advanced time-series functionality
|
||||
* Visualization through Matplotlib and Seaborn
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/pandas_cheat_sheet_github.png)
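A minimal sketch (not from the original article) of a few of these features, using a small hand-made DataFrame:

```
import pandas as pd

# Build a small DataFrame, fill a missing value, and aggregate by group
df = pd.DataFrame({
    "city": ["Boston", "Boston", "Austin", "Austin"],
    "temp": [20.1, None, 31.5, 30.2],
})
df["temp"] = df["temp"].fillna(df["temp"].mean())
print(df.groupby("city")["temp"].mean())
```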
|
||||
|
||||
#### Matplotlib and Seaborn
|
||||
|
||||
Data visualization and storytelling with data are essential skills for every data scientist because it's critical to be able to communicate insights from analyses to any audience effectively. This is an equally critical part of your machine learning pipeline, as you often have to perform an exploratory analysis of a dataset before deciding to apply a particular machine learning algorithm.
|
||||
|
||||
[Matplotlib][12] is the most widely used 2D Python visualization library. It's equipped with a dazzling array of commands and interfaces for producing publication-quality graphics from your data. This amazingly detailed and rich article will help you [get started with Matplotlib][13].
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/matplotlib_gallery_-1.png)
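As a toy sketch (not taken from the original article), a basic line plot takes only a few lines:

```
import matplotlib.pyplot as plt

# A minimal line plot; exploratory visualization often starts this simply
x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]
plt.plot(x, y, marker="o")
plt.xlabel("x")
plt.ylabel("x squared")
plt.title("A first Matplotlib figure")
plt.show()
```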
|
||||
[Seaborn][14] is another great visualization library focused on statistical plotting. It provides an API (with flexible choices for plot style and color defaults) on top of Matplotlib, defines simple high-level functions for common statistical plot types, and integrates with functionality provided by Pandas. You can start with this great tutorial on [Seaborn for beginners][15].
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/machine-learning-python_seaborn.png)
|
||||
|
||||
#### Scikit-learn
|
||||
|
||||
Scikit-learn is the most important general machine learning Python package to master. It features various [classification][16], [regression][17], and [clustering][18] algorithms, including [support vector machines][19], [random forests][20], [gradient boosting][21], [k-means][22], and [DBSCAN][23], and is designed to interoperate with the Python numerical and scientific libraries NumPy and [SciPy][24]. It provides a range of supervised and unsupervised learning algorithms via a consistent interface. The library has a level of robustness and support required for use in production systems. This means it has a deep focus on concerns such as ease of use, code quality, collaboration, documentation, and performance. Look at this [gentle introduction to machine learning vocabulary][25] used in the Scikit-learn universe or this article demonstrating [a simple machine learning pipeline][26] method using Scikit-learn.
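As a short sketch of that consistent interface (a toy example on the bundled iris dataset, not from the original article):

```
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Train a random forest on the iris dataset and report held-out accuracy
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```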
|
||||
|
||||
This article was originally published on [Heartbeat][27] under [CC BY-SA 4.0][28].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/machine-learning-python-essential-hacks-and-tricks
|
||||
|
||||
作者:[Tirthajyoti Sarkar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tirthajyoti
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.youtube.com/watch?v=tKa0zDDDaQk
|
||||
[2]: https://www.youtube.com/watch?v=Ura_ioOcpQI
|
||||
[3]: https://www.coursera.org/learn/machine-learning
|
||||
[4]: https://www.coursera.org/specializations/machine-learning
|
||||
[5]: https://towardsdatascience.com/how-to-choose-effective-moocs-for-machine-learning-and-data-science-8681700ed83f
|
||||
[6]: https://medium.freecodecamp.org/which-languages-should-you-learn-for-data-science-e806ba55a81f
|
||||
[7]: https://www.kdnuggets.com/2017/09/python-vs-r-data-science-machine-learning.html
|
||||
[8]: http://numpy.org/
|
||||
[9]: https://pandas.pydata.org/
|
||||
[10]: http://scikit-learn.org/
|
||||
[11]: https://www.tensorflow.org/
|
||||
[12]: https://matplotlib.org/
|
||||
[13]: https://realpython.com/python-matplotlib-guide/
|
||||
[14]: https://seaborn.pydata.org/
|
||||
[15]: https://www.datacamp.com/community/tutorials/seaborn-python-tutorial
|
||||
[16]: https://en.wikipedia.org/wiki/Statistical_classification
|
||||
[17]: https://en.wikipedia.org/wiki/Regression_analysis
|
||||
[18]: https://en.wikipedia.org/wiki/Cluster_analysis
|
||||
[19]: https://en.wikipedia.org/wiki/Support_vector_machine
|
||||
[20]: https://en.wikipedia.org/wiki/Random_forests
|
||||
[21]: https://en.wikipedia.org/wiki/Gradient_boosting
|
||||
[22]: https://en.wikipedia.org/wiki/K-means_clustering
|
||||
[23]: https://en.wikipedia.org/wiki/DBSCAN
|
||||
[24]: https://en.wikipedia.org/wiki/SciPy
|
||||
[25]: http://scikit-learn.org/stable/tutorial/basic/tutorial.html
|
||||
[26]: https://towardsdatascience.com/machine-learning-with-python-easy-and-robust-method-to-fit-nonlinear-data-19e8a1ddbd49
|
||||
[27]: https://heartbeat.fritz.ai/some-essential-hacks-and-tricks-for-machine-learning-with-python-5478bc6593f2
|
||||
[28]: https://creativecommons.org/licenses/by-sa/4.0/
|
@ -1,3 +1,4 @@
|
||||
zianglei translating
|
||||
How Do We Find Out The Installed Packages Came From Which Repository?
|
||||
======
|
||||
Sometimes you might want to know which repository an installed package came from. This helps you troubleshoot when you are facing a package conflict issue.
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
KRS: A new tool for gathering Kubernetes resource statistics
|
||||
======
|
||||
Zero-configuration tool simplifies gathering information, such as how many pods are running in a certain namespace.
|
||||
|
@ -0,0 +1,183 @@
|
||||
translating by Flowsnow
|
||||
|
||||
Create a containerized machine learning model
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/10/machinelearning-816x345.jpg)
|
||||
|
||||
After data scientists have created a machine learning model, it has to be deployed into production. To run it on different infrastructures, using containers and exposing the model via a REST API is a common way to deploy a machine learning model. This article demonstrates how to roll out a [TensorFlow][1] machine learning model, with a REST API delivered by [Connexion][2] in a container with [Podman][3].
|
||||
|
||||
### Preparation
|
||||
|
||||
First, install Podman with the following command:
|
||||
|
||||
```
|
||||
sudo dnf -y install podman
|
||||
```
|
||||
|
||||
Next, create a new folder for the container and switch to that directory.
|
||||
|
||||
```
|
||||
mkdir deployment_container && cd deployment_container
|
||||
```
|
||||
|
||||
### REST API for the TensorFlow model
|
||||
|
||||
The next step is to create the REST API for the machine learning model. This [GitHub repository][4] contains a pretrained model, as well as the setup already configured for getting the REST API working.
|
||||
|
||||
Clone this in the deployment_container directory with the command:
|
||||
|
||||
```
|
||||
git clone https://github.com/svenboesiger/titanic_tf_ml_model.git
|
||||
```
|
||||
|
||||
#### prediction.py & ml_model/
|
||||
|
||||
The [prediction.py][5] file allows for a TensorFlow prediction, while the weights for the 20x20x20 neural network are located in the folder [ml_model/][6].
|
||||
|
||||
#### swagger.yaml
|
||||
|
||||
The file swagger.yaml defines the API for the Connexion library using the [Swagger specification][7]. This file contains all of the information necessary to configure your server to provide input parameter validation, output response data validation, and URL endpoint definition.
|
||||
|
||||
As a bonus, Connexion also provides you with a simple but useful single-page web application that demonstrates using the API with JavaScript and updating the DOM with it.
|
||||
|
||||
```
|
||||
swagger: "2.0"
info:
  description: This is the swagger file that goes with our server code
  version: "1.0.0"
  title: Tensorflow Podman Article
consumes:
  - "application/json"
produces:
  - "application/json"

basePath: "/"

paths:
  /survival_probability:
    post:
      operationId: "prediction.post"
      tags:
        - "Prediction"
      summary: "The prediction data structure provided by the server application"
      description: "Retrieve the chance of surviving the titanic disaster"
      parameters:
        - in: body
          name: passenger
          required: true
          schema:
            $ref: '#/definitions/PredictionPost'
      responses:
        '201':
          description: 'Survival probability of an individual Titanic passenger'

definitions:
  PredictionPost:
    type: object
|
||||
```
|
||||
|
||||
#### server.py & requirements.txt
|
||||
|
||||
[server.py][8] defines an entry point to start the Connexion server.
|
||||
|
||||
```
|
||||
import connexion

app = connexion.App(__name__, specification_dir='./')

app.add_api('swagger.yaml')

if __name__ == '__main__':
    app.run(debug=True)
|
||||
```
|
||||
|
||||
[requirements.txt][9] defines the python requirements we need to run the program.
|
||||
|
||||
```
|
||||
connexion
|
||||
tensorflow
|
||||
pandas
|
||||
```
|
||||
|
||||
### Containerize!
|
||||
|
||||
For Podman to be able to build an image, create a new file called “Dockerfile” in the **deployment_container** directory created in the preparation step above:
|
||||
|
||||
```
|
||||
FROM fedora:28
|
||||
|
||||
# File Author / Maintainer
|
||||
MAINTAINER Sven Boesiger <donotspam@ujelang.com>
|
||||
|
||||
# Update the sources
|
||||
RUN dnf -y update --refresh
|
||||
|
||||
# Install additional dependencies
|
||||
RUN dnf -y install libstdc++
|
||||
|
||||
RUN dnf -y autoremove
|
||||
|
||||
# Copy the application folder inside the container
|
||||
ADD /titanic_tf_ml_model /titanic_tf_ml_model
|
||||
|
||||
# Get pip to download and install requirements:
|
||||
RUN pip3 install -r /titanic_tf_ml_model/requirements.txt
|
||||
|
||||
# Expose ports
|
||||
EXPOSE 5000
|
||||
|
||||
# Set the default directory where CMD will execute
|
||||
WORKDIR /titanic_tf_ml_model
|
||||
|
||||
# Set the default command to execute
|
||||
# when creating a new container
|
||||
CMD python3 server.py
|
||||
```
|
||||
|
||||
Next, build the container image with the command:
|
||||
|
||||
```
|
||||
podman build -t ml_deployment .
|
||||
```
|
||||
|
||||
### Run the container
|
||||
|
||||
With the container image built and ready to go, you can run it locally with the command:
|
||||
|
||||
```
|
||||
podman run -p 5000:5000 ml_deployment
|
||||
```
|
||||
|
||||
Navigate to [http://0.0.0.0:5000/ui][10] in your web browser to access the Swagger/Connexion UI and to test-drive the model:
|
||||
|
||||
![][11]
|
||||
|
||||
Of course, you can now also access the model from your application via the REST API.
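For instance, a request from the command line might look roughly like the following. The JSON fields are placeholders; the exact payload depends on what the model's prediction.post function expects:

```
# Placeholder payload; adjust the fields to match the model's expected input
curl -X POST http://0.0.0.0:5000/survival_probability \
     -H "Content-Type: application/json" \
     -d '{"pclass": 3, "sex": "male", "age": 22}'
```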
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/create-containerized-machine-learning-model/
|
||||
|
||||
作者:[Sven Bösiger][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/r00nz/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.tensorflow.org
|
||||
[2]: https://connexion.readthedocs.io/en/latest/
|
||||
[3]: https://fedoramagazine.org/running-containers-with-podman/
|
||||
[4]: https://github.com/svenboesiger/titanic_tf_ml_model
|
||||
[5]: https://github.com/svenboesiger/titanic_tf_ml_model/blob/master/prediction.py
|
||||
[6]: https://github.com/svenboesiger/titanic_tf_ml_model/tree/master/ml_model/titanic
|
||||
[7]: https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md
|
||||
[8]: https://github.com/svenboesiger/titanic_tf_ml_model/blob/master/server.py
|
||||
[9]: https://github.com/svenboesiger/titanic_tf_ml_model/blob/master/requirements.txt
|
||||
[10]: http://0.0.0.0:5000/
|
||||
[11]: https://fedoramagazine.org/wp-content/uploads/2018/10/Screenshot-from-2018-10-27-14-46-56-682x1024.png
|
@ -0,0 +1,78 @@
|
||||
How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10?
|
||||
======
|
||||
If you would like to learn about Linux, the first thing you have to do is install the Linux OS on your system.
|
||||
|
||||
It can be achieved in two ways: either go with a virtualization application like VirtualBox or VMware, or install Linux directly on your system.
|
||||
|
||||
If you prefer to move from Windows to Linux, or plan to install Linux on a spare machine, you first have to create a bootable USB stick.
|
||||
|
||||
We have written many articles about creating a [bootable USB drive on Linux][1], such as [BootISO][2], [Etcher][3] and the [dd command][4], but we never had an opportunity to write about creating a Linux bootable USB drive on Windows. Today we finally got the chance to perform this task.
|
||||
|
||||
In this article we show you how to create a bootable Ubuntu USB flash drive from Windows 10.
|
||||
|
||||
These steps also work for other Linux distributions, but you have to choose the corresponding OS from the drop-down instead of Ubuntu.
|
||||
|
||||
### Step-1: Download Ubuntu ISO
|
||||
|
||||
Visit the [Ubuntu releases][5] page and download the latest version. I would advise downloading the latest LTS version rather than a normal release.
|
||||
|
||||
Make sure you have downloaded a proper ISO by verifying its checksum with MD5 or SHA256. The output value should match the value published on the Ubuntu releases page.
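On Windows you can compute the checksum with the built-in certutil tool; the ISO file name below is only an example, so substitute the name of the file you actually downloaded:

```
rem Compare the printed hash with the value on the Ubuntu releases page
certutil -hashfile ubuntu-18.04.1-desktop-amd64.iso SHA256
```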
|
||||
|
||||
### Step-2: Download Universal USB Installer
|
||||
|
||||
There are many applications available for this, but my preferred one is [Universal USB Installer][6], which makes this task very simple. Just visit the Universal USB Installer page and download the app.
|
||||
|
||||
### Step-3: How To Create a bootable Ubuntu ISO using Universal USB Installer
|
||||
|
||||
There is nothing complicated about using this application. First connect your USB drive, then launch the downloaded Universal USB Installer. Once it’s launched, you will see an interface similar to the one below.
|
||||
![][8]
|
||||
|
||||
* **`Step-1:`** Select Ubuntu OS.
|
||||
* **`Step-2:`** Navigate to Ubuntu ISO downloaded location.
|
||||
* **`Step-3:`** By default it selects a USB drive; verify this, then check the option to format it.
|
||||
|
||||
|
||||
|
||||
![][9]
|
||||
|
||||
When you hit the `Create` button, it pops up a window with warnings. No need to worry; just hit `Yes` to proceed.
|
||||
![][10]
|
||||
|
||||
USB drive partitioning is in progress.
|
||||
![][11]
|
||||
|
||||
Wait some time for this to complete. If you would like to move this process to the background, you can do so by hitting the `Background` button.
|
||||
![][12]
|
||||
|
||||
Yes, it’s completed.
|
||||
![][13]
|
||||
|
||||
Now you are ready to perform an [Ubuntu OS installation][14]. It also offers a live mode, so you can play around with it if you want to try Ubuntu before installing it.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/create-a-bootable-live-usb-drive-from-windows-using-universal-usb-installer/
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/prakash/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/category/bootable-usb/
|
||||
[2]: https://www.2daygeek.com/bootiso-a-simple-bash-script-to-securely-create-a-bootable-usb-device-in-linux-from-iso-file/
|
||||
[3]: https://www.2daygeek.com/etcher-easy-way-to-create-a-bootable-usb-drive-sd-card-from-an-iso-image-on-linux/
|
||||
[4]: https://www.2daygeek.com/create-a-bootable-usb-drive-from-an-iso-image-using-dd-command-on-linux/
|
||||
[5]: http://releases.ubuntu.com/
|
||||
[6]: https://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/
|
||||
[7]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[8]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-1.png
|
||||
[9]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-2.png
|
||||
[10]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-3.png
|
||||
[11]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-4.png
|
||||
[12]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-5.png
|
||||
[13]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-6.png
|
||||
[14]: https://www.2daygeek.com/how-to-install-ubuntu-16-04/
|
@ -0,0 +1,104 @@
|
||||
CPod: An Open Source, Cross-platform Podcast App
|
||||
======
|
||||
Podcasts are a great way to be entertained and informed. In fact, I listen to about ten different podcasts covering technology, mysteries, history, and comedy. Of course, [Linux podcasts][1] are also on this list.
|
||||
|
||||
Today, we will take a look at a simple cross-platform application for handling your podcasts.
|
||||
|
||||
![][2]
|
||||
Recommended podcasts and podcast search
|
||||
|
||||
### The Application
|
||||
|
||||
[CPod][3] is the creation of [Zack Guard (z————-)][4]. **It is an [Electron][5] app**, which gives it the ability to run on the major operating systems (Linux, Windows, macOS).
|
||||
|
||||
Trivia: CPod was originally named Cumulonimbus.
|
||||
|
||||
The majority of the application is taken up by two large panels to display content and options. A small bar along the left side of the screen gives you access to the different parts of the application. The different sections of CPod include Home, Queue, Subscriptions, Explore and Settings.
|
||||
|
||||
![cpod settings][6]Settings
|
||||
|
||||
### Features of CPod
|
||||
|
||||
Here is a list of features that CPod has to offer:
|
||||
|
||||
* Simple, clean design
|
||||
* Available on the top computer platforms
|
||||
* Available as a Snap
|
||||
* Search iTunes’ podcast directory
|
||||
* Download episodes or play them without downloading
|
||||
* View podcast and episode information
|
||||
* Search for an individual episode of a podcast
|
||||
* Dark mode
|
||||
* Change playback speed
|
||||
* Keyboard shortcuts
|
||||
* Sync your podcast subscriptions with gpodder.net
|
||||
* Import and export subscriptions
|
||||
* Sort subscriptions based on length, date, download status, and play progress
|
||||
* Auto-fetch new episodes on application startup
|
||||
* Multiple language support
|
||||
|
||||
|
||||
|
||||
![search option in cpod application][7]Searching for ZFS episode
|
||||
|
||||
### Experiencing CPod on Linux
|
||||
|
||||
I ended up installing CPod on two systems: ArchLabs and Windows. There are two versions of CPod in the [Arch User Repository][8]. However, they are both out of date: one is version 1.14.0 and the other is 1.22.6. The most recent version of CPod is 1.27.0. Because of the version difference between ArchLabs and Windows, I had two different experiences. For this article, I will focus on 1.27.0, since that is the most current and has the most features.
|
||||
|
||||
Right out of the gate, I was able to find most of my favorite podcasts. I was able to add the ones that were not on the iTunes’ list by pasting in the URL for the RSS feed.
|
||||
|
||||
It was also very easy to find a particular episode of a podcast. For example, I was recently looking for an episode of [Late Night Linux][9] where they were talking about [ZFS][10]. I clicked on the podcast, typed “ZFS” in the search box, and found it.
|
||||
|
||||
I quickly discovered that the easiest way to play a bunch of podcast episodes was to add them to the queue. Once they are in the queue, you can either stream them or download them. You can also reorder them by dragging and dropping. As each episode played, it displayed a visualization of the sound wave, along with the episode summary.
|
||||
|
||||
### Installing CPod
|
||||
|
||||
On [GitHub][11], you can download an AppImage or Deb file for Linux, a .exe file for Windows or a .dmg file for Mac OS.
|
||||
|
||||
You can also install CPod as a [Snap][12]. All you need to do is use the following command:
|
||||
|
||||
```
|
||||
sudo snap install cpod
|
||||
```
|
||||
|
||||
Like I said earlier, the [Arch User Repository][8] version of CPod is old. I already messaged one of the packagers. If you use Arch (or an Arch-based distro), I would recommend doing the same.
|
||||
|
||||
![cpod for Linux podcasts][13]Playing one of my favorite podcasts
|
||||
|
||||
### Final Thoughts
|
||||
|
||||
Overall, I liked CPod. It was nice looking and simple to use. In fact, I like the original name (Cumulonimbus) better, but it is a bit of a mouthful.
|
||||
|
||||
I just had two problems with the application. First, I wish that ratings were available for each podcast. Second, the menus that allow you to sort episodes based on length, date, download status, and play progress don’t work when dark mode is turned on.
|
||||
|
||||
Have you ever used CPod? If not, what is your favorite podcast app? What are some of your favorite podcasts? Let us know in the comments below.
|
||||
|
||||
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][14].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/cpod-podcast-app/
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/linux-podcasts/
|
||||
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod1.1.jpg
|
||||
[3]: https://github.com/z-------------/CPod
|
||||
[4]: https://github.com/z-------------
|
||||
[5]: https://electronjs.org/
|
||||
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod2.1.png
|
||||
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod4.1.jpg
|
||||
[8]: https://aur.archlinux.org/packages/?O=0&K=cpod
|
||||
[9]: https://latenightlinux.com/
|
||||
[10]: https://itsfoss.com/what-is-zfs/
|
||||
[11]: https://github.com/z-------------/CPod/releases
|
||||
[12]: https://snapcraft.io/cumulonimbus
|
||||
[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod3.1.jpg
|
||||
[14]: http://reddit.com/r/linuxusersgroup
|
@ -0,0 +1,229 @@
|
||||
translating by dianbanjiu

Commandline quick tips: How to locate a file
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg)
|
||||
|
||||
We all have files on our computers — documents, photos, source code, you name it. So many of them. Definitely more than I can remember. And if not challenging, it might be time consuming to find the right one you’re looking for. In this post, we’ll have a look at how to make sense of your files on the command line, and especially how to quickly find the ones you’re looking for.
|
||||
|
||||
The good news is there are a few quite useful utilities in the Linux command line designed specifically to look for files on your computer. We’ll have a look at three of those: ls, tree, and find.
|
||||
|
||||
### ls
|
||||
|
||||
If you know where your files are, and you just need to list them or see information about them, ls is here for you.
|
||||
|
||||
Just running ls lists all visible files and directories in the current directory:
|
||||
|
||||
```
|
||||
$ ls
|
||||
Documents Music Pictures Videos notes.txt
|
||||
```
|
||||
|
||||
Adding the **-l** option shows basic information about the files. And together with the **-h** option you’ll see file sizes in a human-readable format:
|
||||
|
||||
```
|
||||
$ ls -lh
|
||||
total 60K
|
||||
drwxr-xr-x 2 adam adam 4.0K Nov 2 13:07 Documents
|
||||
drwxr-xr-x 2 adam adam 4.0K Nov 2 13:07 Music
|
||||
drwxr-xr-x 2 adam adam 4.0K Nov 2 13:13 Pictures
|
||||
drwxr-xr-x 2 adam adam 4.0K Nov 2 13:07 Videos
|
||||
-rw-r--r-- 1 adam adam 43K Nov 2 13:12 notes.txt
|
||||
```
|
||||
|
||||
**ls** can also search a specific place:
|
||||
|
||||
```
|
||||
$ ls Pictures/
|
||||
trees.png wallpaper.png
|
||||
```
|
||||
|
||||
Or a specific file — even with just a part of the name:
|
||||
|
||||
```
|
||||
$ ls *.txt
|
||||
notes.txt
|
||||
```
|
||||
|
||||
Something missing? Looking for a hidden file? No problem, use the **-a** option:
|
||||
|
||||
```
|
||||
$ ls -a
|
||||
. .bash_logout .bashrc Documents Pictures notes.txt
|
||||
.. .bash_profile .vimrc Music Videos
|
||||
```
|
||||
|
||||
There are many other useful options for **ls** , and you can combine them together to achieve what you need. Learn about them by running:
|
||||
|
||||
```
|
||||
$ man ls
|
||||
```
|
||||
|
||||
### tree
|
||||
|
||||
If you want to see, well, a tree structure of your files, tree is a good choice. It’s probably not installed by default, but you can install it yourself using the DNF package manager:
|
||||
|
||||
```
|
||||
$ sudo dnf install tree
|
||||
```
|
||||
|
||||
Running tree without any options or parameters shows the whole tree starting at the current directory. Just a warning, this output might be huge, because it will include all files and directories:
|
||||
|
||||
```
|
||||
$ tree
|
||||
.
|
||||
|-- Documents
|
||||
| |-- notes.txt
|
||||
| |-- secret
|
||||
| | `-- christmas-presents.txt
|
||||
| `-- work
|
||||
| |-- project-abc
|
||||
| | |-- README.md
|
||||
| | |-- do-things.sh
|
||||
| | `-- project-notes.txt
|
||||
| `-- status-reports.txt
|
||||
|-- Music
|
||||
|-- Pictures
|
||||
| |-- trees.png
|
||||
| `-- wallpaper.png
|
||||
|-- Videos
|
||||
`-- notes.txt
|
||||
```
|
||||
|
||||
If that’s too much, I can limit the depth it descends using the -L option followed by a number specifying how many levels I want to see:
|
||||
|
||||
```
|
||||
$ tree -L 2
|
||||
.
|
||||
|-- Documents
|
||||
| |-- notes.txt
|
||||
| |-- secret
|
||||
| `-- work
|
||||
|-- Music
|
||||
|-- Pictures
|
||||
| |-- trees.png
|
||||
| `-- wallpaper.png
|
||||
|-- Videos
|
||||
`-- notes.txt
|
||||
```
|
||||
|
||||
You can also display a tree of a specific path:
|
||||
|
||||
```
|
||||
$ tree Documents/work/
|
||||
Documents/work/
|
||||
|-- project-abc
|
||||
| |-- README.md
|
||||
| |-- do-things.sh
|
||||
| `-- project-notes.txt
|
||||
`-- status-reports.txt
|
||||
```
|
||||
|
||||
To browse and search a huge tree, you can use it together with less:
|
||||
|
||||
```
|
||||
$ tree | less
|
||||
```
|
||||
|
||||
Again, there are other options you can use with tree, and you can combine them together for even more power. The manual page has them all:
|
||||
|
||||
```
|
||||
$ man tree
|
||||
```
|
||||
|
||||
### find
|
||||
|
||||
And what about files that live somewhere in the unknown? Let’s find them!
|
||||
|
||||
In case you don’t have find on your system, you can install it using DNF:
|
||||
|
||||
```
|
||||
$ sudo dnf install findutils
|
||||
```
|
||||
|
||||
Running find without any options or parameters recursively lists all files and directories in the current directory.
|
||||
|
||||
```
|
||||
$ find
|
||||
.
|
||||
./Documents
|
||||
./Documents/secret
|
||||
./Documents/secret/christmas-presents.txt
|
||||
./Documents/notes.txt
|
||||
./Documents/work
|
||||
./Documents/work/status-reports.txt
|
||||
./Documents/work/project-abc
|
||||
./Documents/work/project-abc/README.md
|
||||
./Documents/work/project-abc/do-things.sh
|
||||
./Documents/work/project-abc/project-notes.txt
|
||||
./.bash_logout
|
||||
./.bashrc
|
||||
./Videos
|
||||
./.bash_profile
|
||||
./.vimrc
|
||||
./Pictures
|
||||
./Pictures/trees.png
|
||||
./Pictures/wallpaper.png
|
||||
./notes.txt
|
||||
./Music
|
||||
```
|
||||
|
||||
But the true power of find is that you can search by name:
|
||||
|
||||
```
|
||||
$ find -name do-things.sh
|
||||
./Documents/work/project-abc/do-things.sh
|
||||
```
|
||||
|
||||
Or just a part of a name — like the file extension. Let’s find all .txt files:
|
||||
|
||||
```
|
||||
$ find -name "*.txt"
|
||||
./Documents/secret/christmas-presents.txt
|
||||
./Documents/notes.txt
|
||||
./Documents/work/status-reports.txt
|
||||
./Documents/work/project-abc/project-notes.txt
|
||||
./notes.txt
|
||||
```
|
||||
|
||||
You can also look for files by size. That might be especially useful if you’re running out of space. Let’s list all files larger than 1 MB:
|
||||
|
||||
```
|
||||
$ find -size +1M
|
||||
./Pictures/trees.png
|
||||
./Pictures/wallpaper.png
|
||||
```
|
||||
|
||||
Searching a specific directory is also possible. Let’s say I want to find a file in my Documents directory, and I know it has the word “project” in its name:
|
||||
|
||||
```
|
||||
$ find Documents -name "*project*"
|
||||
Documents/work/project-abc
|
||||
Documents/work/project-abc/project-notes.txt
|
||||
```
|
||||
|
||||
Ah! That also showed the directory. One thing I can do is to limit the search query to files only:
|
||||
|
||||
```
|
||||
$ find Documents -name "*project*" -type f
|
||||
Documents/work/project-abc/project-notes.txt
|
||||
```
|
||||
|
||||
And again, find has many more options you can use; the man page will definitely help you:
|
||||
|
||||
```
|
||||
$ man find
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/commandline-quick-tips-locate-file/
|
||||
|
||||
作者:[Adam Šamalík][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/asamalik/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,171 @@
|
||||
HankChow translating
|
||||
|
||||
Introducing pydbgen: A random dataframe/database table generator
|
||||
======
|
||||
Simple tool generates large database files with multiple tables to practice SQL commands for data science.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK)
|
||||
|
||||
When you start learning data science, often your biggest worry is not the algorithms or techniques but getting access to raw data. While there are many high-quality, real-life datasets available on the web for trying out cool machine learning techniques, I've found that the same is not true when it comes to learning SQL.
|
||||
|
||||
For data science, having a basic familiarity with SQL is almost as important as knowing how to write code in Python or R. But it's far easier to find toy datasets on Kaggle than it is to access a large enough database with real data (such as name, age, credit card, social security number, address, birthday, etc.) specifically designed or curated for machine learning tasks.
|
||||
|
||||
Wouldn't it be great to have a simple tool or library to generate a large database with multiple tables filled with data of your own choice?
|
||||
|
||||
Aside from beginners in data science, even seasoned software testers may find it useful to have a simple tool where, with a few lines of code, they can generate arbitrarily large data sets with random (fake), yet meaningful entries.
|
||||
|
||||
For this reason, I am glad to introduce a lightweight Python library called **[pydbgen][1]**. In this article, I'll briefly share some information about the package, and you can learn much more [by reading the docs][2].
|
||||
|
||||
### What is pydbgen?
|
||||
|
||||
Pydbgen is a lightweight, pure-Python library to generate random useful entries (e.g., name, address, credit card number, date, time, company name, job title, license plate number, etc.) and save them in a Pandas dataframe object, as an SQLite table in a database file, or in a Microsoft Excel file.
|
||||
|
||||
### How to install pydbgen
|
||||
|
||||
The current version (1.0.5) is hosted on PyPI (the Python Package Index repository). You need to have [Faker][3] installed to make this work. To install Pydbgen, enter:
|
||||
|
||||
```
|
||||
pip install pydbgen
|
||||
```
|
||||
|
||||
It has been tested on Python 3.6 and won't work on Python 2 installations.
|
||||
|
||||
### How to use it
|
||||
|
||||
To start using Pydbgen, initiate a **pydb** object.
|
||||
|
||||
```
|
||||
import pydbgen
|
||||
from pydbgen import pydbgen
|
||||
myDB=pydbgen.pydb()
|
||||
```
|
||||
|
||||
Then you can access the various internal functions exposed by the **pydb** object. For example, to print random US cities, enter:
|
||||
|
||||
```
|
||||
myDB.city_real()
|
||||
>> 'Otterville'
|
||||
for _ in range(10):
|
||||
    print(myDB.license_plate())
|
||||
>> 8NVX937
|
||||
6YZH485
|
||||
XBY-564
|
||||
SCG-2185
|
||||
XMR-158
|
||||
6OZZ231
|
||||
CJN-850
|
||||
SBL-4272
|
||||
TPY-658
|
||||
SZL-0934
|
||||
```
|
||||
|
||||
By the way, if you enter **city** instead of **city_real** , it will return fictitious city names.
|
||||
|
||||
```
|
||||
print(myDB.gen_data_series(num=8,data_type='city'))
|
||||
>>
|
||||
New Michelle
|
||||
Robinborough
|
||||
Leebury
|
||||
Kaylatown
|
||||
Hamiltonfort
|
||||
Lake Christopher
|
||||
Hannahstad
|
||||
West Adamborough
|
||||
```
|
||||
|
||||
### Generate a Pandas dataframe with random entries
|
||||
|
||||
You can choose how many entries and which data types will be generated. Note that everything is returned as strings/text.
|
||||
|
||||
```
|
||||
testdf=myDB.gen_dataframe(5,['name','city','phone','date'])
|
||||
testdf
|
||||
```
|
||||
|
||||
The resulting dataframe looks like the following image.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/pydbgen_pandas-dataframe.png)
|
||||
|
||||
### Generate a database table
|
||||
|
||||
You can choose how many and what data types will be generated. Everything is returned in the text/VARCHAR data type for the database. You can specify the database filename and the table name.
|
||||
|
||||
```
|
||||
myDB.gen_table(db_file='Testdb.DB',table_name='People',
|
||||
|
||||
fields=['name','city','street_address','email'])
|
||||
```
|
||||
|
||||
This generates a SQLite .db file which can be used with SQLite database tools. The following image shows a database table opened in DB Browser for SQLite.
|
||||
![](https://opensource.com/sites/default/files/uploads/pydbgen_db-browser-for-sqlite.png)
|
||||
|
||||
### Generate an Excel file
|
||||
|
||||
Similar to the examples above, the following code will generate an Excel file with random data. Note that **phone_simple** is set to **False** so it can generate complex, long-form phone numbers. This can come in handy when you want to experiment with more involved data extraction codes.
|
||||
|
||||
```
|
||||
myDB.gen_excel(num=20,fields=['name','phone','time','country'],
|
||||
phone_simple=False,filename='TestExcel.xlsx')
|
||||
```
|
||||
|
||||
The resulting file looks like this image:
|
||||
![](https://opensource.com/sites/default/files/uploads/pydbgen_excel.png)
|
||||
|
||||
### Generate random email IDs for scrap use
|
||||
|
||||
A built-in method in pydbgen is **realistic_email** , which generates random email IDs from a seed name. This is helpful when you don't want to use your real email address on the web—but something close.
|
||||
|
||||
```
|
||||
for _ in range(10):
|
||||
    print(myDB.realistic_email('Tirtha Sarkar'))
|
||||
>>
|
||||
Tirtha_Sarkar@gmail.com
|
||||
Sarkar.Tirtha@outlook.com
|
||||
Tirtha_S48@verizon.com
|
||||
Tirtha_Sarkar62@yahoo.com
|
||||
Tirtha.S46@yandex.com
|
||||
Tirtha.S@att.com
|
||||
Sarkar.Tirtha60@gmail.com
|
||||
TirthaSarkar@zoho.com
|
||||
Sarkar.Tirtha@protonmail.com
|
||||
Tirtha.S@comcast.net
|
||||
```
|
||||
|
||||
### Future improvements and user contributions
|
||||
|
||||
There may be many bugs in the current version—if you notice any and your program crashes during execution (except for a crash due to your incorrect entry), please let me know. Also, if you have a cool idea to contribute to the source code, the [GitHub repo][1] is open. Some questions readily come to mind:
|
||||
|
||||
* Can we integrate some machine learning/statistical modeling with this random data generator?
|
||||
* Should a visualization function be added to the generator?
|
||||
|
||||
|
||||
|
||||
The possibilities are endless and exciting!
|
||||
|
||||
If you have any questions or ideas to share, please contact me at [tirthajyoti[AT]gmail.com][4]. If you are, like me, passionate about machine learning and data science, please [add me on LinkedIn][5] or [follow me on Twitter][6]. Also, check my [GitHub repo][7] for other fun code snippets in Python, R, or MATLAB and some machine learning resources.
|
||||
|
||||
Originally published on [Towards Data Science][8]. Licensed under [CC BY-SA 4.0][9].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/11/pydbgen-random-database-table-generator
|
||||
|
||||
作者:[Tirthajyoti Sarkar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tirthajyoti
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/tirthajyoti/pydbgen
|
||||
[2]: http://pydbgen.readthedocs.io/en/latest/
|
||||
[3]: https://faker.readthedocs.io/en/latest/index.html
|
||||
[4]: mailto:tirthajyoti@gmail.com
|
||||
[5]: https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/
|
||||
[6]: https://twitter.com/tirthajyotiS
|
||||
[7]: https://github.com/tirthajyoti?tab=repositories
|
||||
[8]: https://towardsdatascience.com/introducing-pydbgen-a-random-dataframe-database-table-generator-b5c7bdc84be5
|
||||
[9]: https://creativecommons.org/licenses/by-sa/4.0/
|
104
sources/tech/20181105 Revisiting the Unix philosophy in 2018.md
Normal file
104
sources/tech/20181105 Revisiting the Unix philosophy in 2018.md
Normal file
@ -0,0 +1,104 @@
|
||||
Revisiting the Unix philosophy in 2018
|
||||
======
|
||||
The old strategy of building small, focused applications is new again in the modern microservices environment.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
|
||||
|
||||
In 1984, Rob Pike and Brian W. Kernighan published an article called "[Program Design in the Unix Environment][1]" in the AT&T Bell Laboratories Technical Journal, in which they argued the Unix philosophy, using the example of BSD's **cat -v** implementation. In a nutshell that philosophy is: Build small, focused programs—in whatever language—that do only one thing but do this thing well, communicate via **stdin** / **stdout** , and are connected through pipes.
|
||||
|
||||
Sound familiar?
|
||||
|
||||
Yeah, I thought so. That's pretty much the [definition of microservices][2] offered by James Lewis and Martin Fowler:
|
||||
|
||||
> In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.
|
||||
|
||||
While one *nix program or one microservice may be very limited or not even very interesting on its own, it's the combination of such independently working units that reveals their true benefit and, therefore, their power.
|
||||
|
||||
### *nix vs. microservices
|
||||
|
||||
The following table compares programs (such as **cat** or **lsof** ) in a *nix environment against programs in a microservices environment.
|
||||
|
||||
| | *nix | Microservices |
| ----------------------------------- | -------------------------- | ----------------------------------- |
| Unit of execution | program using stdin/stdout | service with HTTP or gRPC API |
| Data flow | Pipes | ? |
| Configuration & parameterization | Command-line arguments, environment variables, config files | JSON/YAML docs |
| Discovery | Package manager, man, make | DNS, environment variables, OpenAPI |
|
||||
|
||||
Let's explore each line in slightly greater detail.
|
||||
|
||||
#### Unit of execution
|
||||
|
||||
The unit of execution in *nix (such as Linux) is an executable file (binary or interpreted script) that, ideally, reads input from **stdin** and writes output to **stdout**. A microservices setup deals with a service that exposes one or more communication interfaces, such as HTTP or gRPC APIs. In both cases, you'll find stateless examples (essentially a purely functional behavior) and stateful examples, where, in addition to the input, some internal (persisted) state decides what happens.
|
||||
|
||||
#### Data flow
|
||||
|
||||
|
||||
|
||||
Traditionally, *nix programs could communicate via pipes. In other words, thanks to [Doug McIlroy][3], you don't need to create temporary files to pass around and each can process virtually endless streams of data between processes. To my knowledge, there is nothing comparable to a pipe standardized in microservices, besides my little [Apache Kafka-based experiment from 2017][4].
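A classic shell one-liner illustrates the point: each small program does a single job, and pipes carry the data stream between them (this example is just an illustration, not from the original article):

```
# Count how many of each login shell is configured in /etc/passwd
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn
```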
|
||||
|
||||
#### Configuration and parameterization
|
||||
|
||||
How do you configure a program or service—either on a permanent or a by-call basis? Well, with *nix programs you essentially have three options: command-line arguments, environment variables, or full-blown config files. In microservices, you typically deal with YAML (or even worse, JSON) documents, defining the layout and configuration of a single microservice as well as dependencies and communication, storage, and runtime settings. Examples include [Kubernetes resource definitions][5], [Nomad job specifications][6], or [Docker Compose][7] files. These may or may not be parameterized; that is, either you have some templating language, such as [Helm][8] in Kubernetes, or you find yourself doing an awful lot of **sed -i** commands.
|
||||
|
||||
#### Discovery
|
||||
|
||||
How do you know what programs or services are available and how they are supposed to be used? Well, in *nix, you typically have a package manager as well as good old man; between them, they should be able to answer all the questions you might have. In a microservices setup, there's a bit more automation in finding a service. In addition to bespoke approaches like [Airbnb's SmartStack][9] or [Netflix's Eureka][10], there usually are environment variable-based or DNS-based [approaches][11] that allow you to discover services dynamically. Equally important, [OpenAPI][12] provides a de-facto standard for HTTP API documentation and design, and [gRPC][13] does the same for more tightly coupled high-performance cases. Last but not least, take developer experience (DX) into account, starting with writing good [Makefiles][14] and ending with writing your docs with (or in?) [**style**][15].
|
||||
|
||||
### Pros and cons
|
||||
|
||||
Both *nix and microservices offer a number of challenges and opportunities.
|
||||
|
||||
#### Composability
|
||||
|
||||
It's hard to design something that has a clear, sharp focus and can also play well with others. It's even harder to get it right across different versions and to introduce respective error case handling capabilities. In microservices, this could mean retry logic and timeouts—maybe it's a better option to outsource these features into a service mesh? It's hard, but if you get it right, its reusability can be enormous.
|
||||
|
||||
#### Observability
|
||||
|
||||
In a monolith (in 2018) or a big program that tries to do it all (in 1984), it's rather straightforward to find the culprit when things go south. But, in a
|
||||
|
||||
```
|
||||
yes | tr \\n x | head -c 450m | grep n
|
||||
```
|
||||
|
||||
or a request path in a microservices setup that involves, say, 20 services, how do you even start to figure out which one is behaving badly? Luckily we have standards, notably [OpenCensus][16] and [OpenTracing][17]. Observability still might be the biggest single blocker if you are looking to move to microservices.
|
||||
|
||||
#### Global state
|
||||
|
||||
While it may not be such a big issue for *nix programs, in microservices, global state remains something of a discussion. Namely, how to make sure the local (persistent) state is managed effectively and how to make the global state consistent with as little effort as possible.
|
||||
|
||||
### Wrapping up
|
||||
|
||||
In the end, the question remains: Are you using the right tool for a given task? That is, in the same way a specialized *nix program implementing a range of functions might be the better choice for certain use cases or phases, it might be that a monolith [is the best option][18] for your organization or workload. Regardless, I hope this article helps you see the many, strong parallels between the Unix philosophy and microservices—maybe we can learn something from the former to benefit the latter.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/11/revisiting-unix-philosophy-2018
|
||||
|
||||
作者:[Michael Hausenblas][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mhausenblas
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://harmful.cat-v.org/cat-v/
|
||||
[2]: https://martinfowler.com/articles/microservices.html
|
||||
[3]: https://en.wikipedia.org/wiki/Douglas_McIlroy
|
||||
[4]: https://speakerdeck.com/mhausenblas/distributed-named-pipes-and-other-inter-services-communication
|
||||
[5]: http://kubernetesbyexample.com/
|
||||
[6]: https://www.nomadproject.io/docs/job-specification/index.html
|
||||
[7]: https://docs.docker.com/compose/overview/
|
||||
[8]: https://helm.sh/
|
||||
[9]: https://github.com/airbnb/smartstack-cookbook
|
||||
[10]: https://github.com/Netflix/eureka
|
||||
[11]: https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services
|
||||
[12]: https://www.openapis.org/
|
||||
[13]: https://grpc.io/
|
||||
[14]: https://suva.sh/posts/well-documented-makefiles/
|
||||
[15]: https://www.linux.com/news/improve-your-writing-gnu-style-checkers
|
||||
[16]: https://opencensus.io/
|
||||
[17]: https://opentracing.io/
|
||||
[18]: https://robertnorthard.com/devops-days-well-architected-monoliths-are-okay/
|
308
sources/tech/20181105 Some Good Alternatives To ‘du- Command.md
Normal file
308
sources/tech/20181105 Some Good Alternatives To ‘du- Command.md
Normal file
@ -0,0 +1,308 @@
|
||||
Some Good Alternatives To ‘du’ Command
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/11/du-command-720x340.jpg)
|
||||
|
||||
As you may already know, the **“du”** command is used to compute and summarize the file and directory space usage in Unix-like systems. If you are a heavy user of the du command, you will find this guide interesting! Today, I came across five good **alternatives to du**. There could be many more, but these are the ones that I am aware of at the moment. If I come across any others in the future, I will add them to this list. Also, if you know any other alternatives, please let me know in the comment section below. I will review and add them to the list as well.
|
||||
|
||||
### 1\. Ncdu
|
||||
|
||||
**Ncdu** is a popular alternative to the du command in the Linux community. The developer of Ncdu was not satisfied with the performance of the du command, so he ended up creating his own tool. Ncdu is a simple yet fast disk usage analyzer, written in the **C** programming language with an **ncurses** interface, that finds which directories or files are taking up the most space on either local or remote systems. We have already published a detailed guide about Ncdu. Check the following link if you are interested in knowing more about it.
|
||||
|
||||
### 2\. Tin Summer
|
||||
|
||||
**Tin Summer** is used to find the build artifacts that are taking up disk space, and it is yet another good alternative to the du command. Thanks to multi-threading, Tin Summer is significantly faster than du when calculating the size of big directories. Unlike du, it reads file sizes, not disk usage. Tin Summer is a free, open source tool written in the **Rust** programming language.
|
||||
|
||||
The developer claims Tin Summer is a good alternative to the du command because:
|
||||
|
||||
* It is faster on larger directories compared to du command,
|
||||
* It displays the disk usage results in human-readable format by default,
|
||||
* It uses **regex** to exclude files/directories,
|
||||
* Provides sorted and colorized output,
|
||||
* Extensible,
|
||||
* And more.
|
||||
|
||||
|
||||
|
||||
**Installing Tin Summer**
|
||||
|
||||
To install Tin Summer, open your Terminal and run the following command:
|
||||
|
||||
```
|
||||
$ curl -LSfs https://japaric.github.io/trust/install.sh | sh -s -- --git vmchale/tin-summer
|
||||
```
|
||||
|
||||
Alternatively, you can install Tin Summer using **Cargo** package manager. Make sure you have installed **Rust** on your system as described in the following link.
|
||||
|
||||
After installing Rust, run the following command to install Tin Summer:
|
||||
|
||||
```
|
||||
$ cargo install tin-summer
|
||||
```
|
||||
|
||||
If neither of the above mentioned methods works, download the latest binary from the [**releases page**][1], then compile and install it manually.
|
||||
|
||||
**Usage**
|
||||
|
||||
To find the file sizes in the current working directory, use this command:
|
||||
|
||||
```
|
||||
$ sn f
|
||||
749 MB ./.rustup/toolchains
|
||||
749 MB ./.rustup
|
||||
147 MB ./.cargo/bin
|
||||
147 MB ./.cargo
|
||||
900 MB .
|
||||
```
|
||||
|
||||
See? It displays a nicer output in human-readable format by default. You don’t need to use any extra flags (like **-h** in the du command) to get this result.
|
||||
|
||||
To find the file sizes in a specific directory, mention the actual path like below:
|
||||
|
||||
```
|
||||
$ sn f <path-to-the-directory>
|
||||
```
|
||||
|
||||
We can also sort the output. To display the sorted list of the top 5 biggest directories, run:
|
||||
|
||||
```
|
||||
$ sn sort /home/sk/ -n5
|
||||
749 MB /home/sk/.rustup
|
||||
749 MB /home/sk/.rustup/toolchains
|
||||
147 MB /home/sk/.cargo
|
||||
147 MB /home/sk/.cargo/bin
|
||||
2.6 MB /home/sk/mcelog
|
||||
900 MB /home/sk/
|
||||
```
|
||||
|
||||
For your information, the last entry in the above output is the total size of the given directory, i.e. **/home/sk/**. So, don’t wonder why you get six results instead of five.
|
||||
|
||||
To search the current directory for directories with build artifacts:
|
||||
|
||||
```
|
||||
$ sn ar
|
||||
```
|
||||
|
||||
Tin Summer can also search for directories containing artifacts that occupy a certain size of the disk space. Say for example, to search for directories containing artifacts that occupy more than **100MB** of disk space, run:
|
||||
|
||||
```
|
||||
$ sn ar -t100M
|
||||
```
|
||||
|
||||
As already mentioned, Tin Summer is faster on larger directories, but slower on small ones. However, the developer assures us that he will find a way to fix this in future releases!
|
||||
|
||||
To get help, run:
|
||||
|
||||
```
|
||||
$ sn --help
|
||||
```
|
||||
|
||||
For more details, check the project’s GitHub repository given at the end of this guide.
|
||||
|
||||
### 3\. Dust
|
||||
|
||||
**Dust** (du+rust=dust) is a more intuitive version of the du utility. It gives us an instant overview of which directories are occupying disk space, without having to pipe the output through the **head** or **sort** commands. Like Tin Summer, it displays the size of each directory in human-readable format by default. It is free, open source and written in the **Rust** programming language.
|
||||
|
||||
**Installing Dust**
|
||||
|
||||
Since the dust utility is written in Rust, it can be installed using the “cargo” package manager as shown below.
|
||||
|
||||
```
|
||||
$ cargo install du-dust
|
||||
```
|
||||
|
||||
Alternatively, you can download the latest binary from the [**releases page**][2] and install it as shown below. As of writing this guide, the latest version was **0.3.1**.
|
||||
|
||||
```
|
||||
$ wget https://github.com/bootandy/dust/releases/download/v0.3.1/dust-v0.3.1-x86_64-unknown-linux-gnu.tar.gz
|
||||
```
|
||||
|
||||
Extract the downloaded file:
|
||||
|
||||
```
|
||||
$ tar -xvf dust-v0.3.1-x86_64-unknown-linux-gnu.tar.gz
|
||||
```
|
||||
|
||||
Finally, copy the executable to a directory in your $PATH, for example **/usr/local/bin**.
|
||||
|
||||
```
|
||||
$ sudo mv dust /usr/local/bin/
|
||||
```
|
||||
|
||||
**Usage**
|
||||
|
||||
To find the total file sizes in the current directory and its sub-directories, run:
|
||||
|
||||
```
|
||||
$ dust
|
||||
```
|
||||
|
||||
Sample output:
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2018/11/dust-1.png)
|
||||
|
||||
We can also get the full path of all directories using the **-p** flag.
|
||||
|
||||
```
|
||||
$ dust -p
|
||||
```
|
||||
|
||||
![dust 2][4]
|
||||
|
||||
To get the total size of multiple directories, just mention them separated by spaces:
|
||||
|
||||
```
|
||||
$ dust <dir1> <dir2>
|
||||
```
|
||||
|
||||
Here are some more examples.
|
||||
|
||||
Show the apparent size of the files:
|
||||
|
||||
```
|
||||
$ dust -s
|
||||
```
|
||||
|
||||
Show only a particular number of directories:
|
||||
|
||||
```
|
||||
$ dust -n 10
|
||||
```
|
||||
|
||||
Show 3 levels of sub-directories in the current directory:
|
||||
|
||||
```
|
||||
$ dust -d 3
|
||||
```
|
||||
|
||||
For help, run:
|
||||
|
||||
```
|
||||
$ dust -h
|
||||
```
|
||||
|
||||
For more details, refer to the project’s GitHub page given at the end.
|
||||
|
||||
### 4\. Diskus
|
||||
|
||||
**Diskus** is a simple and fast command line alternative to the `du -sh` command. The diskus utility computes the total file size of the current directory. It is a parallelized version of `du -sh`, or rather of `du -sh --bytes`. The developer of the diskus utility claims that it is about **nine times faster** compared to ‘du -sh’. Diskus is a minimal, fast and open source program written in the **Rust** programming language.
|
||||
|
||||
**Installing diskus**
|
||||
|
||||
The diskus utility is available in [**AUR**][5], so you can install it on Arch-based systems using any AUR helper programs, for example [**Yay**][6], as shown below.
|
||||
|
||||
```
|
||||
$ yay -S diskus
|
||||
```
|
||||
|
||||
On Ubuntu and its derivatives, download the latest diskus utility from the [**releases page**][7] and install it as shown below.
|
||||
|
||||
```
|
||||
$ wget "https://github.com/sharkdp/diskus/releases/download/v0.3.1/diskus_0.3.1_amd64.deb"
|
||||
|
||||
$ sudo dpkg -i diskus_0.3.1_amd64.deb
|
||||
```
|
||||
|
||||
Alternatively, you can install diskus using the **Cargo** package manager. Make sure you have installed **Rust 1.29** or higher on your system as described in the “Installing Tin Summer” section above.
|
||||
|
||||
Once you have Rust on your system, run the following command to install diskus:
|
||||
|
||||
```
|
||||
$ cargo install diskus
|
||||
```
|
||||
|
||||
**Usage**
|
||||
|
||||
Usually, when I want to check the total disk space used by a particular directory, I use the **-sh** flags with the **du** command as shown below.
|
||||
|
||||
```
|
||||
$ du -sh dir
|
||||
```
|
||||
|
||||
Here, the **-s** flag indicates summary.
|
||||
|
||||
Using the Diskus tool, I can find the total size of the current working directory with this command:
|
||||
|
||||
```
|
||||
$ diskus
|
||||
```
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/11/diskus-in-action.png)
|
||||
|
||||
I tested diskus to compute the total size of different directories on my Arch Linux system. The speed of computing the total size of a directory is pretty impressive! I must admit that this utility is quite a bit faster than ‘du -sh’. Please be mindful that, at the moment, it can only find the size of the current directory.
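
If you want to measure a different directory with diskus, a simple workaround (assuming a POSIX-like shell) is to change into that directory first; running it in a subshell keeps your current working directory unchanged:

```
$ (cd /var/log && diskus)
```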
|
||||
|
||||
For getting help, run:
|
||||
|
||||
```
|
||||
$ diskus -h
|
||||
```
|
||||
|
||||
For more details about Diskus, refer to the official GitHub page (link at the end).
|
||||
|
||||
|
||||
|
||||
### 5\. Duu
|
||||
|
||||
**Duu**, short for **D**irectory **U**sage **U**tility, is another tool to find the disk usage of a given directory. It is cross-platform, so you can use it on Windows, Mac OS and Linux operating systems. It is written in the **Python** programming language.
|
||||
|
||||
**Installing Duu**
|
||||
|
||||
Make sure you have installed Python 3. Python 3 is available in the default repositories of most Linux distributions, so installation shouldn’t be a problem.
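
You can quickly verify that Python 3 is available on your system like this:

```
$ python3 --version
```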
|
||||
|
||||
Once Python3 is installed, download the latest Duu version from the official [**releases page**][8].
|
||||
|
||||
```
|
||||
$ wget https://github.com/jftuga/duu/releases/download/2.20/duu.py
|
||||
```
|
||||
|
||||
**Usage**
|
||||
|
||||
To find the disk space occupied by the current working directory, simply run:
|
||||
|
||||
```
|
||||
$ python3 duu.py
|
||||
```
|
||||
|
||||
Sample output:
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/11/duu.png)
|
||||
|
||||
As you can see in the above output, the Duu utility displays a nice summary of the total number of files and directories and their total size in bytes, KB and MB. It also displays the total size of each item.
|
||||
|
||||
To display the total disk usage of a specific directory, just mention the full path like below:
|
||||
|
||||
```
|
||||
$ python3 duu.py /home/sk/Downloads/
|
||||
```
|
||||
|
||||
For more details, refer to the Duu GitHub page included at the end.
|
||||
|
||||
And, that’s all for now. Hope this was useful. You now know five alternatives to the du command. Personally, I prefer Ncdu over all the others given in this guide. Now it’s your turn. Give them a try and let us know your thoughts on these tools in the comment section below.
|
||||
|
||||
More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/some-good-alternatives-to-du-command/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/vmchale/tin-summer/releases
|
||||
[2]: https://github.com/bootandy/dust/releases
|
||||
[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[4]: http://www.ostechnix.com/wp-content/uploads/2018/11/dust-2.png
|
||||
[5]: https://aur.archlinux.org/packages/diskus-bin/
|
||||
[6]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[7]: https://github.com/sharkdp/diskus/releases
|
||||
[8]: https://github.com/jftuga/duu/releases
|
@ -0,0 +1,55 @@
|
||||
关于安全,开发人员需要知道的
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/locks_keys_bridge_paris.png?itok=Bp0dsEc9)
|
||||
|
||||
DevOps 并不意味着每个人都需要成为开发和运维两方面的专家,尤其在大型组织中,角色往往更加专业化。相反,DevOps 思想在某种程度上更多地是关注问题的分离:运维团队可以为开发人员部署一个平台(无论是在本地还是在公共云中),然后尽量不去干预,这对两个团队来说都是好消息。开发人员可以获得高效的开发环境和自助服务,运维人员可以专注于保持底层管道运行和维护平台。
|
||||
|
||||
这是一种约定。开发者期望从运维人员那里得到一个稳定和实用的平台,运维人员希望开发者能够自己处理与开发应用相关的大部分任务。
|
||||
|
||||
也就是说,DevOps 还涉及更好的沟通、合作和透明度。如果它不只是在开发和运维之间竖起一种新的壁垒,它的效果会更好。运维人员需要敏锐地了解开发者想要和需要什么样的工具,以及他们为了写出更好的应用程序而需要从监控和日志中获得怎样的可见性。反过来,开发人员需要了解如何才能更有效地使用底层基础设施,以及夜里(字面意义上的夜里)要靠什么来维持系统的正常运转。
|
||||
|
||||
同样的原则也适用于更广泛的 DevSecOps,这个术语明确地提醒我们,安全需要嵌入到整个 DevOps 管道中,从获取内容到编写应用程序、构建应用程序、测试应用程序以及在生产环境中运行它们。开发人员(和运维人员)不需要突然成为安全专家,除了他们的其它角色。但是,他们通常可以从对安全最佳实践(这可能不同于他们已经习惯的)的更高认识中获益,并从将安全视为一些不幸障碍的心态中转变出来。
|
||||
|
||||
以下是一些观察结果。
|
||||
|
||||
开放式 Web 应用程序安全项目(Open Web Application Security Project)([OWASP][1])的 [Top 10 列表][2]提供了一个窗口,可以了解 Web 应用程序中的主要漏洞。列表中的许多条目对 Web 程序员来说都很熟悉:跨站脚本(XSS)和注入漏洞是最常见的。但令人震惊的是,2007 年列表中的许多漏洞仍然出现在 2017 年的列表中([PDF][3])。无论问题出在培训上还是工具上,同样的编码漏洞不断重复出现,这本身就说明有些地方出了问题。
|
||||
|
||||
新平台技术加剧了这种情况。例如,虽然容器不一定要求应用程序以不同的方式编写,但是它们与新模式(例如[微服务][4])相吻合,并且可以放大某些对于安全实践的影响。例如,我的同事 [Dan Walsh][5]([@rhatdan][6])写道:“计算机领域最大的误解是需要 root 权限来运行应用程序,问题是并不是所有开发者都认为他们需要 root,而是他们将这种假设构建到他们建设的服务中,即服务无法在非 root 情况下运行,而这降低了安全性。”
|
||||
|
||||
默认使用 root 访问是一个好的实践吗?并不是。但对于那些已经通过其它手段完全隔离开的应用程序和系统来说,这种做法也许(勉强)还说得过去。而如今一切都互相连接、没有真正的边界、工作负载是多租户的、用户拥有许多不同级别的访问权限,更不用说威胁环境也变得更加危险了,留给这种捷径的回旋余地就小得多了。
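
举一个简单的示意(镜像名只是占位符,并非文中提到的任何具体项目):许多容器运行时都允许在启动时显式指定一个非 root 用户,避免服务默认以 root 运行:

```
# 以非 root 用户(UID/GID 1001)运行容器,docker 也支持同样的 --user 选项
$ podman run --user 1001:1001 registry.example.com/myapp:latest
```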
|
||||
|
||||
[自动化][7]应该是 DevOps 不可分割的一部分。自动化需要覆盖整个过程中,包括安全和合规性测试。代码是从哪里来的?是否涉及第三方技术、产品或容器映像?是否有已知的安全勘误表?是否有已知的常见代码缺陷?秘密和个人身份信息是否被隔离?如何进行身份认证?谁被授权部署服务和应用程序?
|
||||
|
||||
你不是在写你自己的加密代码吧?
|
||||
|
||||
尽可能地自动化渗透测试。我提到过自动化没?它是使安全性持续的一个重要部分,而不是偶尔做一次的检查清单。
|
||||
|
||||
这听起来很难吗?可能有点。至少它和以前不一样。但是,正如一名 [DevOpsDays OpenSpaces][8] 伦敦论坛的参与者对我说的:“这只是技术测试,它既不神奇也不神秘。”他接着说,把参与安全工作当作更广泛地了解整个软件生命周期的一种方式(这本身就是一种很有用的技能),其实并不难。他还建议参加事件响应演练或[夺旗(CTF)演练][9]。你可能会发现它们很有趣。
|
||||
|
||||
本文基于作者将于 5 月 8 日至 10 日在旧金山举行的 [Red Hat Summit 2018][11] 上发表的演讲。_[5 月 7 日前注册][11]可以节省 500 美元的注册费。在支付页面使用折扣代码 **OPEN18** 即可享受折扣。_
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/what-developers-need-know-about-security
|
||||
|
||||
作者:[Gordon Haff][a]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/ghaff
|
||||
[1]:https://www.owasp.org/index.php/Main_Page
|
||||
[2]:https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
|
||||
[3]:https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf
|
||||
[4]:https://opensource.com/tags/microservices
|
||||
[5]:https://opensource.com/users/rhatdan
|
||||
[6]:https://twitter.com/rhatdan
|
||||
[7]:https://opensource.com/tags/automation
|
||||
[8]:https://www.devopsdays.org/open-space-format/
|
||||
[9]:https://dev.to/_theycallmetoni/capture-the-flag-its-a-game-for-hacki-mean-security-professionals
|
||||
[10]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154677
|
||||
[11]:https://www.redhat.com/en/summit/2018
|
@ -1,240 +0,0 @@
|
||||
# Caffeinated 6.828:实验 2:内存管理
|
||||
|
||||
### 简介
|
||||
|
||||
在本实验中,你将为你的操作系统写内存管理方面的代码。内存管理有两部分组成。
|
||||
|
||||
第一部分是内核的物理内存分配器,内核通过它来分配内存,以及在不需要时释放所分配的内存。分配器以页为单位分配内存,每个页的大小为 4096 字节。你的任务是去维护那个数据结构,它负责记录物理页的分配和释放,以及每个分配的页有多少进程共享它。本实验中你将要写出分配和释放内存页的全套代码。
|
||||
|
||||
第二个部分是虚拟内存的管理,它负责由内核和用户软件使用的虚拟内存地址到物理内存地址之间的映射。当使用内存时,x86 架构的硬件是由内存管理单元(MMU)负责执行映射操作来查阅一组页表。接下来你将要修改 JOS,以根据我们提供的特定指令去设置 MMU 的页表。
|
||||
|
||||
### 预备知识
|
||||
|
||||
在本实验及后面的实验中,你将逐步构建你的内核。我们将会为你提供一些附加的资源。使用 Git 去获取这些资源、提交自实验 1 以来的改变(如有需要的话)、获取课程仓库的最新版本、以及在我们的实验 2 (origin/lab2)的基础上创建一个称为 lab2 的本地分支:
|
||||
|
||||
```c
|
||||
athena% cd ~/6.828/lab
|
||||
athena% add git
|
||||
athena% git pull
|
||||
Already up-to-date.
|
||||
athena% git checkout -b lab2 origin/lab2
|
||||
Branch lab2 set up to track remote branch refs/remotes/origin/lab2.
|
||||
Switched to a new branch "lab2"
|
||||
athena%
|
||||
```
|
||||
|
||||
上面的 `git checkout -b` 命令其实做了两件事情:首先它创建了一个本地分支 `lab2`,它跟踪给我们提供课程内容的远程分支 `origin/lab2` ,第二件事情是,它更改的你的 `lab` 目录的内容反映到 `lab2` 分支上存储的文件中。Git 允许你在已存在的两个分支之间使用 `git checkout *branch-name*` 命令去切换,但是在你切换到另一个分支之前,你应该去提交那个分支上你做的任何出色的变更。
|
||||
|
||||
现在,你需要将你在 lab1 分支中的改变合并到 lab2 分支中,命令如下:
|
||||
|
||||
```c
|
||||
athena% git merge lab1
|
||||
Merge made by recursive.
|
||||
kern/kdebug.c | 11 +++++++++--
|
||||
kern/monitor.c | 19 +++++++++++++++++++
|
||||
lib/printfmt.c | 7 +++----
|
||||
3 files changed, 31 insertions(+), 6 deletions(-)
|
||||
athena%
|
||||
```
|
||||
|
||||
在一些案例中,Git 或许并不能找到如何将你的更改与新的实验任务合并(例如,你在第二个实验任务中变更了一些代码的修改)。在那种情况下,你使用 git 命令去合并,它会告诉你哪个文件发生了冲突,你必须首先去解决冲突(通过编辑冲突的文件),然后使用 `git commit -a` 去重新提交文件。
|
||||
|
||||
实验 2 包含如下的新源代码,后面你将遍历它们:
|
||||
|
||||
- inc/memlayout.h
|
||||
- kern/pmap.c
|
||||
- kern/pmap.h
|
||||
- kern/kclock.h
|
||||
- kern/kclock.c
|
||||
|
||||
`memlayout.h` 描述虚拟地址空间的布局,这个虚拟地址空间是通过修改 `pmap.c`、`memlayout.h` 和 `pmap.h` 所定义的 *PageInfo* 数据结构来实现的,这个数据结构用于跟踪物理内存页面是否被释放。`kclock.c` 和 `kclock.h` 维护 PC 基于电池的时钟和 CMOS RAM 硬件,在 BIOS 中记录了 PC 上安装的物理内存数量,以及其它的一些信息。在 `pmap.c` 中的代码需要去读取这个设备硬件信息,以算出在这个设备上安装了多少物理内存,这些只是由你来完成的一部分代码:你不需要知道 CMOS 硬件工作原理的细节。
|
||||
|
||||
特别需要注意的是 `memlayout.h` 和 `pmap.h`,因为本实验需要你去使用和理解的大部分内容都包含在这两个文件中。你或许还需要去复习 `inc/mmu.h` 这个文件,因为它也包含了本实验中用到的许多定义。
|
||||
|
||||
开始本实验之前,记得去添加 `exokernel` 以获取 QEMU 的 6.828 版本。
|
||||
|
||||
### 实验过程
|
||||
|
||||
在你准备进行实验和写代码之前,先添加你的 `answers-lab2.txt` 文件到 Git 仓库,提交你的改变然后去运行 `make handin`。
|
||||
|
||||
```c
|
||||
athena% git add answers-lab2.txt
|
||||
athena% git commit -am "my answer to lab2"
|
||||
[lab2 a823de9] my answer to lab2 4 files changed, 87 insertions(+), 10 deletions(-)
|
||||
athena% make handin
|
||||
```
|
||||
|
||||
正如前面所说的,我们将使用一个评级程序来分级你的解决方案,你可以在 `lab` 目录下运行 `make grade`,使用评级程序来测试你的内核。为了完成你的实验,你可以改变任何你需要的内核源代码和头文件。但毫无疑问的是,你不能以任何形式去改变或破坏评级代码。
|
||||
|
||||
### 第 1 部分:物理页面管理
|
||||
|
||||
操作系统必须跟踪物理内存页是否使用的状态。JOS 以页为最小粒度来管理 PC 的物理内存,以便于它使用 MMU 去映射和保护每个已分配的内存片段。
|
||||
|
||||
现在,你将要写内存的物理页分配器的代码。它使用链接到 `PageInfo` 数据结构的一组列表来保持对物理页的状态跟踪,每个列表都对应到一个物理内存页。在你能够写出剩下的虚拟内存实现之前,你需要先写出物理内存页面分配器,因为你的页表管理代码将需要去分配物理内存来存储页表。
|
||||
|
||||
> 练习 1
|
||||
>
|
||||
> 在文件 `kern/pmap.c` 中,你需要去实现以下函数的代码(或许要按给定的顺序来实现)。
|
||||
>
|
||||
> boot_alloc()
|
||||
>
|
||||
> mem_init()(只要能够调用 check_page_free_list() 即可)
|
||||
>
|
||||
> page_init()
|
||||
>
|
||||
> page_alloc()
|
||||
>
|
||||
> page_free()
|
||||
>
|
||||
> `check_page_free_list()` 和 `check_page_alloc()` 可以测试你的物理内存页分配器。你将需要引导 JOS 然后去看一下 `check_page_alloc()` 是否报告成功即可。如果没有报告成功,修复你的代码直到成功为止。你可以添加你自己的 `assert()` 以帮助你去验证是否符合你的预期。
|
||||
|
||||
本实验以及所有的 6.828 实验中,将要求你做一些检测工作,以便于你搞清楚它们是否按你的预期来工作。这个任务不需要详细描述你添加到 JOS 中的代码的细节。查找 JOS 源代码中你需要去修改的那部分的注释;这些注释中经常包含有技术规范和提示信息。你也可能需要去查阅 JOS、和 Intel 的技术手册、以及你的 6.004 或 6.033 课程笔记的相关部分。
|
||||
|
||||
### 第 2 部分:虚拟内存
|
||||
|
||||
在你开始动手之前,需要先熟悉 x86 内存管理架构的保护模式:即分段和页面转换。
|
||||
|
||||
> 练习 2
|
||||
>
|
||||
> 如果你对 x86 的保护模式还不熟悉,可以查看 Intel 80386 参考手册的第 5 章和第 6 章。阅读这些章节(5.2 和 6.4)中关于页面转换和基于页面的保护。我们建议你也去了解关于段的章节;在虚拟内存和保护模式中,JOS 使用了分页、段转换、以及在 x86 上不能禁用的基于段的保护,因此你需要去理解这些基础知识。
|
||||
|
||||
### 虚拟地址、线性地址和物理地址
|
||||
|
||||
在 x86 的专用术语中,一个虚拟地址是由一个段选择器和在段中的偏移量组成。一个线性地址是在页面转换之前、段转换之后得到的一个地址。一个物理地址是段和页面转换之后得到的最终地址,它最终将进入你的物理内存中的硬件总线。
|
||||
|
||||
![屏幕快照 2018-09-04 11.22.20](https://ws1.sinaimg.cn/large/0069RVTdly1fuxgrc398jj30gx04bgm1.jpg)
|
||||
|
||||
一个 C 指针是虚拟地址的“偏移量”部分。在 `boot/boot.S` 中我们安装了一个全局描述符表(GDT),它通过设置所有的段基址为 0,并且限制为 `0xffffffff` 来有效地禁用段转换。因此“段选择器”并不会生效,而线性地址总是等于虚拟地址的偏移量。在实验 3 中,为了设置权限级别,我们将与段有更多的交互。但是对于内存转换,我们将在整个 JOS 实验中忽略段,只专注于页转换。
|
||||
|
||||
回顾实验 1 中的第 3 部分,我们安装了一个简单的页表,这样内核就可以在 0xf0100000 链接的地址上运行,尽管它实际上是加载在 0x00100000 处的 ROM BIOS 的物理内存上。这个页表仅映射了 4MB 的内存。在实验中,你将要为 JOS 去设置虚拟内存布局,我们将从虚拟地址 0xf0000000 处开始扩展它,首先将物理内存扩展到 256MB,并映射许多其它区域的虚拟内存。
|
||||
|
||||
> 练习 3
|
||||
>
|
||||
> 虽然 GDB 能够通过虚拟地址访问 QEMU 的内存,它经常用于在配置虚拟内存期间检查物理内存。在实验工具指南中复习 QEMU 的监视器命令,尤其是 `xp` 命令,它可以让你去检查物理内存。访问 QEMU 监视器,可以在终端中按 `Ctrl-a c`(相同的绑定返回到串行控制台)。
|
||||
>
|
||||
> 使用 QEMU 监视器的 `xp` 命令和 GDB 的 `x` 命令去检查相应的物理内存和虚拟内存,以确保你看到的是相同的数据。
|
||||
>
|
||||
> 我们的打过补丁的 QEMU 版本提供一个非常有用的 `info pg` 命令:它可以展示当前页表的一个简单描述,包括所有已映射的内存范围、权限、以及标志。Stock QEMU 也提供一个 `info mem` 命令用于去展示一个概要信息,这个信息包含了已映射的虚拟内存范围和使用了什么权限。
|
||||
|
||||
在 CPU 上运行的代码,一旦处于保护模式(这是在 boot/boot.S 中所做的第一件事情)中,是没有办法去直接使用一个线性地址或物理地址的。所有的内存引用都被解释为虚拟地址,然后由 MMU 来转换,这意味着在 C 语言中的指针都是虚拟地址。
|
||||
|
||||
例如在物理内存分配器中,JOS 内存经常需要在不反向引用的情况下,去维护作为地址的一个很难懂的值或一个整数。有时它们是虚拟地址,而有时是物理地址。为便于在代码中证明,JOS 源文件中将它们区分为两种:类型 `uintptr_t` 表示一个难懂的虚拟地址,而类型 `physaddr_trepresents` 表示物理地址。这些类型其实不过是 32 位整数(uint32_t)的同义词,因此编译器不会阻止你将一个类型的数据指派为另一个类型!因为它们都是整数(而不是指针)类型,如果你想去反向引用它们,编译器将报错。
|
||||
|
||||
JOS 内核能够通过将它转换为指针类型的方式来反向引用一个 `uintptr_t` 类型。相反,内核不能反向引用一个物理地址,因为这是由 MMU 来转换所有的内存引用。如果你转换一个 `physaddr_t` 为一个指针类型,并反向引用它,你或许能够加载和存储最终结果地址(硬件将它解释为一个虚拟地址),但你并不会取得你想要的内存位置。
|
||||
|
||||
总结如下:
|
||||
|
||||
| C type | Address type |
|
||||
| ------------ | ------------ |
|
||||
| `T*` | Virtual |
|
||||
| `uintptr_t` | Virtual |
|
||||
| `physaddr_t` | Physical |
|
||||
|
||||
>问题:
|
||||
>
|
||||
>假设下面的 JOS 内核代码是正确的,那么变量 `x` 应该是什么类型?uintptr_t 还是 physaddr_t ?
|
||||
>
|
||||
>![屏幕快照 2018-09-04 11.48.54](https://ws3.sinaimg.cn/large/0069RVTdly1fuxgrbkqd3j30m302bmxc.jpg)
|
||||
>
|
||||
|
||||
JOS 内核有时需要去读取或修改它知道物理地址的内存。例如,添加一个映射到页表,可以要求分配物理内存去存储一个页目录,然后去初始化它们。然而,内核也和其它的软件一样,并不能跳过虚拟地址转换,内核并不能直接加载和存储物理地址。一个原因是 JOS 将重映射从虚拟地址 0xf0000000 处物理地址 0 开始的所有的物理地址,以帮助内核去读取和写入它知道物理地址的内存。为转换一个物理地址为一个内核能够真正进行读写操作的虚拟地址,内核必须添加 0xf0000000 到物理地址以找到在重映射区域中相应的虚拟地址。你应该使用 KADDR(pa) 去做那个添加操作。
|
||||
|
||||
JOS 内核有时也需要能够通过给定的内核数据结构中存储的虚拟地址找到内存中的物理地址。内核全局变量和通过 `boot_alloc()` 分配的内存是加载到内核的这些区域中,从 0xf0000000 处开始,到全部物理内存映射的区域。因此,在这些区域中转变一个虚拟地址为物理地址时,内核能够只是简单地减去 0xf0000000 即可得到物理地址。你应该使用 PADDR(va) 去做那个减法操作。
|
||||
|
||||
### 引用计数
|
||||
|
||||
在以后的实验中,你将会经常遇到多个虚拟地址(或多个环境下的地址空间中)同时映射到相同的物理页面上。你将在 PageInfo 数据结构中用 pp_ref 字段来提供一个引用到每个物理页面的计数器。如果一个物理页面的这个计数器为 0,表示这个页面已经被释放,因为它不再被使用了。一般情况下,这个计数器应该等于相应的物理页面出现在所有页表下面的 UTOP 的次数(UTOP 上面的映射大都是在引导时由内核设置的,并且它从不会被释放,因此不需要引用计数器)。我们也使用它去跟踪到页目录的指针数量,反过来就是,页目录到页表的数量。
|
||||
|
||||
使用 `page_alloc` 时要注意。它返回的页面引用计数总是为 0,因此,一旦对返回页做了一些操作(比如将它插入到页表),`pp_ref` 就应该增加。有时这需要通过其它函数(比如,`page_instert`)来处理,而有时这个函数是直接调用 `page_alloc` 来做的。
|
||||
|
||||
### 页表管理
|
||||
|
||||
现在,你将写一套管理页表的代码:去插入和删除线性地址到物理地址的映射表,并且在需要的时候去创建页表。
|
||||
|
||||
> 练习 4
|
||||
>
|
||||
> 在文件 `kern/pmap.c` 中,你必须去实现下列函数的代码。
|
||||
>
|
||||
> pgdir_walk()
|
||||
>
|
||||
> boot_map_region()
|
||||
>
|
||||
> page_lookup()
|
||||
>
|
||||
> page_remove()
|
||||
>
|
||||
> page_insert()
|
||||
>
|
||||
> `check_page()`,调用 `mem_init()`,测试你的页表管理动作。在进行下一步流程之前你应该确保它成功运行。
|
||||
|
||||
### 第 3 部分:内核地址空间
|
||||
|
||||
JOS 分割处理器的 32 位线性地址空间为两部分:用户环境(进程),我们将在实验 3 中开始加载和运行,它将控制其上的布局和低位部分的内容,而内核总是维护对高位部分的完全控制。线性地址的定义是在 `inc/memlayout.h` 中通过符号 ULIM 来划分的,它为内核保留了大约 256MB 的虚拟地址空间。这就解释了为什么我们要在实验 1 中给内核这样的一个高位链接地址的原因:如是不这样做的话,内核的虚拟地址空间将没有足够的空间去同时映射到下面的用户空间中。
|
||||
|
||||
你可以在 `inc/memlayout.h` 中找到一个图表,它有助于你去理解 JOS 内存布局,这在本实验和后面的实验中都会用到。
|
||||
|
||||
### 权限和故障隔离
|
||||
|
||||
由于内核和用户的内存都存在于它们各自环境的地址空间中,因此我们需要在 x86 的页表中使用权限位去允许用户代码只能访问用户所属地址空间的部分。否则,用户代码中的 bug 可能会覆写内核数据,导致系统崩溃或者发生各种莫名其妙的的故障;用户代码也可能会偷窥其它环境的私有数据。
|
||||
|
||||
对于 ULIM 以上部分的内存,用户环境没有任何权限,只有内核才可以读取和写入这部分内存。对于 [UTOP,ULIM] 地址范围,内核和用户都有相同的权限:它们可以读取但不能写入这个地址范围。这个地址范围是用于向用户环境暴露某些只读的内核数据结构。最后,低于 UTOP 的地址空间是为用户环境所使用的;用户环境将为访问这些内核设置权限。
|
||||
|
||||
### 初始化内核地址空间
|
||||
|
||||
现在,你将去配置 UTOP 以上的地址空间:内核部分的地址空间。`inc/memlayout.h` 中展示了你将要使用的布局。我将使用函数去写相关的线性地址到物理地址的映射配置。
|
||||
|
||||
> 练习 5
|
||||
>
|
||||
> 完成调用 `check_page()` 之后在 `mem_init()` 中缺失的代码。
|
||||
|
||||
现在,你的代码应该通过了 `check_kern_pgdir()` 和 `check_page_installed_pgdir()` 的检查。
|
||||
|
||||
> 问题:
|
||||
>
|
||||
> 1、在这个时刻,页目录中的条目(行)是什么?它们映射的址址是什么?以及它们映射到哪里了?换句话说就是,尽可能多地填写这个表:
|
||||
>
|
||||
> EntryBase Virtual AddressPoints to (logically):
|
||||
>
|
||||
> 1023 ? Page table for top 4MB of phys memory
|
||||
>
|
||||
> 1022 ? ?
|
||||
>
|
||||
> . ? ?
|
||||
>
|
||||
> . ? ?
|
||||
>
|
||||
> . ? ?
|
||||
>
|
||||
> 2 0x00800000 ?
|
||||
>
|
||||
> 1 0x00400000 ?
|
||||
>
|
||||
> 0 0x00000000 [see next question]
|
||||
>
|
||||
> 2、(来自课程 3) 我们将内核和用户环境放在相同的地址空间中。为什么用户程序不能去读取和写入内核的内存?有什么特殊机制保护内核内存?
|
||||
>
|
||||
> 3、这个操作系统能够支持的最大的物理内存数量是多少?为什么?
|
||||
>
|
||||
> 4、我们真实地拥有最大数量的物理内存吗?管理内存的开销有多少?这个开销可以减少吗?
|
||||
>
|
||||
> 5、复习在 `kern/entry.S` 和 `kern/entrypgdir.c` 中的页表设置。一旦我们打开分页,EIP 中是一个很小的数字(稍大于 1MB)。在什么情况下,我们转而去运行在 KERNBASE 之上的一个 EIP?当我们启用分页并开始在 KERNBASE 之上运行一个 EIP 时,是什么让我们能够持续运行一个很低的 EIP?为什么这种转变是必需的?
|
||||
|
||||
### 地址空间布局的其它选择
|
||||
|
||||
在 JOS 中我们使用的地址空间布局并不是我们唯一的选择。一个操作系统可以在低位的线性地址上映射内核,而为用户进程保留线性地址的高位部分。然而,x86 内核一般并不采用这种方法,而 x86 向后兼容模式是不这样做的其中一个原因,这种模式被称为“虚拟 8086 模式”,处理器使用线性地址空间的最下面部分是“不可改变的”,所以,如果内核被映射到这里是根本无法使用的。
|
||||
|
||||
虽然很困难,但是设计这样的内核是有这种可能的,即:不为处理器自身保留任何固定的线性地址或虚拟地址空间,而有效地允许用户级进程不受限制地使用整个 4GB 的虚拟地址空间 —— 同时还要在这些进程之间充分保护内核以及不同的进程之间相互受保护!
|
||||
|
||||
将内核的内存分配系统进行概括类推,以支持二次幂为单位的各种页大小,从 4KB 到一些你选择的合理的最大值。你务必要有一些方法,将较大的分配单位按需分割为一些较小的单位,以及在需要时,将多个较小的分配单位合并为一个较大的分配单位。想一想在这样的一个系统中可能会出现些什么样的问题。
|
||||
|
||||
这个实验做完了。确保你通过了所有的等级测试,并记得在 `answers-lab2.txt` 中写下你对上述问题的答案。提交你的改变(包括添加 `answers-lab2.txt` 文件),并在 `lab` 目录下输入 `make handin` 去提交你的实验。
|
||||
|
||||
------
|
||||
|
||||
via: <https://sipb.mit.edu/iap/6.828/lab/lab2/>
|
||||
|
||||
作者:[Mit][<https://sipb.mit.edu/iap/6.828/lab/lab2/>]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/%E6%A0%A1%E5%AF%B9%E8%80%85ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,206 @@
|
||||
使用 Redis 和 Python 构建一个共享单车应用程序
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/google-bikes-yearbook.png?itok=BnmInwea)
|
||||
|
||||
我经常出差,但我不是一个汽车爱好者,所以当我有空闲时,我更喜欢在城市中散步或者骑单车。我去过的许多城市都有共享单车系统,你可以租一辆单车骑几个小时。
|
||||
|
||||
为了解决这个问题并且展示开源的强大还有为 Web 应用程序添加位置感知的功能,我组合了可用的公开的共享单车数据,[Python][1] 编程语言以及开源的 [Redis][2] 内存数据结构服务,用来索引和查询地理空间数据。
|
||||
|
||||
由此诞生的共享单车应用程序包含来自很多不同的共享系统的数据,包括纽约市的 [Citi Bike][3] 共享单车系统(LCTT 译注:Citi Bike 是纽约市的一个私营公共单车系统。在2013年5月27日正式营运,是美国最大的公共单车系统。Citi Bike 的名称有两层意思。Citi 是计划赞助商花旗银行(CitiBank)的名字。同时,Citi 和英文中“城市(city)”一词的读音相同)。它利用了花旗单车系统提供的 <ruby>通用共享单车数据流<rt>General Bikeshare Feed</rt></ruby>,并利用其数据演示了一些使用 Redis 地理空间数据索引的功能。 花旗单车数据可以在 [花旗单车数据许可协议][4] 下提供。
|
||||
|
||||
### 通用共享单车数据流规范
|
||||
|
||||
通用共享单车数据流规范(GBFS)是由 [北美共享单车协会][6] 开发的 [开放数据规范][5],旨在使地图程序和运输程序更容易的将共享单车系统添加到对应平台中。 目前世界上有 60 多个不同的共享系统使用该规范。
|
||||
|
||||
Feed 流由几个简单的 [JSON][7] 数据文件组成,其中包含系统状态的信息。Feed 流以一个顶级 JSON 文件开头,其中引用了各个子 Feed 流数据的 URL:
|
||||
|
||||
```
|
||||
{
|
||||
"data": {
|
||||
"en": {
|
||||
"feeds": [
|
||||
{
|
||||
"name": "system_information",
|
||||
"url": "https://gbfs.citibikenyc.com/gbfs/en/system_information.json"
|
||||
},
|
||||
{
|
||||
"name": "station_information",
|
||||
"url": "https://gbfs.citibikenyc.com/gbfs/en/station_information.json"
|
||||
},
|
||||
. . .
|
||||
]
|
||||
}
|
||||
},
|
||||
"last_updated": 1506370010,
|
||||
"ttl": 10
|
||||
}
|
||||
```
|
||||
|
||||
第一步是使用 `system_information` 和 `station_information` 的数据将共享单车站的信息加载到 Redis 中。
|
||||
|
||||
`system_information` 提供系统 ID,系统 ID 是一个简短编码,可用作 Redis 键的命名空间前缀。GBFS 规范没有指定系统 ID 的格式,但需要确保它是全局唯一的。许多共享单车数据流使用诸如 coast_bike_share、boise_greenbike 或者 topeka_metro_bikes 这样的短名称作为系统 ID;其它的则使用常见的地理缩写,例如 NYC 或者 BA,还有的使用通用唯一标识符(UUID)。共享单车应用程序使用该标识符作为前缀,为指定系统构造唯一的键。
|
||||
|
||||
`station_information` 数据流提供组成整个系统的共享单车站的静态信息。车站由具有多个字段的 JSON 对象表示。车站对象中有几个必填字段,用于提供物理单车站的 ID、名称和位置;还有几个可选字段提供有用的信息,例如最近的十字路口、可接受的付款方式。这是共享单车应用程序这一部分的主要信息来源。
|
||||
|
||||
### 建立数据库
|
||||
|
||||
我编写了一个示例应用程序 [load_station_data.py][8],它模仿后端进程中从外部源加载数据时会发生什么。
|
||||
|
||||
### 查找共享单车站
|
||||
|
||||
从 [GitHub 上 GBFS 仓库][5]中的 [systems.csv][9] 文件开始加载共享单车数据。
|
||||
|
||||
仓库中的 [systems.csv][9] 文件为已注册的共享单车系统提供可用的 GBFS 源发现的 URL。 发现的URL是处理共享单车信息的起点。
|
||||
|
||||
`load_station_data` 程序获取系统文件中找到的每个 URL,并用它来查找两个子数据流的 URL:系统信息和车站信息。系统信息提供了一条关键信息:系统的唯一 ID。(注意:系统 ID 也在 systems.csv 文件中提供,但文件中的某些标识符与数据流中的标识符不匹配,因此我总是从数据流中获取标识符。)系统的其它详细信息,比如共享单车系统的 URL、电话号码和电子邮件,可以在程序的后续版本中添加,因此这些数据使用 `${system_id}:system_info` 这个键存储在 Redis 中。
|
||||
|
||||
### 载入车站数据
|
||||
|
||||
车站信息提供系统中每个车站的数据,包括车站的位置。`load_station_data` 程序遍历车站数据流中的每个车站,并使用 `${system_id}:station:${station_id}` 形式的键将每个车站的数据存储到 Redis 中,同时使用 `GEOADD` 命令将每个车站的位置添加到共享单车的地理空间索引中。
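
下面用 redis-cli 粗略示意一下每个车站在加载时大致会发生的两类写入(车站 ID、名称和坐标都是假设的,实际的加载程序是用 Python 完成类似操作的):

```
127.0.0.1:6379> HSET NYC:station:9999 name "Example St & Demo Ave"
(integer) 1
127.0.0.1:6379> GEOADD NYC:stations:location -73.99 40.75 NYC:station:9999
(integer) 1
```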
|
||||
|
||||
### 更新数据
|
||||
|
||||
在后续运行中,我不希望代码从 Redis 中删除所有 Feed 数据并将其重新加载到空的 Redis 数据库中,因此我仔细考虑了如何处理数据的原地更新。
|
||||
|
||||
代码首先把正在处理的系统当前存储在 Redis 中的所有车站的键加载到内存中的一个集合里。每加载一个车站的信息,就把该车站的键从这个内存集合中移除。加载完所有车站数据后,剩下的集合就包含了该系统中必须删除的所有车站。
|
||||
|
||||
程序创建一个事务删除这组车站的信息,从地理空间索引中删除车站的键,并从系统的车站列表中删除车站。
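
如果用 redis-cli 来示意,删除一个失效车站的事务大致是下面这个样子(车站 ID 仍是假设的,这里省略了从系统车站列表中移除的那一步):

```
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> DEL NYC:station:9999
QUEUED
127.0.0.1:6379> ZREM NYC:stations:location NYC:station:9999
QUEUED
127.0.0.1:6379> EXEC
1) (integer) 1
2) (integer) 1
```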
|
||||
|
||||
### 代码注意点
|
||||
|
||||
需要注意,[示例代码][8]中有一些有趣的地方。首先,使用 `GEOADD` 命令将所有数据项添加到地理空间索引中,而使用 `ZREM` 命令将其删除。由于地理空间类型的底层实现使用了有序集合,因此需要使用 `ZREM` 来删除数据项。还需要注意的是:为简单起见,示例代码演示了如何使用单个 Redis 节点;为了在集群环境中运行,需要重新构建事务块。
|
||||
|
||||
如果你使用的是 Redis 4.0(或更高版本),则可以使用代码中 `DELETE` 和 `HMSET` 命令的一些替代方案。Redis 4.0 提供了 [`UNLINK`][10] 命令,作为 `DELETE` 命令的异步替代版本。`UNLINK` 命令会从键空间中删除键,但它会在单独的线程中回收内存。在 Redis 4.0 中,[`HMSET` 命令已经被弃用,而且 `HSET` 命令现在接受可变参数][12](即,它接受的参数个数不定)。
|
||||
|
||||
### 通知客户端
|
||||
|
||||
处理结束时,会向依赖我们数据的客户端发送通知。使用 Redis 的发布/订阅机制,通知会通过 `geobike:station_changed` 通道发出,消息内容是系统 ID。
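
如果想直观地看到这个通知,可以先在一个 redis-cli 会话里订阅该通道,再在另一个会话里手动发布一条消息来模拟(下面的输出只是示意):

```
# 会话 1:订阅通道
127.0.0.1:6379> SUBSCRIBE geobike:station_changed
1) "subscribe"
2) "geobike:station_changed"
3) (integer) 1

# 会话 2:以系统 ID 作为消息发布通知
127.0.0.1:6379> PUBLISH geobike:station_changed NYC
(integer) 1
```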
|
||||
|
||||
### 数据模型
|
||||
|
||||
在 Redis 中构建数据时,最重要的考虑因素是如何查询信息。 共享单车程序需要支持的两个主要查询是:
|
||||
|
||||
- 找到我们附近的车站
|
||||
- 显示车站相关的信息
|
||||
|
||||
Redis 提供了两种主要数据类型用于存储数据:哈希和有序集。 哈希类型很好地映射到表示车站的 JSON 对象; 由于 Redis 哈希不使用固定结构,因此它们可用于存储可变的车站信息。
|
||||
|
||||
当然,在地理位置上寻找站点需要地理空间索引来搜索相对于某些坐标的站点。 Redis 提供了几个使用有序集数据结构构建地理空间索引的命令。
|
||||
|
||||
我们使用 `${system_id}:station:${station_id}` 这种格式的键存储车站相关的信息,使用 `${system_id}:stations:location` 这种格式的键查找车站的地理空间索引。
|
||||
|
||||
### 获取用户位置
|
||||
|
||||
构建应用程序的下一步是确定用户的当前位置。 大多数应用程序通过操作系统提供的内置服务来实现此目的。 操作系统可以基于设备内置的 GPS 硬件为应用程序提供定位,或者从设备的可用 WiFi 网络提供近似的定位。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/rediscli_map.png?itok=icqk5543)
|
||||
|
||||
### 查找车站
|
||||
|
||||
找到用户的位置后,下一步是找到附近的共享单车站。 Redis 的地理空间功能可以返回用户当前坐标在给定距离内的所有车站信息。 以下是使用 Redis 命令行界面的示例。
|
||||
|
||||
想象一下,我正在纽约市第五大道的苹果零售店,我想要向市中心方向前往位于西 37 街的 MOOD 布料店,与我的好友 [Swatch][16] 相遇。 我可以坐出租车或地铁,但我更喜欢骑单车。 附近有没有我可以使用的单车共享站呢?
|
||||
|
||||
苹果零售店位于 40.76384,-73.97297。 根据地图显示,在零售店 500 英尺半径范围内(地图上方的蓝色)有两个单车站,分别是陆军广场中央公园南单车站和东 58 街麦迪逊单车站。
|
||||
|
||||
我可以使用 Redis 的 `GEORADIUS` 命令查询 500 英尺半径范围内的车站的 NYC 系统索引:
|
||||
|
||||
```
|
||||
127.0.0.1:6379> GEORADIUS NYC:stations:location -73.97297 40.76384 500 ft
|
||||
1) "NYC:station:3457"
|
||||
2) "NYC:station:281"
|
||||
```
|
||||
|
||||
Redis 使用地理空间索引中的元素作为特定车站的元数据的键,返回在该半径内找到的两个共享单车站。 下一步是查找两个站的名称:
|
||||
```
|
||||
127.0.0.1:6379> hget NYC:station:281 name
|
||||
"Grand Army Plaza & Central Park S"
|
||||
|
||||
127.0.0.1:6379> hget NYC:station:3457 name
|
||||
"E 58 St & Madison Ave"
|
||||
```
|
||||
|
||||
这些键对应于上面地图上标识的车站。 如果需要,可以在 `GEORADIUS` 命令中添加更多标志来获取元素列表,每个元素的坐标以及它们与当前点的距离:
|
||||
|
||||
```
|
||||
127.0.0.1:6379> GEORADIUS NYC:stations:location -73.97297 40.76384 500 ft WITHDIST WITHCOORD ASC
|
||||
1) 1) "NYC:station:281"
|
||||
2) "289.1995"
|
||||
3) 1) "-73.97371262311935425"
|
||||
2) "40.76439830559216659"
|
||||
2) 1) "NYC:station:3457"
|
||||
2) "383.1782"
|
||||
3) 1) "-73.97209256887435913"
|
||||
2) "40.76302702144496237"
|
||||
```
|
||||
|
||||
查找与这些键关联的名称会生成一个我可以从中选择的车站的有序列表。 Redis 不提供路线的功能,因此我使用设备操作系统的路线功能绘制从当前位置到所选单车站的路线。
|
||||
|
||||
只要在你喜欢的开发框架里把 `GEORADIUS` 查询封装成一个 API,就可以很容易地为应用程序添加位置功能了。
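
举个简单的例子(坐标沿用上文苹果零售店的位置),即便不写框架代码,也可以先在 shell 里把查询包一层 redis-cli 试验一下:

```
$ LON=-73.97297 LAT=40.76384
$ redis-cli GEORADIUS NYC:stations:location "$LON" "$LAT" 500 ft ASC
```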
|
||||
|
||||
### 其他的查询命令
|
||||
|
||||
除了 `GEORADIUS` 命令外,Redis 还提供了另外三个用于查询索引数据的命令:`GEOPOS`,`GEODIST` 和 `GEORADIUSBYMEMBER`。
|
||||
|
||||
`GEOPOS` 命令可以为 <ruby>地理哈希<rt>geohash</rt></ruby> 中的给定元素提供坐标(LCTT译注:geohash 是一种将二维的经纬度编码为一位的字符串的一种算法,常用于基于距离的查找算法和推荐算法)。 例如,如果我知道西 38 街 8 号有一个共享单车站,ID 是 523,那么该站的元素名称是`NYC:station:523`。 使用 Redis,我可以找到该站的经度和纬度:
|
||||
|
||||
```
|
||||
127.0.0.1:6379> geopos NYC:stations:location NYC:station:523
|
||||
1) 1) "-73.99138301610946655"
|
||||
2) "40.75466497634030105"
|
||||
```
|
||||
|
||||
`GEODIST` 命令提供两个索引元素之间的距离。 如果我想找到陆军广场中央公园南单车站与东 58 街麦迪逊单车站之间的距离,我会使用以下命令:
|
||||
```
|
||||
127.0.0.1:6379> GEODIST NYC:stations:location NYC:station:281 NYC:station:3457 ft
|
||||
"671.4900"
|
||||
```
|
||||
|
||||
最后,`GEORADIUSBYMEMBER` 命令与 `GEORADIUS` 命令类似,但该命令不是采用一组坐标,而是采用索引的另一个成员的名称,并返回以该成员为中心的给定半径内的所有成员。 要查找陆军广场中央公园南单车站 1000 英尺范围内的所有车站,请输入以下内容:
|
||||
```
|
||||
127.0.0.1:6379> GEORADIUSBYMEMBER NYC:stations:location NYC:station:281 1000 ft WITHDIST
|
||||
1) 1) "NYC:station:281"
|
||||
2) "0.0000"
|
||||
2) 1) "NYC:station:3132"
|
||||
2) "793.4223"
|
||||
3) 1) "NYC:station:2006"
|
||||
2) "911.9752"
|
||||
4) 1) "NYC:station:3136"
|
||||
2) "940.3399"
|
||||
5) 1) "NYC:station:3457"
|
||||
2) "671.4900"
|
||||
```
|
||||
|
||||
虽然此示例侧重于使用 Python 和 Redis 来解析数据并构建共享单车系统位置的索引,但可以很容易地衍生为定位餐馆,公共交通或者是开发人员希望帮助用户找到的任何其他类型的场所。
|
||||
|
||||
本文基于今年我在北卡罗来纳州罗利市的开源 101 会议上的[演讲][17]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/building-bikesharing-application-open-source-tools
|
||||
|
||||
作者:[Tague Griffith][a]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tague
|
||||
[1]: https://www.python.org/
|
||||
[2]: https://redis.io/
|
||||
[3]: https://www.citibikenyc.com/
|
||||
[4]: https://www.citibikenyc.com/data-sharing-policy
|
||||
[5]: https://github.com/NABSA/gbfs
|
||||
[6]: http://nabsa.net/
|
||||
[7]: https://www.json.org/
|
||||
[8]: https://gist.github.com/tague/5a82d96bcb09ce2a79943ad4c87f6e15
|
||||
[9]: https://github.com/NABSA/gbfs/blob/master/systems.csv
|
||||
[10]: https://redis.io/commands/unlink
|
||||
[11]: https://redis.io/commands/hmset
|
||||
[12]: https://raw.githubusercontent.com/antirez/redis/4.0/00-RELEASENOTES
|
||||
[13]: https://redis.io/topics/data-types#Hashes
|
||||
[14]: https://redis.io/commands#geo
|
||||
[15]: https://redis.io/topics/data-types-intro#redis-sorted-sets
|
||||
[16]: https://twitter.com/swatchthedog
|
||||
[17]: http://opensource101.com/raleigh/talks/building-location-aware-apps-open-source-tools/
|
@ -1,40 +1,36 @@
|
||||
17 Ways To Check Size Of Physical Memory (RAM) In Linux
|
||||
======
|
||||
Most of the system administrators checks CPU & Memory utilization when they were facing some performance issue.
|
||||
在 Linux 中查看物理内存(RAM)大小的 17 种方法
|
||||
=======
|
||||
|
||||
There is lot of utilities are available in Linux to check physical memory.
|
||||
大多数系统管理员在遇到性能问题时会检查 CPU 和内存利用率。
|
||||
|
||||
These commands are help us to check the physical RAM present in system, also allow users to check memory utilization in varies aspect.
|
||||
Linux 中有许多实用程序可以用于检查物理内存。
|
||||
|
||||
Most of us know only few commands and we are trying to include all the possible commands in this article.
|
||||
这些命令有助于我们检查系统中存在的物理 RAM,还允许用户检查各种方面的内存利用率。
|
||||
|
||||
You may think, why i want to know all these commands instead of knowing some of the specific and routine commands.
|
||||
我们大多数人只知道很少的命令,在本文中我们试图包含所有可能的命令。
|
||||
|
||||
Don’t think bad or don’t take in negative way because each one has different requirement and perception so, who’s looking for other purpose then this will very helpful for them.
|
||||
你可能会想,为什么我想知道所有这些命令,而不是知道一些特定的和例行的命令。
|
||||
|
||||
### What Is RAM
|
||||
请不要觉得这样不好,或者往负面的方向想,因为每个人都有不同的需求和看法,所以对于那些出于其它目的来寻找这些命令的人来说,这些内容会非常有帮助。
|
||||
|
||||
Computer memory is a physical device which capable to store information temporarily or permanently. RAM stands for Random Access Memory is a volatile memory that stores information used by the operating system, software, and hardware.
|
||||
### 什么是 RAM
|
||||
|
||||
Two types of memory is available.
|
||||
计算机内存是能够临时或永久存储信息的物理设备。RAM 代表随机存取存储器,它是一种易失性存储器,用于存储操作系统,软件和硬件使用的信息。
|
||||
|
||||
* Primary Memory
|
||||
* Secondary Memory
|
||||
有两种类型的内存可供选择:
|
||||
* 主存
|
||||
* 辅助内存
|
||||
|
||||
主存是计算机的主存储器。CPU 可以直接读取或写入此内存。它固定在电脑的主板上。
|
||||
|
||||
* **`RAM:`** 随机存取存储器是临时存储。关闭计算机后,此信息将消失。
|
||||
* **`ROM:`** 只读存储器是永久存储,即使系统关闭也能保存数据。
|
||||
|
||||
Primary memory is the main memory of the computer. CPU can directly read or write on this memory. It is fixed on the motherboard of the computer.
|
||||
### 方法-1 : 使用 free 命令
|
||||
|
||||
* **`RAM:`** Random Access Memory is a temporary memory. This information will go away when the computer is turned off.
|
||||
* **`ROM:`** Read Only Memory is permanent memory, that holds the data even if the system is switched off.
|
||||
free 显示系统中空闲和已用的物理内存和交换内存的总量,以及内核使用的缓冲区和缓存。它通过解析 /proc/meminfo 来收集信息。
|
||||
|
||||
|
||||
|
||||
### Method-1 : Using free Command
|
||||
|
||||
free displays the total amount of free and used physical and swap memory in the system, as well as the buffers and caches used by the kernel. The information is gathered by parsing /proc/meminfo.
|
||||
|
||||
**Suggested Read :** [free – A Standard Command to Check Memory Usage Statistics (Free & Used) in Linux][1]
|
||||
**建议阅读:** [free – 在 Linux 系统中检查内存使用情况统计(空闲和已用)的标准命令][1]
|
||||
```
|
||||
$ free -m
|
||||
total used free shared buff/cache available
|
||||
@ -48,11 +44,11 @@ Swap: 12 1 11
|
||||
|
||||
```
|
||||
|
||||
### Method-2 : Using /proc/meminfo file
|
||||
### 方法-2 : 使用 /proc/meminfo 文件
|
||||
|
||||
/proc/meminfo is a virtual text file that contains a large amount of valuable information about the systems RAM usage.
|
||||
/proc/meminfo 是一个虚拟文本文件,它包含有关系统 RAM 使用情况的大量有价值的信息。
|
||||
|
||||
It’s report the amount of free and used memory (both physical and swap) on the system.
|
||||
它报告系统上的空闲和已用内存(物理和交换)的数量。
|
||||
```
|
||||
$ grep MemTotal /proc/meminfo
|
||||
MemTotal: 2041396 kB
|
||||
@ -65,11 +61,11 @@ $ grep MemTotal /proc/meminfo | awk '{print $2 / 1024 / 1024}'
|
||||
|
||||
```
|
||||
|
||||
### Method-3 : Using top Command
|
||||
### 方法-3 : 使用 top 命令
|
||||
|
||||
Top command is one of the basic command to monitor real-time system processes in Linux. It display system information and running processes information like uptime, average load, tasks running, number of users logged in, number of CPUs & cpu utilization, Memory & swap information. Run top command then hit `E` to bring the memory utilization in MB.
|
||||
Top 命令是 Linux 中监视实时系统进程的基本命令之一。它显示系统信息和运行的进程信息,如正常运行时间,平均负载,正在运行的任务,登录的用户数,CPU 数量和 CPU 利用率,以及内存和交换信息。运行 top 命令,然后按下 `E` 来使内存利用率以 MB 为单位。
|
||||
|
||||
**Suggested Read :** [TOP Command Examples to Monitor Server Performance][2]
|
||||
**建议阅读:** [TOP 命令示例监视服务器性能][2]
|
||||
```
|
||||
$ top
|
||||
|
||||
@ -86,11 +82,11 @@ MiB Swap: 12689.58+total, 11196.83+free, 1492.750 used. 306.465 avail Mem
|
||||
|
||||
```
|
||||
|
||||
### Method-4 : Using vmstat Command
|
||||
### 方法-4 : 使用 vmstat 命令
|
||||
|
||||
vmstat is a standard nifty tool that report virtual memory statistics of Linux system. vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity. It helps Linux administrator to identify system bottlenecks while troubleshooting the issues.
|
||||
vmstat 是一个标准且漂亮的工具,它报告 Linux 系统的虚拟内存统计信息。vmstat 报告有关进程,内存,分页,块 IO,陷阱和 CPU 活动的信息。它有助于 Linux 管理员在故障检修时识别系统瓶颈。
|
||||
|
||||
**Suggested Read :** [vmstat – A Standard Nifty Tool to Report Virtual Memory Statistics][3]
|
||||
**建议阅读:** [vmstat – 一个报告虚拟内存统计信息的标准且漂亮的工具][3]
|
||||
```
|
||||
$ vmstat -s | grep "total memory"
|
||||
2041396 K total memory
|
||||
@ -103,13 +99,13 @@ $ vmstat -s | awk '{print $1 / 1024 / 1024}' | head -1
|
||||
|
||||
```
|
||||
|
||||
### Method-5 : Using nmon Command
|
||||
### 方法-5 : 使用 nmon 命令
|
||||
|
||||
nmon is a another nifty tool to monitor various system resources such as CPU, memory, network, disks, file systems, NFS, top processes, Power micro-partition and resources (Linux version & processors) on Linux terminal.
|
||||
nmon 是另一个很棒的工具,用于监视各种系统资源,如 CPU,内存,网络,磁盘,文件系统,NFS,top 进程,Power 微分区和 Linux 终端上的资源(Linux 版本和处理器)。
|
||||
|
||||
Just press `m` key to see memory utilization stats (cached, active, inactive, buffered, free in MB & free percent)
|
||||
只需按下 `m` 键,即可查看内存利用率统计数据(缓存,活动,非活动,缓冲,空闲,以 MB 和百分比为单位)。
|
||||
|
||||
**Suggested Read :** [nmon – A Nifty Tool To Monitor System Resources On Linux][4]
|
||||
**建议阅读:** [nmon – Linux 中一个监视系统资源的漂亮的工具][4]
|
||||
```
|
||||
┌nmon─14g──────[H for help]───Hostname=2daygeek──Refresh= 2secs ───07:24.44─────────────────┐
|
||||
│ Memory Stats ─────────────────────────────────────────────────────────────────────────────│
|
||||
@ -133,15 +129,14 @@ Just press `m` key to see memory utilization stats (cached, active, inactive, bu
|
||||
|
||||
```
|
||||
|
||||
### Method-6 : Using dmidecode Command
|
||||
### 方法-6 : 使用 dmidecode 命令
|
||||
|
||||
Dmidecode is a tool which reads a computer’s DMI (stands for Desktop Management Interface)
|
||||
(some say SMBIOS – stands for System Management BIOS) table contents and display system hardware information in a human-readable format.
|
||||
Dmidecode 是一个读取计算机 DMI表内容的工具,它以人类可读的格式显示系统硬件信息。(DMI 代表桌面管理接口,有人说 SMBIOS 代表系统管理 BIOS)
|
||||
|
||||
This table contains a description of the system’s hardware components, as well as other useful information such as serial number, Manufacturer information, Release Date, and BIOS revision, etc,.
|
||||
此表包含系统硬件组件的描述,以及其它有用信息,如序列号,制造商信息,发布日期和 BIOS 修改等。
|
||||
|
||||
**Suggested Read :**
|
||||
[Dmidecode – Easy Way To Get Linux System Hardware Information][5]
|
||||
**建议阅读:**
|
||||
[Dmidecode – 获取 Linux 系统硬件信息的简便方法][5]
|
||||
```
|
||||
# dmidecode -t memory | grep Size:
|
||||
Size: 8192 MB
|
||||
@ -171,7 +166,7 @@ This table contains a description of the system’s hardware components, as well
|
||||
|
||||
```
|
||||
|
||||
Print only installed RAM modules.
|
||||
只打印已安装的 RAM 模块。
|
||||
```
|
||||
|
||||
# dmidecode -t memory | grep Size: | grep -v "No Module Installed"
|
||||
@ -182,20 +177,20 @@ Print only installed RAM modules.
|
||||
|
||||
```
|
||||
|
||||
Sum all the installed RAM modules.
|
||||
汇总所有已安装的 RAM 模块。
|
||||
```
|
||||
# dmidecode -t memory | grep Size: | grep -v "No Module Installed" | awk '{sum+=$2}END{print sum}'
|
||||
32768
|
||||
|
||||
```
|
||||
|
||||
### Method-7 : Using hwinfo Command
|
||||
### 方法-7 : 使用 hwinfo 命令
|
||||
|
||||
hwinfo stands for hardware information tool is another great utility that used to probe for the hardware present in the system and display detailed information about varies hardware components in human readable format.
|
||||
hwinfo 代表硬件信息,它是另一个很棒的实用工具,用于探测系统中存在的硬件,并以人类可读的格式显示有关各种硬件组件的详细信息。
|
||||
|
||||
It reports information about CPU, RAM, keyboard, mouse, graphics card, sound, storage, network interface, disk, partition, bios, and bridge, etc,.
|
||||
它报告有关 CPU,RAM,键盘,鼠标,图形卡,声音,存储,网络接口,磁盘,分区,BIOS 和网桥等的信息。
|
||||
|
||||
**Suggested Read :** [hwinfo (Hardware Info) – A Nifty Tool To Detect System Hardware Information On Linux][6]
|
||||
**建议阅读:** [hwinfo(硬件信息)– 一个在 Linux 系统上检测系统硬件信息的好工具][6]
|
||||
```
|
||||
$ hwinfo --memory
|
||||
01: None 00.0: 10102 Main Memory
|
||||
@ -209,13 +204,13 @@ $ hwinfo --memory
|
||||
|
||||
```
|
||||
|
||||
### Method-8 : Using lshw Command
|
||||
### 方法-8 : 使用 lshw 命令
|
||||
|
||||
lshw (stands for Hardware Lister) is a small nifty tool that generates detailed reports about various hardware components on the machine such as memory configuration, firmware version, mainboard configuration, CPU version and speed, cache configuration, usb, network card, graphics cards, multimedia, printers, bus speed, etc.
|
||||
lshw(代表 Hardware Lister)是一个小巧的工具,可以生成机器上各种硬件组件的详细报告,如内存配置,固件版本,主板配置,CPU 版本和速度,缓存配置,USB,网卡,显卡,多媒体,打印机,总线速度等。
|
||||
|
||||
It’s generating hardware information by reading varies files under /proc directory and DMI table.
|
||||
它通过读取 /proc 目录和 DMI 表中的各种文件来生成硬件信息。
|
||||
|
||||
**Suggested Read :** [LSHW (Hardware Lister) – A Nifty Tool To Get A Hardware Information On Linux][7]
|
||||
**建议阅读:** [LSHW (Hardware Lister) – 一个在 Linux 上获取硬件信息的好工具][7]
|
||||
```
|
||||
$ sudo lshw -short -class memory
|
||||
[sudo] password for daygeek:
|
||||
@ -226,24 +221,24 @@ H/W path Device Class Description
|
||||
|
||||
```
|
||||
|
||||
### Method-9 : Using inxi Command
|
||||
### 方法-9 : 使用 inxi 命令
|
||||
|
||||
inxi is a nifty tool to check hardware information on Linux and offers wide range of option to get all the hardware information on Linux system that i never found in any other utility which are available in Linux. It was forked from the ancient and mindbendingly perverse yet ingenius infobash, by locsmif.
|
||||
inxi 是一个很棒的工具,它可以检查 Linux 上的硬件信息,并提供了大量的选项来获取 Linux 系统上的所有硬件信息,这些特性是我在 Linux 上的其它工具中从未发现的。它是从 locsmif 编写的古老的但至今看来都异常灵活的 infobash 演化而来的。
|
||||
|
||||
inxi is a script that quickly shows system hardware, CPU, drivers, Xorg, Desktop, Kernel, GCC version(s), Processes, RAM usage, and a wide variety of other useful information, also used for forum technical support & debugging tool.
|
||||
inxi 是一个脚本,它可以快速显示系统硬件,CPU,驱动程序,Xorg,桌面,内核,GCC 版本,进程,RAM 使用情况以及各种其它有用的信息,还可以用于论坛技术支持和调试工具。
|
||||
|
||||
**Suggested Read :** [inxi – A Great Tool to Check Hardware Information on Linux][8]
|
||||
**建议阅读:** [inxi – 一个检查 Linux 上硬件信息的好工具][8]
|
||||
```
|
||||
$ inxi -F | grep "Memory"
|
||||
Info: Processes: 234 Uptime: 3:10 Memory: 1497.3/1993.6MB Client: Shell (bash) inxi: 2.3.37
|
||||
|
||||
```
|
||||
|
||||
### Method-10 : Using screenfetch Command
|
||||
### 方法-10 : 使用 screenfetch 命令
|
||||
|
||||
screenFetch is a bash script. It will auto-detect your distribution and display an ASCII art version of that distribution’s logo and some valuable information to the right.
|
||||
screenFetch 是一个 bash 脚本。它将自动检测你的发行版,并在右侧显示该发行版标识的 ASCII 艺术版本和一些有价值的信息。
|
||||
|
||||
**Suggested Read :** [ScreenFetch – Display Linux System Information On Terminal With Distribution ASCII Art Logo][9]
|
||||
**建议阅读:** [ScreenFetch – 以 ASCII 艺术标志在终端显示 Linux 系统信息][9]
|
||||
```
|
||||
$ screenfetch
|
||||
./+o+- [email protected]
|
||||
@ -267,11 +262,11 @@ $ screenfetch
|
||||
|
||||
```
|
||||
|
||||
### Method-11 : Using neofetch Command
|
||||
### 方法-11 : 使用 neofetch 命令
|
||||
|
||||
Neofetch is a cross-platform and easy-to-use command line (CLI) script that collects your Linux system information and display it on the terminal next to an image, either your distributions logo or any ascii art of your choice.
|
||||
Neofetch 是一个跨平台且易于使用的命令行(CLI)脚本,它收集你的 Linux 系统信息,并将其作为一张图片显示在终端上,也可以是你的发行版徽标,或者是你选择的任何 ascii 艺术。
|
||||
|
||||
**Suggested Read :** [Neofetch – Shows Linux System Information With ASCII Distribution Logo][10]
|
||||
**建议阅读:** [Neofetch – 以 ASCII 分发标志来显示 Linux 系统信息][10]
|
||||
```
|
||||
$ neofetch
|
||||
.-/+oossssoo+/-. [email protected]
|
||||
@ -297,9 +292,9 @@ ossyNMMMNyMMhsssssssssssssshmmmhssssssso WM: GNOME Shell
|
||||
|
||||
```
|
||||
|
||||
### Method-12 : Using dmesg Command
|
||||
### 方法-12 : 使用 dmesg 命令
|
||||
|
||||
dmesg (stands for display message or driver message) is a command on most Unix-like operating systems that prints the message buffer of the kernel.
|
||||
dmesg(代表显示消息或驱动消息)是大多数类 unix 操作系统上的命令,用于打印内核的消息缓冲区。
|
||||
```
|
||||
$ dmesg | grep "Memory"
|
||||
[ 0.000000] Memory: 1985916K/2096696K available (12300K kernel code, 2482K rwdata, 4000K rodata, 2372K init, 2368K bss, 110780K reserved, 0K cma-reserved)
|
||||
@ -307,13 +302,13 @@ $ dmesg | grep "Memory"
|
||||
|
||||
```
|
||||
|
||||
### Method-13 : Using atop Command
|
||||
### 方法-13 : 使用 atop 命令
|
||||
|
||||
Atop is an ASCII full-screen system performance monitoring tool for Linux that is capable of reporting the activity of all server processes (even if processes have finished during the interval).
|
||||
Atop 是一个用于 Linux 的 ASCII 全屏系统性能监视工具,它能报告所有服务器进程的活动(即使进程在间隔期间已经完成)。
|
||||
|
||||
It’s logging of system and process activity for long-term analysis (By default, the log files are preserved for 28 days), highlighting overloaded system resources by using colors, etc. It shows network activity per process/thread with combination of the optional kernel module netatop.
|
||||
它记录系统和进程活动以进行长期分析(默认情况下,日志文件保存 28 天),通过使用颜色等来突出显示过载的系统资源。它结合可选的内核模块 netatop 显示每个进程或线程的网络活动。
|
||||
|
||||
**Suggested Read :** [Atop – Monitor real time system performance, resources, process & check resource utilization history][11]
|
||||
**建议阅读:** [Atop – 实时监控系统性能,资源,进程和检查资源利用历史][11]
|
||||
```
|
||||
$ atop -m
|
||||
|
||||
@ -340,11 +335,11 @@ NET | lo ---- | pcki 9 | pcko 9 | sp 0 Mbps | si 0 Kbps | so 0 Kbps | | coll 0 |
|
||||
|
||||
```
|
||||
|
||||
### Method-14 : Using htop Command
|
||||
### 方法-14 : 使用 htop 命令
|
||||
|
||||
htop is an interactive process viewer for Linux which was developed by Hisham using ncurses library. Htop have many of features and options compared to top command.
|
||||
htop 是由 Hisham 用 ncurses 库开发的用于 Linux 的交互式进程查看器。与 top 命令相比,htop 有许多特性和选项。
|
||||
|
||||
**Suggested Read :** [Monitor system resources using Htop command][12]
|
||||
**建议阅读:** [使用 Htop 命令监视系统资源][12]
|
||||
```
|
||||
$ htop
|
||||
|
||||
@ -365,13 +360,13 @@ $ htop
|
||||
|
||||
```
|
||||
|
||||
### Method-15 : Using corefreq Utility
|
||||
### 方法-15 : 使用 corefreq 实用程序
|
||||
|
||||
CoreFreq is a CPU monitoring software designed for Intel 64-bits Processors and supported architectures are Atom, Core2, Nehalem, SandyBridge and superior, AMD Family 0F.
|
||||
CoreFreq 是为 Intel 64 位处理器设计的 CPU 监控软件,支持的架构有 Atom、Core2、Nehalem、SandyBridge 及更新的架构,以及 AMD 0F 家族。
|
||||
|
||||
CoreFreq provides a framework to retrieve CPU data with a high degree of precision.
|
||||
CoreFreq 提供了一个框架来以高精确度检索 CPU 数据。
|
||||
|
||||
**Suggested Read :** [CoreFreq – A Powerful CPU monitoring Tool for Linux Systems][13]
|
||||
**建议阅读:** [CoreFreq – 一个用于 Linux 系统的强大的 CPU 监控工具][13]
|
||||
```
|
||||
$ ./corefreq-cli -k
|
||||
Linux:
|
||||
@ -394,13 +389,13 @@ $ ./corefreq-cli -k | grep "Total RAM" | awk '{print $4 / 1024 / 1024}'
|
||||
|
||||
```
|
||||
|
||||
### Method-16 : Using glances Command
|
||||
### 方法-16 : 使用 glances 命令
|
||||
|
||||
Glances is a cross-platform curses-based system monitoring tool written in Python. We can say all in one place, like maximum of information in a minimum of space. It uses psutil library to get information from your system.
|
||||
Glances 是用 Python 编写的跨平台基于 curses(LCTT 译注:curses 是一个 Linux/Unix 下的图形函数库)的系统监控工具。我们可以说一物俱全,就像在最小的空间含有最大的信息。它使用 psutil 库从系统中获取信息。
|
||||
|
||||
Glances capable to monitor CPU, Memory, Load, Process list, Network interface, Disk I/O, Raid, Sensors, Filesystem (and folders), Docker, Monitor, Alert, System info, Uptime, Quicklook (CPU, MEM, LOAD), etc,.
|
||||
Glances 可以监视 CPU,内存,负载,进程列表,网络接口,磁盘 I/O,Raid,传感器,文件系统(和文件夹),Docker,监视器,警报,系统信息,正常运行时间,快速预览(CPU,内存,负载)等。
|
||||
|
||||
**Suggested Read :** [Glances (All in one Place)– An Advanced Real Time System Performance Monitoring Tool for Linux][14]
|
||||
**建议阅读:** [Glances (一物俱全)– 一个 Linux 的高级的实时系统性能监控工具][14]
|
||||
```
|
||||
$ glances
|
||||
|
||||
@ -425,17 +420,19 @@ sda1 9.46M 12K 0.0 4.9 1.78G 97.2M 6125 daygeek 0 S 1:36.57 0 0 /usr/lib/firefox
|
||||
|
||||
```
|
||||
|
||||
### Method-17 : Using gnome-system-monitor
|
||||
### 方法-17 : 使用 gnome-system-monitor
|
||||
|
||||
System Monitor is a tool to manage running processes and monitor system resources. It shows you what programs are running and how much processor time, memory, and disk space are being used.
|
||||
系统监视器是一个管理正在运行的进程和监视系统资源的工具。它向你显示正在运行的程序以及耗费的处理器时间,内存和磁盘空间。
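
它是一个图形界面程序,没有对应的文字输出。如果系统中已经安装了它,也可以直接从终端启动:

```
$ gnome-system-monitor &
```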
|
||||
![][16]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/easy-ways-to-check-size-of-physical-memory-ram-in-linux/
|
||||
|
||||
作者:[Ramya Nuvvula][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,31 +1,29 @@
|
||||
How To Add, Enable And Disable A Repository In Linux
|
||||
如何在 Linux 中添加,启用和禁用一个仓库
|
||||
======
|
||||
Many of us using yum package manager to manage package installation, remove, update, search, etc, on RPM based system such as RHEL, CentOS, etc,.
|
||||
|
||||
Linux distributions gets most of its software from distribution official repositories. The official distribution repositories contain good amount of free and open source apps/software’s. It’s readily available to install and use.
|
||||
在基于 RPM 的系统上,例如 RHEL, CentOS 等,我们中的许多人使用 yum 包管理器来管理软件的安装,删除,更新,搜索等。
|
||||
|
||||
RPM based distribution doesn’t offer some of the packages in their official distribution repository due to some limitation and proprietary issue. Also it won’t offer latest version of core packages due to stability.
|
||||
Linux 发行版的大部分软件都来自发行版官方仓库。官方仓库包含大量免费和开源的应用和软件。它很容易安装和使用。
|
||||
|
||||
To overcome this situation/issue, we need to install/enable the requires third party repository. There are many third party repositories are available for RPM based systems but only few of the repositories are advised to use because they didn’t replace large amount of base packages.
|
||||
由于一些限制和专有问题,基于 RPM 的发行版在其官方仓库中没有提供某些包。另外,出于稳定性考虑,它不会提供最新版本的核心包。
|
||||
|
||||
**Suggested Read :**
|
||||
**(#)** [YUM Command To Manage Packages on RHEL/CentOS Systems][1]
|
||||
**(#)** [DNF (Fork of YUM) Command To Manage Packages on Fedora System][2]
|
||||
**(#)** [List of Command line Package Manager & Usage][3]
|
||||
**(#)** [A Graphical front-end tool for Linux Package Manager][4]
|
||||
为了克服这种情况,我们需要安装或启用需要的第三方仓库。对于基于 RPM 的系统,有许多第三方仓库可用,但建议使用的仓库很少,因为它们不会替换大量的基础包。
|
||||
|
||||
This can be done on RPM based system such as RHEL, CentOS, OEL, Fedora, etc,.
|
||||
**建议阅读:**
|
||||
**(#)** [在 RHEL/CentOS 系统中使用 YUM 命令管理包][1]
|
||||
**(#)** [在 Fedora 系统中使用 DNF (YUM 的分支) 命令来管理包][2]
|
||||
**(#)** [命令行包管理器和用法列表][3]
|
||||
**(#)** [Linux 包管理器的图形化工具][4]
|
||||
|
||||
* Fedora system uses “dnf config-manager [options] [section …]”
|
||||
* Other RPM based system uses “yum-config-manager [options] [section …]”
|
||||
这可以在基于 RPM 的系统上完成,比如 RHEL, CentOS, OEL, Fedora 等。
|
||||
* Fedora 系统使用 “dnf config-manager [options] [section …]”
|
||||
* 其它基于 RPM 的系统使用 “yum-config-manager [options] [section …]”
|
||||
|
||||
### 如何列出启用的仓库
|
||||
|
||||
只需运行以下命令即可检查系统上启用的仓库列表。
|
||||
|
||||
### How To List Enabled Repositories
|
||||
|
||||
Just run the below command to check list of enabled repositories on your system.
|
||||
|
||||
For CentOS/RHEL/OLE systems
|
||||
对于 CentOS/RHEL/OLE 系统:
|
||||
```
|
||||
# yum repolist
|
||||
Loaded plugins: fastestmirror, security
|
||||
@ -38,33 +36,32 @@ repolist: 8,014
|
||||
|
||||
```
|
||||
|
||||
For Fedora system
|
||||
对于 Fedora 系统:
|
||||
```
|
||||
# dnf repolist
|
||||
|
||||
```
|
||||
|
||||
### How To Add A New Repository In System
|
||||
### 如何在系统中添加一个新仓库
|
||||
|
||||
Every repositories commonly provide their own `.repo` file. To add such a repository to your system, run the
|
||||
following command as root user. In our case, we are going to add `EPEL Repository` and `IUS Community Repo`, see below.
|
||||
每个仓库通常都提供自己的 `.repo` 文件。要将此类仓库添加到系统中,使用 root 用户运行以下命令。在我们的例子中将添加 `EPEL Repository` 和 `IUS Community Repo`,见下文。
|
||||
|
||||
There is no `.repo` files are available for these repositories. Hence, we are installing by using below methods.
|
||||
但是没有 `.repo` 文件可用于这些仓库。因此,我们使用以下方法进行安装。
|
||||
|
||||
For **EPEL Repository** , since it’s available from CentOS extra repository so, run the below command to install it.
|
||||
对于 **EPEL Repository**,由于它可以从 CentOS 的 Extras 仓库中获取,所以运行以下命令来安装它。
|
||||
```
|
||||
# yum install epel-release -y
|
||||
|
||||
```
|
||||
|
||||
For **IUS Community Repo** , run the below bash script to install it.
|
||||
对于 **IUS Community Repo**,运行以下 bash 脚本来安装。
|
||||
```
|
||||
# curl 'https://setup.ius.io/' -o setup-ius.sh
|
||||
# sh setup-ius.sh
|
||||
|
||||
```
|
||||
|
||||
If you have `.repo` file, simple run the following command to add a repository on RHEL/CentOS/OEL.
|
||||
如果你有 `.repo` 文件,在 RHEL/CentOS/OEL 中,只需运行以下命令来添加一个仓库。
|
||||
```
|
||||
# yum-config-manager --add-repo http://www.example.com/example.repo
|
||||
|
||||
@ -76,7 +73,7 @@ repo saved to /etc/yum.repos.d/example.repo
|
||||
|
||||
```
|
||||
|
||||
For Fedora system, run the below command to add a repository.
|
||||
对于 Fedora 系统,运行以下命令来添加一个仓库。
|
||||
```
|
||||
# dnf config-manager --add-repo http://www.example.com/example.repo
|
||||
|
||||
@ -84,9 +81,9 @@ adding repo from: http://www.example.com/example.repo
|
||||
|
||||
```
|
||||
|
||||
If you run `yum repolist` command after adding these repositories, you can able to see newly added repositories. Yes, i saw that.
|
||||
如果在添加这些仓库之后运行 `yum repolist` 命令,你就可以看到新添加的仓库了。Yes,我看到了。
|
||||
|
||||
Make a note: whenever you run “yum repolist” command, that automatically fetch updates from corresponding repository and save the caches in local system.
|
||||
注意:每当运行 “yum repolist” 命令时,该命令会自动从相应的仓库获取更新,并将缓存保存在本地系统中。
|
||||
```
|
||||
# yum repolist
|
||||
|
||||
@ -106,7 +103,7 @@ repolist: 20,909
|
||||
|
||||
```
|
||||
|
||||
Each repository has multiple channels such as Testing, Dev, Archive. You can understand this better by navigating to repository files location.
|
||||
每个仓库都有多个渠道,比如测试,开发和存档(Testing, Dev, Archive)。通过导航到仓库文件位置,你可以更好地理解这一点。
|
||||
```
|
||||
# ls -lh /etc/yum.repos.d
|
||||
total 64K
|
||||
@ -127,11 +124,11 @@ total 64K
|
||||
|
||||
```
|
||||
|
||||
### How To Enable A Repository In System
|
||||
### 如何在系统中启用一个仓库
|
||||
|
||||
When you add a new repository by default it’s enable the their stable repository that’s why we are getting the repository information when we ran “yum repolist” command. In some cases if you want to enable their Testing or Dev or Archive repo, use the following command. Also, we can enable any disabled repo using this command.
|
||||
默认情况下,当你添加一个新仓库时,会启用它的稳定仓库,这就是为什么我们运行 “yum repolist” 命令时能看到这些仓库信息。在某些情况下,如果你希望启用它的测试(Testing)、开发(Dev)或存档(Archive)仓库,可以使用以下命令。另外,我们还可以使用此命令启用任何已禁用的仓库。
|
||||
|
||||
To validate this, we are going to enable `epel-testing.repo` by running the below command.
|
||||
为了验证这一点,我们将启用 `epel-testing.repo`,运行下面的命令:
|
||||
```
|
||||
# yum-config-manager --enable epel-testing
|
||||
|
||||
@ -187,7 +184,7 @@ username =
|
||||
|
||||
```
|
||||
|
||||
Run the “yum repolist” command to check whether “epel-testing” is enabled or not. It’s enabled, i could able to see the repo.
|
||||
运行 “yum repolist” 命令来检查是否启用了 “epel-testing”。它被启用了,我可以从列表中看到它。
|
||||
```
|
||||
# yum repolist
|
||||
Loaded plugins: fastestmirror, security
|
||||
@ -220,23 +217,23 @@ repolist: 22,250
|
||||
|
||||
```
|
||||
|
||||
If you want to enable multiple repositories at once, use the below format. This command will enable epel, epel-testing, and ius repositories.
|
||||
如果你想同时启用多个仓库,使用以下格式。这个命令将启用 epel, epel-testing 和 ius 仓库。
|
||||
```
|
||||
# yum-config-manager --enable epel epel-testing ius
|
||||
|
||||
```
|
||||
|
||||
For Fedora system, run the below command to enable a repository.
|
||||
对于 Fedora 系统,运行下面的命令来启用仓库。
|
||||
```
|
||||
# dnf config-manager --set-enabled epel-testing
|
||||
|
||||
```
|
||||
|
||||
### How To Disable A Repository In System
|
||||
### 如何在系统中禁用一个仓库
|
||||
|
||||
Whenever you add a new repository, by default it enables that repository's stable channel, which is why we get the repository information when we run the “yum repolist” command. If you don't want to use a repository, disable it by running the below command.
|
||||
无论何时添加一个新的仓库,默认情况下它都会启用该仓库的稳定渠道,这也是我们运行 “yum repolist” 命令时能够看到这些仓库信息的原因。如果你不想再使用某个仓库,可以通过下面的命令来禁用它。
|
||||
|
||||
To validate this, we are going to disable `epel-testing.repo` & `ius.repo` by running below command.
|
||||
为了验证这点,我们将要禁用 `epel-testing.repo` 和 `ius.repo`,运行以下命令:
|
||||
```
|
||||
# yum-config-manager --disable epel-testing ius
|
||||
|
||||
@ -341,7 +338,7 @@ username =
|
||||
|
||||
```
|
||||
|
||||
Run the “yum repolist” command to check whether the “epel-testing” & “ius” repositories are disabled or not. They are disabled; apart from “epel”, I can no longer see those repos in the below list.
|
||||
运行 “yum repolist” 命令检查 “epel-testing” 和 “ius” 仓库是否已被禁用。它们已经被禁用了,在下面的列表中除了 “epel” 之外,已经看不到这两个仓库了。
|
||||
```
|
||||
# yum repolist
|
||||
Loaded plugins: fastestmirror, security
|
||||
@ -357,7 +354,7 @@ repolist: 21,051
|
||||
|
||||
```
|
||||
|
||||
Alternatively, we can run the following command to see the details.
|
||||
或者,我们可以运行以下命令查看详细信息。
|
||||
```
|
||||
# yum repolist all | grep "epel*\|ius*"
|
||||
* epel: mirror.steadfast.net
|
||||
@ -382,16 +379,15 @@ ius-testing-source IUS Community Packages for Enterprise disabled
|
||||
|
||||
```
|
||||
|
||||
For Fedora system, run the below command to disable a repository.
|
||||
对于 Fedora 系统,运行以下命令来禁用一个仓库。
|
||||
```
|
||||
# dnf config-manager --set-disabled epel-testing
|
||||
|
||||
```
|
||||
|
||||
Alternatively this can be done by editing the appropriate repo file manually. To do, open the corresponding repo file and change the value from `enabled=0`
|
||||
to `enabled=1` (To enable the repo) or from `enabled=1` to `enabled=0` (To disable the repo).
|
||||
或者,可以通过手动编辑适当的 repo 文件来完成。为此,打开相应的 repo 文件并将值从 `enabled=0` 改为 `enabled=1`(启用仓库)或从 `enabled=1` 变为 `enabled=0`(禁用仓库)。
|
||||
|
||||
From:
|
||||
即从:
|
||||
```
|
||||
[epel]
|
||||
name=Extra Packages for Enterprise Linux 6 - $basearch
|
||||
@ -403,8 +399,7 @@ gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
|
||||
|
||||
```
|
||||
|
||||
To:
|
||||
改为
|
||||
```
|
||||
[epel]
|
||||
name=Extra Packages for Enterprise Linux 6 - $basearch
|
||||
@ -423,7 +418,7 @@ via: https://www.2daygeek.com/how-to-add-enable-disable-a-repository-dnf-yum-con
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,163 +0,0 @@
|
||||
# 关于 C++ 的所有争论?Bjarne Stroustrup 警告说,关于 C++ 未来的那些计划很危险
|
||||
|
||||
![](https://regmedia.co.uk/2018/06/15/shutterstock_38621860.jpg?x=442&y=293&crop=1)
|
||||
|
||||
今年早些时候,我们**访谈**了 Bjarne Stroustrup,他是 C++ 语言的创始人,摩根士丹利技术部门的董事总经理,美国哥伦比亚大学计算机科学的客座教授,他写了[一封信][1]邀请那些关注编程语言演进的人去“想想瓦萨号!”
|
||||
|
||||
毫无疑问,对于丹麦人来说,这句话很容易理解,而那些对于 17 世纪的斯堪的纳维亚历史了解不多的人,还需要展开说一下。瓦萨号是一艘瑞典军舰,由国王 Gustavus Adolphus 委托建造。它是在 1628 年 8 月 10 日首航时,当时波罗的海国家中最强大的军舰,但是它在首航几分钟之后就沉没了。
|
||||
|
||||
巨大的瓦萨号有一个难以解决的设计缺陷:头重脚轻,以至于它被[一阵狂风刮翻了][2]。通过这段翻船历史的回忆,Stroustrup 警示了 C++ 所面临的风险,因为现在越来越多的特性被添加到了 C++ 中。
|
||||
|
||||
现在已经提议了不少这样的特性。Stroustrup 在他的信中引用了 43 条提议。他认为那些参与 C++ 语言 ISO 标准演进的人(指众所周知的 [WG21][3]),正在努力地让语言更高级,但他们的努力方向却并不一致。
|
||||
|
||||
在他的信中,他写道:
|
||||
|
||||
> 分开来看,许多提议都很有道理。但将它们综合到一起,这些提议是很愚蠢的,将危害 C++ 的未来。
|
||||
|
||||
他明确表示,不希望 C++ 重蹈瓦萨号的覆辙,这种渐近式的改进将敲响 C++ 的丧钟。相反,应该吸取瓦萨号的教训,构建一个坚实的基础,吸取经验教训,并做彻底的测试。
|
||||
|
||||
在瑞士拉普斯威尔(Rapperswill)召开的 C++ 标准化委员会会议之后,本月早些时候,Stroustrup 接受了_《The Register》_ 的采访,回答了有关 C++ 语言下一步发展方向方面的几个问题。(最新版是 C++17,它去年刚发布;下一个版本是 C++20,它正在开发中,预计于 2020 年发布。)
|
||||
|
||||
**Register:在你的信件《想想瓦萨号!》中,你写道:**
|
||||
|
||||
> C++11 开始打下的基础还不够完整,而 C++17 在使这个基础更加稳固、规范和完整方面几乎没有什么改善。相反地,它却增加了重要接口的复杂度,让人们需要学习的特性数量越来越多。C++ 可能会在这种提议的重压之下崩溃 —— 这些提议大多数都不成熟。我们不应该花费大量的时间为专家级用户们(比如我们自己)去创建越来越复杂的东西。
|
||||
|
||||
**对新人来说,C++ 很难吗?如果是这样,你认为怎样的特性让新人更易理解?**
|
||||
|
||||
**Stroustrup:**C++ 的有些东西对于新人来说确实很难。
|
||||
|
||||
换句话说,C++ 中有些东西对于新人来说,比起 C 或上世纪九十年代的 C++ 更容易理解了。而难点是让大型社区专注于这些部分,并且帮助新手和普通 C++ 用户去规避那些对高级库实现提供支持的部分。
|
||||
|
||||
我建议使用 [C++ 核心准则][4] 作为实现上述目标的一个辅助。
|
||||
|
||||
此外,我的 “C++ 教程” 也可以帮助人们在使用现代 C++ 时走上正确的方向,而不会迷失在自上世纪九十年代以来的复杂性中,或困惑于只有专家级的用户才能理解的东西中。第二版的 “C++ 教程” 涵盖了 C++17 和部分 C++20 的内容,这本书即将要出版了。
|
||||
|
||||
我和其他人一起给没有编程经验的大一新生教过 C++,只要你不去深挖编程语言的每个晦涩难懂的角落,而是把注意力集中到 C++ 中最主流的部分,新手在三个月内就可以学会 C++。
|
||||
|
||||
“让简单的东西保持简单” 是我长期追求的目标。比如 C++11 的 `range-for` 循环:
|
||||
|
||||
```
|
||||
for (int& x : v) ++x; // increment each element of the container v
|
||||
|
||||
```
|
||||
|
||||
`v` 的位置可以是任何容器。在 C 和 C 风格的 C++ 中,它可能看到的是这样:
|
||||
|
||||
```
|
||||
for (int i=0; i<MAX; i++) ++v[i]; // increment each element of the array v
|
||||
|
||||
```
|
||||
|
||||
一些人抱怨说添加了 `range-for` 循环让 C++ 变得更复杂了,很显然,他们是对的,因为这确实添加了一个新特性,但它却让 C++ 用起来更简单,同时它还消除了传统 for 循环中容易出现的一些常见错误。
|
||||
|
||||
另外的一个例子是 C++11 的标准线程库。它比起使用 POSIX 或直接使用 Windows 的 C API 来说更简单,并且更不易出错。
|
||||
|
||||
**Register:你如何看待 C++ 现在的状况?**
|
||||
|
||||
**Stroustrup:** C++11 是 C++ 的最重大的改进版,并且在 C++14 上全面完成了改进工作。C++17 添加了相当多的新特性,但是没有提供对新技术的很多支持。C++20 目前看上去可能会成为一个重大改进版。编译器的状况和标准库实现的非常好,非常接近最新的标准。C++17 已经可用。持续改进了对工具的支持。已经有了许多第三方的库和许多新工具。而不幸的是,这些东西不太好找到。
|
||||
|
||||
我在《想想瓦萨号!》一文中所表达的担忧与标准化过程有关,对新东西的过度热情与完美主义的组合拖延了重大的改进。“追求完美是优秀的敌人”,在六月份拉普斯威尔的会议上有 160 人参与。在这样一个数量庞大和多样化的人群中很难取得一致意见。这就导致了专家们更多地为他们自己去设计,而不是为了整个社区。
|
||||
|
||||
**Register: C++ 是否有一个期望的状况,或为了期望的适应性而努力简化以满足程序员们在任意时间的需要?**
|
||||
|
||||
**Stroustrup:** 二者都有。我很乐意看到 C++ 支持彻底保证类型安全和资源安全的编程方式。这不应该通过限制适用性或增加成本来实现,而是应该通过改进的表达能力和性能来实现。我认为可以做到这些,通过让程序员使用更好的(更易用的)语言可以实现这一点。
|
||||
|
||||
终极目标不会马上实现,也不会单靠语言的设计来实现。为了让编程更高效,我们需要通过改进语言特性、最好的库、静态分析、以及规则的组合来实现。C++ 核心准则是我提升 C++ 代码质量的广泛而长远的方法。
|
||||
|
||||
**Register:对于 C++ 是否有明显的风险?如果有,它是如何产生的?(如,改进过于缓慢,新出现的低级语言,等等,从你的信中看,似乎是提议过多。)**
|
||||
|
||||
**Stroustrup:**毫无疑问,今年我们已经收到了 400 个提议。当然,它们并不都是新提议。许多提议都与规范语言和标准库这一必需而乏味的工作相关,但是量大到难以管理。你可以在 WG21 的网站上找到所有这些文章。
|
||||
|
||||
我写了《想想瓦萨号!》这封信作为一个呼吁。我感受到了这种压力,为解决紧急需要和赶时髦而增加语言特性,而不是去加强语言基础(比如,改善静态类型系统)。增加的任何新东西,无论它是多小都会产生成本,比如实现、学习、工具升级。重大的特性是那些改变我们编程思想的特性。那才是我们必须关注的东西。
|
||||
|
||||
委员会已经设立了一个”指导小组“,这个小组由在语言、标准库、实现、以及实际使用领域中拥有极强履历的人组成。我是其中的成员之一。我们负责为重点领域写一些关于方向、设计理念和建议方面的东西。
|
||||
|
||||
对于 C++20,我们建议去关注:
|
||||
|
||||
```
|
||||
概念
|
||||
模块(提供适当的模块化和令人称奇的编译时改进)
|
||||
Ranges(包括一些无限序列的扩展)
|
||||
标准库中的网络概念
|
||||
```
|
||||
|
||||
在拉普斯威尔会议之后,虽然带来的模块和网络化很显然只是一种延伸,但机会还是有的。我是一个乐观主义者,并且委员会的成员们都非常努力。
|
||||
|
||||
我并不担心其它语言或新语言会取代它。我喜欢编程语言。如果一个新的语言提供了其它编程语言没有提供的非常有用的东西,那它就是我们从中学习的榜样,当然,每个语言都有它自己的问题。许多 C++ 的问题都与它广泛的应用领域、大量的使用人群和过度的热情有关。大多数语言的社区都喜欢有这样的问题。
|
||||
|
||||
**Register:关于 C++ 你是否重新考虑过任何架构方面的决策?**
|
||||
|
||||
**Stroustrup:** 当我使用一些新的编程语言时,我经常思考 C++ 原来的决策和设计。例如,可以看我的《编程的历史》论文第 1、2 部分。
|
||||
|
||||
并没有让我觉得很懊悔的重大决策,如果让我重新再做一次决策,几乎不会对现有的特性做任何不同的改变。
|
||||
|
||||
与以前一样,能够直接处理硬件加上零开销的抽象是设计的指导思想。使用构造函数和析构函数去处理资源是关键(RAII),STL 就是在 C++ 库中能够做什么的一个很好的例子。
|
||||
|
||||
**Register:在 2011 年采纳的每三年发布一个标准的节奏是否仍然有效?我之所以这样问是因为 Java 为了更快地迭代,一直在解决需求。**
|
||||
|
||||
**Stroustrup:**我认为 C++20 将会按时发布(就像 C++14 和 C++17 那样),并且主要的编译器也会立即遵从它。我也希望 C++20 比起 C++17 能有重大的改进。
|
||||
|
||||
对于其它语言如何管理它们的发行版我并不焦虑。C++ 是由一个遵循 ISO 规则的委员会来管理的,并不是由一个大公司或一个”创造它的权威“来管理。这一点不会改变。关于 ISO 标准,C++ 每三年发布一次的周期是一个激动人心的创举。标准的周期是 5 或 10 年。
|
||||
|
||||
**Register:在你的信中你写道:**
|
||||
|
||||
```
|
||||
我们需要一个能够被”普通程序员“使用的条理还算清楚的编程语言,他们主要关心的是能否按时高质量地交付他们的应用程序。
|
||||
```
|
||||
|
||||
对语言的改变是否能够去解决这个问题,或者还可能涉及到更多容易获得的工具和教育支持?
|
||||
|
||||
**Stroustrup:**我努力去宣传我的理念 —— C++ 是什么以及如何使用它,并且我鼓励其他人也和我一样去做。
|
||||
|
||||
特别是,我鼓励讲师和作者们向 C++ 程序员们宣扬有用易用的理念,而不是去示范复杂的示例和技术来展示他们自己有多高明。我在 2017 年的 CppCon 大会上的演讲主题就是”学习和教学 C++“,并且也指出 C++ 需要更好的工具。
|
||||
|
||||
我在演讲中提到构建支持和包管理器。这些历来都是 C++ 的弱点项。标准化委员会现在有一个工具研究小组,或许不久的将来也会有一个教育研究小组。
|
||||
|
||||
C++ 的社区以前是相当缺乏组织的,但是在过去的五年里,为了满足社区对新闻和支持的需要,出现了很多会议和博客。CppCon、isocpp.org、以及 Meeting C++ 就是这样的例子。
|
||||
|
||||
在委员会中做设计是非常困难的。但是,对于所有的大型项目来说,委员会又是必不可少的。我很关注它们,但是为了成功,关注和面对问题是必需的。
|
||||
|
||||
**Register:你如何看待 C++ 社区的流程?在沟通和决策方面你希望看到哪些变化?**
|
||||
|
||||
**Stroustrup:**C++ 并没有企业管理的”社区流程“;它有一个 ISO 标准流程。我们不能对 ISO 的角色做重大的改变。理想的情况是,我们设立一个小的全职的”秘书处“来做最终决策和方向管理,但这种理想情况是不会出现的。相反,我们有成百上千的人在线来讨论,大约有 160 人在技术问题上进行投票,大约有 70 组织和 11 个国家在结果提议上正式投票。这样是很混乱的,但是在将来某个时候我们会让它好起来。
|
||||
|
||||
**Register:最终你认为那些即将推出的 C++ 特性中,对 C++ 用户最有帮助的是哪些?**
|
||||
|
||||
**Stroustrup:**
|
||||
|
||||
```
|
||||
概念(concepts),它能大大地简化泛型编程
|
||||
并行算法 – 没有比使用现代化硬件的并发特性更好的方法了
|
||||
协程,如果委员会能够确定在 C++20 上推出。
|
||||
模块改进了组织源代码的方式,并且大幅改善了编译时间。我希望能有这样的模块,但是它还不能确定能否在 C++20 上推出。
|
||||
一个标准的网络库,但是它还不能确定能否在 C++20 上推出。
|
||||
```
|
||||
|
||||
此外:
|
||||
|
||||
```
|
||||
Contracts(运行时检查的先决条件、后置条件、和断言)可能对许多人都非常重要。
|
||||
date 和 time-zone 支持库可能对许多人(行业)非常重要。
|
||||
```
|
||||
|
||||
**Register:最后你还有需要向读者说的话吗?**
|
||||
|
||||
**Stroustrup:**如果 C++ 标准化委员会能够专注于重大问题,去解决重大问题,那么 C++20 将会是非常优秀的。但是在 C++20 推出之前,我们的 C++17 仍然是非常好的,它将改变很多人关于 C++ 已经落伍的旧印象。®
|
||||
|
||||
------
|
||||
|
||||
via: https://www.theregister.co.uk/2018/06/18/bjarne_stroustrup_c_plus_plus/
|
||||
|
||||
作者:[Thomas Claburn][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.theregister.co.uk/Author/3190
|
||||
[1]: http://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0977r0.pdf
|
||||
[2]: https://www.vasamuseet.se/en/vasa-history/disaster
|
||||
[3]: http://open-std.org/JTC1/SC22/WG21/
|
||||
[4]: https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md
|
||||
[5]: https://go.theregister.co.uk/tl/1755/shttps://continuouslifecycle.london/
|
@ -1,39 +1,40 @@
|
||||
SDKMAN – A CLI Tool To Easily Manage Multiple Software Development Kits
|
||||
SDKMAN – 轻松管理多个软件开发套件 (SDK) 的命令行工具
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/08/sdkman-720x340.png)
|
||||
|
||||
Are you a developer who often install and test applications on different SDKs? I’ve got a good news for you! Say hello to **SDKMAN** , a CLI tool that helps you to easily manage multiple software development kits. It provides a convenient way to install, switch, list and remove candidates. Using SDKMAN, you can now manage parallel versions of multiple SDKs easily on any Unix-like operating system. It allows the developers to install Software Development Kits for the JVM such as Java, Groovy, Scala, Kotlin and Ceylon. Ant, Gradle, Grails, Maven, SBT, Spark, Spring Boot, Vert.x and many others are also supported. SDKMAN is free, light weight, open source and written in **Bash**.
|
||||
你是否是一个经常在不同的 SDK 下安装和测试应用的开发者?我有一个好消息要告诉你!来认识一下 **SDKMAN** 吧,它是一个可以帮你轻松管理多个 SDK 的命令行工具。它为安装、切换、列出和移除候选 SDK 提供了一种简便的方式。有了 SDKMAN,你可以在任何类 Unix 操作系统上轻松地并行管理多个 SDK 的多个版本。它允许开发者安装面向 JVM 的 SDK,例如 Java、Groovy、Scala、Kotlin 和 Ceylon;此外,Ant、Gradle、Grails、Maven、SBT、Spark、Spring Boot、Vert.x 以及许多其他工具也同样受支持。SDKMAN 是免费、轻量、开源、使用 **Bash** 编写的程序。
|
||||
|
||||
### Installing SDKMAN
|
||||
### 安装 SDKMAN
|
||||
|
||||
Installing SDKMAN is trivial. First, make sure you have installed **zip** and **unzip** applications. It is available in the default repositories of most Linux distributions. For instance, to install unzip on Debian-based systems, simply run:
|
||||
安装 SDKMAN 很简单。首先,确保你已经安装了 **zip** 和 **unzip** 这两个应用。它们在大多数的 Linux 发行版的默认仓库中。
|
||||
例如,在基于 Debian 的系统上安装 unzip,只需要运行:
|
||||
```
|
||||
$ sudo apt-get install zip unzip
|
||||
|
||||
```
|
||||
|
||||
Then, install SDKMAN using command:
|
||||
然后使用下面的命令安装 SDKMAN:
|
||||
```
|
||||
$ curl -s "https://get.sdkman.io" | bash
|
||||
|
||||
```
|
||||
|
||||
It’s that simple. Once the installation is completed, run the following command:
|
||||
在安装完成之后,运行以下命令:
|
||||
```
|
||||
$ source "$HOME/.sdkman/bin/sdkman-init.sh"
|
||||
|
||||
```
|
||||
|
||||
If you want to install it in a custom location of your choice other than **$HOME/.sdkman** , for example **/usr/local/** , do:
|
||||
如果你希望自定义安装到其他位置,例如 **/usr/local/**,你可以这样做:
|
||||
```
|
||||
$ export SDKMAN_DIR="/usr/local/sdkman" && curl -s "https://get.sdkman.io" | bash
|
||||
|
||||
```
|
||||
|
||||
Make sure your user has full access rights to this folder.
|
||||
确保你的用户有足够的权限访问这个目录。
|
||||
|
||||
Finally, check if the installation is succeeded using command:
|
||||
最后,使用下面的命令检查安装是否成功:
|
||||
```
|
||||
$ sdk version
|
||||
==== BROADCAST =================================================================
|
||||
@ -46,17 +47,17 @@ SDKMAN 5.7.2+323
|
||||
|
||||
```
|
||||
|
||||
Congratulations! SDKMAN has been installed. Let us go ahead and see how to install and manage SDKs.
|
||||
恭喜你!SDKMAN 已经安装完成了。让我们接下来看如何安装和管理 SDKs 吧。
|
||||
|
||||
### Manage Multiple Software Development Kits
|
||||
### 管理多个 SDK
|
||||
|
||||
To view the list of available candidates(SDKs), run:
|
||||
查看可用的 SDK 清单,运行:
|
||||
```
|
||||
$ sdk list
|
||||
|
||||
```
|
||||
|
||||
Sample output would be:
|
||||
将会输出:
|
||||
```
|
||||
================================================================================
|
||||
Available Candidates
|
||||
@ -81,15 +82,15 @@ tasks.
|
||||
|
||||
```
|
||||
|
||||
As you can see, SDKMAN list one candidate at a time along with the description of the candidate and it’s official website and the installation command. Press ENTER key to list the next candidates.
|
||||
就像你看到的,SDKMAN 每次列出一个候选 SDK,同时给出该 SDK 的描述信息、官方网址和安装命令。按回车键可以查看下一个。
|
||||
|
||||
To install a SDK, for example Java JDK, run:
|
||||
安装一个新的 SDK,例如 Java JDK,运行:
|
||||
```
|
||||
$ sdk install java
|
||||
|
||||
```
|
||||
|
||||
Sample output:
|
||||
将会输出:
|
||||
```
|
||||
Downloading: java 8.0.172-zulu
|
||||
|
||||
@ -108,27 +109,27 @@ Setting java 8.0.172-zulu as default.
|
||||
|
||||
```
|
||||
|
||||
If you have multiple SDKs, it will prompt if you want the currently installed version to be set as **default**. Answering **Yes** will set the currently installed version as default.
|
||||
如果你安装了多个 SDK,它将会提示你是否想要将当前安装的版本设置为 **默认版本**。回答 **Yes** 将会把当前版本设置为默认版本。
|
||||
|
||||
To install particular version of a SDK, do:
|
||||
要安装某个 SDK 的特定版本,运行:
|
||||
```
|
||||
$ sdk install ant 1.10.1
|
||||
|
||||
```
|
||||
|
||||
If you already have local installation of a specific candidate, you can set it as local version like below.
|
||||
如果你之前已经在本地安装了一个 SDK,你可以像下面这样设置它为本地版本。
|
||||
```
|
||||
$ sdk install groovy 3.0.0-SNAPSHOT /path/to/groovy-3.0.0-SNAPSHOT
|
||||
|
||||
```
|
||||
|
||||
To list a particular candidates versions:
|
||||
列出一个 SDK 的多个版本:
|
||||
```
|
||||
$ sdk list ant
|
||||
|
||||
```
|
||||
|
||||
Sample output:
|
||||
将会输出
|
||||
```
|
||||
================================================================================
|
||||
Available Ant Versions
|
||||
@ -147,21 +148,21 @@ Available Ant Versions
|
||||
|
||||
```
|
||||
|
||||
Like I already said, If you have installed multiple versions, SDKMAN will prompt you if you want the currently installed version to be set as **default**. You can answer Yes to set it as default. Also, you can do that later by using the following command:
|
||||
像我之前说的,如果你安装了多个版本,SDKMAN 会提示你是否想要设置当前安装的版本为 **默认版本**。你可以回答 Yes 设置它为默认版本。当然,你也可以在稍后使用下面的命令设置:
|
||||
```
|
||||
$ sdk default ant 1.9.9
|
||||
|
||||
```
|
||||
|
||||
The above command will set Apache Ant version 1.9.9 as default.
|
||||
上面的命令将会设置 Apache Ant 1.9.9 为默认版本。
|
||||
|
||||
You can choose which version of an installed candidate to use by using the following command:
|
||||
你可以根据自己的需要选择使用任何已安装的 SDK 版本,仅需运行以下命令:
|
||||
```
|
||||
$ sdk use ant 1.9.9
|
||||
|
||||
```
|
||||
|
||||
To check what is currently in use for a Candidate, for example Java, run:
|
||||
检查某个具体 SDK 当前的版本号,例如 Java,运行:
|
||||
```
|
||||
$ sdk current java
|
||||
|
||||
@ -169,7 +170,7 @@ Using java version 8.0.172-zulu
|
||||
|
||||
```
|
||||
|
||||
To check what is currently in use for all Candidates, for example Java, run:
|
||||
检查所有当下在使用的 SDK 版本号,运行:
|
||||
```
|
||||
$ sdk current
|
||||
|
||||
@ -180,19 +181,19 @@ java: 8.0.172-zulu
|
||||
|
||||
```
|
||||
|
||||
To upgrade an outdated candidate, do:
|
||||
升级过时的 SDK,运行:
|
||||
```
|
||||
$ sdk upgrade scala
|
||||
|
||||
```
|
||||
|
||||
You can also check what is outdated for all Candidates as well.
|
||||
你也可以检查所有的 SDKs 中还有哪些是过时的。
|
||||
```
|
||||
$ sdk upgrade
|
||||
|
||||
```
|
||||
|
||||
SDKMAN has offline mode feature that allows the SDKMAN to function when working offline. You can enable or disable the offline mode at any time by using the following commands:
|
||||
SDKMAN 有离线模式,可以让 SDKMAN 在离线时也正常运作。你可以使用下面的命令在任何时间开启或者关闭离线模式:
|
||||
```
|
||||
$ sdk offline enable
|
||||
|
||||
@ -200,13 +201,13 @@ $ sdk offline disable
|
||||
|
||||
```
|
||||
|
||||
To remove an installed SDK, run:
|
||||
要移除已安装的 SDK,运行:
|
||||
```
|
||||
$ sdk uninstall ant 1.9.9
|
||||
|
||||
```
|
||||
|
||||
For more details, check the help section.
|
||||
要了解更多的细节,参阅帮助章节。
|
||||
```
|
||||
$ sdk help
|
||||
|
||||
@ -238,15 +239,15 @@ version : where optional, defaults to latest stable if not provided
|
||||
|
||||
```
|
||||
|
||||
### Update SDKMAN
|
||||
### 更新 SDKMAN
|
||||
|
||||
The following command installs a new version of SDKMAN if it is available.
|
||||
如果有可用的新版本,可以使用下面的命令安装:
|
||||
```
|
||||
$ sdk selfupdate
|
||||
|
||||
```
|
||||
|
||||
SDKMAN will also periodically check for any updates and let you know with instruction on how to update.
|
||||
SDKMAN 也会定期检查更新,并给出如何进行更新的提示。
|
||||
```
|
||||
WARNING: SDKMAN is out-of-date and requires an update.
|
||||
|
||||
@ -255,30 +256,29 @@ Adding new candidates(s): scala
|
||||
|
||||
```
|
||||
|
||||
### Remove cache
|
||||
### 清除缓存
|
||||
|
||||
It is recommended to clean the cache that contains the downloaded SDK binaries for time to time. To do so, simply run:
|
||||
建议时不时的清理缓存(包括那些下载的 SDK 的二进制文件)。仅需运行下面的命令就可以了:
|
||||
```
|
||||
$ sdk flush archives
|
||||
|
||||
```
|
||||
|
||||
It is also good to clean temporary folder to save up some space:
|
||||
清理临时文件夹来节省一些空间也是个好习惯:
|
||||
```
|
||||
$ sdk flush temp
|
||||
|
||||
```
|
||||
|
||||
### Uninstall SDKMAN
|
||||
### 卸载 SDKMAN
|
||||
|
||||
If you don’t need SDKMAN or don’t like it, remove as shown below.
|
||||
如果你觉得不需要或者不喜欢 SDKMAN,可以使用下面的命令删除。
|
||||
```
|
||||
$ tar zcvf ~/sdkman-backup_$(date +%F-%kh%M).tar.gz -C ~/ .sdkman
|
||||
$ rm -rf ~/.sdkman
|
||||
|
||||
```
|
||||
|
||||
Finally, open your **.bashrc** , **.bash_profile** and/or **.profile** files and find and remove the following lines.
|
||||
最后打开你的 **.bashrc**,**.bash_profile** 和/或者 **.profile**,找到并删除下面这几行。
|
||||
```
|
||||
#THIS MUST BE AT THE END OF THE FILE FOR SDKMAN TO WORK!!!
|
||||
export SDKMAN_DIR="/home/sk/.sdkman"
|
||||
@ -286,11 +286,14 @@ export SDKMAN_DIR="/home/sk/.sdkman"
|
||||
|
||||
```
|
||||
|
||||
If you use ZSH, remove the above line from the **.zshrc** file.
|
||||
如果你使用的是 ZSH,就从 **.zshrc** 中删除上面这一行。
|
||||
|
||||
And, that’s all for today. I hope you find SDKMAN useful. More good stuffs to come. Stay tuned!
|
||||
这就是所有的内容了。我希望 SDKMAN 可以帮到你。还有更多的干货即将到来。敬请期待!
|
||||
|
||||
Cheers!
|
||||
祝近祺!
|
||||
|
||||
|
||||
:)
|
||||
|
||||
|
||||
|
||||
@ -300,7 +303,7 @@ via: https://www.ostechnix.com/sdkman-a-cli-tool-to-easily-manage-multiple-softw
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[dianbanjiu](https://github.com/dianbanjiu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,162 @@
|
||||
5 个适合系统管理员使用的告警可视化工具
|
||||
======
|
||||
这些开源的工具能够通过输出帮助用户了解系统的运行状况,并对可能发生的潜在问题作出告警。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-)
|
||||
|
||||
你大概已经知道告警可视化工具是用来做什么的了。下面我们就来说一下,为什么要讨论这样的工具,甚至有些系统专门将可视化作为特有的功能。
|
||||
|
||||
可观察性的概念来自控制理论,这个概念描述了我们通过对系统的输入和系统的输出来了解系统的能力。本文将重点介绍具有可观察性的输出组件。
|
||||
|
||||
告警可视化工具可以对系统的输出进行分析,进而对输出的信息结构化。告警实际上是对系统异常状态的描述,而可视化则是让用户能够直观理解的结构化表示。
|
||||
|
||||
### 常见的可视化告警
|
||||
|
||||
#### 告警
|
||||
|
||||
首先要明确一下告警的含义。在人员无法响应告警内容情况下,不应该发送告警。包括那些发给多个人,但只有其中少数人可以响应的告警,以及系统中的每个异常都触发的告警。因为这样会产生告警疲劳,告警接收者也往往会对这些过多的告警采取忽视的态度。
|
||||
|
||||
例如,如果管理员每天都会收到告警系统发来的数百封告警邮件,他就很容易会忽略告警系统的所有邮件。除非问题真正发生,并且受到了客户或上级的询问时,管理员才会重新重视告警信息。在这种情况下,告警已经失去了原有的意义和用途。
|
||||
|
||||
告警不是一个持续的信息流或者状态更新。告警的目的在于暴露系统无法自动恢复的问题,而且告警应该只发送给最有可能解决问题的人员。超出这个定义的内容都不应该作为告警,否则将会对实际工作造成不良的影响。
|
||||
|
||||
不同的告警体系都会有各自的告警类型,因此不能用优先级(P1-P5)或者诸如“信息”、“警告”、“严重”之类的字眼来一概而论,而应该使用一些通用的分类方式来对复杂系统事件进行描述。
|
||||
|
||||
刚才我提到了“信息”这个告警类型,但实际上告警不应该只是一条信息,尽管有些人并不这么认为。我觉得,如果一个告警不需要发送给任何人,那它就不算是告警,而只是系统中被当作告警记录下来的一些数据点,代表了一些应该知晓但不需要响应的事件。它更适合作为告警可视化工具的一部分,而不是会触发通知的事件。《[实用监控][1]》是这个领域的必读书籍,其作者 Mike Julian 在书中就介绍了他自己关于告警的看法。
|
||||
|
||||
而非信息警报则代表告警需要被响应以及需要相关的操作。我将这些告警大致分为内部故障和外部故障两种类型,而对于大多数公司来说,通常会有两个以上的级别来确定响应告警的优先级。系统性能下降就是一种故障,因为这种现象对用户的影响通常都是未知的。
|
||||
|
||||
内部故障比外部故障的优先级低,但也需要快速响应。内部故障通常包括公司员工使用的内部系统或仅对公司员工可见的应用故障。
|
||||
|
||||
外部则包括任何会产生业务影响的系统故障,但不包括影响系统更新的故障。外部故障一般包括客户端应用故障、数据库故障和导致系统可用性或一致性失效的网络故障,这些都会影响用户的正常使用。对于不直接影响用户的依赖组件故障也属于外部故障,随着应用程序的不断运行,一旦依赖组件发生故障,系统的性能也会受到波及。这种情况对于使用某些外部服务或数据源的系统来说很常见,尽管这些外部服务或数据源对于可能不涉及到系统的主要功能,但是当系统在处理相关依赖组件的错误时可能会出现较明显的延迟。
|
||||
|
||||
### 可视化
|
||||
|
||||
可视化的种类有很多,我就不一一赘述了。这是一个有趣的研究领域,在我这些年的数据分析经历当中,学习和应用可视化方面的知识可以说是相当有挑战性。我们需要将复杂的系统输出通过直观的方式来向他人展示,才能有效地把信息传播出去。[Google Charts][2] 和 [Tableau][3] 都提供了很多可视化方面的工具。下面将会介绍一些最常见的可视化创新解决方案。
|
||||
|
||||
#### 折线图
|
||||
|
||||
折线图可能是最常见的可视化方式了,它可以让用户很直观地按照时间维度了解系统的情况。系统中每个不同的指标都会以一条独立的折线在图表中体现。但当同一个图表中同时存在多条折线时,就可能会对阅读有所影响(如下图所示),所以大多数情况下都可以选择仅查看其中的少数几条折线,而不是让所有折线同时显示。如果某个指标的数值产生了大于正常范围的波动,就会很容易发现。例如下图中异常的紫线、黄线、浅蓝线。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart.png)
|
||||
|
||||
折线图的另一个用法是可以将多条折线堆积起来以显示它们之间的关系。例如对于通过折线图反映服务器的请求数量,可以单独显示每台服务器上的请求,也可以把多台服务器的数据合在一起显示。这就可以在同一个图表中灵活查看整个系统中每个实例的情况了。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart_aggregate.png)
|
||||
|
||||
#### 热力图
|
||||
|
||||
另一种常见的可视化方式是热力图。热力图与条形图比较类似,还可以在条形图的基础上显示某部分在整体中占比的变化情况。例如在查看网络请求延时的时候,就可以使用热力图快速查看到所有网络请求的总体趋势和分布情况,另外,它可以使用不同颜色来表示不同部分的数值。
|
||||
|
||||
在以下这个热力图中,通过竖直方向上每个时间段的色块数量分布,可以清楚地看到大部分数据集中在整个范围的中心位置。我们还可以发现,大多数时间段的色块分布都是比较宽松的,而 14:00 到 15:00 这一段则分布得很密集,这样的分布有可能意味着一种不健康的状态。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_histogram.png)
|
||||
|
||||
#### 仪表图
|
||||
|
||||
还有一种常见的可视化方式是仪表图,用户可以通过仪表图快速了解单个指标。仪表一般用于单个指标的显示,例如车速表代表汽车的行驶速度、油量表代表油箱中的汽油量等等。大多数的仪表图都有一个共通点,就是会划分出所示指标的对应状态。如下图所示,绿色表示正常的状态,橙色表示不良的状态,而红色则表示极差的状态。中间一行则模拟了真实仪表的显示情况。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_gauges.png)
|
||||
|
||||
上面的图表中,除了常规仪表样式的显示方式之外,还有较为直接的数字显示方式,配合相同的配色方案,一眼就可以看出各个指标所处的状态,这一点与仪表的特点类似。所以,最下面一行可能是仪表图的最佳显示方式,用户不需要仔细阅读,就可以大致了解各个指标的不同状态。这种类型的可视化是我最常用的类型,在数秒钟之内,我就可以全面地总览系统各方面的运行情况。
|
||||
|
||||
#### 火焰图
|
||||
|
||||
由 [Netflix 的 Brendan Gregg][4] 在 2011 年开始使用的火焰图是一种较为少见的可视化方式。它不像仪表图那样可以从图表中快速得到关键信息,通常只会在需要解决某个应用的问题的时候才会用到这种图表。火焰图主要用于展示 CPU、内存以及相关的栈帧,X 轴按字母顺序将帧一一列出,而 Y 轴则表示堆栈的深度。图中每个矩形都是一个标明了所调用函数的堆栈帧。矩形越宽,就表示它在堆栈中出现得越频繁。在分析系统性能问题的时候,火焰图能够起到很大的作用,大家不妨尝试一下。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_flame_graph_0.png)
|
||||
|
||||
### 工具的选择
|
||||
|
||||
在告警工具方面,有几个商用的工具相当不错。但由于这是一篇介绍开源技术的文章,我只会介绍那些已经被广泛使用的免费工具。希望你也能够为这些工具贡献你自己的代码,让它们更加完善。
|
||||
|
||||
### 告警工具
|
||||
|
||||
#### Bosun
|
||||
|
||||
如果你的电脑出现问题,得多亏 Stack Exchange 你才能在网上查到解决办法。Stack Exchange 以众包问答的模式运营着很多不同类型的网站。其中就有广受开发者欢迎的 [Stack Overflow][5],以及运维方面有名的 [Super User][6]。除此以外,从育儿经验到科幻小说、从哲学讨论到单车论坛,Stack Exchange 都有涉猎。
|
||||
|
||||
Stack Exchange 开源了它的告警管理系统 [Bosun][7],差不多在同一时期,Prometheus 及其 [AlertManager][8] 也发布了。这两个系统有不少共通点。Bosun 和 Prometheus 一样使用 Golang 开发,但 Bosun 比 Prometheus 更为强大,因为它可以使用<ruby>指标聚合<rt>metrics aggregation</rt></ruby>以外的方式与系统交互。Bosun 还可以从日志收集系统中提取数据,并且支持 Graphite、InfluxDB、OpenTSDB 和 Elasticsearch。
|
||||
|
||||
Bosun 的架构包括一个二进制服务文件、一个诸如 OpenTSDB 的后端、Redis 以及 [scollector agent][9]。scollector agent 会自动检测主机上正在运行的服务,并反馈这些进程和其它系统资源的情况。这些数据将被发送到后端。随后 Bosun 二进制服务文件会向后端发起查询,确定是否需要触发告警。像 [Grafana][10] 这样的工具也可以通过一个通用接口查询 Bosun 的底层后端。而 Redis 则用于存储 Bosun 的状态信息和元数据。
|
||||
|
||||
Bosun 有一个非常巧妙的功能,就是可以根据历史数据来测试告警。这是我几年前在使用 Prometheus 的时候就非常需要的功能,当时我有一个异常的数据需要产生告警,但没有一个可以用于测试的简便方法。为了确保告警能够正常触发,我不得不造出对应的数据来进行测试。而 Bosun 让这个步骤的耗时大大缩短。
|
||||
|
||||
Bosun 更是涵盖了所有常用的功能,包括简单的图形化表示和告警的创建。它还带有强大的用于编写告警规则的表达式语言。但 Bosun 默认只带有电子邮件通知配置和 HTTP 通知配置,因此如果需要连接到 Slack 或其它工具,就需要对配置作出更大程度的定制化。类似于 Prometheus,Bosun 还可以使用模板通知,你可以使用 HTML 和 CSS 来创建你所需要的电子邮件通知。
|
||||
|
||||
#### Cabot
|
||||
|
||||
[Cabot][12] 由 [Arachnys][13] 公司开发。你或许对 Arachnys 公司并不了解,但它很有影响力:Arachnys 公司构建了一个基于云的先进解决方案,用于防范金融犯罪。在以前的公司,我也曾经参与过类似“[了解你的客户][14]”的工作。但大多数公司都认为与恐怖组织产生联系会造成相当不好的影响,因为恐怖组织可能会利用自己的系统来筹集资金。而这些解决方案将有助于防范欺诈类犯罪,尽管这类犯罪情节相对较轻,但仍然也会对机构产生风险。
|
||||
|
||||
Arachnys 公司为什么要开发 Cabot 呢?其实只是因为 Arachnys 的开发人员对 [Nagios][15] 不太熟悉。Cabot 的出现对很多人来说都是一个好消息,它基于 Django 和 Bootstrap 开发,因此如果想对这个项目做出自己的贡献,门槛并不高。另外值得一提的是,Cabot 这个名字来源于开发者的狗。
|
||||
|
||||
与 Bosun 类似,Cabot 也不对数据进行收集,而是使用监控对象的 API 提供的数据。因此,Cabot 告警的模式是 pull 而不是 push。它通过访问每个监控对象的 API,根据特定的指标检索所需的数据,然后将告警数据使用 Redis 缓存,进而持久化存储到 Postgres 数据库。
|
||||
|
||||
Cabot 的一个较为少见的特点是,它原生支持 [Graphite][16],同时也支持 [Jenkins][17]。Jenkins 在这里被视为一个集中式的 cron,它会以对待故障的方式去对待构建失败的状况。构建失败当然没有系统故障那么紧急,但一旦出现构建失败,还是需要团队采取措施去处理,毕竟并不是每个人在收到构建失败的电子邮件时都会亲自去检查 Jenkins。
|
||||
|
||||
Cabot 另一个有趣的功能是它可以接入 Google 日历安排值班人员,这个称为 Rota 的功能用处很大,希望其它告警系统也能加入类似的功能。Cabot 目前仅支持安排主备联系人,但还有继续改进的空间。它自己的文档也提到,如果需要全面的功能,更应该考虑付费的解决方案。
|
||||
|
||||
#### StatsAgg
|
||||
|
||||
[Pearson][19] 作为一家开发了 [StatsAgg][18] 告警平台的出版公司,这是极为罕见的,当然也很值得敬佩。除此以外,Pearson 还运营着另外几个网站,以及和 [O'Reilly Media][20] 合资的企业。但我仍然会将它视为出版教学书籍的公司。
|
||||
|
||||
StatsAgg 除了是一个告警平台,还是一个指标聚合平台,甚至也有点类似其它系统的代理。StatsAgg 支持通过 Graphite、StatsD、InfluxDB 和 OpenTSDB 输入数据,也支持将其转发到各种平台。但随着中心服务的负载不断增加,风险也不断增大。尽管如此,如果 StatsAgg 的基础架构足够强壮,即使后端存储平台出现故障,也不会对它产生告警的过程造成影响。
|
||||
|
||||
StatsAgg 是用 Java 开发的,为了尽可能降低复杂性,它仅包括主服务和一个 UI。StatsAgg 支持基于正则表达式匹配来发送告警,而且它更注重于服务方面的告警,而不是服务器基础告警。我认为它填补了开源监控工具方面的空白,而这正是它自己的目标。
|
||||
|
||||
### 可视化工具
|
||||
|
||||
#### Grafana
|
||||
|
||||
[Grafana][10] 的知名度很高,它也被广泛采用。每当我需要用到数据面板的时候,我总是会想到它,因为它比我使用过的任何一款类似的产品都要好。Grafana 由 Torkel Ödegaard 在圣诞节期间开发,并在 2014 年 1 月发布。在短短几年之间,它已经有了长足的发展。Grafana 基于 Kibana 开发,Torkel 开启了新的分支并将其命名为 Grafana。
|
||||
|
||||
Grafana 着重体现了实用性以及数据呈现的美观性。它可以原生地从 Graphite、Elasticsearch、OpenTSDB、Prometheus 和 InfluxDB 收集数据。此外有一个 Grafana 商用版插件可以从更多数据源获取数据,尽管这个插件没有开源,但 Grafana 的生态系统提供的各种数据源已经足够了。
|
||||
|
||||
Grafana 提供了一个集系统各种数据于一身的平台。它通过 web 来展示数据,任何人都有机会访问到相关信息,因此需要使用身份验证来对访问进行限制。Grafana 还支持不同类型的可视化方式,包括集成告警可视化的功能。
|
||||
|
||||
现在你可以更直观地设置告警了。通过Grafana,可以查看图表,还可以设置系统性能下降触发告警的阈值,并告诉 Grafana 应该如何发送告警。这是一个对告警体系非常强大的补充。告警平台不一定会因此而被取代,但告警系统一定会由此得到更多启发和发展。
|
||||
|
||||
Grafana 还引入了很多团队协作的功能。不同用户之间能够共享数据面板,你不再需要为 [Kubernetes][21] 集群创建独立的数据面板,因为由 Kubernetes 开发者和 Grafana 开发者共同维护的一些数据面板已经可以即插即用。
|
||||
|
||||
团队协作过程中一个重要的功能是注释。注释功能允许用户将上下文添加到图表当中,其他用户就可以通过上下文更直观地理解图表。当团队成员在处理某个事件,并且需要沟通和理解时,这个功能就十分重要了。将所有相关信息都放在需要的位置,可以让整个团队中快速达成共识。在团队需要调查故障原因和定位事件责任时,这个功能就可以发挥作用了。
|
||||
|
||||
#### Vizceral
|
||||
|
||||
[Vizceral][22] 由 Netflix 开发,用于在故障发生时更有效地了解流量的情况。Grafana 是一种通用性更强的工具,而 Vizceral 则专用于某些领域。尽管 Netflix 表示已经不再在内部使用 Vizceral,也不再主动对其展开维护,但 Vizceral 仍然会定期更新。我在这里介绍这个工具,主要是为了介绍它的可视化机制,以及如何利用它来协助解决问题。你可以在样例环境中用它来更好地掌握这一类系统的特性。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/alerting-and-visualization-tools-sysadmins
|
||||
|
||||
作者:[Dan Barker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/barkerd427
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.practicalmonitoring.com/
|
||||
[2]: https://developers.google.com/chart/interactive/docs/gallery
|
||||
[3]: https://libguides.libraries.claremont.edu/c.php?g=474417&p=3286401
|
||||
[4]: http://www.brendangregg.com/flamegraphs.html
|
||||
[5]: https://stackoverflow.com/
|
||||
[6]: https://superuser.com/
|
||||
[7]: http://bosun.org/
|
||||
[8]: https://prometheus.io/docs/alerting/alertmanager/
|
||||
[9]: https://bosun.org/scollector/
|
||||
[10]: https://grafana.com/
|
||||
[11]: https://bosun.org/notifications
|
||||
[12]: https://cabotapp.com/
|
||||
[13]: https://www.arachnys.com/
|
||||
[14]: https://en.wikipedia.org/wiki/Know_your_customer
|
||||
[15]: https://www.nagios.org/
|
||||
[16]: https://graphiteapp.org/
|
||||
[17]: https://jenkins.io/
|
||||
[18]: https://github.com/PearsonEducation/StatsAgg
|
||||
[19]: https://www.pearson.com/us/
|
||||
[20]: https://www.oreilly.com/
|
||||
[21]: https://opensource.com/resources/what-is-kubernetes
|
||||
[22]: https://github.com/Netflix/vizceral
|
||||
|
339
translated/tech/20181016 Lab 5- File system, Spawn and Shell.md
Normal file
339
translated/tech/20181016 Lab 5- File system, Spawn and Shell.md
Normal file
@ -0,0 +1,339 @@
|
||||
实验 5:文件系统、Spawn 和 Shell
|
||||
======
|
||||
|
||||
### 简介
|
||||
|
||||
在本实验中,你将要去实现 `spawn`,它是一个加载和运行磁盘上可运行文件的库调用。然后,你接着要去充实你的内核和库,以使操作系统能够在控制台上运行一个 shell。而这些特性需要一个文件系统,本实验将引入一个可读/写的简单文件系统。
|
||||
|
||||
#### 预备知识
|
||||
|
||||
使用 Git 去获取最新版的课程仓库,然后创建一个命名为 `lab5` 的本地分支,去跟踪远程的 `origin/lab5` 分支:
|
||||
|
||||
```
|
||||
athena% cd ~/6.828/lab
|
||||
athena% add git
|
||||
athena% git pull
|
||||
Already up-to-date.
|
||||
athena% git checkout -b lab5 origin/lab5
|
||||
Branch lab5 set up to track remote branch refs/remotes/origin/lab5.
|
||||
Switched to a new branch "lab5"
|
||||
athena% git merge lab4
|
||||
Merge made by recursive.
|
||||
.....
|
||||
athena%
|
||||
```
|
||||
|
||||
在实验中这一部分的主要新组件是文件系统环境,它位于新的 `fs` 目录下。通过检查这个目录中的所有文件,我们来看一下新的文件都有什么。另外,在 `user` 和 `lib` 目录下还有一些文件系统相关的源文件。
|
||||
|
||||
- `fs/fs.c`:维护文件系统在磁盘上结构的代码
|
||||
- `fs/bc.c`:构建在我们的用户级页故障处理功能之上的一个简单的块缓存
|
||||
- `fs/ide.c`:极简的基于 PIO(非中断驱动的)IDE 驱动程序代码
|
||||
- `fs/serv.c`:使用文件系统 IPC 与客户端环境交互的文件系统服务器
|
||||
- `lib/fd.c`:实现一个常见的类 UNIX 的文件描述符接口的代码
|
||||
- `lib/file.c`:磁盘上文件类型的驱动,实现为一个文件系统 IPC 客户端
|
||||
- `lib/console.c`:控制台输入/输出文件类型的驱动
|
||||
- `lib/spawn.c`:spawn 库调用的框架代码
|
||||
|
||||
你应该再次去运行 `pingpong`、`primes`、和 `forktree`,测试实验 4 完成后合并到新的实验 5 中的代码能否正确运行。你还需要在 `kern/init.c` 中注释掉 `ENV_CREATE(fs_fs)` 行,因为 `fs/fs.c` 将尝试去做一些 I/O,而 JOS 到目前为止还不具备该功能。同样,在 `lib/exit.c` 中临时注释掉对 `close_all()` 的调用;这个函数将调用你在本实验后面部分去实现的子程序,如果现在去调用,它将导致 JOS 内核崩溃。如果你的实验 4 的代码没有任何 bug,将很完美地通过这个测试。在它们都能正常工作之前是不能继续后续实验的。在你开始做练习 1 时,不要忘记去取消这些行上的注释。
|
||||
|
||||
如果它们不能正常工作,使用 `git diff lab4` 去重新评估所有的变更,确保你在实验 4(及以前)所写的代码在本实验中没有丢失。确保实验 4 仍然能正常工作。
|
||||
|
||||
#### 实验要求
|
||||
|
||||
和以前一样,你需要做本实验中所描述的所有常规练习和至少一个挑战问题。另外,你需要写出你在本实验中问题的详细答案,和你是如何解决挑战问题的一个简短(即:用一到两个段落)的描述。如果你实现了多个挑战问题,你只需要写出其中一个即可,当然,我们欢迎你做的越多越好。在你动手实验之前,将你的问题答案写入到你的 `lab5` 根目录下的一个名为 `answers-lab5.txt` 的文件中。
|
||||
|
||||
### 文件系统的雏形
|
||||
|
||||
你将要使用的文件系统比起大多数“真正的”文件系统(包括 xv6 UNIX 的文件系统)要简单的多,但它也是很强大的,足够去提供基本的特性:创建、读取、写入、和删除组织在层次目录结构中的文件。
|
||||
|
||||
到目前为止,我们开发的是一个单用户操作系统,它提供足够的保护并能去捕获 bug,但它还不能在多个不可信用户之间提供保护。因此,我们的文件系统还不支持 UNIX 的所有者或权限的概念。我们的文件系统目前也不支持硬链接、时间戳、或像大多数 UNIX 文件系统实现的那些特殊的设备文件。
|
||||
|
||||
### 磁盘上文件系统结构
|
||||
|
||||
主流的 UNIX 文件系统将可用磁盘空间分为两种主要的区域类型:节点区域和数据区域。UNIX 文件系统在文件系统中为每个文件分配一个节点;一个文件的节点保存了这个文件重要的元数据,比如它的 `stat` 属性和指向数据块的指针。数据区域被分为更大的(一般是 8 KB 或更大)数据块,它在文件系统中存储文件数据和目录元数据。目录条目包含文件名字和指向到节点的指针;如果文件系统中的多个目录条目指向到那个文件的节点上,则称该文件是硬链接的。由于我们的文件系统不支持硬链接,所以我们不需要这种间接的级别,并且因此可以更方便简化:我们的文件系统将压根就不使用节点,而是简单地将文件的(或子目录的)所有元数据保存在描述那个文件的(唯一的)目录条目中。
|
||||
|
||||
文件和目录逻辑上都是由一系列的数据块组成,它或许是很稀疏地分散到磁盘上,就像一个环境的虚拟地址空间上的页,能够稀疏地分散在物理内存中一样。文件系统环境隐藏了块布局的细节,只提供文件中任意偏移位置读写字节序列的接口。作为像文件创建和删除操作的一部分,文件系统环境服务程序在目录内部完成所有的修改。我们的文件系统允许用户环境去直接读取目录元数据(即:使用 `read`),这意味着用户环境自己就能够执行目录扫描操作(即:实现 `ls` 程序),而不用另外依赖对文件系统的特定调用。用这种方法做目录扫描的缺点是,(也是大多数现代 UNIX 操作系统变体摒弃它的原因)使得应用程序依赖目录元数据的格式,如果不改变或至少要重编译应用程序的前提下,去改变文件系统的内部布局将变得很困难。
|
||||
|
||||
#### 扇区和块
|
||||
|
||||
大多数磁盘都不能执行以字节为粒度的读写操作,而是以扇区为单位执行读写。在 JOS 中,每个扇区是 512 字节。文件系统实际上是以块为单位来分配和使用磁盘存储的。要注意这两个术语之间的区别:扇区大小是硬盘硬件的属性,而块大小是使用这个磁盘的操作系统上的术语。一个文件系统的块大小必须是底层磁盘的扇区大小的倍数。
|
||||
|
||||
UNIX xv6 文件系统使用 512 字节大小的块,与它底层磁盘的扇区大小一样。而大多数现代文件系统使用更大尺寸的块,因为现在存储空间变得很廉价了,而使用更大的粒度在存储管理上更高效。我们的文件系统将使用 4096 字节的块,以更方便地去匹配处理器上页的大小。
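下面用一小段 C 代码示意块与扇区之间的换算关系(仅为概念示意,常量名和函数名是为说明而假设的,并非实验代码中的定义):

```c
#include <stdint.h>

#define SECTSIZE   512                    /* 磁盘扇区大小:硬盘硬件的属性 */
#define BLKSIZE    4096                   /* 文件系统块大小:操作系统层面的约定 */
#define BLKSECTS   (BLKSIZE / SECTSIZE)   /* 每个块对应 8 个扇区 */

/* 读取第 blockno 号块时,需要从第 blockno * BLKSECTS 号扇区开始,
 * 连续读取 BLKSECTS 个扇区。 */
static uint32_t
block_to_first_sector(uint32_t blockno)
{
    return blockno * BLKSECTS;
}
```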
|
||||
|
||||
#### 超级块
|
||||
|
||||
![Disk layout][1]
|
||||
|
||||
文件系统一般在磁盘上的“易于查找”的位置(比如磁盘开始或结束的位置)保留一些磁盘块,用于保存描述整个文件系统属性的元数据,比如块大小、磁盘大小、用于查找根目录的任何元数据、文件系统最后一次挂载的时间、文件系统最后一次错误检查的时间等等。这些特定的块被称为超级块。
|
||||
|
||||
我们的文件系统只有一个超级块,它固定为磁盘的 1 号块。它的布局定义在 `inc/fs.h` 文件里的 `struct Super` 中。而 0 号块一般是保留的,用于去保存引导加载程序和分区表,因此文件系统一般不会去使用磁盘上比较靠前的块。许多“真实的”文件系统都维护多个超级块,并将它们复制到间隔较大的几个区域中,这样即便其中一个超级块坏了或超级块所在的那个区域产生了介质错误,其它的超级块仍然能够被找到并用于去访问文件系统。
|
||||
|
||||
#### 文件元数据
|
||||
|
||||
![File structure][2]
|
||||
在我们的文件系统中,描述一个文件的元数据的布局由 `inc/fs.h` 中的 `struct File` 来定义。元数据包含文件的名字、大小、类型(普通文件还是目录)、以及指向构成这个文件的块的指针。正如前面所提到的,我们的文件系统中并没有节点,因此元数据是保存在磁盘上的目录条目中,而不是像大多数“真正的”文件系统那样保存在节点中。为简单起见,我们将使用 `File` 这一个结构去同时表示文件元数据在磁盘上和内存中的形式。
|
||||
|
||||
在 `struct File` 中的数组 `f_direct` 包含保存文件前 10 个块(`NDIRECT`)的块编号的空间,我们称之为文件的直接块。对于最大 `10*4096 = 40KB` 的小文件,这意味着这个文件所有块的块编号将全部直接保存在结构 `File` 中;但是,对于超过 40KB 大小的文件,我们需要一个地方去保存文件剩余的块编号。所以我们分配一个额外的磁盘块,称之为文件的间接块,由它去保存最多 4096/4 = 1024 个额外的块编号。因此,我们的文件系统允许单个文件最多有 1034 个块,也就是说文件大小略微超过 4MB。为支持更大的文件,“真正的”文件系统一般都支持两个或三个间接块。
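为了更直观地理解上面的描述,这里给出一个与之对应的 `struct File` 布局示意(字段名参考常见的 JOS 代码,仅作说明,具体请以 `inc/fs.h` 中的定义为准):

```c
#include <stdint.h>

#define MAXNAMELEN 128          /* 文件名最大长度(示意值) */
#define NDIRECT    10           /* 直接块的个数 */
#define NINDIRECT  (4096 / 4)   /* 一个间接块能保存 1024 个块编号 */

struct File {
    char     f_name[MAXNAMELEN];    /* 文件名 */
    uint32_t f_size;                /* 文件大小(字节) */
    uint32_t f_type;                /* 普通文件还是目录 */
    uint32_t f_direct[NDIRECT];     /* 前 10 个数据块的块编号 */
    uint32_t f_indirect;            /* 间接块的块编号,0 表示还没有分配 */
};

/* 单个文件最多 NDIRECT + NINDIRECT = 1034 个块,约 4MB 多一点 */
```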
|
||||
|
||||
#### 目录与普通文件
|
||||
|
||||
我们的文件系统中的结构 `File` 既能够表示一个普通文件,也能够表示一个目录;这两种“文件”类型是由 `File` 结构中的 `type` 字段来区分的。除了文件系统根本就不需要解释的、分配给普通文件的数据块的内容之外,它使用完全相同的方式来管理普通文件和目录“文件”,文件系统将目录“文件”的内容解释为包含在目录中的一系列的由 `File` 结构所描述的文件和子目录。
|
||||
|
||||
在我们文件系统中的超级块包含一个结构 `File`(在 `struct Super` 中的 `root` 字段中),它用于保存文件系统的根目录的元数据。这个目录“文件”的内容是一系列的 `File` 结构所描述的、位于文件系统根目录中的文件和目录。在根目录中的任何子目录转而可以包含更多的 `File` 结构所表示的子目录,依此类推。
|
||||
|
||||
### 文件系统
|
||||
|
||||
本实验的目标并不是让你去实现完整的文件系统,你只需要去实现几个重要的组件即可。实践中,你将负责把块读入到块缓存中,并且刷新脏块到磁盘上;分配磁盘块;映射文件偏移量到磁盘块;以及实现读取、写入、和在 IPC 接口中打开。因为你并不去实现完整的文件系统,熟悉提供给你的代码和各种文件系统接口是非常重要的。
|
||||
|
||||
### 磁盘访问
|
||||
|
||||
我们的操作系统的文件系统环境需要能访问磁盘,但是我们在内核中并没有实现任何磁盘访问的功能。与传统的在内核中添加了 IDE 磁盘驱动程序、以及允许文件系统去访问它所必需的系统调用的“大一统”策略不同,我们将 IDE 磁盘驱动实现为用户级文件系统环境的一部分。我们仍然需要对内核做稍微的修改,是为了能够设置一些东西,以便于文件系统环境拥有实现磁盘访问本身所需的权限。
|
||||
|
||||
只要我们依赖轮询、基于 “编程的 I/O”(PIO)的磁盘访问,并且不使用磁盘中断,那么在用户空间中实现磁盘访问还是很容易的。也可以去实现由中断驱动的设备驱动程序(比如像 L3 和 L4 内核就是这么做的),但这样做的话,内核必须接收设备中断并将它派发到相应的用户模式环境上,这样实现的难度会更大。
|
||||
|
||||
x86 处理器在 EFLAGS 寄存器中使用 IOPL 位去确定保护模式中的代码是否允许执行特定的设备 I/O 指令,比如 `IN` 和 `OUT` 指令。由于我们需要的所有 IDE 磁盘寄存器都位于 x86 的 I/O 空间中而不是映射在内存中,所以,为了允许文件系统去访问这些寄存器,我们需要做的唯一的事情便是授予文件系统环境“I/O 权限”。实际上,在 EFLAGS 寄存器的 IOPL 位上规定,内核使用一个简单的“要么全都能访问、要么全都不能访问”的方法来控制用户模式中的代码能否访问 I/O 空间。在我们的案例中,我们希望文件系统环境能够去访问 I/O 空间,但我们又希望任何其它的环境完全不能访问 I/O 空间。
|
||||
|
||||
```markdown
|
||||
练习 1、`i386_init` 通过将类型 `ENV_TYPE_FS` 传递给你的环境创建函数 `env_create` 来识别文件系统。修改 `env.c` 中的 `env_create` ,以便于它只授予文件系统环境 I/O 的权限,而不授予任何其它环境 I/O 的权限。
|
||||
|
||||
确保你能启动这个文件系统环境,而不会产生一般保护故障。你应该要通过在 `make grade` 中的 "fs i/o" 测试。
|
||||
```
|
||||
|
||||
```markdown
|
||||
问题
|
||||
|
||||
1、当你从一个环境切换到另一个环境时,你是否需要做一些操作来确保 I/O 权限设置能被保存和正确地恢复?为什么?
|
||||
```
|
||||
|
||||
|
||||
注意本实验中的 `GNUmakefile` 文件,它设置 QEMU 使用文件 `obj/kern/kernel.img` 作为磁盘 0 的镜像(一般情况下对应 DOS 或 Windows 中的 “C 盘”),并使用(新)文件 `obj/fs/fs.img` 作为磁盘 1 的镜像(“D 盘”)。在本实验中,我们的文件系统应该仅与磁盘 1 有交互;磁盘 0 仅用于引导内核。如果你不小心破坏了其中某个磁盘镜像,可以通过输入如下命令,将它们重置为最初的“崭新的”版本:
|
||||
|
||||
```
|
||||
$ rm obj/kern/kernel.img obj/fs/fs.img
|
||||
$ make
|
||||
```
|
||||
|
||||
或者:
|
||||
|
||||
```
|
||||
$ make clean
|
||||
$ make
|
||||
```
|
||||
|
||||
小挑战!实现中断驱动的 IDE 磁盘访问,既可以使用也可以不使用 DMA 模式。由你来决定是否将设备驱动移植进内核中、还是与文件系统一样保留在用户空间中、甚至是将它移植到一个它自己的单独的环境中(如果你真的想了解微内核的本质的话)。
|
||||
|
||||
### 块缓存
|
||||
|
||||
在我们的文件系统中,我们将在处理器虚拟内存系统的帮助下,实现一个简单的“缓冲区高速缓存”(实际上就是一个块缓存)。块缓存的代码在 `fs/bc.c` 文件中。
|
||||
|
||||
我们的文件系统将被限制为仅能处理 3GB 或更小的磁盘。我们保留一个大的、尺寸固定为 3GB 的文件系统环境的地址空间区域,从 0x10000000(`DISKMAP`)到 0xD0000000(`DISKMAP+DISKMAX`)作为一个磁盘的”内存映射版“。比如,磁盘的 0 号块被映射到虚拟地址 0x10000000 处,磁盘的 1 号块被映射到虚拟地址 0x10001000 处,依此类推。在 `fs/bc.c` 中的 `diskaddr` 函数实现从磁盘块编号到虚拟地址的转换(以及一些完整性检查)。
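这种“内存映射磁盘”的地址换算本身非常简单,大致相当于下面的示意函数(`fs/bc.c` 中提供的 `diskaddr` 与此思路相同,另外还做了一些边界检查;这里的函数名是为示意而取的):

```c
#include <stdint.h>

#define DISKMAP 0x10000000   /* 磁盘映射区的起始虚拟地址 */
#define BLKSIZE 4096         /* 块大小 */

/* 把磁盘块编号换算成该块在文件系统环境地址空间中的虚拟地址 */
static void *
blockno_to_va(uint32_t blockno)
{
    return (void *)(uintptr_t)(DISKMAP + blockno * BLKSIZE);
}
```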
|
||||
|
||||
由于我们的文件系统环境在系统中有独立于所有其它环境的虚拟地址空间之外的、它自己的虚拟地址空间,并且文件系统环境仅需要做的事情就是实现文件访问,以这种方式去保留大多数文件系统环境的地址空间是很明智的。如果在一台 32 位机器上的”真实的“文件系统上这么做是很不方便的,因为现在的磁盘都远大于 3 GB。而在一台有 64 位地址空间的机器上,这样的缓存管理方法仍然是明智的。
|
||||
|
||||
当然,将整个磁盘读入到内存中需要很长时间,因此,我们将它实现成”按需“分页的形式,那样我们只在磁盘映射区域中分配页,并且当在这个区域中产生页故障时,从磁盘读取相关的块去响应这个页故障。通过这种方式,我们能够假装将整个磁盘装进了内存中。
|
||||
|
||||
```markdown
|
||||
练习 2、在 `fs/bc.c` 中实现 `bc_pgfault` 和 `flush_block` 函数。`bc_pgfault` 函数是一个页故障服务程序,就像你在前一个实验中编写的写时复制 fork 一样,只不过它的任务是从磁盘中加载页去响应一个页故障。在你编写它时,记住: (1) `addr` 可能并不会做边界对齐,并且 (2) 在扇区中的 `ide_read` 操作并不是以块为单位的。
|
||||
|
||||
(如果需要的话)函数 `flush_block` 应该会将一个块写入到磁盘上。如果在块缓存中没有块(也就是说,页没有映射)或者它不是一个脏块,那么 `flush_block` 将什么都不做。我们将使用虚拟内存硬件去跟踪,磁盘块自最后一次从磁盘读取或写入到磁盘之后是否被修改过。查看一个块是否需要写入时,我们只需要去查看 `uvpt` 条目中的 `PTE_D` 的 ”dirty“ 位即可。(`PTE_D` 位由处理器设置,用于表示那个页被写入;具体细节可以查看 x386 参考手册的 [第 5 章][3] 的 5.2.4.3 节)块被写入到磁盘上之后,`flush_block` 函数将使用 `sys_page_map` 去清除 `PTE_D` 位。
|
||||
|
||||
使用 `make grade` 去测试你的代码。你的代码应该能够通过 "check_bc"、"check_super"、和 "check_bitmap" 的测试。
|
||||
```
|
||||
|
||||
在 `fs/fs.c` 中的函数 `fs_init` 是块缓存使用的一个很好的示例。在初始化块缓存之后,它简单地在全局变量 `super` 中保存指针到磁盘映射区。在这之后,如果块在内存中,或我们的页故障服务程序按需将它们从磁盘上读入后,我们就能够简单地从 `super` 结构中读取块了。
|
||||
|
||||
```markdown
|
||||
小挑战!到现在为止,块缓存还没有清除策略。一旦某个块因为页故障被读入到缓存中之后,它将一直不会被清除,并且永远保留在内存中。给块缓存增加一个清除策略。在页表中使用 `PTE_A` 的 "accessed" 位来实现,任何环境访问一个页时,硬件就会设置这个位,你可以通过它来跟踪磁盘块的大致使用情况,而不需要修改访问磁盘映射区域的任何代码。使用脏块要小心。
|
||||
```
|
||||
|
||||
### 块位图
|
||||
|
||||
在 `fs_init` 设置了 `bitmap` 指针之后,我们可以认为 `bitmap` 是一个装满比特位的数组,磁盘上的每个块就是数组中的其中一个比特位。比如 `block_is_free`,它只是简单地在位图中检查给定的块是否被标记为空闲。
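位图的读取方式大致如下(概念示意;`fs/fs.c` 中提供的 `block_is_free` 采用同样的思路,并增加了对超级块和块编号范围的检查。这里假设与实验代码一样,位为 1 表示对应的块空闲):

```c
#include <stdint.h>

extern uint32_t *bitmap;   /* 由 fs_init 设置,指向磁盘映射区中的位图块 */

/* 每个 uint32_t 保存 32 个块的状态位;第 blockno 号块对应
 * bitmap[blockno / 32] 中的第 (blockno % 32) 位。 */
static int
blockno_is_free(uint32_t blockno)
{
    return (bitmap[blockno / 32] >> (blockno % 32)) & 1;
}
```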
|
||||
|
||||
```markdown
|
||||
练习 3、使用 `free_block` 作为实现 `fs/fs.c` 中的 `alloc_block` 的一个模型,它将在位图中去查找一个空闲的磁盘块,并将它标记为已使用,然后返回块编号。当你分配一个块时,你应该立即使用 `flush_block` 将已改变的位图块刷新到磁盘上,以确保文件系统的一致性。
|
||||
|
||||
使用 `make grade` 去测试你的代码。现在,你的代码应该要通过 "alloc_block" 的测试。
|
||||
```
|
||||
|
||||
### 文件操作
|
||||
|
||||
在 `fs/fs.c` 中,我们提供一系列的函数去实现基本的功能,比如,你将需要去理解和管理结构 `File`、扫描和管理目录”文件“的条目、 以及从根目录开始遍历文件系统以解析一个绝对路径名。阅读 `fs/fs.c` 中的所有代码,并在你开始实验之前,确保你理解了每个函数的功能。
|
||||
|
||||
```markdown
|
||||
练习 4、实现 `file_block_walk` 和 `file_get_block`。`file_block_walk` 从一个文件中的块偏移量映射到 `struct File` 中那个块的指针上或间接块上,它非常类似于 `pgdir_walk` 在页表上所做的事。`file_get_block` 将更进一步,将去映射一个真实的磁盘块,如果需要的话,去分配一个新的磁盘块。
|
||||
|
||||
使用 `make grade` 去测试你的代码。你的代码应该要通过 "file_open"、"file_get_block"、以及 "file_flush/file_truncated/file rewrite"、和 "testfile" 的测试。
|
||||
```
|
||||
|
||||
`file_block_walk` 和 `file_get_block` 是文件系统中的”劳动模范“。比如,`file_read` 和 `file_write` 或多或少都在 `file_get_block` 上做必需的登记工作,然后在分散的块和连续的缓存之间复制字节。
|
||||
|
||||
```
|
||||
小挑战!如果操作在中途实然被打断(比如,突然崩溃或重启),文件系统很可能会产生错误。实现软件更新或日志处理的方式让文件系统的”崩溃可靠性“更好,并且演示一下旧的文件系统可能会崩溃,而你的更新后的文件系统不会崩溃的情况。
|
||||
```
|
||||
|
||||
### 文件系统接口
|
||||
|
||||
现在,我们已经有了文件系统环境自身所需的功能了,我们必须让其它希望使用文件系统的环境能够访问它。由于其它环境并不能直接调用文件系统环境中的函数,我们必须通过构建在 JOS 的 IPC 机制之上的远程过程调用(RPC)抽象,来暴露对文件系统的访问。下图展示的是一次文件系统服务调用(比如:读取)的过程:
|
||||
|
||||
```
|
||||
Regular env FS env
|
||||
+---------------+ +---------------+
|
||||
| read | | file_read |
|
||||
| (lib/fd.c) | | (fs/fs.c) |
|
||||
...|.......|.......|...|.......^.......|...............
|
||||
| v | | | | RPC mechanism
|
||||
| devfile_read | | serve_read |
|
||||
| (lib/file.c) | | (fs/serv.c) |
|
||||
| | | | ^ |
|
||||
| v | | | |
|
||||
| fsipc | | serve |
|
||||
| (lib/file.c) | | (fs/serv.c) |
|
||||
| | | | ^ |
|
||||
| v | | | |
|
||||
| ipc_send | | ipc_recv |
|
||||
| | | | ^ |
|
||||
+-------|-------+ +-------|-------+
|
||||
| |
|
||||
+-------------------+
|
||||
|
||||
```
|
||||
|
||||
圆点虚线下面的过程是一个普通的环境对文件系统环境请求进行读取的简单机制。从(我们提供的)在任何文件描述符上的 `read` 工作开始,并简单地派发到相关的设备读取函数上,在我们的案例中是 `devfile_read`(我们还有更多的设备类型,比如管道)。`devfile_read` 实现了对磁盘上文件指定的 `read`。它和 `lib/file.c` 中的其它的 `devfile_*` 函数实现了客户端侧的文件系统操作,并且所有的工作大致都是以相同的方式来完成的,把参数打包进一个请求结构中,调用 `fsipc` 去发送 IPC 请求以及解包并返回结果。`fsipc` 函数把发送请求到服务器和接收来自服务器的回复的普通细节做了简化处理。
|
||||
|
||||
在 `fs/serv.c` 中可以找到文件系统服务器代码。它是一个 `serve` 函数的循环,无休止地接收基于 IPC 的请求,并派发请求到相关的服务函数,并通过 IPC 来回送结果。在读取示例中,`serve` 将派发到 `serve_read` 函数上,它将去处理读取请求的 IPC 细节,比如,解包请求结构并最终调用 `file_read` 去执行实际的文件读取动作。
|
||||
|
||||
回顾一下 JOS 的 IPC 机制,它让一个环境发送一个单个的 32 位数字和可选的共享页。从一个客户端向服务器发送一个请求,我们为请求类型使用 32 位的数字(文件系统服务器 RPC 是有编号的,就像系统调用那样的编号),然后通过 IPC 在共享页上的一个 `union Fsipc` 中存储请求参数。在客户端侧,我们已经在 `fsipcbuf` 处共享了页;在服务端,我们在 `fsreq`(`0x0ffff000`)处映射入站请求页。
|
||||
|
||||
服务器也通过 IPC 来发送响应。我们为函数的返回代码使用 32 位的数字。对于大多数 RPC,这已经涵盖了它们全部需要返回的内容。`FSREQ_READ` 和 `FSREQ_STAT` 还要返回数据,它们只是被简单地写入到客户端发送请求时所用的那个页上。在 IPC 的响应中并不需要去发送这个页,因为这个页是文件系统服务器和客户端从一开始就共享的页。另外,在它的响应中,`FSREQ_OPEN` 与客户端共享一个新的 “Fd page”。我们稍后会再回到文件描述符页的话题上来。
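为了把上面的流程说得更具体一点,下面是一个极简的客户端侧 RPC 示意:把参数写进与服务器共享的请求页,用 32 位请求编号发送 IPC,然后等待服务器用 32 位返回码回复。其中结构体、字段名和 `fsenv` 变量都是为说明而假设的,实际实现请参考 `lib/file.c` 中的 `fsipc` 以及 `inc/fs.h` 中的 `union Fsipc`:

```c
/* 假设的共享请求页布局,仅示意 FSREQ_READ 需要的参数 */
union FsipcSketch {
    struct { int req_fileid; size_t req_n; } read;
    char pad[PGSIZE];   /* 占满一页,便于整页共享给服务器 */
};

extern union FsipcSketch fsipcbuf;   /* 按页对齐的共享缓冲区(假设) */
extern envid_t fsenv;                /* 文件系统服务器环境的 ID(假设) */

static int
rpc_read_sketch(int fileid, size_t n)
{
    fsipcbuf.read.req_fileid = fileid;
    fsipcbuf.read.req_n = n;
    ipc_send(fsenv, FSREQ_READ, &fsipcbuf, PTE_P | PTE_W | PTE_U);
    return ipc_recv(NULL, NULL, NULL);   /* 返回码:读到的字节数或负的错误码 */
}
```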
|
||||
|
||||
```markdown
|
||||
练习 5、实现 `fs/serv.c` 中的 `serve_read`。
|
||||
|
||||
`serve_read` 的重任将由已经在 `fs/fs.c` 中实现的 `file_read` 来承担(它实际上不过是对 `file_get_block` 的一连串调用)。对于文件读取,`serve_read` 只能提供 RPC 接口。查看 `serve_set_size` 中的注释和代码,去大体上了解服务器函数的结构。
|
||||
|
||||
使用 `make grade` 去测试你的代码。你的代码通过 "serve_open/file_stat/file_close" 和 "file_read" 的测试后,你得分应该是 70(总分为 150)。
|
||||
```
|
||||
|
||||
```markdown
|
||||
练习 6、实现 `fs/serv.c` 中的 `serve_write` 和 `lib/file.c` 中的 `devfile_write`。
|
||||
|
||||
使用 `make grade` 去测试你的代码。你的代码通过 "file_write"、"file_read after file_write"、"open"、和 "large file" 的测试后,得分应该是 90(总分为150)。
|
||||
```
|
||||
|
||||
### 进程增殖
|
||||
|
||||
我们给你提供了 `spawn` 的代码(查看 `lib/spawn.c` 文件),它用于创建一个新环境、从文件系统中加载一个程序镜像并启动子环境来运行这个程序。之后父进程将独立于子环境持续运行。`spawn` 函数的行为,在效果上类似于 UNIX 中先执行一次 `fork`,然后立即在子进程中执行一次 `exec`。
|
||||
|
||||
我们实现的是 `spawn`,而不是类 UNIX 的 `exec`,因为 `spawn` 很容易以“外内核(exokernel)”的方式在用户空间中实现,它无需来自内核的特别帮助。想一想,如果要在用户空间中实现 `exec`,你需要做些什么?确保你理解了为什么它更难实现。
|
||||
|
||||
```markdown
|
||||
练习 7、`spawn` 依赖新的系统调用 `sys_env_set_trapframe` 去初始化新创建的环境的状态。实现 `kern/syscall.c` 中的 `sys_env_set_trapframe`。(不要忘记在 `syscall()` 中派发新系统调用)
|
||||
|
||||
通过运行 `kern/init.c` 中启动的 `user/spawnhello` 程序来测试你的代码,它将尝试从文件系统中增殖(spawn)出 `/hello`。
|
||||
|
||||
使用 `make grade` 去测试你的代码。
|
||||
```
|
||||
|
||||
```markdown
|
||||
小挑战!实现 Unix 式的 `exec`。
|
||||
```
|
||||
|
||||
```markdown
|
||||
小挑战!实现 `mmap` 式的文件内存映射,并如果可能的话,修改 `spawn` 从 ELF 中直接映射页。
|
||||
```
|
||||
|
||||
### 跨 fork 和 spawn 共享库状态
|
||||
|
||||
UNIX 文件描述符是一个通称的概念,它还包括管道、控制台 I/O 等等。在 JOS 中,每个这类设备都有一个相应的 `struct Dev`,使用指针去指向到实现读取/写入/等等的函数上。对于那个设备类型,`lib/fd.c` 在其上实现了类 UNIX 的文件描述符接口。每个 `struct Fd` 表示它的设备类型,并且大多数 `lib/fd.c` 中的函数只是简单地派发操作到 `struct Dev` 中相应函数上。
|
||||
|
||||
`lib/fd.c` 也在每个应用程序环境的地址空间中维护一个文件描述符表区域,起始位置在 `FDTABLE` 处。这个区域为应用程序一次最多能打开的 `MAXFD` 个(当前为 32 个)文件描述符,各保留了一页(4KB)的地址空间。在任意给定的时刻,当且仅当相应的文件描述符处于使用中时,对应的那一页文件描述符表才会被映射。在从 `FILEDATA` 开始的区域中,每个文件描述符还有一个可选的“数据页”,如果相应的设备需要,就可以使用它。
|
||||
|
||||
我们想跨 `fork` 和 `spawn` 共享文件描述符状态,但是文件描述符状态是保存在用户空间的内存中的。而现在,在 `fork` 中,这些内存会被标记为写时复制,因此状态将被复制而不是共享。(这意味着环境无法在不是由它们自己打开的文件中进行 seek 定位,并且管道也无法跨 `fork` 工作。)而在 `spawn` 中,这些内存会被留在父环境中,完全不会复制到子环境。(实际上,增殖出的新环境启动时没有任何打开的文件描述符。)
|
||||
|
||||
我们将要修改 `fork`,让它知道某些内存区域是被“库操作系统”使用的,并且应该总是被共享。与其在某个地方“硬编码”一份这些区域的列表,我们不如在页表条目中设置一个原本未被使用的位(就像我们在 `fork` 中使用 `PTE_COW` 位那样)。
|
||||
|
||||
我们在 `inc/lib.h` 中定义了一个新的 `PTE_SHARE` 位,在 Intel 和 AMD 的手册中,这个位是被标记为”软件可使用的“的三个 PTE 位之一。我们将创建一个约定,如果一个页表条目中这个位被设置,那么在 `fork` 和 `spawn` 中应该直接从父环境中复制 PTE 到子环境中。注意它与标记为写时复制的差别:正如在第一段中所描述的,我们希望确保能共享页更新。
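按照这个约定,对共享页的处理大致如下(仅为思路示意,不是完整的练习答案;`uvpt`、`sys_page_map`、`PTE_SHARE`、`PTE_SYSCALL` 都是实验代码中已有的定义,函数名是假设的):

```c
/* 思路示意:在复制第 pn 页的映射时,如果该页标记了 PTE_SHARE,
 * 就把同样的映射(连同可写位)直接复制给目标环境,而不是走写时复制的路径。 */
static int
dup_shared_page(envid_t dstenv, unsigned pn)
{
    void *addr = (void *)(pn * PGSIZE);
    pte_t pte = uvpt[pn];

    if (pte & PTE_SHARE)
        /* 用 PTE_SYSCALL 掩掉无关位,只保留允许通过系统调用传递的权限位 */
        return sys_page_map(0, addr, dstenv, addr, pte & PTE_SYSCALL);
    return 0;
}
```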
|
||||
|
||||
```markdown
|
||||
练习 8、修改 `lib/fork.c` 中的 `duppage`,以遵循最新约定。如果页表条目设置了 `PTE_SHARE` 位,仅直接复制映射。(你应该去使用 `PTE_SYSCALL`,而不是 `0xfff`,去从页表条目中掩掉有关的位。`0xfff` 仅选出可访问的位和脏位。)
|
||||
|
||||
同样的,在 `lib/spawn.c` 中实现 `copy_shared_pages`。它应该循环遍历当前进程中所有的页表条目(就像 `fork` 那样),复制任何设置了 `PTE_SHARE` 位的页映射到子进程中。
|
||||
```
|
||||
|
||||
使用 `make run-testpteshare` 去检查你的代码行为是否正确。正确的情况下,你应该会看到像 "`fork handles PTE_SHARE right`" 和 "`spawn handles PTE_SHARE right`” 这样的输出行。
|
||||
|
||||
使用 `make run-testfdsharing` 去检查文件描述符是否正确共享。正确的情况下,你应该会看到 "`read in child succeeded`" 和 "`read in parent succeeded`” 这样的输出行。
|
||||
|
||||
### 键盘接口
|
||||
|
||||
为了能让 shell 工作,我们需要一种输入方式。QEMU 可以显示输出,我们将输出写入到 CGA 显示器和串行端口上,但到目前为止,我们仅能够在内核监视器中接收输入。在 QEMU 中,键入到图形化窗口中的内容会作为键盘输入传给 JOS,而键入到控制台(终端)的内容则会作为串行端口上的字符出现。`kern/console.c` 中已经包含了自实验 1 以来内核监视器所使用的键盘和串行端口的驱动程序,但现在你需要把它们接入到系统的其余部分。
|
||||
|
||||
```markdown
|
||||
练习 9、在你的 `kern/trap.c` 中,调用 `kbd_intr` 去处理捕获 `IRQ_OFFSET+IRQ_KBD` 和 `serial_intr`,用它们去处理捕获 `IRQ_OFFSET+IRQ_SERIAL`。
|
||||
```
|
||||
|
||||
在 `lib/console.c` 中,我们为你实现了文件的控制台输入/输出。`kbd_intr` 和 `serial_intr` 将使用从最新读取到的输入来填充缓冲区,而控制台文件类型去排空缓冲区(默认情况下,控制台文件类型为 stdin/stdout,除非用户重定向它们)。
|
||||
|
||||
运行 `make run-testkbd` 并输入几行来测试你的代码。在你输入完成之后,系统将回显你输入的行。如果控制台和窗口都可以使用的话,尝试在它们上面都做一下测试。
|
||||
|
||||
### Shell
|
||||
|
||||
运行 `make run-icode` 或 `make run-icode-nox` 将运行你的内核并启动 `user/icode`。`icode` 又运行 `init`,它将设置控制台作为文件描述符 0 和 1(即:标准输入 stdin 和标准输出 stdout),然后增殖出环境 `sh`,就是 shell。之后你应该能够运行如下的命令了:
|
||||
|
||||
```
|
||||
echo hello world | cat
|
||||
cat lorem |cat
|
||||
cat lorem |num
|
||||
cat lorem |num |num |num |num |num
|
||||
lsfd
|
||||
```
|
||||
|
||||
注意用户库常规程序 `cprintf` 将直接输出到控制台,而不会使用文件描述符代码。这对调试非常有用,但是对管道连接其它程序却很不利。为将输出打印到特定的文件描述符(比如 1,它是标准输出 stdout),需要使用 `fprintf(1, "...", ...)`。`printf("...", ...)` 是将输出打印到文件描述符 1(标准输出 stdout) 的快捷方式。查看 `user/lsfd.c` 了解更多示例。
|
||||
|
||||
```markdown
|
||||
练习 10、
|
||||
这个 shell 不支持 I/O 重定向。如果能够运行 `sh <script`,而不用像上面那样手动逐条输入脚本中的所有命令,那就更好了。请在 `user/sh.c` 中为 `<` 添加 I/O 重定向的功能。
|
||||
|
||||
通过在你的 shell 中输入 `sh <script` 来测试你实现的重定向功能。
|
||||
|
||||
运行 `make run-testshell` 去测试你的 shell。`testshell` 只是简单地给 shell ”喂“上面的命令(也可以在 `fs/testshell.sh` 中找到),然后检查它的输出是否与 `fs/testshell.key` 一样。
|
||||
```
|
||||
|
||||
```markdown
|
||||
小挑战!给你的 shell 添加更多的特性。最好包括以下的特性(其中一些可能会要求修改文件系统):
|
||||
|
||||
* 后台命令 (`ls &`)
|
||||
* 一行中运行多个命令 (`ls; echo hi`)
|
||||
* 命令组 (`(ls; echo hi) | cat > out`)
|
||||
* 扩展环境变量 (`echo $hello`)
|
||||
* 引号 (`echo "a | b"`)
|
||||
* 命令行历史和/或编辑功能
|
||||
* tab 命令补全
|
||||
* 为命令行查找目录、cd 和路径
|
||||
* 文件创建
|
||||
* 用快捷键 `ctl-c` 去杀死一个运行中的环境
|
||||
|
||||
可做的事情还有很多,并不仅限于以上列表。
|
||||
```
|
||||
|
||||
到目前为止,你的代码应该要通过所有的测试。和以前一样,你可以使用 `make grade` 去评级你的提交,并且使用 `make handin` 上交你的实验。
|
||||
|
||||
**本实验到此结束。** 和以前一样,不要忘了运行 `make grade` 去做评级测试,并写下你的练习答案和挑战问题的解决方案。在提交之前,使用 `git status` 和 `git diff` 去检查你的变更,并不要忘记使用 `git add answers-lab5.txt` 添加你的答案。完成之后,使用 `git commit -am 'my solutions to lab 5'` 去提交你的变更,然后使用 `make handin` 去提交你的解决方案。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab5/
|
||||
|
||||
作者:[csail.mit][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://pdos.csail.mit.edu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab5/disk.png
|
||||
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/lab5/file.png
|
||||
[3]: http://pdos.csail.mit.edu/6.828/2011/readings/i386/s05_02.htm
|
507
translated/tech/20181016 Lab 6- Network Driver.md
Normal file
507
translated/tech/20181016 Lab 6- Network Driver.md
Normal file
@ -0,0 +1,507 @@
|
||||
实验 6:网络驱动程序
|
||||
======
|
||||
### 实验 6:网络驱动程序(缺省的期末项目)
|
||||
|
||||
### 简介
|
||||
|
||||
这个实验是你可以自己独立完成的缺省期末项目。
|
||||
|
||||
现在你已经有了一个文件系统,而任何一个像样的操作系统都不应该缺少网络栈。在本实验中,你将为一块网卡编写驱动程序。这个网卡基于 Intel 82540EM 芯片,也就是众所周知的 E1000 芯片。
|
||||
|
||||
##### 预备知识
|
||||
|
||||
使用 Git 去提交你的实验 5 的源代码(如果还没有提交的话),获取课程仓库的最新版本,然后创建一个名为 `lab6` 的本地分支,它跟踪我们的远程分支 `origin/lab6`:
|
||||
|
||||
```c
|
||||
athena% cd ~/6.828/lab
|
||||
athena% add git
|
||||
athena% git commit -am 'my solution to lab5'
|
||||
nothing to commit (working directory clean)
|
||||
athena% git pull
|
||||
Already up-to-date.
|
||||
athena% git checkout -b lab6 origin/lab6
|
||||
Branch lab6 set up to track remote branch refs/remotes/origin/lab6.
|
||||
Switched to a new branch "lab6"
|
||||
athena% git merge lab5
|
||||
Merge made by recursive.
|
||||
fs/fs.c | 42 +++++++++++++++++++
|
||||
1 files changed, 42 insertions(+), 0 deletions(-)
|
||||
athena%
|
||||
```
|
||||
|
||||
然而,仅有网卡驱动程序并不能让你的操作系统接入因特网。在新的实验 6 的代码中,我们为你提供了网络栈和一个网络服务器。与以前的实验一样,使用 git 去拉取这个实验的代码,合并到你自己的代码中,并浏览新的 `net/` 目录中的内容,以及 `kern/` 中的新文件。
|
||||
|
||||
除了写这个驱动程序以外,你还需要去创建一个访问你的驱动程序的系统调用。你将要去实现那些在网络服务器中缺失的代码,以便于在网络栈和你的驱动程序之间传输包。你还需要通过完成一个 web 服务器来将所有的东西连接到一起。你的新 web 服务器还需要你的文件系统来提供所需要的文件。
|
||||
|
||||
大部分的内核设备驱动程序代码都需要你自己去从头开始编写。本实验提供的指导比起前面的实验要少一些:没有框架文件、没有现成的系统调用接口、并且很多设计都由你自己决定。因此,我们建议你在开始任何单独练习之前,阅读全部的编写任务。许多学生都反应这个实验比前面的实验都难,因此请根据你的实际情况计划你的时间。
|
||||
|
||||
##### 实验要求
|
||||
|
||||
与以前一样,你需要做实验中全部的常规练习和至少一个挑战问题。在实验中写出你的详细答案,并将挑战问题的方案描述写入到 `answers-lab6.txt` 文件中。
|
||||
|
||||
#### QEMU 的虚拟网络
|
||||
|
||||
我们将使用 QEMU 的用户模式网络栈,因为它不需要以管理员权限运行。QEMU 的文档的[这里][1]有更多关于用户网络的内容。我们更新后的 makefile 启用了 QEMU 的用户模式网络栈和虚拟的 E1000 网卡。
|
||||
|
||||
缺省情况下,QEMU 提供一个运行在 IP 地址 10.0.2.2 上的虚拟路由器,它给 JOS 分配的 IP 地址是 10.0.2.15。为了简单起见,我们把这些缺省值硬编码在 `net/ns.h` 的网络服务器代码中。
|
||||
|
||||
虽然 QEMU 的虚拟网络允许 JOS 随意连接因特网,但 JOS 的 10.0.2.15 的地址并不能在 QEMU 中的虚拟网络之外使用(也就是说,QEMU 还得做一个 NAT),因此我们并不能直接连接到 JOS 上运行的服务器,即便是从运行 QEMU 的主机上连接也不行。为解决这个问题,我们配置 QEMU 在主机的某些端口上运行一个服务器,这个服务器简单地连接到 JOS 中的一些端口上,并在你的真实主机和虚拟网络之间传递数据。
|
||||
|
||||
你将在端口 7(echo)和端口 80(http)上运行 JOS,为避免在共享的 Athena 机器上发生冲突,makefile 将基于你的用户 ID 为这些端口生成转发端口。你可以运行 `make which-ports` 去找出 QEMU 转发到你的开发主机上的是哪些端口。为方便起见,makefile 也提供 `make nc-7` 和 `make nc-80`,它们允许你在终端上直接与运行在这些端口上的服务器交互。(这些目标只能连接到一个已经在运行的 QEMU 实例上;你必须另外单独启动 QEMU 本身。)
|
||||
|
||||
##### 包检查
|
||||
|
||||
makefile 也可以配置 QEMU 的网络栈去记录所有的入站和出站数据包,并将它保存到你的实验目录中的 `qemu.pcap` 文件中。
|
||||
|
||||
使用 `tcpdump` 命令去获取一个捕获的 hex/ASCII 包转储:
|
||||
|
||||
```
|
||||
tcpdump -XXnr qemu.pcap
|
||||
```
|
||||
|
||||
或者,你可以使用 [Wireshark][2] 以图形化界面去检查 pcap 文件。Wireshark 也知道如何去解码和检查成百上千种网络协议。如果你在 Athena 上,那就得使用 Wireshark 的前身 ethereal,它位于 sipbnet locker 中。
|
||||
|
||||
##### 调试 E1000
|
||||
|
||||
我们非常幸运能够使用仿真硬件。由于 E1000 是以软件方式仿真运行的,仿真的 E1000 能够以人类可读的格式,向我们报告它的内部状态以及它遇到的任何问题。通常情况下,对在裸机上开发驱动程序的人来说,这是难得的奢侈。
|
||||
|
||||
E1000 能够产生一些调试输出,因此你可以去打开一个专门的日志通道。其中一些对你有用的通道如下:
|
||||
|
||||
| 标志 | 含义 |
|
||||
| --------- | :----------------------- |
|
||||
| tx | 包发送日志 |
|
||||
| txerr | 包发送错误日志 |
|
||||
| rx | 到 RCTL 的日志通道 |
|
||||
| rxfilter | 入站包过滤日志 |
|
||||
| rxerr | 接收错误日志 |
|
||||
| unknown | 未知寄存器的读写日志 |
|
||||
| eeprom | 读取 EEPROM 的日志 |
|
||||
| interrupt | 中断和中断寄存器变更日志 |
|
||||
|
||||
例如,你可以使用 `make E1000_DEBUG=tx,txerr` 去打开 "tx" 和 "txerr" 日志功能。
|
||||
|
||||
注意:`E1000_DEBUG` 标志仅能在打了 6.828 补丁的 QEMU 版本上工作。
|
||||
|
||||
你可以使用软件去仿真硬件,来做进一步的调试工作。如果你使用它时卡壳了,不明白为什么 E1000 没有如你预期那样响应你,你可以查看在 `hw/e1000.c` 中的 QEMU 的 E1000 实现。
|
||||
|
||||
#### 网络服务器
|
||||
|
||||
从头开始写一个网络栈是很困难的。因此我们将使用 lwIP,它是一个开源的、轻量级的 TCP/IP 协议套件,其中就包含了一个网络栈。你能在 [这里][3] 找到更多关于 lwIP 的信息。对我们来说,在这个任务中,lwIP 就是一个实现了 BSD 套接字接口、并拥有一个包输入端口和一个包输出端口的黑盒子。
|
||||
|
||||
一个网络服务器其实就是一个有以下四个环境的混合体:
|
||||
|
||||
* 核心网络服务器环境(包括套接字调用派发器和 lwIP)
|
||||
* 输入环境
|
||||
* 输出环境
|
||||
* 定时器环境
|
||||
|
||||
|
||||
|
||||
下图展示了各个环境以及它们之间的关系。图中画出的是包括设备驱动在内的整个系统,设备驱动我们将在后面详细讲到。在本实验中,你将实现图中绿色高亮的部分。
|
||||
|
||||
![Network server architecture][4]
|
||||
|
||||
##### 核心网络服务器环境
|
||||
|
||||
核心网络服务器环境由套接字调用派发器和 lwIP 自身组成。套接字调用派发器的工作方式与文件服务器类似。用户环境使用 stub(可以在 `lib/nsipc.c` 中找到)向核心网络服务器环境发送 IPC 消息。如果你看一下 `lib/nsipc.c`,你会发现我们查找核心网络服务器的方式与查找文件服务器的方式是一样的:`i386_init` 使用 NS_TYPE_NS 类型创建了 NS 环境,因此我们扫描 `envs`,查找这个特殊的环境类型。对于每个来自用户环境的 IPC 消息,网络服务器中的派发器会代表用户调用由 lwIP 提供的相应 BSD 套接字接口函数。
|
||||
|
||||
普通用户环境不能直接使用 `nsipc_*` 调用。而是通过在 `lib/sockets.c` 中的函数来使用它们,这些函数提供了基于文件描述符的套接字 API。以这种方式,用户环境通过文件描述符来引用套接字,就像它们引用磁盘上的文件一样。一些操作(`connect`、`accept`、等等)是特定于套接字的,但 `read`、`write`、和 `close` 是通过 `lib/fd.c` 中一般的文件描述符设备派发代码的。就像文件服务器对所有的打开的文件维护唯一的内部 ID 一样,lwIP 也为所有的打开的套接字生成唯一的 ID。不论是文件服务器还是网络服务器,我们都使用存储在 `struct Fd` 中的信息去映射每个环境的文件描述符到这些唯一的 ID 空间上。
|
||||
|
||||
尽管文件服务器和网络服务器的 IPC 派发器看起来行为一样,但它们之间有一个重要的差别。BSD 套接字调用(像 `accept` 和 `recv`)可能会无限期阻塞。如果派发器让 lwIP 去执行这样一个阻塞调用,派发器自己也会被阻塞,这样整个系统在同一时间就只能有一个未完成的网络调用。由于这种情况是无法接受的,所以网络服务器使用用户级线程来避免阻塞整个服务器环境。对于每个入站 IPC 消息,派发器都会创建一个线程,并在新创建的线程上处理请求。即使这个线程被阻塞,也只有它自己被置入休眠状态,其它线程仍然可以继续运行。
|
||||
|
||||
除了核心网络环境外,还有三个辅助环境。核心网络服务器环境除了接收来自用户应用程序的消息之外,它的派发器也接收来自输入环境和定时器环境的消息。
|
||||
|
||||
##### 输出环境
|
||||
|
||||
在为用户环境的套接字调用提供服务时,lwIP 会生成要让网卡发送的包。lwIP 使用 `NSREQ_OUTPUT` IPC 消息来发送这些包,包数据附加在 IPC 消息的页参数中。输出环境负责接收这些消息,并通过你稍后创建的系统调用接口,把这些包转发到设备驱动程序上。
|
||||
|
||||
##### 输入环境
|
||||
|
||||
网卡接收到的包需要传递到 lwIP 中。输入环境使用你将要实现的内核系统调用,把设备驱动程序接收到的每个包从内核空间中取出来,然后使用 `NSREQ_INPUT` IPC 消息把这些包发送到核心网络服务器环境。
|
||||
|
||||
包输入功能是独立于核心网络环境的,因为在 JOS 上同时实现接收 IPC 消息并从设备驱动程序中查询或等待包有点困难。我们在 JOS 中没有实现 `select` 系统调用,这是一个允许环境去监视多个输入源以识别准备处理哪个输入的系统调用。
|
||||
|
||||
如果你查看 `net/input.c` 和 `net/output.c`,你会看到这两个文件都需要由你来实现。这主要是因为它们的实现依赖于你的系统调用接口。在你实现了驱动程序和系统调用接口之后,你将为这两个辅助环境编写代码。
|
||||
|
||||
##### 定时器环境
|
||||
|
||||
定时器环境周期性地向核心网络服务器发送 `NSREQ_TIMER` 类型的消息,提醒它某个定时器已经到期。lwIP 使用来自这个线程的定时器消息来实现各种网络超时。
|
||||
|
||||
### Part A:初始化和发送包
|
||||
|
||||
你的内核还没有一个时间概念,因此我们需要去添加它。这里有一个由硬件产生的每 10 ms 一次的时钟中断。每收到一个时钟中断,我们将增加一个变量值,以表示时间已过去 10 ms。它在 `kern/time.c` 中已实现,但还没有完全集成到你的内核中。
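它的核心思想非常简单,大致相当于下面的示意代码(`kern/time.c` 中提供的实现与此类似,具体以实验代码为准):

```c
/* 每次时钟中断(每 10ms 一次)调用一次 time_tick;
 * time_msec 把滴答数换算成毫秒,供 sys_time_msec 系统调用返回给用户空间。 */
static unsigned int ticks;

void
time_tick(void)
{
    ticks++;
}

unsigned int
time_msec(void)
{
    return ticks * 10;
}
```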
|
||||
|
||||
```markdown
|
||||
练习 1、为 `kern/trap.c` 中的每个时钟中断增加一个到 `time_tick` 的调用。实现 `sys_time_msec` 并增加到 `kern/syscall.c` 中的 `syscall`,以便于用户空间能够访问时间。
|
||||
```
|
||||
|
||||
使用 `make INIT_CFLAGS=-DTEST_NO_NS run-testtime` 去测试你的代码。你应该会看到环境以 1 秒为间隔,从 5 开始倒数。其中 "-DTEST_NO_NS" 参数禁止启动网络服务器环境,因为它在现阶段会导致 JOS 崩溃。
|
||||
|
||||
#### 网卡
|
||||
|
||||
写驱动程序要求你必须深入了解硬件和软件中的接口。本实验将给你提供一个如何使用 E1000 接口的高度概括的文档,但是你在写驱动程序时还需要大量去查询 Intel 的手册。
|
||||
|
||||
```markdown
|
||||
练习 2、为开发 E1000 驱动,去浏览 Intel 的 [软件开发者手册][5]。这个手册涵盖了几个与以太网控制器紧密相关的东西。QEMU 仿真了 82540EM。
|
||||
|
||||
现在,你应该去浏览第 2 章,以获得对这个设备的整体概念。写驱动程序时,你需要熟悉第 3 章、第 14 章,以及 4.1 节(不包括 4.1 的子节)。你还需要把第 13 章作为参考。其它章节涵盖了 E1000 中你的驱动程序不会与之交互的组件。现在你不用担心过多的细节;只需要了解文档的整体结构,以便于后面需要时容易查找。
|
||||
|
||||
在阅读手册时,记住,E1000 是一个拥有很多高级特性的很复杂的设备,一个能让 E1000 工作的驱动程序仅需要它一小部分的特性和 NIC 提供的接口即可。仔细考虑一下,如何使用最简单的方式去使用网卡的接口。我们强烈推荐你在使用高级特性之前,只去写一个基本的、能够让网卡工作的驱动程序即可。
|
||||
```
|
||||
|
||||
##### PCI 接口
|
||||
|
||||
E1000 是一个 PCI 设备,也就是说它是插到主板的 PCI 总线插槽上的。PCI 总线有地址、数据、和中断线,并且 PCI 总线允许 CPU 与 PCI 设备通讯,以及 PCI 设备去读取和写入内存。一个 PCI 设备在它能够被使用之前,需要先发现它并进行初始化。发现 PCI 设备是 PCI 总线查找已安装设备的过程。初始化是分配 I/O 和内存空间、以及协商设备所使用的 IRQ 线的过程。
|
||||
|
||||
我们在 `kern/pci.c` 中已经为你提供了使用 PCI 的代码。PCI 初始化是在引导期间执行的,PCI 代码遍历PCI 总线来查找设备。当它找到一个设备时,它读取它的供应商 ID 和设备 ID,然后使用这两个值作为关键字去搜索 `pci_attach_vendor` 数组。这个数组是由像下面这样的 `struct pci_driver` 条目组成:
|
||||
|
||||
```c
|
||||
struct pci_driver {
|
||||
uint32_t key1, key2;
|
||||
int (*attachfn) (struct pci_func *pcif);
|
||||
};
|
||||
```
|
||||
|
||||
如果发现的设备的供应商 ID 和设备 ID 与数组中条目匹配,那么 PCI 代码将调用那个条目的 `attachfn` 去执行设备初始化。(设备也可以按类别识别,那是通过 `kern/pci.c` 中其它的驱动程序表来实现的。)
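以 E1000 为例,这一步大致就是在驱动程序表中加上类似下面这样的一个条目,并提供对应的绑定函数(下面的宏名和函数名都是假设的;82540EM 的供应商 ID 和设备 ID 请以手册 5.2 节以及引导时 PCI 扫描的输出为准):

```c
// kern/e1000.h(示意)
#define E1000_VENDOR_ID 0x8086   /* Intel 的 PCI 供应商 ID(假设的宏名) */
#define E1000_DEV_ID    0x100E   /* 82540EM 的设备 ID(假设的宏名,请核对手册) */

int e1000_attach(struct pci_func *pcif);

// kern/pci.c 中 pci_attach_vendor 表里对应的条目,放在结尾的 {0, 0, 0} 之前:
//     { E1000_VENDOR_ID, E1000_DEV_ID, &e1000_attach },

// kern/e1000.c(示意):绑定函数先启用设备,后续再补充更多初始化
int
e1000_attach(struct pci_func *pcif)
{
    pci_func_enable(pcif);
    return 0;
}
```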
|
||||
|
||||
绑定函数是传递一个 _PCI 函数_ 去初始化。一个 PCI 卡能够发布多个函数,虽然这个 E1000 仅发布了一个。下面是在 JOS 中如何去表示一个 PCI 函数:
|
||||
|
||||
```c
|
||||
struct pci_func {
|
||||
struct pci_bus *bus;
|
||||
|
||||
uint32_t dev;
|
||||
uint32_t func;
|
||||
|
||||
uint32_t dev_id;
|
||||
uint32_t dev_class;
|
||||
|
||||
uint32_t reg_base[6];
|
||||
uint32_t reg_size[6];
|
||||
uint8_t irq_line;
|
||||
};
|
||||
```
|
||||
|
||||
上面的结构反映了在 Intel 开发者手册里第 4.1 节的表 4-1 中找到的一些条目。`struct pci_func` 的最后三个条目我们特别感兴趣的,因为它们将记录这个设备协商的内存、I/O、以及中断资源。`reg_base` 和 `reg_size` 数组包含最多六个基址寄存器或 BAR。`reg_base` 为映射到内存中的 I/O 区域(对于 I/O 端口而言是基 I/O 端口)保存了内存的基地址,`reg_size` 包含了以字节表示的大小或来自 `reg_base` 的相关基值的 I/O 端口号,而 `irq_line` 包含了为中断分配给设备的 IRQ 线。在表 4-2 的后半部分给出了 E1000 BAR 的具体涵义。
|
||||
|
||||
当设备的绑定函数被调用时,设备已经被找到,但还没有被启用。这意味着 PCI 代码还没有确定分配给设备的资源,比如地址空间和 IRQ 线,也就是说,`struct pci_func` 结构的最后三个元素还没有被填入。绑定函数将调用 `pci_func_enable`,它会启用设备、协商这些资源、并填写结构 `struct pci_func` 中相应的字段。
|
||||
|
||||
```markdown
|
||||
练习 3、实现一个绑定函数去初始化 E1000。添加一个条目到 `kern/pci.c` 中的数组 `pci_attach_vendor` 上,如果找到一个匹配的 PCI 设备就去触发你的函数(确保一定要把它放在表末尾的 `{0, 0, 0}` 条目之前)。你在 5.2 节中能找到 QEMU 仿真的 82540EM 的供应商 ID 和设备 ID。在引导期间,当 JOS 扫描 PCI 总线时,你也可以看到列出来的这些信息。
|
||||
|
||||
到目前为止,我们通过 `pci_func_enable` 启用了 E1000 设备。通过本实验我们将添加更多的初始化。
|
||||
|
||||
我们已经为你提供了 `kern/e1000.c` 和 `kern/e1000.h` 文件,这样你就不会把构建系统搞糊涂了。不过它们现在都是空的;你需要在本练习中去填充它们。你还可能在内核的其它地方包含这个 `e1000.h` 文件。
|
||||
|
||||
当你引导你的内核时,你应该会看到它输出的信息显示 E1000 的 PCI 函数已经启用。这时你的代码已经能够通过 `make grade` 的 `pci attach` 测试了。
|
||||
```
|
||||
|
||||
##### 内存映射的 I/O
|
||||
|
||||
软件与 E1000 通过内存映射的 I/O(MMIO) 来沟通。你在 JOS 的前面部分可能看到过 MMIO 两次:CGA 控制台和 LAPIC 都是通过写入和读取“内存”来控制和查询设备的。但这些读取和写入不是去往内存芯片的,而是直接到这些设备的。
|
||||
|
||||
`pci_func_enable` 与 E1000 协商出一个 MMIO 区域,并把它的基址和大小存储在 BAR 0 中(也就是 `reg_base[0]` 和 `reg_size[0]`)。这是分配给设备的一段物理内存地址,也就是说,你需要做一些工作才能通过虚拟地址访问它。由于 MMIO 区域一般被分配在很高的物理地址上(通常在 3GB 以上的位置),受 JOS 最大只使用 256MB 的限制,你不能使用 `KADDR` 去访问它。因此,你需要创建一个新的内存映射。我们将使用 `MMIOBASE` 以上的区域(你在实验 4 中写的 `mmio_map_region` 会确保我们不会覆盖 LAPIC 使用的映射)。由于 PCI 设备的初始化发生在 JOS 创建用户环境之前,因此你可以在 `kern_pgdir` 中创建映射,这样它将始终可用。
|
||||
|
||||
```markdown
|
||||
练习 4、在你的绑定函数中,通过调用 `mmio_map_region`(它就是你在实验 4 中写的,是为了支持 LAPIC 内存映射)为 E1000 的 BAR 0 创建一个虚拟地址映射。
|
||||
|
||||
你将希望在一个变量中记录这个映射的位置,以便于后面访问你映射的寄存器。可以看一下 `kern/lapic.c` 中的 `lapic` 变量,它就是一个这样的例子。如果你使用一个指针指向设备寄存器映射区域,一定要把它声明为 `volatile`;否则,编译器可能会缓存读到的值并对内存访问进行重排,从而不再真正访问设备内存。
|
||||
|
||||
为测试你的映射,尝试去输出设备状态寄存器(第 12.4.2 节)。这是一个在寄存器空间中以字节 8 开头的 4 字节寄存器。你应该会得到 `0x80080783`,它表示以 1000 MB/s 的速度启用一个全双工的链路,以及其它信息。
|
||||
```
|
||||
|
||||
提示:你将需要许多常数,比如寄存器的位置和位掩码的定义。直接从开发者手册中抄写这些内容很容易出错,而出错会让调试过程非常痛苦。我们建议你以 QEMU 的 [`e1000_hw.h`][6] 头文件作为参考。我们不建议照抄整个文件,因为它定义的内容远超你的需要,而且定义的方式也未必符合你的需求,但它仍然是一个很好的参考。
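下面是一个结合练习 4 与上述提示的小示意:把 BAR 0 映射到 `MMIOBASE` 以上的虚拟地址,并读取设备状态寄存器。其中全局变量名 `e1000`、以及用 `uint32_t` 下标访问寄存器的写法都只是一种假设的组织方式:

```c
// 必须声明为 volatile,防止编译器缓存寄存器的值或重排内存访问
volatile uint32_t *e1000;

int
e1000_attach(struct pci_func *pcif)
{
	pci_func_enable(pcif);
	// 用实验 4 中实现的 mmio_map_region() 建立 BAR 0 的虚拟地址映射
	e1000 = mmio_map_region(pcif->reg_base[0], pcif->reg_size[0]);
	// 状态寄存器位于寄存器空间字节偏移 8 处;指针按 uint32_t 计数,所以下标是 8/4
	cprintf("E1000 status: 0x%08x(期望值约为 0x80080783)\n", e1000[8 / 4]);
	return 0;
}
```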
|
||||
|
||||
##### DMA
|
||||
|
||||
你可能会认为是从 E1000 的寄存器中通过写入和读取来传送和接收数据包的,其实这样做会非常慢,并且还要求 E1000 在其中去缓存数据包。相反,E1000 使用直接内存访问(DMA)从内存中直接读取和写入数据包,而且不需要 CPU 参与其中。驱动程序负责为发送和接收队列分配内存、设置 DMA 描述符、以及配置 E1000 使用的队列位置,而在这些设置完成之后的其它工作都是异步方式进行的。发送包的时候,驱动程序复制它到发送队列的下一个 DMA 描述符中,并且通知 E1000 下一个发送包已就绪;当轮到这个包发送时,E1000 将从描述符中复制出数据。同样,当 E1000 接收一个包时,它从接收队列中将它复制到下一个 DMA 描述符中,驱动程序将能在下一次读取到它。
|
||||
|
||||
总体来看,接收队列和发送队列非常相似。它们都是由一系列的描述符组成。虽然这些描述符的结构细节有所不同,但每个描述符都包含一些标志和包含了包数据的一个缓存的物理地址(发送到网卡的数据包,或网卡将接收到的数据包写入到由操作系统分配的缓存中)。
|
||||
|
||||
队列被实现为一个环形数组,这意味着当网卡或驱动到达数组末尾时,会绕回到数组开头。队列有一个头指针和一个尾指针,队列的内容就是这两个指针之间的描述符。硬件从头部开始消费描述符并向前移动头指针;与此同时,驱动程序不断向尾部添加描述符,并把尾指针移动到最后一个描述符上。发送队列中的描述符表示等待发送的包(因此,在平静状态下,发送队列是空的)。对于接收队列,队列中的描述符是网卡可以用来接收数据包的空闲描述符(因此,在平静状态下,接收队列是由所有可用的接收描述符组成的)。正确地更新尾指针寄存器而不让 E1000 产生混乱是有难度的,要小心!
|
||||
|
||||
指向到这些数组及描述符中的包缓存地址的指针都必须是物理地址,因为硬件是直接在物理内存中且不通过 MMU 来执行 DMA 的读写操作的。
|
||||
|
||||
#### 发送包
|
||||
|
||||
E1000 的发送和接收功能基本上是彼此独立的,因此我们可以一次只处理其中一个。我们先来解决较为简单的数据包发送,因为在能发出一个“I'm here!”包之前,我们无法测试接收功能。
|
||||
|
||||
首先,你需要初始化网卡以准备发送,详细步骤查看 14.5 节(不必着急看子节)。发送初始化的第一步是设置发送队列。队列的详细结构在 3.4 节中,描述符的结构在 3.3.3 节中。我们先不要使用 E1000 的 TCP offload 特性,因此你只需专注于 “传统的发送描述符格式” 即可。你应该现在就去阅读这些章节,并要熟悉这些结构。
|
||||
|
||||
##### C 结构
|
||||
|
||||
你可以用 C 的 `struct` 很方便地描述 E1000 的结构。正如你在 `struct Trapframe` 等结构中看到的那样,C `struct` 可以让你精确地描述数据在内存中的布局。C 编译器可能会在字段之间插入填充字节,但 E1000 的这些结构的布局方式恰好不会让这成为问题。如果你确实遇到字段对齐问题,可以了解一下 GCC 的 `packed` 属性。
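如果确实需要,下面是 GCC `packed` 属性的一个小示意(`example_regs` 只是演示用的假设结构,并非 E1000 的真实结构):

```c
#include <stdint.h>

// 告诉 GCC 不要在字段之间插入填充字节
struct example_regs {
	uint8_t  flag;
	uint32_t value;
} __attribute__((packed));
```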
|
||||
|
||||
查看手册中表 3-8 所给出的一个传统的发送描述符,将它复制到这里作为一个示例:
|
||||
|
||||
```
|
||||
63 48 47 40 39 32 31 24 23 16 15 0
|
||||
+---------------------------------------------------------------+
|
||||
| Buffer address |
|
||||
+---------------|-------|-------|-------|-------|---------------+
|
||||
| Special | CSS | Status| Cmd | CSO | Length |
|
||||
+---------------|-------|-------|-------|-------|---------------+
|
||||
```
|
||||
|
||||
结构的第一个字节位于图的右上角,因此要把它转换成 C 结构体,应当从右向左、从上到下读取。仔细观察你会发现,所有字段正好都能对应到某个标准大小的类型:
|
||||
|
||||
```c
|
||||
struct tx_desc
|
||||
{
|
||||
uint64_t addr;
|
||||
uint16_t length;
|
||||
uint8_t cso;
|
||||
uint8_t cmd;
|
||||
uint8_t status;
|
||||
uint8_t css;
|
||||
uint16_t special;
|
||||
};
|
||||
```
|
||||
|
||||
你的驱动程序将为发送描述符数组去保留内存,并由发送描述符指向到包缓冲区。有几种方式可以做到,从动态分配页到在全局变量中简单地声明它们。无论你如何选择,记住,E1000 是直接访问物理内存的,意味着它能访问的任何缓存区在物理内存中必须是连续的。
|
||||
|
||||
处理包缓存也有几种方式。我们推荐从最简单的开始,那就是在驱动程序初始化期间,为每个描述符保留包缓存空间,并简单地将包数据复制进预留的缓冲区中或从其中复制出来。一个以太网包最大的尺寸是 1518 字节,这就限制了这些缓存区的大小。主流的成熟驱动程序都能够动态分配包缓存区(即:当网络使用率很低时,减少内存使用量),或甚至跳过缓存区,直接由用户空间提供(就是“零复制”技术),但我们还是从简单开始为好。
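作为一种最简单的做法,下面是在驱动初始化时静态保留描述符数组和包缓存的示意。`NTXDESC`、`tx_descs`、`tx_bufs` 这些名字都是假设;内核中的全局数组位于静态数据区,物理上是连续的,可以用 `PADDR()` 取得其物理地址:

```c
#define NTXDESC    64        // 16 字节 × 64 = 1024 字节,满足 TDLEN 为 128 字节倍数的要求
#define TX_BUFSIZE 1518      // 一个标准以太网包的最大尺寸

// 发送描述符数组与对应的包缓存,二者在物理内存中都是连续的
static struct tx_desc tx_descs[NTXDESC] __attribute__((aligned(16)));
static uint8_t tx_bufs[NTXDESC][TX_BUFSIZE];
// 初始化时,可以把每个 tx_descs[i].addr 设为 PADDR(tx_bufs[i]),
// 并把 status 的 DD 位预先置 1,表示该描述符可供软件使用
```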
|
||||
|
||||
```markdown
|
||||
练习 5、执行一个 14.5 节中的初始化步骤(它的子节除外)。对于寄存器的初始化过程使用 13 节作为参考,对发送描述符和发送描述符数组参考 3.3.3 节和 3.4 节。
|
||||
|
||||
要记住,发送描述符数组对对齐有要求,并且数组长度有限制。TDLEN 必须是 128 字节对齐的,而每个发送描述符是 16 字节,所以你的发送描述符个数必须是 8 的倍数。不要使用超过 64 个描述符,否则我们针对发送环溢出的测试将无法正常工作。
|
||||
|
||||
对于 TCTL.COLD,你可以假设为全双工操作。对于 TIPG,请参考 13.4.34 节表 13-77 中针对 IEEE 802.3 标准 IPG 所描述的默认值(不要使用 14.5 节表中的值)。
|
||||
```
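下面是发送初始化中写寄存器方式的一个极简示意。寄存器偏移量(TDBAL 0x3800、TDBAH 0x3804、TDLEN 0x3808、TDH 0x3810、TDT 0x3818)按 QEMU 的 `e1000_hw.h` 中的定义书写,请自行与手册核对;TCTL 与 TIPG 的具体位值这里没有给出,需按手册相应章节填写:

```c
// 沿用上文草图中的 volatile 指针 e1000 与 tx_descs 数组
#define E1000_REG(off)  (e1000[(off) / 4])   // 偏移量以字节计,指针按 uint32_t 计数

static void
e1000_tx_init(void)
{
	E1000_REG(0x3800) = PADDR(tx_descs);      // TDBAL:描述符数组的物理基址
	E1000_REG(0x3804) = 0;                    // TDBAH:高 32 位(32 位内核下为 0)
	E1000_REG(0x3808) = sizeof(tx_descs);     // TDLEN:必须是 128 的倍数
	E1000_REG(0x3810) = 0;                    // TDH:头指针清零
	E1000_REG(0x3818) = 0;                    // TDT:尾指针清零,队列为空
	// TCTL(0x400)与 TIPG(0x410)的各个位字段请按手册要求设置,此处从略
}
```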
|
||||
|
||||
尝试运行 `make E1000_DEBUG=TXERR,TX qemu`。如果你使用的是打了 6.828 补丁的 QEMU,当你设置 TDT(发送描述符尾)寄存器时,你应该会看到一条 "e1000: tx disabled" 的信息,并且不会再有其它 "e1000" 信息了。
|
||||
|
||||
现在,发送初始化已经完成,你可以写一些代码去发送一个数据包,并且通过一个系统调用使它可以访问用户空间。你可以将要发送的数据包添加到发送队列的尾部,也就是说复制数据包到下一个包缓冲区中,然后更新 TDT 寄存器去通知网卡在发送队列中有另外的数据包。(注意,TDT 是一个进入发送描述符数组的索引,不是一个字节偏移量;关于这一点文档中说明的不是很清楚。)
|
||||
|
||||
但是,发送队列只有这么大。如果网卡在发送数据包时卡住或发送队列填满时会发生什么状况?为了检测这种情况,你需要一些来自 E1000 的反馈。不幸的是,你不能只使用 TDH(发送描述符头)寄存器;文档上明确说明,从软件上读取这个寄存器是不可靠的。但是,如果你在发送描述符的命令字段中设置 RS 位,那么,当网卡去发送在那个描述符中的数据包时,网卡将设置描述符中状态字段的 DD 位,如果一个描述符中的 DD 位被设置,你就应该知道那个描述符可以安全地回收,并且可以用它去发送其它数据包。
|
||||
|
||||
如果用户调用你的发送系统调用,但下一个描述符的 DD 位没有被置位,表示发送队列已满,该怎么办?在这种情况下,你需要决定如何处理。你可以简单地丢弃这个数据包。网络协议对这种情况有足够的弹性,但如果你丢弃大量突发的数据包,协议可能无法恢复。或者,你可以告诉用户环境它必须重试,就像你在 `sys_ipc_try_send` 中做的那样。这种做法的好处是可以对产生数据的环境形成反压。
|
||||
|
||||
```
|
||||
练习 6、写一个函数去发送一个数据包,它需要检查下一个描述符是否空闲、把包数据复制到下一个描述符中并更新 TDT。确保你正确处理了发送队列已满的情况。
|
||||
```
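下面是练习 6 中发送函数的一个示意性草图(错误码的选择、以及沿用上文草图中的 `tx_descs`/`tx_bufs`/`E1000_REG` 等名字,都只是假设;它也没有处理锁等并发问题):

```c
#define TX_CMD_EOP 0x01   // 命令字段:包结束
#define TX_CMD_RS  0x08   // 命令字段:要求网卡回报状态(发送完成后置 DD)
#define TX_STA_DD  0x01   // 状态字段:描述符已完成

int
e1000_transmit(const void *buf, size_t len)
{
	uint32_t tail = E1000_REG(0x3818);             // 当前 TDT
	volatile struct tx_desc *d = &tx_descs[tail];

	if (len > TX_BUFSIZE)
		return -E_INVAL;
	if (!(d->status & TX_STA_DD))                  // DD 未置位:发送环已满
		return -E_NO_MEM;                      // 把“队列满”交给调用者处理

	memcpy(tx_bufs[tail], buf, len);               // 把包数据复制进预留缓存
	d->length = len;
	d->status &= ~TX_STA_DD;
	d->cmd = TX_CMD_RS | TX_CMD_EOP;

	E1000_REG(0x3818) = (tail + 1) % NTXDESC;      // 前移尾指针,通知网卡
	return 0;
}
```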
|
||||
|
||||
现在,应该去测试你的包发送代码了。通过从内核中直接调用你的发送函数来尝试发送几个包。在测试时,你不需要去创建符合任何特定网络协议的数据包。运行 `make E1000_DEBUG=TXERR,TX qemu` 去测试你的代码。你应该看到类似下面的信息:
|
||||
|
||||
```c
|
||||
e1000: index 0: 0x271f00 : 9000002a 0
|
||||
...
|
||||
```
|
||||
|
||||
在你发送包时,每行都给出了发送数组中的序号、该发送描述符的缓存地址、`cmd/CSO/length` 字段、以及 `special/CSS/status` 字段。如果 QEMU 没有从你的发送描述符中输出你预期的值,检查你的描述符中是否填入了合适的值,以及你是否正确配置了 TDBAL 和 TDBAH。如果你收到的是 "e1000: TDH wraparound @0, TDT x, TDLEN y" 的信息,意味着 E1000 一路跑完了整个发送队列而没有停下来(如果 QEMU 不做检查,它会永远循环下去),这说明你没有正确地维护 TDT。如果你收到了许多 "e1000: tx disabled" 的信息,那么说明你没有正确设置发送控制寄存器。
|
||||
|
||||
一旦 QEMU 运行起来,你就可以运行 `tcpdump -XXnr qemu.pcap` 去查看你发送的包数据。如果在 QEMU 中看到了预期的 "e1000: index" 信息,但你捕获的包是空的,请再次检查你的发送描述符,是否填入了每个必需的字段和标志位。(这说明 E1000 确实遍历了你的发送描述符,但它认为没有需要发送的内容。)
|
||||
|
||||
```
|
||||
练习 7、添加一个系统调用,让你从用户空间中发送数据包。详细的接口由你来决定。但是不要忘了检查从用户空间传递给内核的所有指针。
|
||||
```
|
||||
|
||||
#### 发送包:网络服务器
|
||||
|
||||
现在,你已经有一个系统调用接口可以发送包到你的设备驱动程序端了。输出辅助环境的目标是在一个循环中做下面的事情:从核心网络服务器中接收 `NSREQ_OUTPUT` IPC 消息,并使用你在上面增加的系统调用去发送伴随这些 IPC 消息的数据包。这个 `NSREQ_OUTPUT` IPC 是通过 `net/lwip/jos/jif/jif.c` 中的 `low_level_output` 函数来发送的。它集成 lwIP 栈到 JOS 的网络系统。每个 IPC 将包含一个页,这个页由一个 `union Nsipc` 和在 `struct jif_pkt pkt` 字段中的一个包组成(查看 `inc/ns.h`)。`struct jif_pkt` 看起来像下面这样:
|
||||
|
||||
```c
|
||||
struct jif_pkt {
|
||||
int jp_len;
|
||||
char jp_data[0];
|
||||
};
|
||||
```
|
||||
|
||||
`jp_len` 表示包的长度。IPC 页上随后的所有字节都用来存放包的内容。在结构末尾使用一个长度为 0 的数组(比如 `jp_data`)来表示一个没有预先确定长度的缓存,是一个常见的 C 技巧(也有人说这是一种令人讨厌的做法)。因为 C 不做数组边界检查,只要你确保结构后面有足够的未使用内存,就可以把 `jp_data` 当作任意大小的数组来使用。
|
||||
|
||||
当设备驱动程序的发送队列中没有足够空间时,要注意设备驱动程序、输出环境和核心网络服务器之间的交互方式。核心网络服务器使用 IPC 把包发送给输出环境。如果输出环境因为驱动程序没有更多缓存空间容纳新数据包而阻塞在发送数据包的系统调用上,那么核心网络服务器就会阻塞,等待输出环境接收这个 IPC 调用。
|
||||
|
||||
```markdown
|
||||
练习 8、实现 `net/output.c`。
|
||||
```
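作为参考,输出环境主循环的大致形态如下。这里的 `sys_net_try_send` 只是一个假设的系统调用名(以你在练习 7 中定义的接口为准),并且假设存在一个页对齐的 `union Nsipc nsipcbuf` 缓冲区(框架代码中有类似的声明):

```c
// net/output.c 的极简示意(省略了错误处理与发送队列满时的重试)
void
output(envid_t ns_envid)
{
	envid_t from;
	int perm;

	while (1) {
		// 等待核心网络服务器发来的 IPC,随附的页被映射到 nsipcbuf
		int32_t req = ipc_recv(&from, &nsipcbuf, &perm);
		if (req != NSREQ_OUTPUT)
			continue;
		// 把 jif_pkt 中的数据交给驱动程序发送(接口名为假设)
		sys_net_try_send(nsipcbuf.pkt.jp_data, nsipcbuf.pkt.jp_len);
	}
}
```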
|
||||
|
||||
你可以使用 `net/testoutput.c` 去测试你的输出代码而无需整个网络服务器参与。尝试运行 `make E1000_DEBUG=TXERR,TX run-net_testoutput`。你将看到如下的输出:
|
||||
|
||||
```c
|
||||
Transmitting packet 0
|
||||
e1000: index 0: 0x271f00 : 9000009 0
|
||||
Transmitting packet 1
|
||||
e1000: index 1: 0x2724ee : 9000009 0
|
||||
...
|
||||
```
|
||||
|
||||
运行 `tcpdump -XXnr qemu.pcap` 将输出:
|
||||
|
||||
|
||||
```c
|
||||
reading from file qemu.pcap, link-type EN10MB (Ethernet)
|
||||
-5:00:00.600186 [|ether]
|
||||
0x0000: 5061 636b 6574 2030 30 Packet.00
|
||||
-5:00:00.610080 [|ether]
|
||||
0x0000: 5061 636b 6574 2030 31 Packet.01
|
||||
...
|
||||
```
|
||||
|
||||
使用更多的数据包去测试,可以运行 `make E1000_DEBUG=TXERR,TX NET_CFLAGS=-DTESTOUTPUT_COUNT=100 run-net_testoutput`。如果它导致你的发送队列溢出,再次检查你的 DD 状态位是否正确,以及是否告诉硬件去设置 DD 状态位(使用 RS 命令位)。
|
||||
|
||||
你的代码应该会通过 `make grade` 的 `testoutput` 测试。
|
||||
|
||||
```
|
||||
问题
|
||||
|
||||
1、你是如何构造你的发送实现的?在实践中,如果发送缓存区满了,你该如何处理?
|
||||
```
|
||||
|
||||
|
||||
### Part B:接收包和 web 服务器
|
||||
|
||||
#### 接收包
|
||||
|
||||
就像你在发送包中做的那样,你将去配置 E1000 去接收数据包,并提供一个接收描述符队列和接收描述符。在 3.2 节中描述了接收包的操作,包括接收队列结构和接收描述符、以及在 14.4 节中描述的详细的初始化过程。
|
||||
|
||||
```
|
||||
练习 9、阅读 3.2 节。你可以忽略关于中断和 offload 校验和方面的内容(如果在后面你想去使用这些特性,可以再返回去阅读),你现在不需要去考虑阈值的细节和网卡内部缓存是如何工作的。
|
||||
```
|
||||
|
||||
除了接收队列是由一系列的等待入站数据包去填充的空缓存包以外,接收队列的其它部分与发送队列非常相似。所以,当网络空闲时,发送队列是空的(因为所有的包已经被发送出去了),而接收队列是满的(全部都是空缓存包)。
|
||||
|
||||
当 E1000 接收一个包时,它首先与网卡的过滤器进行匹配检查(例如,去检查这个包的目标地址是否为这个 E1000 的 MAC 地址),如果这个包不匹配任何过滤器,它将忽略这个包。否则,E1000 尝试从接收队列头部去检索下一个接收描述符。如果头(RDH)追上了尾(RDT),那么说明接收队列已经没有空闲的描述符了,所以网卡将丢弃这个包。如果有空闲的接收描述符,它将复制这个包的数据到描述符指向的缓存中,设置这个描述符的 DD 和 EOP 状态位,并递增 RDH。
|
||||
|
||||
如果 E1000 接收到的数据包比某个接收描述符的包缓存还要大,它将按需从接收队列中取出尽可能多的描述符来保存数据包的全部内容。为了表明发生了这种情况,它会在所有这些描述符上设置 DD 状态位,但只在其中最后一个描述符上设置 EOP 状态位。你可以在驱动程序中处理这种情况,也可以简单地把网卡配置为不接收这种“长包”(这种包也被称为“巨帧”),并确保你的接收缓存足够大,能够存下最大的标准以太网数据包(1518 字节)。
|
||||
|
||||
```markdown
|
||||
练习 10、设置接收队列并按 14.4 节中的流程去配置 E1000。你不需要支持“长包”或组播。到目前为止,我们不配置网卡使用中断;如果你后面决定使用接收中断,可以再回来修改。另外,配置 E1000 剥除以太网的 CRC 校验和,因为我们的评级脚本要求必须去掉它。
|
||||
|
||||
默认情况下,网卡将过滤掉所有的数据包。你必须使用网卡的 MAC 地址去配置接收地址寄存器(RAL 和 RAH)以接收发送到这个网卡的数据包。你可以简单地硬编码 QEMU 的默认 MAC 地址 52:54:00:12:34:56(我们已经在 lwIP 中硬编码了这个地址,因此这样做不会有问题)。使用字节顺序时要注意;MAC 地址是从低位字节到高位字节的方式来写的,因此 52:54:00:12 是 MAC 地址的低 32 位,而 34:56 是它的高 16 位。
|
||||
|
||||
E1000 的接收缓存区大小仅支持几个指定的值(见 13.4.22 节描述的 RCTL.BSIZE 值)。如果你把接收包缓存设置得足够大并禁止长包,那就不用担心包跨越多个接收缓存的问题。另外要记住,和发送一样,接收队列和包缓存都必须位于连续的物理内存中。
|
||||
|
||||
你应该使用至少 128 个接收描述符。
|
||||
```
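下面是按上述字节顺序把 QEMU 默认 MAC 地址 52:54:00:12:34:56 填入接收地址寄存器的示意(RAL0/RAH0 的偏移量 0x5400/0x5404 取自 `e1000_hw.h`,请自行核对;`E1000_REG` 沿用前文草图):

```c
// 接收初始化中的一段:低 32 位进 RAL[0],高 16 位进 RAH[0],并置位 Address Valid
E1000_REG(0x5400) = 0x12005452;               // 52:54:00:12,低位字节在前
E1000_REG(0x5404) = 0x00005634 | (1u << 31);  // 34:56,bit 31 为 Address Valid 位
```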
|
||||
|
||||
现在,你可以做接收功能的基本测试了,甚至不用写代码去接收包。运行 `make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-net_testinput`。`testinput` 将发送一个 ARP(地址解析协议)通告包(使用你的包发送系统调用),而 QEMU 会自动回复它。即便你的驱动还不能接收这个回复,你也应该会看到一条 "e1000: unicast match[0]: 52:54:00:12:34:56" 的消息,表示 E1000 接收到了一个包,并且它匹配了配置的接收过滤器。如果你看到的是 "e1000: unicast mismatch: 52:54:00:12:34:56" 消息,则表示 E1000 过滤掉了这个包,说明你的 RAL 和 RAH 配置不正确。请确认你使用了正确的字节顺序,并且没有忘记设置 RAH 中的 "Address Valid" 位。如果你没有收到任何 "e1000" 消息,那么或许是你没有正确地启用接收功能。
|
||||
|
||||
现在,你可以准备实现接收数据包了。为了接收数据包,你的驱动程序必须跟踪下一个等待接收包的描述符(提示:取决于你的设计,E1000 中可能已经有一个寄存器替你完成了这件事)。与发送类似,文档中说明 RDH 寄存器不能从软件中可靠地读取,因此要确定一个数据包是否已经被写入某个描述符的包缓存,你需要读取该描述符中的 DD 状态位。如果 DD 位被置位,你就可以从那个描述符的缓存中把数据包复制出来,然后通过更新队列的尾索引 RDT 来告诉网卡这个描述符又空闲了。
|
||||
|
||||
如果 DD 位没有被置位,说明还没有接收到包。这相当于发送队列已满在接收侧的对应情况,此时你同样有几种选择。你可以简单地返回一个“重试”错误,要求调用者再试一次。对于发送队列已满来说,这种做法还不错,因为那只是暂时状况;但对空的接收队列来说就不太合适了,因为接收队列可能会长时间保持为空。第二种方法是挂起调用环境,直到接收队列中有包可以处理为止。这种策略与 `sys_ipc_recv` 非常类似。就像 IPC 的情形一样,因为每个 CPU 只有一个内核栈,一旦我们离开内核,栈上的状态就会丢失。我们需要设置一个标志,表明该环境因为接收队列为空(下溢)而被挂起,并记录下系统调用的参数。这种方法的缺点是比较复杂:必须让 E1000 产生接收中断,并且驱动程序必须处理该中断,才能唤醒阻塞等待数据包的环境。
|
||||
|
||||
```
|
||||
练习 11、写一个函数从 E1000 中接收一个数据包,然后通过一个系统调用把它暴露给用户空间。确保你正确处理了接收队列为空的情况。
|
||||
```
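下面是练习 11 中接收函数的一个示意性草图。它假设接收侧也像发送侧那样静态分配了 `rx_descs`/`rx_bufs`/`NRXDESC`(`struct rx_desc` 按手册 3.2.3 节定义),并且用“RDT 的下一个描述符”来跟踪下一个待处理的包;返回值与错误码同样只是假设:

```c
#define RX_STA_DD  0x01   // 接收状态:描述符中已有数据
#define RX_STA_EOP 0x02   // 接收状态:包的最后一个描述符

int
e1000_receive(void *buf, size_t bufsize)
{
	uint32_t next = (E1000_REG(0x2818) + 1) % NRXDESC;   // RDT 的下一个描述符
	volatile struct rx_desc *d = &rx_descs[next];
	size_t len;

	if (!(d->status & RX_STA_DD))
		return -E_NO_MEM;               // 接收队列为空:示意性地让调用者重试

	len = d->length;
	if (len > bufsize)
		len = bufsize;
	memcpy(buf, rx_bufs[next], len);        // 把包数据复制给调用者

	d->status = 0;                          // 归还描述符
	E1000_REG(0x2818) = next;               // 把 RDT 前移到刚处理完的描述符
	return len;
}
```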
|
||||
|
||||
```markdown
|
||||
小挑战!如果发送队列已满或接收队列为空,环境和你的驱动程序可能会花费大量 CPU 周期来轮询、等待一个描述符。E1000 能够在完成一个发送或接收描述符后产生中断,从而避免轮询。修改你的驱动程序,使发送和接收队列的处理通过中断而不是轮询的方式进行。
|
||||
|
||||
注意,中断一旦被触发,就会一直保持触发状态,直到驱动程序清除该中断为止。在你的中断服务程序中,一旦处理完成,要确保清除中断状态。否则,从中断服务程序返回后,CPU 会立刻再次跳入你的中断服务程序。除了在 E1000 网卡上清除中断之外,还需要使用 `lapic_eoi` 在 LAPIC 上确认中断。
|
||||
```
|
||||
|
||||
#### 接收包:网络服务器
|
||||
|
||||
在网络服务器的输入环境中,你需要使用你新增的接收系统调用来接收数据包,并使用 `NSREQ_INPUT` IPC 消息把它们传递给核心网络服务器环境。这些 IPC 输入消息应该附带一个页,页中放着一个 `union Nsipc`,其 `struct jif_pkt pkt` 字段中保存着从网络上接收到的数据包。
|
||||
|
||||
```markdown
|
||||
练习 12、实现 `net/input.c`。
|
||||
```
|
||||
|
||||
使用 `make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-net_testinput` 再次运行 `testinput`,你应该会看到:
|
||||
|
||||
```c
|
||||
Sending ARP announcement...
|
||||
Waiting for packets...
|
||||
e1000: index 0: 0x26dea0 : 900002a 0
|
||||
e1000: unicast match[0]: 52:54:00:12:34:56
|
||||
input: 0000 5254 0012 3456 5255 0a00 0202 0806 0001
|
||||
input: 0010 0800 0604 0002 5255 0a00 0202 0a00 0202
|
||||
input: 0020 5254 0012 3456 0a00 020f 0000 0000 0000
|
||||
input: 0030 0000 0000 0000 0000 0000 0000 0000 0000
|
||||
```
|
||||
|
||||
"input:” 打头的行是一个 QEMU 的 ARP 回复的十六进制转储。
|
||||
|
||||
你的代码应该能通过 `make grade` 的 `testinput` 测试。注意,如果不先发送至少一个 ARP 包来告知 QEMU 中 JOS 的 IP 地址,就没有办法测试包的接收,因此你发送代码中的 bug 也可能导致这个测试失败。
|
||||
|
||||
为了更彻底地测试你的网络代码,我们提供了一个称为 `echosrv` 的守护程序,它在端口 7 上运行一个 `echo` 服务器,把通过 TCP 连接发送给它的任何内容原样回显。使用 `make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-echosrv` 在一个终端中启动 `echo` 服务器,然后在另一个终端中通过 `make nc-7` 连接它。你输入的每一行都会被这个服务器回显出来。每当仿真的 E1000 接收到一个包,QEMU 都会在控制台上输出类似下面的内容:
|
||||
|
||||
```c
|
||||
e1000: unicast match[0]: 52:54:00:12:34:56
|
||||
e1000: index 2: 0x26ea7c : 9000036 0
|
||||
e1000: index 3: 0x26f06a : 9000039 0
|
||||
e1000: unicast match[0]: 52:54:00:12:34:56
|
||||
```
|
||||
|
||||
做到这一点后,你应该也就能通过 `echosrv` 的测试了。
|
||||
|
||||
```
|
||||
问题
|
||||
|
||||
2、你如何构造你的接收实现?在实践中,如果接收队列是空的并且一个用户环境要求下一个入站包,你怎么办?
|
||||
```
|
||||
|
||||
|
||||
```
|
||||
小挑战!在开发者手册中阅读关于 EEPROM 的内容,并写出从 EEPROM 中加载 E1000 的 MAC 地址的代码。目前,QEMU 的默认 MAC 地址是硬编码到你的接收初始化代码和 lwIP 中的。修复你的初始化代码,让它能够从 EEPROM 中读取 MAC 地址,和增加一个系统调用去传递 MAC 地址到 lwIP 中,并修改 lwIP 去从网卡上读取 MAC 地址。通过配置 QEMU 使用一个不同的 MAC 地址去测试你的变更。
|
||||
```
|
||||
|
||||
```
|
||||
小挑战!修改你的 E1000 驱动程序,使用“零复制”技术。目前,数据包要从用户空间缓存复制到发送包缓存中,再从接收包缓存复制回用户空间缓存。使用“零复制”技术的驱动程序可以让用户空间和 E1000 直接共享包缓存内存,从而省掉这些复制。实现“零复制”有许多不同的方法,包括把驱动程序分配的结构映射到用户空间,或者把用户提供的缓存直接传递给 E1000。不论你选择哪种方法,都要小心缓存的重用方式,避免在用户空间代码和 E1000 之间引入竞争。
|
||||
```
|
||||
|
||||
```
|
||||
小挑战!把“零复制”的概念用到 lwIP 中。
|
||||
|
||||
一个典型的包是由许多头构成的。用户发送的数据被发送到 lwIP 中的一个缓存中。TCP 层要添加一个 TCP 包头,IP 层要添加一个 IP 包头,而 MAC 层有一个以太网头。甚至还有更多的部分增加到包上,这些部分要正确地连接到一起,以便于设备驱动程序能够发送最终的包。
|
||||
|
||||
E1000 的发送描述符设计非常适合收集分散在内存中的包片段,比如 lwIP 创建的包帧。如果你排入多个发送描述符,但只在最后一个描述符上设置 EOP 命令位,那么 E1000 会在内部把这些描述符对应的包缓存拼接起来,并在处理到带 EOP 标记的描述符时,把拼接后的缓存作为一个包发送出去。因此,分散的包片段不需要预先在内存中拼接到一起。
|
||||
|
||||
修改你的驱动程序,使它能够发送由多个缓存中的片段组成的数据包而无需复制;同时修改 lwIP,使它不再像现在这样把包的各个片段合并到一起,因为你的驱动程序已经可以正确处理这种情况了。
|
||||
```
|
||||
|
||||
```markdown
|
||||
小挑战!增加你的系统调用接口,以便于它能够为多于一个的用户环境提供服务。如果有多个网络栈(和多个网络服务器)并且它们各自都有自己的 IP 地址运行在用户模式中,这将是非常有用的。接收系统调用将决定它需要哪个环境来转发每个入站的包。
|
||||
|
||||
注意,当前的接口无法区分不同的数据包属于谁;如果多个环境都调用包接收系统调用,每个环境将得到入站数据包的一个子集,而这个子集可能包含并非发往该调用环境的数据包。
|
||||
|
||||
在 [这篇][7] 外核(exokernel)论文的 2.2 节和第 3 节中,对这个问题做了深入解释,并给出了一种在类似 JOS 的内核中处理它的方法。尝试思考这个问题并给出你自己的解决方案;你不需要像论文中那样复杂的方案。
|
||||
```
|
||||
|
||||
#### Web 服务器
|
||||
|
||||
最简单的一类 web 服务器就是把一个文件的内容发送给发出请求的客户端。我们在 `user/httpd.c` 中提供了一个非常简单的 web 服务器框架代码。这个框架代码负责处理入站连接并解析请求头。
|
||||
|
||||
```markdown
|
||||
练习 13、这个 web 服务器中缺失了发送一个文件的内容到客户端的处理代码。通过实现 `send_file` 和 `send_data` 完成这个 web 服务器。
|
||||
```
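作为参考,`send_data` 的一个极简示意如下。函数签名和 `req->sock` 字段名都以 `user/httpd.c` 框架中的实际声明为准,这里只是假设;它做的事就是从文件描述符循环读出数据并写到客户端套接字:

```c
static int
send_data(struct http_request *req, int fd)
{
	char buf[512];
	int n;

	// 反复读取文件内容并原样写给客户端,直到读完或出错
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		if (write(req->sock, buf, n) != n)
			return -1;
	return (n < 0) ? -1 : 0;
}
```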
|
||||
|
||||
在你完成这个 web 服务器后,启动它(`make run-httpd-nox`),用你喜欢的浏览器访问 http:// _host_ : _port_ /index.html 地址。其中 _host_ 是运行 QEMU 的计算机的名字(如果你在 athena 上运行 QEMU,使用 `hostname.mit.edu`,其中 hostname 是在 athena 上运行 `hostname` 命令的输出;如果你是在运行 QEMU 的同一台机器上运行 web 浏览器,直接使用 `localhost` 即可),而 _port_ 是 `make which-ports` 命令报告的 web 服务器端口号。你应该会看到一个由运行在 JOS 中的 HTTP 服务器提供的 web 页面。
|
||||
|
||||
到目前为止,你的评级测试得分应该是 105 分(满分为 105 分)。
|
||||
|
||||
```markdown
|
||||
小挑战!在 JOS 中添加一个简单的聊天服务器,多个人可以连接到这个服务器上,并且任何用户输入的内容都被发送到其它用户。为实现它,你需要找到一个一次与多个套接字通讯的方法,并且在同一时间能够在同一个套接字上同时实现发送和接收。有多个方法可以达到这个目的。lwIP 为 `recv`(查看 `net/lwip/api/sockets.c` 中的 `lwip_recvfrom`)提供了一个 MSG_DONTWAIT 标志,以便于你不断地轮询所有打开的套接字。注意,虽然网络服务器的 IPC 支持 `recv` 标志,但是通过普通的 `read` 函数并不能访问它们,因此你需要一个方法来传递这个标志。一个更高效的方法是为每个连接去启动一个或多个环境,并且使用 IPC 去协调它们。而且碰巧的是,对于一个套接字,在结构 Fd 中找到的 lwIP 套接字 ID 是全局的(不是每个环境私有的),因此,比如一个 `fork` 的子环境继承了它的父环境的套接字。或者,一个环境通过构建一个包含了正确套接字 ID 的 Fd 就能够发送到另一个环境的套接字上。
|
||||
```
|
||||
|
||||
```
|
||||
问题
|
||||
|
||||
3、由 JOS 的 web 服务器提供的 web 页面显示了什么?
|
||||
4、你做这个实验大约花了多长的时间?
|
||||
```
|
||||
|
||||
**本实验到此结束了。** 一如既往,不要忘了运行 `make grade`,并写下你的答案和挑战问题解决方案的描述。在交付之前,使用 `git status` 和 `git diff` 检查你的变更,并不要忘了 `git add answers-lab6.txt`。完成之后,使用 `git commit -am 'my solutions to lab 6'` 提交你的变更,然后执行 `make handin` 并按照提示操作。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/
|
||||
|
||||
作者:[csail.mit][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://pdos.csail.mit.edu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://wiki.qemu.org/download/qemu-doc.html#Using-the-user-mode-network-stack
|
||||
[2]: http://www.wireshark.org/
|
||||
[3]: http://www.sics.se/~adam/lwip/
|
||||
[4]: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/ns.png
|
||||
[5]: https://pdos.csail.mit.edu/6.828/2018/readings/hardware/8254x_GBe_SDM.pdf
|
||||
[6]: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/e1000_hw.h
|
||||
[7]: http://pdos.csail.mit.edu/papers/exo:tocs.pdf
|
@ -0,0 +1,175 @@
|
||||
弄清 Linux 系统运行何种系统管理程序
|
||||
======
|
||||
虽然我们经常听到系统管理器这词,但很少有人深究其确切意义。现在我们将向你展示其区别。
|
||||
|
||||
我会尽自己所能来解释清楚一切。我们大多都知道 System V 和 systemd 两种系统管理器。 System V (简写 Sysv) 是老系统所使用的古老且传统的 init 进程和系统管理器。
|
||||
|
||||
Systemd 是全新的 init 进程和系统管理器,并且适配大部分主发布版本 Linux 系统。
|
||||
|
||||
Linux 系统中主要有三种 init 进程系统,很出名且仍在使用。大多数 Linux 发布版本都使用其中之一。
|
||||
|
||||
### 什么是初始化系统管理器 (init System Manager)?
|
||||
|
||||
在基于 Linux/Unix 的操作系统中,init (初始化的简称) 是内核启动系统时开启的第一个进程。
|
||||
|
||||
它持有的进程 ID(PID)号为 1,其在后台一直运行着,直到关机。
|
||||
|
||||
Init 会查找 `/etc/inittab` 文件中相应配置信息来确定系统的运行级别,然后根据运行级别启动所有的后台进程和后台应用。
|
||||
|
||||
作为 Linux 启动过程的一部分,BIOS、MBR、GRUB 和内核都在此进程之前就已经被激活了。
|
||||
|
||||
下面列出的是 Linux 的可用运行级别(存在七个运行级别,从零到六)。
|
||||
|
||||
* **`0:`** 停机
|
||||
* **`1:`** 单用户模式
|
||||
* **`2:`** 多用户, 无 NFS (译者注:Network File System 即网络文件系统)
|
||||
* **`3:`** 全功能多用户模式
|
||||
* **`4:`** 未使用
|
||||
* **`5:`** X11 (GUI – 图形用户界面)
|
||||
* **`6:`** 重启
|
||||
|
||||
|
||||
|
||||
下面列出的是 Linux 系统中广泛使用的三种 init 进程系统。
|
||||
|
||||
* **`System V (Sys V):`** System V(Sys V)是类 Unix 操作系统的首款传统的 `init` 进程系统。
|
||||
* **`Upstart:`** Upstart 基于事件驱动,是 `/sbin/init` 守护进程的替代品。
|
||||
  * **`systemd:`** Systemd 是一款全新的 `init` 系统和系统管理器,已被各大主流 Linux 发行版采用,以取代传统的 `SysV init` 系统。
|
||||
|
||||
|
||||
|
||||
### 什么是 System V (Sys V)?
|
||||
|
||||
System V(Sys V)是类 Unix 操作系统的首款传统的 `init` 进程系统。init 是内核启动系统期间启动的第一个进程,它是所有进程的父进程。
|
||||
|
||||
起初,大多数 Linux 发行版都使用名为 System V(Sys V)的传统 `init` 进程系统。 多年来,为了解决标准版本中的设计限制,发布了几个替代的 init 进程系统,例如launchd、Service Management Facility、systemd 和 Upstart。
|
||||
|
||||
但只有 systemd 最终被几个主要 Linux 发行版本所采用,而放弃传统的 SysV。
|
||||
|
||||
### 在 Linux 上如何识别出 `System V(Sys V)` 系统管理器
|
||||
|
||||
在系统上运行如下命令来查看是否在运行着 System V (Sys V) 系统管理器:
|
||||
|
||||
### 方法 1: 使用 `ps` 命令
|
||||
|
||||
**ps** – 显示当前进程快照。`ps` 会显示当前活动进程的信息。其输出区分不出是 System V(SysV) 还是 upstart,所以我建议使用其它方法。
|
||||
|
||||
```
|
||||
# ps -p1 | grep "init\|upstart\|systemd"
|
||||
1 ? 00:00:00 init
|
||||
```
|
||||
|
||||
### 方法 2: 使用 `rpm` 命令
|
||||
|
||||
RPM 即 `Red Hat Package Manager`(红帽包管理器),是一款功能强大的[软件包管理][1]命令行工具,在基于 Red Hat 的发行版中使用,如 RHEL、CentOS、Fedora、openSUSE 和 Mageia。此工具可以在系统/服务上对软件进行安装、更新、删除、查询及验证等操作。通常 RPM 文件都带有 `.rpm` 后缀。
|
||||
RPM 会使用必需的库和依赖来构建软件,并且不会与系统上已安装的其它软件包冲突。
|
||||
|
||||
```
|
||||
# rpm -qf /sbin/init
|
||||
SysVinit-2.86-17.el5
|
||||
```
|
||||
|
||||
### 什么是 Upstart?
|
||||
|
||||
Upstart 基于事件驱动,是 `/sbin/init` 守护进程的替代品。用来启动、停止及监视系统的所有任务和服务。
|
||||
|
||||
最初,它是为 Ubuntu 系统而开发的,但也可以在所有的 Linux 发布版本中部署运行,以替代古老的 System-V init 进程系统。
|
||||
|
||||
它在 Ubuntu 9.10 到 14.10 版本和基于 RHEL 6 的系统中使用,之后的 Linux 版本被 systemd 取代了。
|
||||
|
||||
### 在 Linux 上如何识别出 `Upstart` 系统管理器
|
||||
|
||||
在系统上运行如下命令来查看是否在运行着 Upstart 系统管理器:
|
||||
|
||||
### 方法 1: 使用 `ps` 命令
|
||||
|
||||
**ps** – 显示当前进程快照。`ps` 会显示当前活动进程的信息。其输出区分不出是 System V(SysV) 还是 upstart,所以我建议使用其它方法。
|
||||
|
||||
```
|
||||
# ps -p1 | grep "init\|upstart\|systemd"
|
||||
1 ? 00:00:00 init
|
||||
```
|
||||
|
||||
### 方法 2: 使用 `rpm` 命令
|
||||
|
||||
RPM 即 `Red Hat Package Manager`(红帽包管理器),是一款功能强大的软件包管理命令行工具,在基于 Red Hat 的发行版中使用,如 RHEL、CentOS、Fedora、openSUSE 和 Mageia。此 [RPM 命令][2]可以让你在系统/服务上对软件进行安装、更新、删除、查询及验证等操作。通常 RPM 文件都带有 `.rpm` 后缀。
|
||||
RPM 会使用必需的库和依赖来构建软件,并且不会与系统上已安装的其它软件包冲突。
|
||||
|
||||
```
|
||||
# rpm -qf /sbin/init
|
||||
upstart-0.6.5-16.el6.x86_64
|
||||
```
|
||||
|
||||
### 方法 3: 使用 `/sbin/init` 文件
|
||||
|
||||
`/sbin/init` 程序会将根文件系统从内存加载或切换到磁盘。
|
||||
这是启动过程的主要部分。这个进程开始时的运行级别为 “N”(无)。`/sbin/init` 程序会按照 `/etc/inittab` 配置文件的描述来初始化系统。
|
||||
|
||||
```
|
||||
# /sbin/init --version
|
||||
init (upstart 0.6.5)
|
||||
Copyright (C) 2010 Canonical Ltd.
|
||||
|
||||
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
|
||||
```
|
||||
|
||||
### 什么是 systemd?
|
||||
|
||||
Systemd 是一款全新的 `init` 系统和系统管理器,已被各大主流 Linux 发行版采用,以取代传统的 `SysV init` 系统。
|
||||
|
||||
systemd 与 SysV 和 LSB(全称:Linux Standards Base)init 脚本兼容,可以作为 SysV init 系统的直接替代品。它是内核启动的第一个进程,PID 为 1。
|
||||
|
||||
它是所有进程的父进程,Fedora 15 是第一个采用 systemd 而不是 upstart 的发行版本。[systemctl][3] 是一款命令行工具,它是管理 systemd 守护进程/服务(如 start、restart、stop、enable、disable、reload 和 status )的主要工具。
|
||||
|
||||
systemd 使用 `.service` 文件,而不是 SysV init 所使用的 bash 脚本。systemd 把所有守护进程归入它自己的 Cgroups 中(译者注:Cgroups 是 control groups 的缩写,是 Linux 内核提供的一种可以限制、记录、隔离进程组(process groups)所使用的物理资源(如:CPU、内存、IO 等)的机制。最初由 Google 的工程师提出,后来被整合进 Linux 内核。Cgroups 也是 LXC 为实现虚拟化所使用的资源管理手段,可以说没有 Cgroups 就没有 LXC。),因此你可以通过查看 `/cgroup/systemd` 目录来了解系统的层次结构。
|
||||
|
||||
### 在 Linux 上如何识别出 `systemd` 系统管理器
|
||||
|
||||
在系统上运行如下命令来查看是否在运行着 systemd 系统管理器:
|
||||
|
||||
### 方法 1: 使用 `ps` 命令
|
||||
|
||||
**ps** – 显示当前进程快照。`ps` 会显示当前活动进程的信息。
|
||||
|
||||
```
|
||||
# ps -p1 | grep "init\|upstart\|systemd"
|
||||
1 ? 00:18:09 systemd
|
||||
```
|
||||
|
||||
### 方法 2: 使用 `rpm` 命令
|
||||
|
||||
RPM 即 `Red Hat Package Manager`(红帽包管理器),是一款功能强大的软件包管理命令行工具,在基于 Red Hat 的发行版中使用,如 RHEL、CentOS、Fedora、openSUSE 和 Mageia。此工具可以在系统/服务上对软件进行安装、更新、删除、查询及验证等操作。通常 RPM 文件都带有 `.rpm` 后缀。
|
||||
|
||||
RPM 会使用必需的库和依赖来构建软件,并且不会与系统上已安装的其它软件包冲突。
|
||||
|
||||
```
|
||||
# rpm -qf /sbin/init
|
||||
systemd-219-30.el7_3.9.x86_64
|
||||
```
|
||||
|
||||
### 方法 3: 使用 `/sbin/init` 文件
|
||||
|
||||
`/sbin/init` 程序会将根文件系统从内存加载或切换到磁盘。
|
||||
这是启动过程的主要部分。这个进程开始时的运行级别为 “N”(无)。`/sbin/init` 程序会按照 `/etc/inittab` 配置文件的描述来初始化系统。
|
||||
|
||||
```
|
||||
# file /sbin/init
|
||||
/sbin/init: symbolic link to `../lib/systemd/systemd'
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-determine-which-init-system-manager-is-running-on-linux-system/
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[runningwater](https://github.com/runningwater)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/prakash/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/category/package-management/
|
||||
[2]: https://www.2daygeek.com/rpm-command-examples/
|
||||
[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
|
@ -0,0 +1,98 @@
|
||||
理解 Linux 链接 (二)
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/links-fikri-rasyid-7853.jpg?itok=0jBT_1M2)
|
||||
|
||||
在[本系列的第一篇文章中][1],我们认识了硬链接和软链接,知道链接在很多时候非常有用。链接看起来比较简单,但也有一些不易察觉的奇怪之处需要注意,这就是我们这篇文章要讲的内容。例如,想一下我们在前一篇文章中创建的指向 `libblah` 的链接。请注意,我们是在目标文件夹中创建这个链接的。
|
||||
|
||||
```
|
||||
cd /usr/local/lib
|
||||
|
||||
ln -s /usr/lib/libblah
|
||||
```
|
||||
|
||||
这样是可以工作的,但是下面的这个例子却是不行的。
|
||||
|
||||
```
|
||||
cd /usr/lib
|
||||
|
||||
ln -s libblah /usr/local/lib
|
||||
```
|
||||
|
||||
也就是说,从原始文件夹内到目标文件夹之间的链接将不起作用。
|
||||
|
||||
出现这种情况的原因是,`ln` 会把它理解为在 `/usr/local/lib` 中创建一个从 `libblah` 指向 `libblah` 的链接,也就是指向它自身的链接。这是因为符号链接里保存的只是目标的名字(`libblah`),而不是目标的路径,所以最终会得到一个坏的链接。
|
||||
|
||||
然而,请看下面的这种情况。
|
||||
|
||||
```
|
||||
cd /usr/lib
|
||||
|
||||
ln -s /usr/lib/libblah /usr/local/lib
|
||||
```
|
||||
|
||||
是可以工作的。奇怪的事情又来了,不管你在文件系统的任何位置执行指令,它都可以好好的工作。使用绝对路径,也就是说,指定整个完整的路径,从根目录(`/`)开始到需要的文件或者是文件夹,是最好的实现方式。
|
||||
|
||||
其它需要注意的事情是,只要 `/usr/lib` 和 `/usr/local/lib` 在一个分区上,做一个如下的硬链接:
|
||||
|
||||
```
|
||||
cd /usr/lib
|
||||
|
||||
ln libblah /usr/local/lib
|
||||
```
|
||||
|
||||
也是可以工作的,因为硬链接不依赖于指向文件系统内的文件来工作。
|
||||
|
||||
如果硬链接不起作用,那么可能是你想跨分区建立硬链接。比如说,你在分区 A 上有文件 `fileA`,这个分区挂载在 `/path/to/partitionA/directory` 目录上,而你想把 `fileA` 链接到挂载在 `/path/to/partitionB/directory` 的分区 B 上,这样是行不通的:
|
||||
|
||||
```
|
||||
ln /path/to/partitionA/directory/file /path/to/partitionB/directory
|
||||
```
|
||||
|
||||
正如我们之前所说,硬链接是文件系统中指向同一分区上数据的条目,你不能让一个分区上的条目指向另一个分区上的数据。在这种情况下,你只能选择创建一个软链接:
|
||||
|
||||
```
|
||||
ln -s /path/to/partitionA/directory/file /path/to/partitionB/directory
|
||||
```
|
||||
|
||||
另一个软链接能做到,而硬链接不能的是链接到一个目录。
|
||||
|
||||
```
|
||||
ln -s /path/to/some/directory /path/to/some/other/directory
|
||||
```
|
||||
|
||||
这将在 `/path/to/some/other/directory` 中创建 `/path/to/some/directory` 的链接,没有任何问题。
|
||||
|
||||
当你尝试用硬链接做同样的事情时,会收到一个错误提示,说不允许那么做。不允许这么做的原因是这会导致无休止的递归:如果你在目录 A 中有目录 B,然后在目录 B 中链接 A,就会出现这样的情况:目录 A 包含目录 B,目录 B 又包含目录 A,如此往复,无穷无尽。
|
||||
|
||||
当然你可以在递归中使用软链接,但你为什么要那样做呢?
|
||||
|
||||
### 我应该使用硬链接还是软链接呢?
|
||||
|
||||
通常,你几乎可以在任何地方使用软链接做任何事情,实际上,在有些情况下你也只能使用软链接。话说回来,硬链接的效率要稍高一些:它们占用的磁盘空间更少,访问速度也更快。在大多数机器上,你可以忽略这一点点差异,因为在磁盘空间越来越大、访问速度越来越快的今天,这些差异可以忽略不计。不过,如果你是在存储空间小、处理器功耗低的嵌入式系统上使用 Linux,则可能需要考虑使用硬链接。
|
||||
|
||||
另一个使用硬链接的原因是硬链接不容易损坏。假设你有一个软链接,而你不小心移动或删除了它指向的文件,那么这个软链接就会损坏,指向一个不存在的东西。这种情况不会发生在硬链接上,因为硬链接直接指向磁盘上的数据。实际上,这块磁盘空间不会被标记为空闲,直到指向它的最后一个硬链接从文件系统中被删除为止。
|
||||
|
||||
另一方面,软链接能做的事情比硬链接多:它可以指向任何东西,无论是文件还是目录,也可以指向不在同一分区上的文件和目录。仅凭这两点差异,你通常就能做出选择了。
|
||||
|
||||
### 下期
|
||||
|
||||
现在我们已经介绍了文件和目录,以及操作它们的工具。接下来,我们将转向那些可以浏览目录层次结构、在文件内部查找数据、以及检查目录内容的工具。这就是我们下一期要讲的内容。下期见。
|
||||
|
||||
你可以通过 Linux 基金会和 edX 的免费课程“[Linux 简介][2]”了解更多关于 Linux 的知识。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/2018/10/understanding-linux-links-part-2
|
||||
|
||||
作者:[Paul Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/bro66
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/blog/intro-to-linux/2018/10/linux-links-part-1
|
||||
[2]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -0,0 +1,83 @@
|
||||
使用 Ultimate Plumber 即时预览管道命令结果
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber-720x340.jpg)
|
||||
|
||||
管道命令的作用是将一个命令/程序/进程的输出发送给另一个命令/程序/进程,以便将输出结果进行进一步的处理。我们可以通过使用管道命令把多个命令组合起来,使一个命令的标准输入或输出重定向到另一个命令。两个或多个 Linux 命令之间的竖线字符(|)表示在命令之间使用管道命令。管道命令的一般语法如下所示:
|
||||
|
||||
```
|
||||
Command-1 | Command-2 | Command-3 | …| Command-N
|
||||
```
|
||||
|
||||
`Ultimate Plumber`(简称 `UP`)是一个命令行工具,它可以用于即时预览管道命令结果。如果你在使用 Linux 时经常会用到管道命令,就可以通过它更好地运用管道命令了。它可以预先显示执行管道命令后的结果,而且是即时滚动地显示,让你可以轻松构建复杂的管道。
|
||||
|
||||
下文将会介绍如何安装 `UP` 并用它将复杂管道命令的编写变得简单。
|
||||
|
||||
|
||||
**重要警告:**
|
||||
|
||||
在生产环境中请谨慎使用 `UP`!在使用它的过程中,有可能会在无意中删除重要数据,尤其是搭配 `rm` 或 `dd` 命令时需要更加小心。勿谓言之不预。
|
||||
|
||||
### 使用 Ultimate Plumber 即时预览管道命令
|
||||
|
||||
下面给出一个简单的例子介绍 `UP` 的使用方法。如果需要将 `lshw` 命令的输出传递给 `UP`,只需要在终端中输入以下命令,然后回车:
|
||||
|
||||
```
|
||||
$ lshw |& up
|
||||
```
|
||||
|
||||
你会在屏幕顶部看到一个输入框,如下图所示。
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber.png)
|
||||
|
||||
在输入命令的过程中,输入管道符号并回车,就可以立即执行已经输入了的命令。`Ultimate Plumber` 会在下方的可滚动窗口中即时显示管道命令的输出。在这种状态下,你可以通过 `PgUp`/`PgDn` 键或 `ctrl + ←`/`ctrl + →` 组合键来查看结果。
|
||||
|
||||
当你满意执行结果之后,可以使用 `ctrl + x` 组合键退出 `UP`。而退出前编写的管道命令则会保存在当前工作目录的文件中,并命名为 `up1.sh`。如果这个文件名已经被占用,就会命名为 `up2.sh`、`up3.sh` 等等以此类推,直到第 1000 个文件。如果你不需要将管道命令保存输出,只需要使用 `ctrl + c` 组合键退出即可。
|
||||
|
||||
通过 `cat` 命令可以查看 `upX.sh` 文件的内容。例如以下是我的 `up2.sh` 文件的输出内容:
|
||||
|
||||
```
|
||||
$ cat up2.sh
|
||||
#!/bin/bash
|
||||
grep network -A5 | grep : | cut -d: -f2- | paste - -
|
||||
```
|
||||
|
||||
如果通过管道发送到 `UP` 的命令运行时间太长,终端窗口的左上角会显示一个波浪号(~)字符,这就表示 `UP` 在等待前一个命令的输出结果作为输入。在这种情况下,你可能需要使用 `ctrl + s` 组合键暂时冻结 `UP` 的输入缓冲区大小。在需要解冻的时候,使用 `ctrl + q` 组合键即可。`Ultimate Plumber` 的输入缓冲区大小一般为 40 MB,到达这个限制之后,屏幕的左上角会显示一个加号。
|
||||
|
||||
以下是 `UP` 命令的一个简单演示:
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/up.gif)
|
||||
|
||||
### 安装 Ultimate Plumber
|
||||
|
||||
喜欢这个工具的话,你可以在你的 Linux 系统上安装使用。安装过程也相当简单,只需要在终端里执行以下两个命令就可以安装 `UP` 了。
|
||||
|
||||
首先从 Ultimate Plumber 的[发布页面][1]下载最新的二进制文件,并将其放在系统的某个路径下,例如 `/usr/local/bin/`。
|
||||
|
||||
```
|
||||
$ sudo wget -O /usr/local/bin/up https://github.com/akavel/up/releases/download/v0.2.1/up
|
||||
```
|
||||
|
||||
然后向 `UP` 二进制文件赋予可执行权限:
|
||||
|
||||
```
|
||||
$ sudo chmod a+x /usr/local/bin/up
|
||||
```
|
||||
|
||||
至此,你已经完成了 `UP` 的安装,可以开始编写你的管道命令了。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/ultimate-plumber-writing-linux-pipes-with-instant-live-preview/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/akavel/up/releases
|
||||
|
@ -0,0 +1,73 @@
|
||||
设计更快的网页(三):字体和 CSS 转换
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/10/designfaster3-816x345.jpg)
|
||||
|
||||
欢迎回到我们为了构建更快网页所写的系列文章。本系列的[第一][1]和[第二][2]部分讲述了如何通过优化和替换图片来减少浏览器脂肪。本部分会着眼于在 CSS([层叠式样式表][3])和字体中减掉更多的脂肪。
|
||||
|
||||
### CSS 转换
|
||||
|
||||
首先,我们先来看看问题的源头。CSS 的出现曾是技术的一大进步。你可以用一个集中式的样式表来装饰多个网页。如今很多 Web 开发者都会使用 Bootstrap 这样的框架。
|
||||
|
||||
这些框架当然方便,可是很多人都会将整个框架直接复制粘贴走。Bootstrap 非常大:目前 Bootstrap 4.0 的“最小”版本也有 144.9 KB. 在这个以 TB 来计数据的时代,它可能不算多。但就像所说的那样,一头小牛也能搞出大麻烦。
|
||||
|
||||
我们回头来看 [getfedora.org][4] 的例子。我们在[第一部分][1]中提过,第一个分析结果显示 CSS 文件占用的空间几乎比 HTML 本身还要大十倍。这里显示了所有用到的样式表:
|
||||
|
||||
![][5]
|
||||
|
||||
那是九个不同的样式表。其中的很多样式在这个页面中并没有用上。
|
||||
|
||||
#### 移除、合并、以及压缩/缩小化
|
||||
|
||||
Font-awesome CSS 代表了包含未使用样式的极端。这个页面中只用到了这个字体的三个字形。如果以 KB 为单位,getfedora.org 用到的 font-awesome CSS 最初有 25.2 KB. 在清理掉所有未使用的样式后,它只有 1.3 KB 了。这只有原来体积的 4% 左右!对于 Bootstrap CSS,原来它有 118.3 KB,清理掉无用的样式后只有 13.2 KB,这就是差异。
|
||||
|
||||
下一个问题是,我们必须要这样一个 `bootstrap.css` 和 `font-awesome.css` 吗?或者,它们能不能合起来呢?没错,它们可以。这样虽然不会节省更多的文件空间,但浏览器成功渲染页面所需要发起的请求更少了。
|
||||
|
||||
最后,在合并 CSS 文件后,尝试去除无用样式并缩小它们。这样,它们只有 4.3 KB 大小,而你省掉了 10.1 KB.
|
||||
|
||||
不幸的是,在 Fedora 软件仓库中,还没有打包好的缩小工具。不过,有几百种在线服务可以帮到你。或者,你也可以使用 [CSS-HTML-JS Minify][6],它用 Python 编写,所以容易安装。现在没有一个可用的工具来净化 CSS,不过我们有 [UnCSS][7] 这样的 Web 服务。
|
||||
|
||||
### 字体改进
|
||||
|
||||
[CSS3][8] 带来了很多开发人员喜欢的东西。它可以定义一些渲染页面所用的字体,并让浏览器在后台下载。此后,很多 Web 设计师都很开心,尤其是在他们发现了 Web 设计中图标字体的用法之后。像 [Font Awesome][9] 这样的字体集现在非常流行,也被广泛使用。这是这个字体集的大小:
|
||||
|
||||
```
|
||||
current free version 912 glyphs/icons, smallest set ttf 30.9KB, woff 14.7KB, woff2 12.2KB, svg 107.2KB, eot 31.2
|
||||
```
|
||||
|
||||
所以问题是,你需要所有的字形吗?很可能不需要。你可以通过 [FontForge][10] 来摆脱这些无用字形,但这需要很大的工作量。你还可以用 [Fontello][11]. 你可以使用公共实例,也可以配置你自己的版本,因为它是自由软件,可以在 [Github][12] 上找到。
|
||||
|
||||
这种自定义字体集的缺点在于,你必须自己来托管字体文件。你也没法使用其它在线服务来提供更新。但与更快的性能相比,这可能算不上一个缺点。
|
||||
|
||||
### 总结
|
||||
|
||||
现在,你已经做完了所有对内容本身的优化,最大限度地减少了浏览器需要加载和解释的内容。从现在开始,只有服务器的管理技巧才能帮到你了。
|
||||
|
||||
有一个很简单,但很多人都做错了的事情,就是使用一些智能缓存。比如,CSS 或者图片文件可以缓存一周。但无论如何,如果你用了 Cloudflare 这样的代理服务或者自己构建了代理,首先要做的都应该是缩小页面。用户喜欢可以快速加载的页面。他们会(默默地)感谢你,服务器的负载也会更小。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/design-faster-web-pages-part-3-font-css-tweaks/
|
||||
|
||||
作者:[Sirko Kemter][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[StdioA](https://github.com/StdioA)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/gnokii/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression/
|
||||
[2]: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/
|
||||
[3]: https://en.wikipedia.org/wiki/Cascading_Style_Sheets
|
||||
[4]: https://getfedora.org
|
||||
[5]: https://fedoramagazine.org/wp-content/uploads/2018/02/CSS_delivery_tool_-_Examine_how_a_page_uses_CSS_-_2018-02-24_15.00.46.png
|
||||
[6]: https://github.com/juancarlospaco/css-html-js-minify
|
||||
[7]: https://uncss-online.com/
|
||||
[8]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS3
|
||||
[9]: https://fontawesome.com/
|
||||
[10]: https://fontforge.github.io/en-US/
|
||||
[11]: http://fontello.com/
|
||||
[12]: https://github.com/fontello/fontello
|
@ -0,0 +1,123 @@
|
||||
Python 机器学习的必备技巧
|
||||
======
|
||||
> 尝试使用 Python 掌握机器学习、人工智能和深度学习。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S)
|
||||
|
||||
想要入门机器学习并不难。除了<ruby>大规模网络公开课<rt>Massive Open Online Courses</rt></ruby>(MOOCs)之外,还有很多其它优秀的免费资源。下面我分享一些我觉得比较有用的方法。
|
||||
|
||||
1. 阅览一些关于这方面的视频、文章或者书籍,例如 [The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World][29],你肯定会喜欢这些[关于机器学习的互动页面][30]。
|
||||
|
||||
2. 对于“机器学习”、“人工智能”、“深度学习”、“数据科学”、“计算机视觉”和“机器人技术”这一堆新名词,你需要知道它们之前的区别。你可以阅览这些领域的专家们的演讲,例如[数据科学家 Brandon Rohrer 的这个视频][1]。
|
||||
|
||||
3. 明确你自己的学习目标,并选择合适的 [Coursera 课程][3],或者参加高校的网络公开课。例如[华盛顿大学的课程][4]就很不错。
|
||||
|
||||
4. 关注优秀的博客:例如 [KDnuggets][32] 的博客、[Mark Meloon][33] 的博客、[Brandon Rohrer][34] 的博客、[Open AI][35] 的博客,这些都值得推荐。
|
||||
|
||||
5. 如果你对在线课程有很大兴趣,后文中会有如何[正确选择 MOOC 课程][31]的指导。
|
||||
|
||||
6. 最重要的是,培养自己对这些技术的兴趣。加入一些优秀的社交论坛,专注于阅读和了解,将这些技术的背景知识和发展方向理解透彻,并积极思考在日常生活和工作中如何应用机器学习或数据科学的原理。例如建立一个简单的回归模型来预测下一次午餐的成本,又或者是从电力公司的网站上下载历史电费数据,在 Excel 中进行简单的时序分析以发现某种规律。在你对这些技术产生了浓厚兴趣之后,可以观看以下这个视频。
|
||||
|
||||
<https://www.youtube.com/embed/IpGxLWOIZy4>
|
||||
|
||||
### Python 是机器学习和人工智能方面的最佳语言吗?
|
||||
|
||||
除非你是专门研究复杂算法纯理论证明的研究人员,否则,作为一个机器学习的入门者,你需要熟悉至少一种高级编程语言以及相关的专业知识。因为大多数情况下,你需要考虑的是如何将机器学习算法应用于解决实际问题,而这需要一定的编程能力作为基础。
|
||||
|
||||
哪一种语言是数据科学的最佳语言?这个讨论一直没有停息过。对于这方面,你可以提起精神来看一下 FreeCodeCamp 上这一篇关于[数据科学语言][6]的文章,又或者是 KDnuggets 关于 [Python 和 R][7] 之间的深入探讨。
|
||||
|
||||
目前人们普遍认为 Python 在开发、部署、维护各方面的效率都是比较高的。与 Java、C 和 C++ 这些较为传统的语言相比,Python 的语法更为简单和高级。而且 Python 拥有活跃的社区群体、广泛的开源文化、数百个专用于机器学习的优质代码库,以及来自业界巨头(包括Google、Dropbox、Airbnb 等)的强大技术支持。
|
||||
|
||||
### 基础 Python 库
|
||||
|
||||
如果你打算使用 Python 实施机器学习,你必须掌握一些 Python 包和库的使用方法。
|
||||
|
||||
#### NumPy
|
||||
|
||||
NumPy 的完整名称是 [Numerical Python][8],它是 Python 生态里高性能科学计算和数据分析都需要用到的基础包,几乎所有高级工具(例如 [Pandas][9] 和 [scikit-learn][10])都依赖于它。[TensorFlow][11] 使用了 NumPy 数组作为基础构建块以支持 Tensor 对象和深度学习的图形流。很多 NumPy 操作的速度都非常快,因为它们都是通过 C 实现的。高性能对于数据科学和现代机器学习来说是一个非常宝贵的优势。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/machine-learning-python_numpy-cheat-sheet.jpeg)
|
||||
|
||||
#### Pandas
|
||||
|
||||
Pandas 是 Python 生态中用于进行通用数据分析的最受欢迎的库。Pandas 基于 NumPy 数组构建,在保证了可观的执行速度的同时,还提供了许多数据工程方面的功能,包括:
|
||||
|
||||
* 对多种不同数据格式的读写操作
|
||||
* 选择数据子集
|
||||
* 跨行列计算
|
||||
* 查找并补充缺失的数据
|
||||
* 将操作应用于数据中的独立组
|
||||
* 按照多种格式转换数据
|
||||
* 组合多个数据集
|
||||
* 高级时间序列功能
|
||||
* 通过 Matplotlib 和 Seaborn 进行可视化
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/pandas_cheat_sheet_github.png)
|
||||
|
||||
#### Matplotlib 和 Seaborn
|
||||
|
||||
数据可视化和数据分析是数据科学家的必备技能,毕竟仅凭一堆枯燥的数据是无法有效地将背后蕴含的信息向受众传达的。这两项技能对于机器学习来说同样重要,因为首先要对数据集进行一个探索性分析,才能更准确地选择合适的机器学习算法。
|
||||
|
||||
[Matplotlib][12] 是应用最广泛的 2D Python 可视化库。它包含海量的命令和接口,可以让你根据数据生成高质量的图表。要学习使用 Matplotlib,可以参考这篇详尽的[文章][13]。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/matplotlib_gallery_-1.png)
|
||||
|
||||
[Seaborn][14] 也是一个强大的用于统计和绘图的可视化库。它在 Matplotlib 的基础上提供样式灵活的 API、用于统计和绘图的常见高级函数,还可以和 Pandas 提供的功能相结合。要学习使用 Seaborn,可以参考这篇优秀的[教程][15]。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/machine-learning-python_seaborn.png)
|
||||
|
||||
#### Scikit-learn
|
||||
|
||||
Scikit-learn 是机器学习方面通用的重要 Python 包。它实现了多种[分类][16]、[回归][17]和[聚类][18]算法,包括[支持向量机][19]、[随机森林][20]、[梯度增强][21]、[k-means 算法][22]和 [DBSCAN 算法][23],可以与 Python 的数值库 NumPy 和科学计算库 [SciPy][24] 结合使用。它通过兼容的接口提供了有监督和无监督的学习算法。Scikit-learn 的强壮性让它可以稳定运行在生产环境中,同时它在易用性、代码质量、团队协作、文档和性能等各个方面都有良好的表现。可以参考这篇基于 Scikit-learn 的[机器学习入门][25],或者这篇基于 Scikit-learn 的[简单机器学习用例演示][26]。
|
||||
|
||||
本文使用 [CC BY-SA 4.0][28] 许可,在 [Heartbeat][27] 上首发。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/machine-learning-python-essential-hacks-and-tricks
|
||||
|
||||
作者:[Tirthajyoti Sarkar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tirthajyoti
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.youtube.com/watch?v=tKa0zDDDaQk
|
||||
[2]: https://www.youtube.com/watch?v=Ura_ioOcpQI
|
||||
[3]: https://www.coursera.org/learn/machine-learning
|
||||
[4]: https://www.coursera.org/specializations/machine-learning
|
||||
[5]: https://towardsdatascience.com/how-to-choose-effective-moocs-for-machine-learning-and-data-science-8681700ed83f
|
||||
[6]: https://medium.freecodecamp.org/which-languages-should-you-learn-for-data-science-e806ba55a81f
|
||||
[7]: https://www.kdnuggets.com/2017/09/python-vs-r-data-science-machine-learning.html
|
||||
[8]: http://numpy.org/
|
||||
[9]: https://pandas.pydata.org/
|
||||
[10]: http://scikit-learn.org/
|
||||
[11]: https://www.tensorflow.org/
|
||||
[12]: https://matplotlib.org/
|
||||
[13]: https://realpython.com/python-matplotlib-guide/
|
||||
[14]: https://seaborn.pydata.org/
|
||||
[15]: https://www.datacamp.com/community/tutorials/seaborn-python-tutorial
|
||||
[16]: https://en.wikipedia.org/wiki/Statistical_classification
|
||||
[17]: https://en.wikipedia.org/wiki/Regression_analysis
|
||||
[18]: https://en.wikipedia.org/wiki/Cluster_analysis
|
||||
[19]: https://en.wikipedia.org/wiki/Support_vector_machine
|
||||
[20]: https://en.wikipedia.org/wiki/Random_forests
|
||||
[21]: https://en.wikipedia.org/wiki/Gradient_boosting
|
||||
[22]: https://en.wikipedia.org/wiki/K-means_clustering
|
||||
[23]: https://en.wikipedia.org/wiki/DBSCAN
|
||||
[24]: https://en.wikipedia.org/wiki/SciPy
|
||||
[25]: http://scikit-learn.org/stable/tutorial/basic/tutorial.html
|
||||
[26]: https://towardsdatascience.com/machine-learning-with-python-easy-and-robust-method-to-fit-nonlinear-data-19e8a1ddbd49
|
||||
[27]: https://heartbeat.fritz.ai/some-essential-hacks-and-tricks-for-machine-learning-with-python-5478bc6593f2
|
||||
[28]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[29]: https://www.goodreads.com/book/show/24612233-the-master-algorithm
|
||||
[30]: http://www.r2d3.us/visual-intro-to-machine-learning-part-1/
|
||||
[31]: https://towardsdatascience.com/how-to-choose-effective-moocs-for-machine-learning-and-data-science-8681700ed83f
|
||||
[32]: https://www.kdnuggets.com/
|
||||
[33]: http://www.markmeloon.com/
|
||||
[34]: https://brohrer.github.io/blog.html
|
||||
[35]: https://blog.openai.com/
|
||||
|