mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-01-07 22:11:09 +08:00
commit 690d418b14
@@ -1,21 +1,21 @@

[调试器的工作原理:第一篇-基础][21]
调试器的工作原理(一):基础篇
============================================================

这是调试器工作原理系列文章的第一篇,我不确定这个系列会有多少篇文章,会涉及多少话题,但我仍会从这篇基础开始。

### 这一篇会讲什么

我将为大家展示 Linux 中调试器的主要构成模块 - ptrace 系统调用。这篇文章所有代码都是基于 32 位 Ubuntu 操作系统.值得注意的是,尽管这些代码是平台相关的,将他们移植到其他平台应该并不困难。
我将为大家展示 Linux 中调试器的主要构成模块 - `ptrace` 系统调用。这篇文章所有代码都是基于 32 位 Ubuntu 操作系统。值得注意的是,尽管这些代码是平台相关的,将它们移植到其它平台应该并不困难。

### 缘由

为了理解我们要做什么,让我们先考虑下调试器为了完成调试都需要什么资源。调试器可以开始一个进程并调试这个进程,又或者将自己同某个已经存在的进程关联起来。调试器能够单步执行代码,设定断点并且将程序执行到断点,检查变量的值并追踪堆栈。许多调试器有着更高级的特性,例如在调试器的地址空间内执行表达式或者调用函数,甚至可以在进程执行过程中改变代码并观察效果。

尽管现代的调试器都十分的复杂 [[1]][13],但他们的工作的原理却是十分的简单。调试器的基础是操作系统与编译器 / 链接器提供的一些基础服务,其余的部分只是[简单的编程][14]。
尽管现代的调试器都十分的复杂(我没有检查,但我确信 gdb 的代码行数至少有六位数),但它们的工作原理却是十分的简单。调试器的基础是操作系统与编译器 / 链接器提供的一些基础服务,其余的部分只是[简单的编程][14]而已。

### Linux 的调试 - ptrace

Linux 调试器中的瑞士军刀便是 ptrace 系统调用 [[2]][15]。这是一种复杂却强大的工具,可以允许一个进程控制另外一个进程并从内部替换被控制进程的内核镜像的值[[3]][16]。
Linux 调试器中的瑞士军刀便是 `ptrace` 系统调用(使用 man 2 ptrace 命令可以了解更多)。这是一种复杂却强大的工具,可以允许一个进程控制另外一个进程并从<ruby>内部替换<rt>Peek and poke</rt></ruby>被控制进程的内核镜像的值(Peek and poke 在系统编程中是很知名的叫法,指的是直接读写内存内容)。

接下来会深入分析。

@@ -49,7 +49,7 @@ int main(int argc, char** argv)
}
```

看起来相当的简单:我们用 fork 命令创建了一个新的子进程。if 语句的分支执行子进程(这里称之为“target”),else if 的分支执行父进程(这里称之为“debugger”)。
看起来相当的简单:我们用 `fork` 创建了一个新的子进程(这篇文章假定读者有一定的 Unix/Linux 编程经验。我假定你知道或至少了解 fork、exec 族函数与 Unix 信号)。if 语句的分支执行子进程(这里称之为 “target”),`else if` 的分支执行父进程(这里称之为 “debugger”)。
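
上面的 diff 只截取了 main 函数的结尾。为了方便对照,下面按照原文的描述补全一个最小的 main 骨架(仅为示意,与原文代码可能略有出入):

```
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

void run_target(const char* programname);
void run_debugger(pid_t child_pid);

int main(int argc, char** argv)
{
    pid_t child_pid;

    if (argc < 2) {
        fprintf(stderr, "Expected a program name as argument\n");
        return -1;
    }

    child_pid = fork();
    if (child_pid == 0)
        run_target(argv[1]);        /* 子进程:将要被跟踪的 target */
    else if (child_pid > 0)
        run_debugger(child_pid);    /* 父进程:debugger */
    else {
        perror("fork");
        return -1;
    }

    return 0;
}
```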

下面是 target 进程的代码:

@@ -69,18 +69,18 @@ void run_target(const char* programname)
}
```

这段代码中最值得注意的是 ptrace 调用。在 "sys/ptrace.h" 中,ptrace 是如下定义的:
这段代码中最值得注意的是 `ptrace` 调用。在 `sys/ptrace.h` 中,`ptrace` 是如下定义的:

```
long ptrace(enum __ptrace_request request, pid_t pid,
            void *addr, void *data);
```

第一个参数是 _request_,这是许多预定义的 PTRACE_* 常量中的一个。第二个参数为请求分配进程 ID。第三个与第四个参数是地址与数据指针,用于操作内存。上面代码段中的ptrace调用发起了 PTRACE_TRACEME 请求,这意味着该子进程请求系统内核让其父进程跟踪自己。帮助页面上对于 request 的描述很清楚:
第一个参数是 `request`,这是许多预定义的 `PTRACE_*` 常量中的一个。第二个参数是请求所针对的进程 ID。第三个与第四个参数是地址与数据指针,用于操作内存。上面代码段中的 `ptrace` 调用发起了 `PTRACE_TRACEME` 请求,这意味着该子进程请求系统内核让其父进程跟踪自己。帮助页面上对于 request 的描述很清楚:

> 意味着该进程被其父进程跟踪。任何传递给该进程的信号(除了 SIGKILL)都将通过 wait() 方法阻塞该进程并通知其父进程。**此外,该进程的之后所有调用 exec() 动作都将导致 SIGTRAP 信号发送到此进程上,使得父进程在新的程序执行前得到取得控制权的机会**。如果一个进程并不需要它的的父进程跟踪它,那么这个进程不应该发送这个请求。(pid,addr 与 data 暂且不提)
> 意味着该进程被其父进程跟踪。任何传递给该进程的信号(除了 `SIGKILL`)都将通过 `wait()` 方法阻塞该进程并通知其父进程。**此外,该进程之后所有调用 `exec()` 的动作都将导致 `SIGTRAP` 信号发送到此进程上,使得父进程在新的程序执行前得到取得控制权的机会**。如果一个进程并不需要它的父进程跟踪它,那么这个进程不应该发送这个请求。(pid、addr 与 data 暂且不提)

我高亮了这个例子中我们需要注意的部分。在 ptrace 调用后,run_target 接下来要做的就是通过 execl 传参并调用。如同高亮部分所说明,这将导致系统内核在 execl 创建进程前暂时停止,并向父进程发送信号。
我高亮了这个例子中我们需要注意的部分。在 `ptrace` 调用后,`run_target` 接下来要做的就是通过 `execl` 传参并调用。如同高亮部分所说明,这将导致系统内核在 `execl` 启动新程序前暂时停止子进程,并向父进程发送信号。
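
run_target 的主体在 diff 中同样被省略了。按照上面的描述,它大致是这样的(示意代码,与原文可能略有出入):

```
#include <stdio.h>
#include <unistd.h>
#include <sys/ptrace.h>

void run_target(const char* programname)
{
    /* 请求内核允许父进程跟踪本进程 */
    if (ptrace(PTRACE_TRACEME, 0, 0, 0) < 0) {
        perror("ptrace");
        return;
    }

    /* exec 成功后,内核会让本进程停止并向父进程发送 SIGTRAP,
       使父进程在新程序执行第一条指令前获得控制权 */
    execl(programname, programname, (char*)0);
}
```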

是时候看看父进程做什么了。

@@ -110,11 +110,11 @@ void run_debugger(pid_t child_pid)
}
```

如前文所述,一旦子进程调用了 exec,子进程会停止并被发送 SIGTRAP 信号。父进程会等待该过程的发生并在第一个 wait() 处等待。一旦上述事件发生了,wait() 便会返回,由于子进程停止了父进程便会收到信号(如果子进程由于信号的发送停止了,WIFSTOPPED 就会返回 true)。
如前文所述,一旦子进程调用了 `exec`,子进程会停止并被发送 `SIGTRAP` 信号。父进程会在第一个 `wait()` 处等待该事件的发生。一旦上述事件发生了,`wait()` 便会返回;由于子进程是因信号而停止的,`WIFSTOPPED` 就会返回 `true`。

父进程接下来的动作就是整篇文章最需要关注的部分了。父进程会将 PTRACE_SINGLESTEP 与子进程ID作为参数调用 ptrace 方法。这就会告诉操作系统,“请恢复子进程,但在它执行下一条指令前阻塞”。周而复始地,父进程等待子进程阻塞,循环继续。当 wait() 中传出的信号不再是子进程的停止信号时,循环终止。在跟踪器(父进程)运行期间,这将会是被跟踪进程(子进程)传递给跟踪器的终止信号(如果子进程终止 WIFEXITED 将返回 true)。
父进程接下来的动作就是整篇文章最需要关注的部分了。父进程会将 `PTRACE_SINGLESTEP` 与子进程 ID 作为参数调用 `ptrace` 方法。这就会告诉操作系统,“请恢复子进程,但在它执行下一条指令前阻塞”。周而复始地,父进程等待子进程阻塞,循环继续。当 `wait()` 中传出的信号不再是子进程的停止信号时,循环终止。在跟踪器(父进程)运行期间,这将会是被跟踪进程(子进程)传递给跟踪器的终止信号(如果子进程终止,`WIFEXITED` 将返回 `true`)。

icounter 存储了子进程执行指令的次数。这么看来我们小小的例子也完成了些有用的事情 - 在命令行中指定程序,它将执行该程序并记录它从开始到结束所需要的 cpu 指令数量。接下来就让我们这么做吧。
`icounter` 存储了子进程执行指令的次数。这么看来我们小小的例子也完成了些有用的事情 - 在命令行中指定程序,它将执行该程序并记录它从开始到结束所需要的 CPU 指令数量。接下来就让我们这么做吧。
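
作为对照,上面描述的等待/单步循环大致如下(示意代码,与原文可能略有出入):

```
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <sys/types.h>

void run_debugger(pid_t child_pid)
{
    int wait_status;
    unsigned icounter = 0;

    /* 等待子进程在第一条指令处停止(exec 引发的 SIGTRAP) */
    wait(&wait_status);

    while (WIFSTOPPED(wait_status)) {
        icounter++;

        /* 恢复子进程,但让它在执行完下一条指令后再次停止 */
        if (ptrace(PTRACE_SINGLESTEP, child_pid, 0, 0) < 0) {
            perror("ptrace");
            return;
        }

        /* 等待子进程再次停止或者退出 */
        wait(&wait_status);
    }

    printf("子进程共执行了 %u 条指令\n", icounter);
}
```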

### 测试

@@ -131,9 +131,9 @@ int main()

```

令我惊讶的是,跟踪器花了相当长的时间,并报告整个执行过程共有超过 100,000 条指令执行。仅仅是一条输出语句?什么造成了这种情况?答案很有趣[[5]][18]。Linux 的 gcc 默认会动态的将程序与 c 的运行时库动态地链接。这就意味着任何程序运行前的第一件事是需要动态库加载器去查找程序运行所需要的共享库。这些代码的数量很大 - 别忘了我们的跟踪器要跟踪每一条指令,不仅仅是主函数的,而是“整个过程中的指令”。
令我惊讶的是,跟踪器花了相当长的时间,并报告整个执行过程共有超过 100,000 条指令执行。仅仅是一条输出语句?什么造成了这种情况?答案很有趣(至少如果你同我一样痴迷于机器/汇编语言的话)。Linux 的 gcc 默认会将程序与 C 运行时库动态地链接。这就意味着任何程序运行前的第一件事是需要动态库加载器去查找程序运行所需要的共享库。这些代码的数量很大 - 别忘了我们的跟踪器要跟踪每一条指令,不仅仅是主函数的,而是“整个进程中的指令”。

所以当我将测试程序使用静态编译时(通过比较,可执行文件会多出 500 KB 左右的大小,这部分是 C 运行时库的静态链接),跟踪器提示只有大概 7000 条指令被执行。这个数目仍然不小,但是考虑到在主函数执行前 libc 的初始化以及主函数执行后的清除代码,这个数目已经是相当不错了。此外,printf 也是一个复杂的函数。
所以当我将测试程序使用静态编译时(通过比较,可执行文件会多出 500 KB 左右的大小,这部分是 C 运行时库的静态链接),跟踪器提示只有大概 7000 条指令被执行。这个数目仍然不小,但是考虑到在主函数执行前 libc 的初始化以及主函数执行后的清除代码,这个数目已经是相当不错了。此外,`printf` 也是一个复杂的函数。

仍然不满意的话,我需要的是“可以测试”的东西 - 例如可以完整记录每一条指令运行的程序执行过程。这当然可以通过汇编代码完成。所以我找到了这个版本的 “Hello, world!” 并编译了它。

@@ -168,13 +168,11 @@ len equ $ - msg
```

当然,现在跟踪器提示 7 条指令被执行了,这样一来很容易区分他们。
当然,现在跟踪器提示 7 条指令被执行了,这样一来很容易区分它们。

### 深入指令流

上面那个汇编语言编写的程序使得我可以向你介绍 ptrace 的另外一个强大的用途 - 详细显示被跟踪进程的状态。下面是 run_debugger 函数的另一个版本:
上面那个汇编语言编写的程序使得我可以向你介绍 `ptrace` 的另外一个强大的用途 - 详细显示被跟踪进程的状态。下面是 `run_debugger` 函数的另一个版本:

```
void run_debugger(pid_t child_pid)
@@ -209,15 +207,7 @@ void run_debugger(pid_t child_pid)
}
```

不同仅仅存在于 while 循环的开始几行。这个版本里增加了两个新的 ptrace 调用。第一条将进程的寄存器值读取进了一个结构体中。 sys/user.h 定义有 user_regs_struct。如果你查看头文件,头部的注释这么写到:

```
/* The whole purpose of this file is for GDB and GDB only.
   Don't read too much into it. Don't use it for
   anything other than GDB unless you know what you are
   doing. */
```

不同仅仅存在于 `while` 循环的开始几行。这个版本里增加了两个新的 `ptrace` 调用。第一条将进程的寄存器值读取进了一个结构体中。`sys/user.h` 定义有 `user_regs_struct`。如果你查看头文件,头部的注释这么写道:

```
/* 这个文件只为了 GDB 而创建
@@ -226,7 +216,7 @@ void run_debugger(pid_t child_pid)
```

不知道你做何感想,但这让我觉得我们找对地方了。回到例子中,一旦我们在 regs 变量中取得了寄存器的值,我们就可以通过将 PTRACE_PEEKTEXT 作为参数、 regs.eip(x86 上的扩展指令指针)作为地址,调用 ptrace ,读取当前进程的当前指令。下面是新跟踪器所展示出的调试效果:
不知道你做何感想,但这让我觉得我们找对地方了。回到例子中,一旦我们在 `regs` 变量中取得了寄存器的值,我们就可以通过将 `PTRACE_PEEKTEXT` 作为参数、`regs.eip`(x86 上的扩展指令指针)作为地址,调用 `ptrace`,读取当前进程的当前指令(警告:如同我上面所说,这篇文章在很大程度上是平台相关的。我做了一些简化 - 例如,x86 的指令并不需要对齐到 4 字节(即我的 32 位 Ubuntu 上 unsigned int 的大小),事实上许多指令也确实没有对齐。要想正确地从内存中读取指令,需要一个完整的反汇编器;我们这里没有,但实际的调试器是有的)。下面是新跟踪器所展示出的调试效果:
```
$ simple_tracer traced_helloworld
@@ -244,7 +234,7 @@ Hello, world!
```

现在,除了 icounter,我们也可以观察到指令指针与它每一步所指向的指令。怎么来判断这个结果对不对呢?使用 objdump -d 处理可执行文件:
现在,除了 `icounter`,我们也可以观察到指令指针与它每一步所指向的指令。怎么来判断这个结果对不对呢?使用 `objdump -d` 处理可执行文件:

```
$ objdump -d traced_helloworld
@@ -263,62 +253,36 @@ Disassembly of section .text:
 804809b:  cd 80        int    $0x80
```

这个结果和我们跟踪器的结果就很容易比较了。
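
作为补充说明,上文提到的 while 循环开头新增的那两个 `ptrace` 调用大致是这样使用的(示意代码;字段名 `eip` 仅适用于 32 位 x86,函数划分为笔者假设):

```
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/types.h>

/* 假定这段代码位于 run_debugger 的 while (WIFSTOPPED(...)) 循环开头 */
void show_current_instruction(pid_t child_pid, unsigned icounter)
{
    struct user_regs_struct regs;

    /* 第一条调用:把子进程的全部寄存器读入结构体 */
    if (ptrace(PTRACE_GETREGS, child_pid, 0, &regs) < 0) {
        perror("ptrace(PTRACE_GETREGS)");
        return;
    }

    /* 第二条调用:以 EIP 为地址,从子进程内存中读出一个机器字 */
    unsigned instr = ptrace(PTRACE_PEEKTEXT, child_pid,
                            (void*)regs.eip, 0);

    printf("[%u]  EIP = 0x%08x.  instr = 0x%08x\n",
           icounter, (unsigned)regs.eip, instr);
}
```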

### 将跟踪器关联到正在运行的进程

如你所知,调试器也能关联到已经运行的进程。现在你应该不会惊讶,ptrace 通过 以PTRACE_ATTACH 为参数调用也可以完成这个过程。这里我不会展示示例代码,通过上文的示例代码应该很容易实现这个过程。出于学习目的,这里使用的方法更简便(因为我们在子进程刚开始就可以让它停止)。
如你所知,调试器也能关联到已经运行的进程。现在你应该不会惊讶,通过以 `PTRACE_ATTACH` 为参数调用 `ptrace`,也可以完成这个过程。这里我不会展示示例代码,通过上文的示例代码应该很容易实现这个过程。出于学习目的,这里使用的方法更简便(因为我们在子进程刚开始就可以让它停止)。
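
原文没有给出 attach 的示例,这里补充一个假想的最小示意(函数名与流程均为笔者假设,并非原文代码):

```
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <sys/types.h>

/* 示意:关联到一个已经在运行的进程,之后即可像前文一样操作它 */
int attach_and_inspect(pid_t pid)
{
    if (ptrace(PTRACE_ATTACH, pid, 0, 0) < 0) {
        perror("ptrace(PTRACE_ATTACH)");
        return -1;
    }

    int wait_status;
    waitpid(pid, &wait_status, 0);     /* 等待目标进程停止 */

    /* ... 这里可以像前文一样读取寄存器、单步执行等 ... */

    ptrace(PTRACE_DETACH, pid, 0, 0);  /* 解除关联,让进程继续运行 */
    return 0;
}
```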

### 代码

上文中的简单的跟踪器(更高级的,可以打印指令的版本)的完整c源代码可以在[这里][20]找到。它是通过 4.4 版本的 gcc 以 -Wall -pedantic --std=c99 编译的。
上文中的简单的跟踪器(更高级的,可以打印指令的版本)的完整 C 源代码可以在[这里][20]找到。它是通过 4.4 版本的 gcc 以 `-Wall -pedantic --std=c99` 编译的。

### 结论与计划

诚然,这篇文章并没有涉及很多内容 - 我们距离亲手完成一个实际的调试器还有很长的路要走。但我希望这篇文章至少可以使得调试这件事少一些神秘感。`ptrace` 是功能多样的系统调用,我们目前只展示了其中的一小部分。
诚然,这篇文章并没有涉及很多内容 - 我们距离亲手完成一个实际的调试器还有很长的路要走。但我希望这篇文章至少可以使得调试这件事少一些神秘感。ptrace 是功能多样的系统调用,我们目前只展示了其中的一小部分。

单步调试代码很有用,但也只是在一定程度上有用。上面我通过c的“Hello World!”做了示例。为了执行主函数,可能需要上万行代码来初始化c的运行环境。这并不是很方便。最理想的是在main函数入口处放置断点并从断点处开始分步执行。为此,在这个系列的下一篇,我打算展示怎么实现断点。
单步调试代码很有用,但也只是在一定程度上有用。上面我通过 C 的 “Hello World!” 做了示例。为了执行主函数,可能需要上万行代码来初始化 C 的运行环境。这并不是很方便。最理想的是在 `main` 函数入口处放置断点并从断点处开始分步执行。为此,在这个系列的下一篇,我打算展示怎么实现断点。

### 参考

撰写此文时参考了如下文章:

* [Playing with ptrace, Part I][11]
* [How debugger works][12]

[1] 我没有检查,但我确信 gdb 的代码行数至少有六位数。
[2] 使用 man 2 ptrace 命令可以了解更多。
[3] Peek and poke 在系统编程中是很知名的叫法,指的是直接读写内存内容。
[4] 这篇文章假定读者有一定的 Unix/Linux 编程经验。我假定你知道(至少了解概念)fork,exec 族函数与 Unix 信号。
[5] 至少你同我一样痴迷与机器/汇编语言。
[6] 警告:如同我上面所说,文章很大程度上是平台相关的。我简化了一些设定 - 例如,x86指令集不需要调整到 4 字节(我的32位 Ubuntu unsigned int 是 4 字节)。事实上,许多平台都不需要。从内存中读取指令需要预先安装完整的反汇编器。我们这里没有,但实际的调试器是有的。

--------------------------------------------------------------------------------

via: http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1

作者:[Eli Bendersky][a]
译者:[译者ID](https://github.com/YYforymj)
校对:[校对者ID](https://github.com/校对者ID)
译者:[YYforymj](https://github.com/YYforymj)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

published/20150112 Data-Oriented Hash Table.md(新文件,165 行)
@@ -0,0 +1,165 @@

深入解析面向数据的哈希表性能
============================================================

最近几年中,面向数据的设计已经受到了很多的关注 —— 这是一种强调内存中数据布局的编程风格,包括如何访问以及将会引发多少的 cache 缺失。由于在内存读取操作中缺失所占的数量级要大于命中的数量级,所以缺失的数量通常是优化的关键标准。这不仅仅关乎那些对性能有要求的代码 —— 缺乏对内存效益的重视,也是让一般软件运行缓慢、膨胀的一个很大因素。

高效缓存数据结构的中心原则是将事情变得平滑和线性。比如,在大部分情况下,存储一个元素序列更倾向于使用普通数组而不是链表 —— 每一次通过指针来查找数据都会为 cache 缺失增加一份风险;而普通数组则可以预先获取,并使得内存系统以最大的效率运行。

如果你知道一点内存层级如何运作的知识,下面的内容会是想当然的结果——但是有时候即便它们相当明显,测试一下仍不失为一个好主意。几年前 [Baptiste Wicht 测试过了 `std::vector` vs `std::list` vs `std::deque`][4](后者通常使用分块数组来实现,比如:一个数组的数组)。结果大部分会和你预期的保持一致,但是会存在一些违反直觉的东西。作为实例:在序列链表的中间位置做插入或者移除操作被认为会比数组快,但如果该元素是一个 POD 类型,并且不大于 64 字节或者在 64 字节左右(在一个缓存行内),通过对要操作的元素周围的数组元素进行移位操作要比从头遍历链表来得快。这是由于在遍历链表以及通过指针插入/删除元素的时候可能会导致不少的 cache 缺失,相对而言,数组移位则很少会发生。(对于更大的元素、非 POD 类型,或者你已经有了指向链表元素的指针的情况,此时和预期的一样,链表胜出。)

多亏了类似 Baptiste 这样的数据,我们知道了内存布局如何影响序列容器。但是关联容器,比如 hash 表会怎么样呢?已经有了些权威推荐:[Chandler Carruth 推荐的带局部探测的开放寻址][5](此时,我们没必要追踪指针),以及 [Mike Acton 推荐的在内存中将 value 和 key 隔离][6](这种情况下,我们可以在每一个缓存行中得到更多的 key),这可以在我们必须查找多个 key 时提高缓存局部性。这些想法很有意义,但再一次说明:测试永远是好习惯,但由于我找不到任何数据,所以只好自己收集了。

### 测试

我测试了四个不同的 quick-and-dirty 哈希表实现,另外还包括 `std::unordered_map`。这五个哈希表都使用了同一个哈希函数 —— Bob Jenkins 的 [SpookyHash][8](64 位哈希值)。(由于哈希函数在这里不是重点,所以我没有测试不同的哈希函数;我同样也没有检测我的分析中的总内存消耗。)实现会通过简短的代码在测试结果表中标注出来。

* **UM**: `std::unordered_map` 。在 VS2012 和 libstdc++-v3 (libstdc++-v3: gcc 和 clang 都会用到这东西)中,UM 是以链表的形式实现,所有的元素都在链表中,bucket 数组中存储了链表的迭代器。VS2012 中,则是一个双链表,每一个 bucket 存储了起始迭代器和结束迭代器;libstdc++ 中,是一个单链表,每一个 bucket 只存储了一个起始迭代器。这两种情况里,链表节点是独立申请和释放的。最大负载因子是 1 。
* **Ch**:分离链接 —— bucket 指向一个由元素节点组成的单链表。为了避免分开申请每一个节点,元素节点存储在普通数组池中。未使用的节点保存在一个空闲链表中。最大负载因子是 1。
* **OL**:开地址线性探测 —— 每一个 bucket 存储一个 62 bit 的 hash 值,一个 2 bit 的状态值(包括 empty,filled,removed 三个状态),key 和 value。最大负载因子是 2/3。
* **DO1**:“data-oriented 1” —— 和 OL 相似,但是哈希值、状态值和 key、values 分离在两个隔离的平坦数组中(其内存布局可参看这个列表之后的示意代码)。
* **DO2**:“data-oriented 2” —— 与 OL 类似,但是哈希/状态、keys 和 values 被分离在 3 个相隔离的平坦数组中。
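
为了更直观,下面用一小段 C++ 把 DO1 的内存布局画出来(仅为示意,并非作者的测试代码,名称与状态编码均为笔者假设):

```
#include <cstdint>
#include <vector>

// DO1 布局示意:哈希/状态与键值对分放在两个平行的平坦数组中。
// 线性探测时只顺序扫描紧凑的 meta 数组,命中后才访问 kv 数组,
// 这样每个缓存行能装下更多待比较的哈希值,从而减少 cache 缺失。
struct DO1Sketch {
    enum State : uint64_t { Empty = 0, Filled = 1, Removed = 2 };

    struct KV { uint64_t key; uint64_t value; };

    std::vector<uint64_t> meta; // (hash << 2) | state:62 bit 哈希 + 2 bit 状态
    std::vector<KV>       kv;   // 与 meta 等长,下标一一对应

    // 线性探测查找:返回槽位下标,未找到返回 -1(表尺寸为 2 的幂)
    long find(uint64_t hash, uint64_t key) const {
        const size_t mask = meta.size() - 1;
        for (size_t i = (size_t)hash & mask; ; i = (i + 1) & mask) {
            const uint64_t m = meta[i];
            if ((m & 3) == Empty)
                return -1;                       // 碰到空槽即查找失败
            if ((m & 3) == Filled && (m >> 2) == (hash >> 2)
                && kv[i].key == key)
                return (long)i;                  // 哈希与 key 都匹配
        }
    }
};
```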

在我的所有实现中,包括 VS2012 的 UM 实现,表的尺寸默认为 2 的 n 次方。如果超出了最大负载因子,则扩展两倍。在 libstdc++ 中,UM 默认尺寸是一个素数。如果超出了最大负载因子,则扩展为下一个素数大小。但是我不认为这些细节对性能很重要。素数是一种对低 bit 位上没有足够熵的低劣 hash 函数的挽救手段,但是我们正在用的是一个很好的 hash 函数。

OL、DO1 和 DO2 的实现将被统称为 OA(open addressing)——稍后我们将发现它们在性能特性上非常相似。在每一个实现中,单元数从 100 K 到 1 M,有效负载(比如:总的 key + value 大小)从 8 到 4 k 字节,我为几个不同的操作记了时间。keys 和 values 永远是 POD 类型,keys 永远是 8 个字节(除了 8 字节的有效负载,此时 key 和 value 都是 4 字节)。因为我的目的是为了测试内存影响而不是哈希函数性能,所以我将 key 放在连续的尺寸空间中。每一个测试都会重复 5 遍,然后记录最小的耗时。

测试的操作在这里:

* **Fill**:将一个随机的 key 序列插入到表中(key 在序列中是唯一的)。
* **Presized fill**:和 Fill 相似,但是在插入之前我们先为所有的 key 保留足够的内存空间,以防止在 fill 过程中 rehash 或者重申请。
* **Lookup**:执行 100 k 次随机 key 查找,所有的 key 都在 table 中。
* **Failed lookup**:执行 100 k 次随机 key 查找,所有的 key 都不在 table 中。
* **Remove**:从 table 中移除随机选择的半数元素。
* **Destruct**:销毁 table 并释放内存。

你可以[在这里下载我的测试代码][9]。这些代码只能在 64 位机器上编译(包括 Windows 和 Linux)。在 `main()` 函数顶部附近有一些开关,你可把它们打开或者关掉——如果全开,可能会需要一两个小时才能结束运行。我收集的结果也放在了那个打包文件里的 Excel 表中。(注意:Windows 和 Linux 在不同的 CPU 上跑的,所以时间不具备可比较性。)代码也跑了一些单元测试,用来验证所有的 hash 表实现都能正确运行。

我还顺带尝试了附加的两个实现:Ch 中第一个节点存放在 bucket 中而不是 pool 里,以及使用二次探测的开放寻址。这两个都不够好,不足以放进最终的数据,但它们的代码仍放在了打包文件里面。

### 结果

这里有成吨的数据!!这一节我将详细地讨论一下结果,但是如果你对此不感兴趣,可以直接跳到下一节的总结。
#### Windows

这是所有测试的图表结果,使用 Visual Studio 2012 编译,运行于 Windows 8.1 和 Core i7-4710HQ 机器上。(点击可以放大。)

[![Results for VS 2012, Windows 8.1, Core i7-4710HQ](http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png "Results for VS 2012, Windows 8.1, Core i7-4710HQ")][12]

从左至右是不同的有效负载大小,从上往下是不同的操作。(注意:不是所有的 Y 轴都是相同的比例!)我将为每一个操作总结一下主要趋向。

**Fill**:

在我的 hash 表中,Ch 稍比任何的 OA 变种要好。随着哈希表大小和有效负载的加大,差距也随之变大。我猜测这是由于 Ch 只需要从一个空闲链表中拉取一个元素,然后把它放在 bucket 前面,而 OA 不得不搜索一部分 bucket 来找到一个空位置。所有的 OA 变种的性能表现基本都很相似,当然 DO1 稍微有点优势。

在小负载的情况下,UM 几乎是所有 hash 表中表现最差的 —— 因为 UM 为每一次的插入申请(内存)付出了沉重的代价。但是在 128 字节的时候,这些 hash 表基本相当,大负载的时候 UM 还赢了点。因为我所有的实现都需要重新调整元素池的大小,并需要移动大量的元素到新池里面,这一点我几乎无能为力;而 UM 一旦为元素申请了内存后便不需要移动了。注意大负载中图表上夸张的跳步!这更确认了重新调整大小带来的问题。相反,UM 只是线性上升 —— 它只需要重新调整 bucket 数组大小。由于没有太多隆起的地方,所以相对有效率。

**Presized fill**:

大致和 Fill 相似,但是图示结果更加的线性光滑,没有太大的跳步(因为不需要 rehash),所有的实现差距在这一测试中要缩小了些。大负载时 UM 依然稍快于 Ch,问题还是在于重新调整大小上。Ch 仍是稳步地稍快于 OA 变种,但是 DO1 比其它的 OA 稍有优势。

**Lookup**:

所有的实现都相当的集中。除了最小负载时 DO1 和 OL 稍快之外,其余情况下 UM 和 DO2 都跑在了前面。(LCTT 译注: 你确定?)真的,我无法描述 UM 在这一步做得多么好。尽管需要遍历链表,但是 UM 还是坚守了面向数据的本性。

顺带一提,查找时间和 hash 表的大小有着很弱的关联,这真的很有意思。哈希表查找时间期望上是一个常量时间,所以在渐进视图中,性能不应该依赖于表的大小。但是那是在忽视了 cache 影响的情况下!作为具体的例子,当我们在具有 10 k 条目的表中做 100 k 次查找时,速度会变快,因为在第一次 10 k - 20 k 次查找后,大部分的表会处在 L3 缓存中。
**Failed lookup**:

相对于成功查找,这里就更分散一些。DO1 和 DO2 跑在了前面,但 UM 并没有落下,OL 则是捉襟见肘啊。我猜测,这可能是因为 OL 整体上具有更长的搜索路径,尤其是在失败查询时;内存中,hash 值在 key 和 value 之间飘来荡去的找不着出路,我也很受伤啊。DO1 和 DO2 具有相同的搜索长度,但是它们将所有的 hash 值打包在内存中,这使得问题有所缓解。

**Remove**:

DO2 很显然是赢家,但 DO1 也未落下。Ch 则落后,UM 则是差的不是一丁半点(主要是因为每次移除都要释放内存);差距随着负载的增加而拉大。移除操作是唯一不需要接触数据的操作,只需要 hash 值和 key 的帮助,这也是为什么 DO1 和 DO2 在移除操作中的表现大相径庭,而其它测试中却保持一致。(如果你的值不是 POD 类型的,并需要析构,这种差异应该是会消失的。)

**Destruct**:

Ch 除了最小负载,其它的情况都是最快的(最小负载时,约等于 OA 变种)。所有的 OA 变种基本都是相等的。注意,在我的 hash 表中所做的所有析构操作都是释放少量的内存 buffer。但是[在 Windows 中,释放内存的消耗和大小成比例关系][13]。(而且,这是一个很显著的开支 —— 申请 ~1 GB 的内存需要 ~100 ms 的时间去释放!)

UM 在析构时是最慢的一个(小负载时,慢的程度可以用数量级来衡量),大负载时依旧是稍慢些。对于 UM 来讲,释放每一个元素而不是释放一组数组真的是一个硬伤。

#### Linux

我还在装有 Linux Mint 17.1 的 Core i5-4570S 机器上使用 gcc 4.8 和 clang 3.5 运行了测试。gcc 和 clang 的结果很相像,因此我只展示了 gcc 的;完整的结果集合包含在了代码下载打包文件中,链接在上面。(点击图来缩放)

[![Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S](http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png "Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S")][15]

大部分结果和 Windows 很相似,因此我只高亮了一些有趣的不同点。
**Lookup**:

这里 DO1 跑在前头,而在 Windows 中 DO2 更快些。(LCTT 译注: 这里原文写错了吧?)同样,UM 和 Ch 落后于其它所有的实现 —— 过多的指针追踪,然而 OA 只需要在内存中线性地移动即可。至于 Windows 和 Linux 结果为何不同,则不是很清楚。UM 同样比 Ch 慢了不少,特别是大负载时,这很奇怪;我期望的是它们可以基本相同。

**Failed lookup**:

UM 再一次落后于其它实现,甚至比 OL 还要慢。我再一次无法理解为何 UM 比 Ch 慢这么多,Linux 和 Windows 的结果为何有着如此大的差距。

**Destruct**:

在我的实现中,小负载的时候,析构的消耗太少了,以至于无法测量;在大负载中,线性增加的比例和创建的虚拟内存页数量相关,而不是申请到的数量?同样,要比 Windows 中的析构快上几个数量级。但是并不是所有的都和 hash 表有关;我们在这里可以看出不同系统和运行时内存系统的表现。貌似,Linux 释放大内存块是要比 Windows 快上不少(或者 Linux 很好地隐藏了开支,或许将释放工作推迟到了进程退出,又或者将工作推给了其它线程或者进程)。

UM 由于要释放每一个元素,所以在所有的负载中都比其它慢上几个数量级。事实上,我将图片做了剪裁,因为 UM 太慢了,以至于破坏了 Y 轴的比例。
### 总结

好,当我们凝视各种情况下的数据和矛盾的结果时,我们可以得出什么结论呢?我想直截了当地告诉你这些 hash 表变种中有一个打败了其它所有的 hash 表,但是这显然不那么简单。不过我们仍然可以学到一些东西。

首先,在大多数情况下我们“很容易”做得比 `std::unordered_map` 还要好。我为这些测试所写的所有实现(它们并不复杂;我只花了一两个小时就写完了)要么与 `unordered_map` 相当,要么在其基础上有所提高,除了大负载(超过 128 字节)中的插入性能,此时 `unordered_map` 为每一个节点独立申请存储占了优势。(尽管我没有测试,我同样期望 `unordered_map` 能在非 POD 类型的负载上取得胜利。)具有指导意义的是,如果你非常关心性能,不要假设你的标准库中的数据结构是高度优化的。它们可能只是在 C++ 标准的一致性上做了优化,但不是性能。:P

其次,如果不管在小负载还是超负载中都只用 DO1(开放寻址、线性探测、hashes/states 和 keys/values 分别处于隔离的平坦数组中),那表现大概不会差。这不是最快的插入,但也不坏(还比 `unordered_map` 快),并且在查找、移除、析构中也很快。你看 —— “面向数据设计”就这样完成了!

注意,我为这些哈希表写的测试代码远未能用于生产环境 —— 它们只支持 POD 类型,没有拷贝构造函数以及类似的东西,也未检测重复的 key,等等。我将可能尽快地构建一些实际的 hash 表,用于我的实用库中。为了覆盖基础部分,我想我将有两个变种:一个基于 DO1,用于小的、移动时不需要太大开支的负载;另一个使用链接法,并且避免重新申请和移动元素(就像 `unordered_map`),用于大负载或者移动起来需要大开支的负载情况。这应该能让我得到两全其美的效果。

与此同时,我希望你们能有所启发。最后记住,如果 Chandler Carruth 和 Mike Acton 在数据结构上给你提出些建议,你一定要听。
--------------------------------------------------------------------------------

作者简介:

我是一名图形程序员,目前在西雅图做自由职业者。之前我在 NVIDIA 的 DevTech 软件团队中工作,并在 Sucker Punch Productions 工作室中为 PS3 和 PS4 的 Infamous 系列游戏开发渲染技术。

自 2002 年起,我对图形非常感兴趣,并且已经完成了一系列的工作,包括:雾、大气雾霾、体积照明、水、视觉效果、粒子系统、皮肤和头发阴影、后处理、镜面模型、线性空间渲染,以及 GPU 性能测量和优化。

你可以在我的博客上了解更多关于我的事。除了图形,我还对理论物理和程序设计感兴趣。

你可以通过 nathaniel.reed@gmail.com 联系我,或者在 Twitter(@Reedbeta)/Google+ 上关注我。我也会经常在 StackExchange 上回答计算机图形的问题。

--------------

via: http://reedbeta.com/blog/data-oriented-hash-table/

作者:[Nathan Reed][a]
译者:[sanfusu](https://github.com/sanfusu)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://reedbeta.com/about/
[1]:http://reedbeta.com/blog/data-oriented-hash-table/
[2]:http://reedbeta.com/blog/category/coding/
[3]:http://reedbeta.com/blog/data-oriented-hash-table/#comments
[4]:http://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vector-list-deque.html
[5]:https://www.youtube.com/watch?v=fHNmRkzxHWs
[6]:https://www.youtube.com/watch?v=rX0ItVEVjHc
[7]:http://reedbeta.com/blog/data-oriented-hash-table/#the-tests
[8]:http://burtleburtle.net/bob/hash/spooky.html
[9]:http://reedbeta.com/blog/data-oriented-hash-table/hash-table-tests.zip
[10]:http://reedbeta.com/blog/data-oriented-hash-table/#the-results
[11]:http://reedbeta.com/blog/data-oriented-hash-table/#windows
[12]:http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png
[13]:https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
[14]:http://reedbeta.com/blog/data-oriented-hash-table/#linux
[15]:http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png
[16]:http://reedbeta.com/blog/data-oriented-hash-table/#conclusions
@@ -1,26 +1,24 @@

What engineers and marketers can learn from each other
============================================================
工程师和市场营销人员之间能够相互学习什么?
============================================================

### 营销人员觉得工程师在工作中都太严谨了;而工程师则认为营销人员都很懒散。但是他们都错了。
> 营销人员觉得工程师在工作中都太严谨了;而工程师则认为营销人员毫无用处。但是他们都错了。

![What engineers and marketers can learn from each other](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_fortunecookie3.png?itok=dlRX4vO9 "What engineers and marketers can learn from each other")
图片来源 :
opensource.com
图片来源:opensource.com

在 B2B 行业从事多年的销售实践过程中,我经常听到工程师对营销人员的各种误解。下面这些是比较常见的:

* ”搞市场营销真是浪费钱,还不如把更多的资金投入到实际的产品开发中来。“
* ”那些营销人员只是一个劲儿往墙上贴各种广告,还祈祷着它们不要掉下来。这么做有啥科学依据啊?“
* ”谁愿意去看哪些广告啊?“
* ”对待一个营销人员最好的办法就是不订阅,不关注,也不理睬。“
* “搞市场营销真是浪费钱,还不如把更多的资金投入到实际的产品开发中来。”
* “那些营销人员只是一个劲儿往墙上贴各种广告,还祈祷着它们不要掉下来。这么做有啥科学依据啊?”
* “谁愿意去看那些广告啊?”
* “对待一个营销人员最好的办法就是不听,不看,也不理睬。”

这是我最感兴趣的一点:

_“营销人员都很懒散。”_

最后一点说的不对,不够全面,懒散实际上是阻碍一个公司发展的巨大绊脚石。

_“市场营销无足轻重。”_

最后一点说得不对,而且不仅如此,它实际上是阻碍一个公司创新的巨大绊脚石。

我来跟大家解释一下原因吧。

@@ -28,27 +26,27 @@ _“营销人员都很懒散。”_

这些工程师的评论让我十分苦恼,因为我从中看到了自己当年的身影。

你们知道吗?我曾经也跟你们一样是一位自豪的技术极客。我在 Rensselaer Polytechnic 学院的电气工程专业本科毕业后便在美国空军开始了我的职业生涯,而且美国空军在那段时间还发动了军事上的沙漠风暴行动。在那里我主要负责开发并部属一套智能的实时战况分析系统,用于根据各种各样的数据源来构建出战场上的画面。
你们知道吗?我曾经也跟你们一样是一位自豪的技术极客。我在 Rensselaer Polytechnic 学院的电气工程专业本科毕业后便在美国空军担任军官开始了我的职业生涯,而且美国空军在那段时间还发动了沙漠风暴行动。在那里我主要负责开发并部署一套智能的实时战况分析系统,用于综合几个数据源来构建出战场态势。

在我离开空军之后,我本打算去麻省理工学院攻读博士学位。但是上校强烈建议我去报读这个学校的商学院。“你真的想一辈子待在实验室里吗?”他问我。“你想就这么去大学里当个教书匠吗?Jackie,你在组织管理那些复杂的工作中比较有天赋。我觉得你非常有必要去了解下 MIT 的斯隆商学院。”

我觉得自己也可以同时参加一些 MIT 技术方面的课程,因此我采纳了他的建议。但是,如果要参加市场营销管理方面的课程,我还有很长的路要走,这完全是在浪费时间。因此,在日常工作学习中,我始终是用自己所擅长的分析能力去解决一切问题。
我觉得自己也可以同时参加一些 MIT 技术方面的课程,因此我采纳了他的建议。然而,如果要参加市场营销管理方面的课程,我还有很长的路要走,这完全是在浪费时间。因此,在日常工作学习中,我始终是用自己所擅长的分析能力去解决一切问题。

不久后,我在波士顿咨询集团公司做咨询顾问工作。在那六年的时间里,我经常听到大家对我的评论: Jackie ,你太没远见了。考虑问题也不够周全。你总是通过自己的分析数据去找答案。“
不久后,我在波士顿咨询集团公司做咨询顾问工作。在那六年的时间里,我经常听到大家对我的评论:“Jackie,你太没远见了。考虑问题也不够周全。你总是通过自己的分析去找答案。”

确实如此啊,我很赞同他们的想法——因为这个世界的工作方式本该如此,任何问题都要基于数据进行分析,不对吗?直到现在我才意识到(我多么希望自己早一些发现自己的问题)自己以前惯用的分析问题的方法遗漏了很多重要的东西:开放的心态,艺术修养,情感——人和创造性思维相关的因素。
确实如此啊,我很赞同他们的想法——因为这个世界的工作方式本该如此,任何问题都要基于数据进行分析,不对吗?直到现在我才意识到(我多么希望自己早一些发现自己的问题)自己以前惯用的分析问题的方法遗漏了很多重要的东西:开放的心态、艺术修养、情感——人和创造性思维相关的因素。

我在 2001 年 9 月 11 日加入达美航空公司不久后,被调去管理消费者市场部门,之前我意识到的所有问题变得更加明显。这本来不是我的强项,但是在公司需要的情况下,我也愿意出手相肋。
我在 2001 年 9 月 11 日加入达美航空公司不久后,被调去管理消费者市场部门,之前我意识到的所有问题变得更加明显。市场方面本来不是我的强项,但是在公司需要的情况下,我也愿意出手相助。

但是突然之间,我一直惯用的方法获取到的常规数据分析结果却与实际情况完全相反。这个问题导致上千人(包括航线内外的人)受到影响。我忽略了一个很重要的人本身的情感因素。我所面临的问题需要各种各样的解决方案才能处理,而不是简单的从那些死板的分析数据中就能得到答案。
但是突然之间,我一直惯用的方法获取到的分析结果却与实际情况完全相反。这个问题导致上千人(包括航线内外的人)受到影响。我忽略了一个很重要的人本身的情感因素。我所面临的问题需要各种各样的解决方案才能处理,而不是简单的从那些死板的数据中就能得到答案。

那段时间,我快速地学到了很多东西,因为如果我们想把达美航空公司恢复到正常状态,还需要做很多的工作——市场营销更像是一个以解决问题为导向,以用户为中心的充满挑战性的大工程,只是销售人员和工程师这两大阵营都没有迅速地意识到这个问题。
那段时间,我快速地学到了很多东西,因为如果我们想把达美航空公司恢复到正常状态,还需要做很多的工作——市场营销更像是一个以解决问题为导向、以用户为中心的充满挑战性的大工程,只是销售人员和工程师这两大阵营都没有迅速地意识到这个问题。

### 两大文化差异

工程管理和市场营销之间的这个“巨大鸿沟”确实是根深蒂固的,这跟 C.P. Snow (英语物理化学家和小说家)提出的[“两大文化差异"问题][1]很相似。具有科学素质的工程师和具有艺术细胞的营销人员操着不同的语言,不同的文化观念导致他们不同的价值取向。
工程管理和市场营销之间的这个“巨大鸿沟”确实是根深蒂固的,这跟(著名的科学家、小说家)C.P. Snow 提出的[“两大文化差异”问题][1]很相似。具有科学素养的工程师和具有艺术细胞的营销人员操着不同的语言,不同的文化观念导致他们不同的价值取向。

但是,事实上他们比想象中有更多的相似之处。华盛顿大学[最新研究][2](由微软、谷歌和美国国家科学基金会共同赞助)发现”一个伟大软件工程师必须具备哪些优秀的素质,“毫无疑问,一个伟大的销售人员同样也应该具备这些素质。例如,专家们给出的一些优秀品质如下:
但是,事实上他们比想象中有更多的相似之处。一个由微软、谷歌和美国国家科学基金会共同赞助的华盛顿大学的[最新研究][2]发现了“一个伟大软件工程师必须具备哪些优秀的素质”,毫无疑问,一个伟大的销售人员同样也应该具备这些素质。例如,专家们给出的一些优秀品质如下:

* 充满激情
* 性格开朗
@@ -56,29 +54,29 @@ _“营销人员都很懒散。”_
* 技艺精湛
* 解决复杂难题的能力

这些只是其中很小的一部分!当然,并不是所有的素质都适用于市场营销人员,但是如果用文氏图来表示这“两大文化“的交集,就很容易看出营销人员和工程师之间的关系要远比我们想象中密切得多。他们都是竭力去解决与用户或客户相关的难题,只是他们所采取的方式和角度不一致罢了。
这些只是其中很小的一部分!当然,并不是所有的素质都适用于市场营销人员,但是如果用文氏图来表示这“两大文化”的交集,就很容易看出营销人员和工程师之间的关系要远比我们想象中密切得多。他们都是竭力去解决与用户或客户相关的难题,只是他们所采取的方式和角度不一致罢了。

看到上面的那几点后,我深深地陷入思考:_要是这两类员工彼此之间再多了解对方一些会怎样呢?这会给公司带来很强大的动力吧?_

确实如此。我在红帽公司就亲眼看到过样的情形,我身边都是一些“思想极端”的员工,要是之前,肯定早被我炒鱿鱼了。我相信公司里绝对发生过很多次类似这样的事情,一个销售人员看完工程师递交上来的分析报表后,心想,“这些书呆子,思想太局限了。真是一叶障目,不见泰山;两豆塞耳,不闻雷霆。”
确实如此。我在红帽公司就亲眼看到过这样的情形,我身边都是一些早些年肯定被我当成“想法疯狂”而无视的人。而且我猜销售人员在某些时候看到工程师后,心想,“这些数据呆瓜,真是只见树木不见森林。”

现在我才明白了公司里有这两种人才的重要性。在现实工作当中,工程师和营销人员都是围绕着客户、创新及数据分析来完成工作。如果他们能够懂得相互尊重、彼此理解、相辅相成,那么我们将会看到公司里所产生的那种积极强大的动力,这种超乎寻常的革新力量要远比两个独立的团队强大得多。

### 听一听他们的想法
### 听一听疯子(和呆瓜)的想法

成功案例:_建立开放式组织_
成功案例:《开放式组织》

在红帽任职期间,我的主要工作就是想办法提升公司的品牌影响力——但是我从未想过让公司的 CEO 去写一本书。我把公司多个部门的“想法极端”的同事召集在一起,希望他们帮我设计出一个新颖的解决方案来提升公司的影响力,结果他们提出让公司的 CEO 写书这样一个想法。
在红帽任职期间,我的主要工作就是想办法提升公司的品牌影响力——但是就是给我一百万年我也不会想到让公司的 CEO 去写一本书。我把公司多个部门的“想法疯狂”的同事召集在一起,希望他们帮我设计出一个新颖的解决方案来提升公司的影响力,结果他们提出让公司的 CEO 写书这样一个想法。

当我听到这个想法的时候,我很快意识到应该把红帽公司一些经典的管理模式写入到这本书里:它将对整个开源社区的创业者带来很重要的参考价值,同时也有助于宣扬开源精神。通过优先考虑这两方面的作用,我们提升了红帽在整个开源软件世界中的品牌价值,红帽是一个可靠的随时准备着为客户在[数字化颠覆][3]年代指明方向的公司。
当我听到这个想法的时候,我很快意识到这正是典型的红帽方式:它将对整个开源社区的从业者带来很重要的参考价值,同时也有助于宣扬开源精神。通过优先考虑这两方面的作用,我们提升了红帽在整个开源软件世界中的品牌价值——红帽是一个可靠的随时准备着为客户在[数字化颠覆][3]年代指明方向的公司。

这一点才是主要的:确切的说是指导红帽工程师解决代码问题的共同精神力量。 Red Hatters 小组一直在催着我赶紧把开放式组织的模式在全公司推广起来,以显示出内外部程序员共同推动整个开源社区发展的强大动力之一:那就是强烈的共享欲望。
这一点才是主要的:确切的说是指导红帽工程师解决代码问题的共同精神力量。 Red Hatters 小组敦促我出版《开放式组织》,这显示出来自内部和外部社区的程序员共同推动整个开源社区发展的强大动力之一:那就是强烈的共享欲望。

最后,要把开放式组织的管理模式完全推广起来,还需要大家的共同能力,包括工程师们强大的数据分析能力和营销人员美好的艺术素养。这个项目让我更加坚定自己的想法,工程师和营销人员有更多的相似之处。
最后,要把《开放式组织》这本书完成,还需要大家的共同能力,包括工程师们强大的数据分析能力和营销人员美好的艺术素养。这个项目让我更加坚定自己的想法,工程师和营销人员有更多的相似之处。

但是,有些东西我还得强调下:开放模式的实现,要求公司上下没有任何偏见,不能偏袒工程师和市场营销人员任何一方文化。一个更加理想的开放式环境能够促使员工之间和平共处,并在这个组织规定的范围内点燃大家的热情。

所以,这绝对不是我听到大家所说的懒散之意。
这对我来说如春风拂面。

--------------------------------------------------------------------------------

@@ -94,7 +92,7 @@ via: https://opensource.com/open-organization/17/1/engineers-marketers-can-learn

作者:[Jackie Yeaney][a]
译者:[rusking](https://github.com/rusking)
校对:[Bestony](https://github.com/Bestony)
校对:[Bestony](https://github.com/Bestony), [wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,233 +0,0 @@

translating by XLCYun

Reactive programming vs. Reactive systems
============================================================

> Landing on a set of simple reactive design principles in a sea of constant confusion and overloaded expectations.

![Micro Fireworks](https://d3tdunqjn7n0wj.cloudfront.net/360x240/micro_fireworks-db2d0a45f22f348719b393dd98ebefa2.jpg)

Download Konrad Malawski's free ebook "[Why Reactive? Foundational Principles for Enterprise Adoption][5]" to dive deeper into the technical aspects and benefits of Reactive.

Since co-authoring the "[Reactive Manifesto][23]" in 2013, we’ve seen the topic of reactive go from being a virtually unacknowledged technique for constructing applications—used by only fringe projects within a select few corporations—to become part of the overall platform strategy in numerous big players in the middleware field. This article aims to define and clarify the different aspects of reactive by looking at the differences between writing code in a _reactive programming_ style, and the design of _reactive systems_ as a cohesive whole.

### Reactive is a set of design principles

One recent indicator of success is that "reactive" has become an overloaded term and is now being associated with several different things to different people—in good company with words like "streaming," "lightweight," and "real-time."

Consider the following analogy: When looking at an athletic team (think: baseball, basketball, etc.) it’s not uncommon to see it composed of exceptional individuals, yet when they come together something doesn’t click and they lack the synergy to operate effectively as a team and lose to an "inferior" team.

From the perspective of this article, reactive is a set of design principles, a way of thinking about systems architecture and design in a distributed environment where implementation techniques, tooling, and design patterns are components of a larger whole—a system.

This analogy illustrates the difference between a set of reactive applications put together without thought—even though _individually_ they’re great—and a reactive system. In a reactive system, it’s the _interaction between the individual parts_ that makes all the difference, which is the ability to operate individually yet act in concert to achieve their intended result.

_A reactive system_ is an architectural style that allows multiple individual applications to coalesce as a single unit, reacting to its surroundings, while remaining aware of each other—this could manifest as being able to scale up/down, load balancing, and even taking some of these steps proactively.

It’s possible to write a single application in a reactive style (i.e. using reactive programming); however, that’s merely one piece of the puzzle. Though each of the above aspects may seem to qualify as "reactive," in and of themselves they do not make a _system_ reactive.

When people talk about "reactive" in the context of software development and design, they generally mean one of three things:

* Reactive systems (architecture and design)
* Reactive programming (declarative event-based)
* Functional reactive programming (FRP)

We’ll examine what each of these practices and techniques mean, with emphasis on the first two. More specifically, we’ll discuss when to use them, how they relate to each other, and what you can expect the benefits from each to be—particularly in the context of building systems for multicore, cloud, and mobile architectures.

Let’s start by talking about functional reactive programming, and why we chose to exclude it from further discussions in this article.

### Functional reactive programming (FRP)

_Functional reactive programming_, commonly called _FRP_, is most frequently misunderstood. FRP was very [precisely defined][24] 20 years ago by Conal Elliott. The term has most recently been used incorrectly[1][8] to describe technologies like Elm, Bacon.js, and Reactive Extensions (RxJava, Rx.NET, RxJS) amongst others. Most libraries claiming to support FRP are almost exclusively talking about _reactive programming_ and it will therefore not be discussed further.

### Reactive programming

_Reactive programming_, not to be confused with _functional reactive programming_, is a subset of asynchronous programming and a paradigm where the availability of new information drives the logic forward rather than having control flow driven by a thread-of-execution.

It supports decomposing the problem into multiple discrete steps where each can be executed in an asynchronous and non-blocking fashion, and then be composed to produce a workflow—possibly unbounded in its inputs or outputs.

[Asynchronous][25] is defined by the Oxford Dictionary as “not existing or occurring at the same time,” which in this context means that the processing of a message or event is happening at some arbitrary time, possibly in the future. This is a very important technique in reactive programming since it allows for [non-blocking][26] execution—where threads of execution competing for a shared resource don’t need to wait by blocking (preventing the thread of execution from performing other work until current work is done), and can as such perform other useful work while the resource is occupied. Amdahl’s Law[2][9] tells us that contention is the biggest enemy of scalability, and therefore a reactive program should rarely, if ever, have to block.

Reactive programming is generally _event-driven_, in contrast to reactive systems, which are _message-driven_—the distinction between event-driven and message-driven is clarified later in this article.

The application programming interfaces (APIs) for reactive programming libraries are generally either:

* Callback-based—where anonymous side-effecting callbacks are attached to event sources, and are being invoked when events pass through the dataflow chain.
* Declarative—through functional composition, usually using well-established combinators like _map_, _filter_, _fold_, etc.

Most libraries provide a mix of these two styles, often with the addition of stream-based operators like windowing, counts, triggers, etc.
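
As a minimal, self-contained illustration of the two API styles (a sketch in C++, not the API of any particular reactive library; all names are invented for this example):

```
#include <cstdio>
#include <functional>
#include <vector>

// A toy event source: callbacks are invoked as events are emitted.
struct EventSource {
    std::vector<std::function<void(int)>> listeners;
    void on_event(std::function<void(int)> cb) { listeners.push_back(std::move(cb)); }
    void emit(int v) { for (auto& cb : listeners) cb(v); }
};

int main() {
    EventSource clicks;

    // 1) Callback-based style: an anonymous side-effecting callback.
    clicks.on_event([](int v) { std::printf("callback saw %d\n", v); });

    // 2) Declarative style: compose map/filter stages, attach once at the end.
    auto map    = [](auto f, auto next) { return [=](int v) { next(f(v)); }; };
    auto filter = [](auto p, auto next) { return [=](int v) { if (p(v)) next(v); }; };

    clicks.on_event(
        filter([](int v) { return v % 2 == 0; },      // keep even events
               map([](int v) { return v * 10; },      // transform them
                   [](int v) { std::printf("pipeline saw %d\n", v); })));

    for (int i = 0; i < 4; ++i) clicks.emit(i);
}
```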

It would be reasonable to claim that reactive programming is related to [dataflow programming][27], since the emphasis is on the _flow of data_ rather than the _flow of control_.

Examples of programming abstractions that support this programming technique are:

* [Futures/Promises][10]—containers of a single value, many-read/single-write semantics where asynchronous transformations of the value can be added even if it is not yet available (a minimal sketch follows this list).
* Streams—as in [reactive streams][11]: unbounded flows of data processing, enabling asynchronous, non-blocking, back-pressured transformation pipelines between a multitude of sources and destinations.
* [Dataflow variables][12]—single assignment variables (memory-cells) which can depend on input, procedures and other cells, so that they are automatically updated on change. A practical example is spreadsheets—where the change of the value in a cell ripples through all dependent functions, producing new values downstream.
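
To make the futures abstraction concrete, here is a minimal sketch in standard C++ (`std::future` lacks the composable transformations that reactive libraries layer on top, but it shows the single-value, read-when-ready semantics):

```
#include <cstdio>
#include <future>

int main() {
    // A future is a container for a single value that may not exist yet;
    // the consumer can keep doing useful work and synchronize only later.
    std::future<int> answer = std::async(std::launch::async, [] {
        return 6 * 7;  // stand-in for an expensive asynchronous computation
    });

    std::printf("doing other useful work while the value is computed...\n");

    // get() blocks only at the point where the value is actually needed.
    std::printf("answer = %d\n", answer.get());
}
```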

Popular libraries supporting the reactive programming techniques on the JVM include, but are not limited to, Akka Streams, Ratpack, Reactor, RxJava, and Vert.x. These libraries implement the reactive streams specification, which is a standard for interoperability between reactive programming libraries on the JVM, and according to its own description is “...an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure.”

The primary benefits of reactive programming are: increased utilization of computing resources on multicore and multi-CPU hardware; and increased performance by reducing serialization points as per Amdahl’s Law and, by extension, Günther’s Universal Scalability Law[3][13].

A secondary benefit is one of developer productivity as traditional programming paradigms have all struggled to provide a straightforward and maintainable approach to dealing with asynchronous and non-blocking computation and I/O. Reactive programming solves most of the challenges here since it typically removes the need for explicit coordination between active components.

Where reactive programming shines is in the creation of components and composition of workflows. In order to take full advantage of asynchronous execution, the inclusion of [back-pressure][28] is crucial to avoid over-utilization, or rather unbounded consumption of resources.

Even though reactive programming is a very useful piece when constructing modern software, in order to reason about a system at a higher level one has to use another tool: _reactive architecture_—the process of designing reactive systems. Furthermore, it is important to remember that there are many programming paradigms and reactive programming is but one of them, so just as with any tool, it is not intended for any and all use-cases.

### Event-driven vs. message-driven

As mentioned previously, reactive programming—focusing on computation through ephemeral dataflow chains—tends to be _event-driven_, while reactive systems—focusing on resilience and elasticity through the communication, and coordination, of distributed systems—are [_message-driven_][29][4][14] (also referred to as _messaging_).

The main difference between a message-driven system with long-lived addressable components, and an event-driven dataflow-driven model, is that messages are inherently directed, events are not. Messages have a clear (single) destination, while events are facts for others to observe. Furthermore, messaging is preferably asynchronous, with the sending and the reception decoupled from the sender and receiver respectively.

The glossary in the Reactive Manifesto [defines the conceptual difference as][30]:

> A message is an item of data that is sent to a specific destination. An event is a signal emitted by a component upon reaching a given state. In a message-driven system addressable recipients await the arrival of messages and react to them, otherwise lying dormant. In an event-driven system notification listeners are attached to the sources of events such that they are invoked when the event is emitted. This means that an event-driven system focuses on addressable event sources while a message-driven system concentrates on addressable recipients.

Messages are needed to communicate across the network and form the basis for communication in distributed systems, while events on the other hand are emitted locally. It is common to use messaging under the hood to bridge an event-driven system across the network by sending events inside messages. This allows maintaining the relative simplicity of the event-driven programming model in a distributed context and can work very well for specialized and well-scoped use cases (e.g., AWS Lambda, Distributed Stream Processing products like Spark Streaming, Flink, Kafka, and Akka Streams over Gearpump, and distributed Publish Subscribe products like Kafka and Kinesis).

However, it is a trade-off: what one gains in abstraction and simplicity of the programming model, one loses in terms of control. Messaging forces us to embrace the reality and constraints of distributed systems—things like partial failures, failure detection, dropped/duplicated/reordered messages, eventual consistency, managing multiple concurrent realities, etc.—and tackle them head on instead of hiding them behind a leaky abstraction—pretending that the network is not there—as has been done too many times in the past (e.g. EJB, [RPC][31], [CORBA][32], and [XA][33]).

These differences in semantics and applicability have profound implications in the application design, including things like _resilience_, _elasticity_, _mobility_, _location transparency,_ and _management_ of the complexity of distributed systems, which will be explained further in this article.

In a reactive system, especially one which uses reactive programming, both events and messages will be present—as one is a great tool for communication (messages), and another is a great way of representing facts (events).

### Reactive systems and architecture

_Reactive systems_—as defined by the Reactive Manifesto—are a set of architectural design principles for building modern systems that are well prepared to meet the increasing demands that applications face today.

The principles of reactive systems are most definitely not new, and can be traced back to the '70s and '80s and the seminal work by Jim Gray and Pat Helland on the [Tandem System][34] and Joe Armstrong and Robert Virding on [Erlang][35]. However, these people were ahead of their time and it is not until the last 5-10 years that the technology industry has been forced to rethink current best practices for enterprise system development and learn to apply the hard-won knowledge of the reactive principles to today’s world of multicore, cloud computing, and the Internet of Things.

The foundation for a reactive system is _message-passing_, which creates a temporal boundary between components that allows them to be decoupled in _time_—this allows for concurrency—and _space_—which allows for distribution and mobility. This decoupling is a requirement for full [isolation][36] between components, and forms the basis for both _resilience_ and _elasticity_.

### From programs to systems

The world is becoming increasingly interconnected. We are no longer building _programs_—end-to-end logic to calculate something for a single operator—as much as we are building _systems_.

Systems are complex by definition—each consisting of a multitude of components, who in and of themselves also can be systems—which means software is increasingly dependent on other software to function properly.

The systems we create today are to be operated on computers small and large, few and many, near each other or half a world away. And at the same time users’ expectations have become harder and harder to meet as everyday human life is increasingly dependent on the availability of systems to function smoothly.

In order to deliver systems that users—and businesses—can depend on, they have to be _responsive_, for it does not matter if something provides the correct response if the response is not available when it is needed. In order to achieve this, we need to make sure that responsiveness can be maintained under failure (_resilience_) and under load (_elasticity_). To make that happen, we make these systems _message-driven_, and we call them _reactive systems_.

### The resilience of reactive systems

Resilience is about responsiveness _under failure_ and is an inherent functional property of the system, something that needs to be designed for, and not something that can be added in retroactively. Resilience is beyond fault-tolerance—it’s not about graceful degradation—even though that is a very useful trait for systems—but about being able to fully recover from failure: to _self-heal_. This requires component isolation and containment of failures in order to avoid failures spreading to neighbouring components—resulting in, often catastrophic, cascading failure scenarios.

So the key to building resilient, self-healing systems is to allow failures to be: contained, reified as messages, sent to other components (that act as supervisors), and managed from a safe context outside the failed component. Here, being message-driven is the enabler: moving away from strongly coupled, brittle, deeply nested synchronous call chains that everyone learned to suffer through, or ignore. The idea is to decouple the management of failures from the call chain, freeing the client from the responsibility of handling the failures of the server.

### The elasticity of reactive systems

[Elasticity][37] is about _responsiveness under load_—meaning that the throughput of a system scales up or down (as well as in or out) automatically to meet varying demand as resources are proportionally added or removed. It is the essential element needed to take advantage of the promises of cloud computing: allowing systems to be resource efficient, cost-efficient, environment-friendly and pay-per-use.

Systems need to be adaptive—allow intervention-less auto-scaling, replication of state and behavior, load-balancing of communication, failover, and upgrades, without rewriting or even reconfiguring the system. The enabler for this is _location transparency_: the ability to scale the system in the same way, using the same programming abstractions, with the same semantics, _across all dimensions of scale_—from CPU cores to data centers.

As the Reactive Manifesto [puts it][38]:

> One key insight that simplifies this problem immensely is to realize that we are all doing distributed computing. This is true whether we are running our systems on a single node (with multiple independent CPUs communicating over the QPI link) or on a cluster of nodes (with independent machines communicating over the network). Embracing this fact means that there is no conceptual difference between scaling vertically on multicore or horizontally on the cluster. This decoupling in space [...], enabled through asynchronous message-passing, and decoupling of the runtime instances from their references is what we call Location Transparency.

So no matter where the recipient resides, we communicate with it in the same way. The only way that can be done in a semantically equivalent manner is via messaging.

### The productivity of reactive systems

As most systems are inherently complex by nature, one of the most important aspects is to make sure that a system architecture will impose a minimal reduction of productivity, in the development and maintenance of components, while at the same time reducing the operational _accidental complexity_ to a minimum.

This is important since during the lifecycle of a system—if not properly designed—it will become harder and harder to maintain, and require an ever-increasing amount of time and effort to understand, in order to localize and to rectify problems.

Reactive systems are the most _productive_ systems architecture that we know of (in the context of multicore, cloud and mobile architectures):

* Isolation of failures offers [bulkheads][15] between components, preventing failures from cascading, which limits the scope and severity of failures.
* Supervisor hierarchies offer multiple levels of defenses paired with self-healing capabilities, which remove a lot of transient failures from ever incurring any operational cost to investigate.
* Message-passing and location transparency allow for components to be taken offline and replaced or rerouted without affecting the end-user experience, reducing the cost of disruptions, their relative urgency, and also the resources required to diagnose and rectify.
* Replication reduces the risk of data loss, and lessens the impact of failure on the availability of retrieval and storage of information.
* Elasticity allows for conservation of resources as usage fluctuates, allowing for minimizing operational costs when load is low, and minimizing the risk of outages or urgent investment into scalability as load increases.

Thus, reactive systems allow for the creation of systems that cope well under failure, varying load and change over time—all while offering a low cost of ownership over time.

### How does reactive programming relate to reactive systems?

Reactive programming is a great technique for managing internal logic and dataflow transformation, locally within the components, as a way of optimizing code clarity, performance and resource efficiency. Reactive systems, being a set of architectural principles, put the emphasis on distributed communication and give us tools to tackle resilience and elasticity in distributed systems.

One common problem with only leveraging reactive programming is that its tight coupling between computation stages in an event-driven callback-based or declarative program makes _resilience_ harder to achieve as its transformation chains are often ephemeral and its stages—the callbacks or combinators—are anonymous, i.e. not addressable.

This means that they usually handle success or failure directly without signaling it to the outside world. This lack of addressability makes recovery of individual stages harder to achieve as it is typically unclear where exceptions should, or even could, be propagated. As a result, failures are tied to ephemeral client requests instead of to the overall health of the component—if one of the stages in the dataflow chain fails, then the whole chain needs to be restarted, and the client notified. This is in contrast to a message-driven reactive system which has the ability to self-heal, without necessitating notifying the client.

Another contrast to the reactive systems approach is that pure reactive programming allows decoupling in _time_, but not _space_ (unless leveraging message-passing to distribute the dataflow graph under the hood, across the network, as discussed previously). As mentioned, decoupling in time allows for _concurrency_, but it is decoupling in space that allows for _distribution_, and _mobility_—allowing for not only static but also dynamic topologies—which is essential for _elasticity_.

A lack of location transparency makes it hard to scale out a program purely based on reactive programming techniques adaptively in an elastic fashion and therefore requires layering additional tools, such as a message bus, data grid, or bespoke network protocols on top. This is where the message-driven programming of reactive systems shines, since it is a communication abstraction that maintains its programming model and semantics across all dimensions of scale, and therefore reduces system complexity and cognitive overhead.

A commonly cited problem of callback-based programming is that while writing such programs may be comparatively easy, it can have real consequences in the long run.

For example, systems based on anonymous callbacks provide very little insight when you need to reason about them, maintain them, or most importantly figure out what, where, and why production outages and misbehavior occur.

Libraries and platforms designed for reactive systems (such as the [Akka][39] project and the [Erlang][40] platform) have long learned this lesson and are relying on long-lived addressable components that are easier to reason about in the long run. When failures occur, the component is uniquely identifiable along with the message that caused the failure. With the concept of addressability at the core of the component model, monitoring solutions have a _meaningful_ way to present data that is gathered—leveraging the identities that are propagated.

The choice of a good programming paradigm, one that enforces things like addressability and failure management, has proven to be invaluable in production, as it is designed with the harshness of reality in mind, to _expect and embrace failure_ rather than the lost cause of trying to prevent it.

All in all, reactive programming is a very useful implementation technique, which can be used in a reactive architecture. Remember that it will only help manage one part of the story: dataflow management through asynchronous and nonblocking execution—usually only within a single node or service. Once there are multiple nodes, there is a need to start thinking hard about things like data consistency, cross-node communication, coordination, versioning, orchestration, failure management, separation of concerns and responsibilities etc.—i.e. system architecture.

Therefore, to maximize the value of reactive programming, use it as one of the tools to construct a reactive system. Building a reactive system requires more than abstracting away OS-specific resources and sprinkling asynchronous APIs and [circuit breakers][41] on top of an existing, legacy, software stack. It should be about embracing the fact that you are building a distributed system comprising multiple services—that all need to work together, providing a consistent and responsive experience, not just when things work as expected but also in the face of failure and under unpredictable load.

### Summary

Enterprises and middleware vendors alike are beginning to embrace reactive, with 2016 witnessing a huge growth in corporate interest in adopting reactive. In this article, we have described reactive systems as being the end goal—assuming the context of multicore, cloud and mobile architectures—for enterprises, with reactive programming serving as one of the important tools.

Reactive programming offers productivity for developers—through performance and resource efficiency—at the component level for internal logic and dataflow transformation. Reactive systems offer productivity for architects and DevOps practitioners—through resilience and elasticity—at the system level, for building _cloud native_ and other large-scale distributed systems. We recommend combining the techniques of reactive programming within the design principles of reactive systems.

```
1 According to Conal Elliott, the inventor of FRP, in [this presentation][16][↩][17]
2 [Amdahl’s Law][18] shows that the theoretical speedup of a system is limited by the serial parts, which means that the system can experience diminishing returns as new resources are added. [↩][19]
3 Neil Günter’s [Universal Scalability Law][20] is an essential tool in understanding the effects of contention and coordination in concurrent and distributed systems, and shows that the cost of coherency in a system can lead to negative results, as new resources are added to the system.[↩][21]
4 Messaging can be either synchronous (requiring the sender and receiver to be available at the same time) or asynchronous (allowing them to be decoupled in time). Discussing the semantic differences is out of scope for this article.[↩][22]
```

--------------------------------------------------------------------------------

via: https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems

作者:[Jonas Bonér][a] , [Viktor Klang][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.oreilly.com/people/e0b57-jonas-boner
[b]:https://www.oreilly.com/people/f96106d4-4ce6-41d9-9d2b-d24590598fcd
[1]:https://www.flickr.com/photos/pixel_addict/2301302732
[2]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[3]:https://www.oreilly.com/people/e0b57-jonas-boner
[4]:https://www.oreilly.com/people/f96106d4-4ce6-41d9-9d2b-d24590598fcd
[5]:http://www.oreilly.com/programming/free/why-reactive.csp?intcmp=il-webops-free-product-na_new_site_reactive_programming_vs_reactive_systems_text_cta
[6]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[7]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[8]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-1
[9]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-2
[10]:https://en.wikipedia.org/wiki/Futures_and_promises
[11]:http://reactive-streams.org/
[12]:https://en.wikipedia.org/wiki/Oz_(programming_language)#Dataflow_variables_and_declarative_concurrency
[13]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-3
[14]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-4
[15]:http://skife.org/architecture/fault-tolerance/2009/12/31/bulkheads.html
[16]:https://begriffs.com/posts/2015-07-22-essence-of-frp.html
[17]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-1
[18]:https://en.wikipedia.org/wiki/Amdahl%2527s_law
[19]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-2
[20]:http://www.perfdynamics.com/Manifesto/USLscalability.html
[21]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-3
[22]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-4
[23]:http://www.reactivemanifesto.org/
[24]:http://conal.net/papers/icfp97/
[25]:http://www.reactivemanifesto.org/glossary#Asynchronous
[26]:http://www.reactivemanifesto.org/glossary#Non-Blocking
[27]:https://en.wikipedia.org/wiki/Dataflow_programming
[28]:http://www.reactivemanifesto.org/glossary#Back-Pressure
[29]:http://www.reactivemanifesto.org/glossary#Message-Driven
[30]:http://www.reactivemanifesto.org/glossary#Message-Driven
[31]:https://christophermeiklejohn.com/pl/2016/04/12/rpc.html
[32]:https://queue.acm.org/detail.cfm?id=1142044
[33]:https://cs.brown.edu/courses/cs227/archives/2012/papers/weaker/cidr07p15.pdf
[34]:http://www.hpl.hp.com/techreports/tandem/TR-86.2.pdf
[35]:http://erlang.org/download/armstrong_thesis_2003.pdf
[36]:http://www.reactivemanifesto.org/glossary#Isolation
[37]:http://www.reactivemanifesto.org/glossary#Elasticity
[38]:http://www.reactivemanifesto.org/glossary#Location-Transparency
[39]:http://akka.io/
[40]:https://www.erlang.org/
[41]:http://martinfowler.com/bliki/CircuitBreaker.html

@ -1,5 +1,9 @@
translating by xiaow6

Why do developers who could work anywhere flock to the world’s most expensive cities?
============================================================

![](https://tctechcrunch2011.files.wordpress.com/2017/04/img_20170401_1835042.jpg?w=977)

Politicians and economists [lament][10] that certain alpha regions — SF, LA, NYC, Boston, Toronto, London, Paris — attract all the best jobs while becoming repellently expensive, reducing economic mobility and contributing to further bifurcation between haves and have-nots. But why don’t the best jobs move elsewhere?

@ -39,13 +43,13 @@ via: https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere

[a]:https://techcrunch.com/author/jon-evans/
[1]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#comments
[2]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#
[3]:http://twitter.com/share?via=techcrunch&url=http://tcrn.ch/2owXJ0C&text=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&hashtags=
[4]:https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Ftechcrunch.com%2F2017%2F04%2F02%2Fwhy-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities%2F&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
[5]:https://plus.google.com/share?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[6]:http://www.reddit.com/submit?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
[7]:http://www.stumbleupon.com/badge/?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[8]:mailto:?subject=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities?&body=Article:%20https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[9]:https://share.flipboard.com/bookmarklet/popout?v=2&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[10]:https://mobile.twitter.com/Noahpinion/status/846054187288866
[11]:http://happyfuncorp.com/
[12]:https://twitter.com/rezendi

@ -1,322 +0,0 @@
GitFuture is translating.

Top open source creative tools in 2016
============================================================

### Whether you want to manipulate images, edit audio, or animate stories, there's a free and open source tool to do the trick.

![Top 34 open source creative tools in 2016 ](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/art-yearbook-paint-draw-create-creative.png?itok=KgEF_IN_ "Top 34 open source creative tools in 2016 ")

> Image by: opensource.com

A few years ago, I gave a lightning talk at Red Hat Summit that took attendees on a tour of the [2012 open source creative tools][12] landscape. Open source tools have evolved a lot in the past few years, so let's take a tour of the 2016 landscape.

### Core applications

These six applications are the juggernauts of open source design tools. They are well-established, mature projects with full feature sets, stable releases, and active development communities. All six applications are cross-platform; each is available on Linux, OS X, and Windows, although in some cases the Linux versions are the most quickly updated. These applications are so widely known that I've also included highlights of the latest features available that you may have missed if you don't closely follow their development.

If you'd like to follow new developments more closely, and perhaps even help out by testing the latest development versions of the first four of these applications—GIMP, Inkscape, Scribus, and MyPaint—you can install them easily on Linux using [Flatpak][13]. Nightly builds of each of these applications are available via Flatpak by [following the instructions][14] for _Nightly Graphics Apps_. One thing to note: if you'd like to install brushes or other extensions for the Flatpak version of an app, the directory to drop the extensions into is the application's own directory under **~/.var/app**. A sketch of the workflow is shown below.
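
A minimal sketch of that workflow, assuming Flatpak is already installed; the remote name and URL below are illustrative (follow the linked instructions for the real nightly remote), and org.gimp.GIMP is used as the example app ID:

```bash
# Add the nightly remote (name/URL are assumptions -- see the linked instructions).
flatpak remote-add --user --if-not-exists gnome-nightly \
    https://nightly.gnome.org/gnome-nightly.flatpakrepo

# Install and launch the nightly GIMP build.
flatpak install --user gnome-nightly org.gimp.GIMP
flatpak run org.gimp.GIMP

# Extensions (e.g., brushes) for the Flatpak app live under ~/.var/app:
ls ~/.var/app/org.gimp.GIMP/
```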
### GIMP
|
||||
|
||||
[GIMP][15] [celebrated its 20th anniversary in 2015][16], making it one of the oldest open source creative applications out there. GIMP is a solid program for photo manipulation, basic graphic creation, and illustration. You can start using GIMP by trying simple tasks, such as cropping and resizing images, and over time work into a deep set of functionality. Available for Linux, Mac OS X, and Windows, GIMP is cross-platform and can open and export to a wide breadth of file formats, including those popularized by its proprietary analogue, Photoshop.
|
||||
|
||||
The GIMP team is currently working toward the 2.10 release; [2.8.18][17] is the latest stable version. More exciting is the unstable version, [2.9.4][18], with a revamped user interface featuring space-saving symbolic icons and dark themes, improved color management, more GEGL-based filters with split-preview, MyPaint brush support (shown in screenshot below), symmetrical drawing, and command-line batch processing. For more details, check out [the full release notes][19].
|
||||
|
||||
![GIMP screenshot](https://opensource.com/sites/default/files/gimp_520.png "GIMP screenshot")
|
||||
|
||||
### Inkscape
|
||||
|
||||
[Inkscape][20] is a richly featured vector-based graphic design workhorse. Use it to create simple graphics, diagrams, layouts, or icon art.
|
||||
|
||||
The latest stable version is [0.91][21]; similarly to GIMP, more excitement can be found in a pre-release version, 0.92pre3, which was released in November 2016. The premiere feature of the latest pre-release is the [gradient mesh feature][22] (demonstrated in the screenshot below); new features introduced in the 0.91 release include [power stroke][23] for fully configurable calligraphic strokes (the "open" in "opensource.com" in the screenshot below uses powerstroke), the on-canvas measure tool, and [the new symbols dialog][24] (shown on the right side of the screenshot below). (Many symbol libraries for Inkscape are available on GitHub; [Xaviju's inkscape-open-symbols set][25] is fantastic.) A new feature available in development/nightly builds is the _Objects_ dialog that catalogs all objects in a document and provides tools to manage them.
|
||||
![Inkscape screenshot](https://opensource.com/sites/default/files/inkscape_520.png "Inkscape screenshot")
|
||||
|
||||
### Scribus
|
||||
|
||||
[Scribus][26] is a powerful desktop publishing and page layout tool. Scribus enables you to create sophisticated and beautiful items, including newsletters, books, and magazines, as well as other print pieces. Scribus has color management tools that can handle and output CMYK and spot colors for files that are ready for reliable reproduction at print shops.
|
||||
|
||||
[1.4.6][27] is the latest stable release of Scribus; the [1.5.x][28] series of releases is the most exciting, as they serve as a preview of the upcoming 1.6.0 release. Version 1.5.3 features a Krita (*.KRA) file import tool; other developments in the 1.5.x series include the _Table_ tool, text frame welding, footnotes, additional PDF formats for export, improved dictionary support, dockable palettes, a symbols tool, and expanded file format support.
|
||||
![Scribus screenshot](https://opensource.com/sites/default/files/scribus_520.png "Scribus screenshot")
|
||||
|
||||
### MyPaint
|
||||
|
||||
[MyPaint][29] is a drawing tablet-centric expressive drawing and illustration tool. It's lightweight and has a minimal interface with a rich set of keyboard shortcuts so that you can focus on your drawing without having to drop your pen.
|
||||
|
||||
[MyPaint 1.2.0][30] is the latest stable release and includes new features, such as the [intuitive inking tool][31] for tracing over pencil drawings, new flood fill tool, layer groups, brush and color history panel, user interface revamp including a dark theme and small symbolic icons, and editable vector layers. To try out the latest developments in MyPaint, I recommend installing the nightly Flatpak build, although there have not been significant feature additions since the 1.2.0 release.
|
||||
|
||||
![MyPaint screenshot](https://opensource.com/sites/default/files/mypaint_520.png "MyPaint screenshot")
|
||||
|
||||
### Blender
|
||||
|
||||
Initially released in January 1995, [Blender][32], like GIMP, has been around for more than 20 years. Blender is a powerful open source 3D creation suite that includes tools for modeling, sculpting, rendering, realistic materials, rigging, animation, compositing, video editing, game creation, and simulation.
|
||||
|
||||
The latest stable Blender release is [2.78a][33]. The 2.78 release was a large one and includes features such as the revamped _Grease Pencil_ 2D animation tool; VR rendering support for spherical stereo images; and a new drawing tool for freehand curves.
|
||||
|
||||
![Inkscape screenshot](https://opensource.com/sites/default/files/blender_520.png "Inkscape screenshot")
|
||||
|
||||
To try out the latest exciting Blender developments, you have many options, including:
|
||||
|
||||
* The Blender Foundation makes [unstable daily builds][2] available on the official Blender website.
|
||||
* If you're looking for builds that include particular in-development features, [graphicall.org][3] is a community-moderated site that provides special versions of Blender (and occasionally other open source creative apps) to enable artists to try out the latest available code and experiments.
|
||||
* Mathieu Bridon has made development versions of Blender available via Flatpak. See his blog post for details: [Blender nightly in Flatpak][4].
|
||||
|
||||
### Krita
|
||||
|
||||
[Krita][34] is a digital drawing application with a deep set of capabilities. The application is geared toward illustrators, concept artists, and comic artists and is fully loaded with extras, such as brushes, palettes, patterns, and templates.
|
||||
|
||||
The latest stable version is [Krita 3.0.1][35], released in September 2016. Features new to the 3.0.x series include 2D frame-by-frame animation; improved layer management and functionality; expanded and more usable shortcuts; improvements to grids, guides, and snapping; and soft-proofing.
|
||||
![Krita screenshot](https://opensource.com/sites/default/files/krita_520.png "Krita screenshot")
|
||||
|
||||
### Video tools
|
||||
|
||||
There are many, many options for open source video editing tools. Of the members of the pack, [Flowblade][36] is a newcomer and Kdenlive is the established, newbie-friendly, and most fully featured contender. The main criteria that may help you eliminate some of this array of options is supported platforms—some of these only support Linux. These all have active upstreams and the latest stable versions of each have been released recently, within weeks of each other.
|
||||
|
||||
### Kdenlive
|
||||
|
||||
[Kdenlive][37], which was initially released back in 2002, is a powerful non-linear video editor available for Linux and OS X (although the OS X version is out-of-date). Kdenlive has a user-friendly, drag-and-drop-based user interface that accommodates beginners while offering the depth experts need.
|
||||
Learn how to use Kdenlive with a [multi-part Kdenlive tutorial series][38] by Seth Kenlon.
|
||||
* Latest Stable: 16.08.2 (October 2016)
|
||||
|
||||
![](https://opensource.com/sites/default/files/images/life-uploads/kdenlive_6_leader.png)
|
||||
|
||||
### Flowblade
|
||||
|
||||
Released in 2012, [Flowblade][39], a Linux-only video editor, is a relative newcomer.
|
||||
|
||||
* Latest Stable: 1.8 (September 2016)
|
||||
|
||||
### Pitivi
|
||||
|
||||
[Pitivi][40] is a user-friendly free and open source video editor. Pitivi is written in [Python][41] (the "Pi" in Pitivi), uses the [GStreamer][42] multimedia framework, and has an active community.
|
||||
|
||||
* Latest stable: 0.97 (August 2016)
|
||||
* Get the [latest version with Flatpak][5]
|
||||
|
||||
### Shotcut
|
||||
|
||||
[Shotcut][43] is a free, open source, cross-platform video editor that started [back in 2004][44] and was later rewritten by current lead developer [Dan Dennedy][45].
|
||||
|
||||
* Latest stable: 16.11 (November 2016)
|
||||
* 4K resolution support
|
||||
* Ships as a tarballed binary
|
||||
|
||||
|
||||
|
||||
### OpenShot Video Editor
|
||||
|
||||
Started in 2008, [OpenShot Video Editor][46] is a free, open source, easy-to-use, cross-platform video editor.
|
||||
|
||||
* Latest stable: [2.1][6] (August 2016)
|
||||
|
||||
|
||||
### Utilities
|
||||
|
||||
### SwatchBooker
|
||||
|
||||
[SwatchBooker][47] is a handy utility, and although it hasn't been updated in a few years, it's still useful. SwatchBooker helps users legally obtain color swatches from various manufacturers in a format that you can use with other free and open source tools, including Scribus.
|
||||
|
||||
### GNOME Color Manager
|
||||
|
||||
[GNOME Color Manager][48] is the built-in color management system for the GNOME desktop environment, the default desktop for a number of Linux distros. The tool allows you to create profiles for your display devices using a colorimeter, and also allows you to load/manage ICC color profiles for those devices.
|
||||
### GNOME Wacom Control
|
||||
|
||||
[The GNOME Wacom controls][49] allow you to configure your Wacom tablet in the GNOME desktop environment; you can modify various options for interacting with the tablet, including customizing the sensitivity of the tablet and which monitors the tablet maps to.
|
||||
|
||||
### Xournal
|
||||
|
||||
[Xournal][50] is a humble but solid app that allows you to hand write/doodle notes using a tablet. Xournal is a useful tool for signing or otherwise annotating PDF documents.
|
||||
|
||||
### PDF Mod
|
||||
|
||||
[PDF Mod][51] is a handy utility for editing PDFs. PDF Mod lets users remove pages, add pages, bind multiple single PDFs together into a single PDF, reorder the pages, and rotate the pages.
|
||||
|
||||
### SparkleShare
|
||||
|
||||
[SparkleShare][52] is a git-backed file-sharing tool artists use to collaborate and share assets. Hook it up to a GitLab repo and you've got a nice open source infrastructure for asset management. The SparkleShare front end nullifies the inscrutability of git by providing a dropbox-like interface on top of it.
|
||||
|
||||
### Photography
|
||||
|
||||
### Darktable
|
||||
|
||||
[Darktable][53] is an application that allows you to develop digital RAW files and has a rich set of tools for the workflow management and non-destructive editing of photographic images. Darktable includes support for an extensive range of popular cameras and lenses.
|
||||
|
||||
![Changing color balance screenshot](https://opensource.com/sites/default/files/dt_colour.jpg "Changing color balance screenshot")
|
||||
|
||||
### Entangle
|
||||
|
||||
[Entangle][54] allows you to tether your digital camera to your computer and enables you to control your camera completely from the computer.
|
||||
|
||||
### Hugin
|
||||
|
||||
[Hugin][55] is a tool that allows you to stitch together photos in order to create panoramic photos.
|
||||
|
||||
### 2D animation
|
||||
|
||||
### Synfig Studio
|
||||
|
||||
[Synfig Studio][56] is a vector-based 2D animation suite that also supports bitmap artwork and is tablet-friendly.
|
||||
|
||||
### Blender Grease Pencil
|
||||
|
||||
I covered Blender above, but particularly notable from a recent release is [a refactored grease pencil feature][57], which adds the ability to create 2D animations.
|
||||
|
||||
|
||||
### Krita
|
||||
|
||||
[Krita][58] also now provides 2D animation functionality.
|
||||
|
||||
|
||||
### Music and audio editing
|
||||
|
||||
### Audacity
|
||||
|
||||
[Audacity][59] is a popular, user-friendly tool for editing audio files and recording sound.
|
||||
### Ardour
|
||||
|
||||
[Ardour][60] is a digital audio workstation with an interface centered around a record, edit, and mix workflow. It's a little more complicated than Audacity to use but allows for automation and is generally more sophisticated. (Available for Linux, Mac OS X, and Windows.)
|
||||
|
||||
### Hydrogen
|
||||
|
||||
[Hydrogen][61] is an open source drum machine with an intuitive interface. It provides the ability to create and arrange various patterns using synthesized instruments.
|
||||
|
||||
### Mixxx
|
||||
|
||||
[Mixxx][62] is a four-deck DJ suite that allows you to DJ and mix songs together with powerful controls, including beat looping, time stretching, and pitch bending, as well as live broadcast your mixes and interface with DJ hardware controllers.
|
||||
|
||||
### Rosegarden
|
||||
|
||||
[Rosegarden][63] is a music composition suite that includes tools for score writing and music composition/editing and provides an audio and MIDI sequencer.
|
||||
|
||||
### MuseScore
|
||||
|
||||
[MuseScore][64] is a music score creation, notation, and editing tool with a community of musical score contributors.
|
||||
|
||||
### Additional creative tools
|
||||
|
||||
### MakeHuman
|
||||
|
||||
[MakeHuman][65] is a 3D graphical tool for creating photorealistic models of humanoid forms.
|
||||
|
||||
<iframe allowfullscreen="" frameborder="0" height="293" src="https://www.youtube.com/embed/WiEDGbRnXdE?rel=0" width="520"></iframe>
|
||||
|
||||
### Natron
|
||||
|
||||
[Natron][66] is a node-based compositor tool used for video post-production and motion graphic and special effect design.
|
||||
|
||||
### FontForge
|
||||
|
||||
[FontForge][67] is a typeface creation and editing tool. It allows you to edit letter forms in a typeface as well as generate fonts for using those typeface designs.
|
||||
|
||||
### Valentina
|
||||
|
||||
[Valentina][68] is an application for drafting sewing patterns.
|
||||
|
||||
### Calligra Flow
|
||||
|
||||
[Calligra Flow][69] is a Visio-like diagramming tool. (Available for Linux, Mac OS X, and Windows.)
|
||||
|
||||
### Resources
|
||||
|
||||
There are a lot of toys and goodies to try out there. Need some inspiration to start your exploration? These websites and conferences are chock-full of tutorials and beautiful creative works to inspire you and get you going:
|
||||
1. [pixls.us][7]: Blog hosted by photographer Pat David that focuses on free and open source tools and workflow for professional photographers.
|
||||
2. [David Revoy’s Blog][8]: The blog of David Revoy, an immensely talented free and open source illustrator, concept artist, and advocate, with credits on several of the Blender Foundation films.
3. [The Open Source Creative Podcast][9]: Hosted by Opensource.com community moderator and columnist [Jason van Gumster][10], who is a Blender and GIMP expert, and author of _[Blender for Dummies][1]_, this podcast is directed squarely at those of us who enjoy open source creative tools and the culture around them.
|
||||
4. [Libre Graphics Meeting][11]: Annual conference for free and open source creative software developers and the creatives who use the software. This is the place to find out about what cool features are coming down the pipeline in your favorite open source creative tools, and to enjoy what their users are creating with them.
|
||||
|
||||

--------------------------------------------------------------------------------

作者简介:

![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-343-8e0fb148b105b450634e30acd8f5b22b.png?itok=oxzTm70z)

Máirín Duffy - Máirín is a principal interaction designer at Red Hat. She is passionate about software freedom and free & open source tools, particularly in the creative domain: her favorite application is Inkscape (http://inkscape.org).

--------------------------------------------------------------------------------

via: https://opensource.com/article/16/12/yearbook-top-open-source-creative-tools-2016

作者:[Máirín Duffy][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mairin
[1]:http://www.blenderbasics.com/
[2]:https://builder.blender.org/download/
[3]:http://graphicall.org/
[4]:https://mathieu.daitauha.fr/blog/2016/09/23/blender-nightly-in-flatpak/
[5]:https://pitivi.wordpress.com/2016/07/18/get-pitivi-directly-from-us-with-flatpak/
[6]:http://www.openshotvideo.com/2016/08/openshot-21-released.html
[7]:http://pixls.us/
[8]:http://davidrevoy.com/
[9]:http://monsterjavaguns.com/podcast/
[10]:https://opensource.com/users/jason-van-gumster
[11]:http://libregraphicsmeeting.org/2016/
[12]:https://opensource.com/life/12/9/tour-through-open-source-creative-tools
[13]:https://opensource.com/business/16/8/flatpak
[14]:http://flatpak.org/apps.html
[15]:https://opensource.com/tags/gimp
[16]:https://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/
[17]:https://www.gimp.org/news/2016/07/14/gimp-2-8-18-released/
[18]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
[19]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
[20]:https://opensource.com/tags/inkscape
[21]:http://wiki.inkscape.org/wiki/index.php/Release_notes/0.91
[22]:http://wiki.inkscape.org/wiki/index.php/Mesh_Gradients
[23]:https://www.youtube.com/watch?v=IztyV-Dy4CE
[24]:https://inkscape.org/cs/~doctormo/%E2%98%85symbols-dialog
[25]:https://github.com/Xaviju/inkscape-open-symbols
[26]:https://opensource.com/tags/scribus
[27]:https://www.scribus.net/scribus-1-4-6-released/
[28]:https://www.scribus.net/scribus-1-5-2-released/
[29]:http://mypaint.org/
[30]:http://mypaint.org/blog/2016/01/15/mypaint-1.2.0-released/
[31]:https://github.com/mypaint/mypaint/wiki/v1.2-Inking-Tool
[32]:https://opensource.com/tags/blender
[33]:http://www.blender.org/features/2-78/
[34]:https://opensource.com/tags/krita
[35]:https://krita.org/en/item/krita-3-0-1-update-brings-numerous-fixes/
[36]:https://opensource.com/life/16/9/10-reasons-flowblade-linux-video-editor
[37]:https://opensource.com/tags/kdenlive
[38]:https://opensource.com/life/11/11/introduction-kdenlive
[39]:http://jliljebl.github.io/flowblade/
[40]:http://pitivi.org/
[41]:http://wiki.pitivi.org/wiki/Why_Python%3F
[42]:https://gstreamer.freedesktop.org/
[43]:http://shotcut.org/
[44]:http://permalink.gmane.org/gmane.comp.lib.fltk.general/2397
[45]:http://www.dennedy.org/
[46]:http://openshot.org/
[47]:http://www.selapa.net/swatchbooker/
[48]:https://help.gnome.org/users/gnome-help/stable/color.html.en
[49]:https://help.gnome.org/users/gnome-help/stable/wacom.html.en
[50]:http://xournal.sourceforge.net/
[51]:https://wiki.gnome.org/Apps/PdfMod
[52]:https://www.sparkleshare.org/
[53]:https://opensource.com/life/16/4/how-use-darktable-digital-darkroom
[54]:https://entangle-photo.org/
[55]:http://hugin.sourceforge.net/
[56]:https://opensource.com/article/16/12/synfig-studio-animation-software-tutorial
[57]:https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.78/GPencil
[58]:https://opensource.com/tags/krita
[59]:https://opensource.com/tags/audacity
[60]:https://ardour.org/
[61]:http://www.hydrogen-music.org/
[62]:http://mixxx.org/
[63]:http://www.rosegardenmusic.com/
[64]:https://opensource.com/life/16/03/musescore-tutorial
[65]:http://makehuman.org/
[66]:https://natron.fr/
[67]:http://fontforge.github.io/en-US/
[68]:http://valentina-project.org/
[69]:https://www.calligra.org/flow/
@ -1,69 +0,0 @@
|
||||
Why we need an open model to design and evaluate public policy
|
||||
============================================================
|
||||
|
||||
### Imagine an app that allows citizens to test drive proposed policies.

![Why we need an open model to design and evaluate public policy](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/GOV_citizen_participation.jpg?itok=eeLWQgev "Why we need an open model to design and evaluate public policy")

Image by: opensource.com

In the months leading up to political elections, public debate intensifies and citizens are exposed to a proliferation of information around policy options. In a data-driven society where new insights have been informing decision-making, a deeper understanding of this information has never been more important, yet the public still hasn't realized the full potential of public policy modeling.
|
||||
|
||||
At a time where the concept of "open government" is constantly evolving to keep pace with new technological advances, government policy models and analysis could be the new generation of open knowledge.
|
||||
|
||||
Government Open Source Models (GOSMs) refer to the idea that government-developed models, whose purpose is to design and evaluate policy, are freely available to everyone to use, distribute, and modify without restrictions. The community could potentially improve the quality, reliability, and accuracy of policy modeling, creating new data-driven apps that benefit the public.
|
||||
|
||||
Today's generation interacts with technology like it's second nature, absorbing vast amounts of information tacitly. What if we could interact with different public policies in a virtual, immersive environment using a GOSM?
|
||||
|
||||
Imagine an app that allows citizens to test drive proposed policies to determine the future in which they want to live. They would instinctively learn the key drivers and what to expect. Before long the public would have a deeper understanding of public policy impacts and become more savvy at navigating the controversial terrains of public debate.
|
||||
|
||||
Why haven't we had greater access to these models before? The reason lies behind the veils of public policy modeling.
|
||||
|
||||
In a society as complex as the one we live in, quantifying policy impacts is a difficult task and has been described as a "fine-art." Moreover, most government policy models are based on administrative and other privately held data. Nevertheless, policy analysts valiantly go about their quest with the aim of guiding policy design, and many a political battle has been won with a quantitative call to arms.
|
||||
|
||||
Numbers are powerful. They build credibility and are often used as a rationale for introducing new policies. The development of public policy models lends power to both politicians and bureaucrats, who may be reluctant to disrupt the status quo. Giving that up may not be easy, but GOSMs provide an opportunity for unprecedented public policy reform.
|
||||
|
||||
GOSMs level the playing field for everyone: politicians, the media, lobby groups, stakeholders, and the general public. By opening the doors of policy evaluation to the community, governments can tap into new and undiscovered capabilities for creativity, innovation, and efficiency within the public sphere. But what are the practical implications for the strategic interactions between stakeholders and governments in public policy design?
|
||||
|
||||
GOSMs are unique because they are primarily a tool for designing public policy and do not necessarily require re-distribution for private gains. Stakeholders and lobby groups could potentially employ GOSMs along with their own privately held information to gain new insights into the workings of the policy environment in which they are economic players for private benefit.
|
||||
|
||||
Could GOSMs become a weapon where stakeholders hold the balance of power in public debate and strategize for optimal benefit?
|
||||
|
||||
Being a modifiable public good, GOSMs are, in principle, funded by the taxpayer and attributed to the state. Would it be ethically appropriate for private entities to gain from GOSMs without passing on the benefits to society? Unlike apps that may be used for more efficient service provision, alternate policy proposals are more likely to be used by consultancies and contribute to public debate.
|
||||
The open source community has frequently used the "copyleft license" to ensure that code and any derivative works under this license remains open to everyone. This works well when the product of value is the code itself, which requires re-distribution for maximum benefit. However, what if the code or GOSM redistribution is incidental to the main product, which may be new strategic insights into the existing policy environment?
|
||||
|
||||
At a time when privately collected data is becoming prolific, the real value behind GOSMs may be the underlying data, which could be used to refine the models themselves. Ultimately, government is the only consumer with the authority to implement policy, and stakeholders may choose to share their modified GOSMs in negotiations.
|
||||
|
||||
The big challenge government has when publicly releasing policy models is increasing transparency while protecting privacy. Ideally, releasing GOSMs would require securing closed data in a way that preserves the key features of the modeling.
|
||||
|
||||
Publicly releasing GOSMs empowers citizens by promoting greater understanding of and participation in our democracy, which would lead to improved policy outcomes and greater public satisfaction. In an open government utopia, open public policy development will be a collaborative effort between government and the community, where knowledge, data, and analysis are freely available to everyone.
|
||||
_Learn more in Audrey Lobo-Pulo's talk at linux.conf.au 2017 ([#lca2017][1]) in Hobart: [Publicly Releasing Government Models][2]._
|
||||
|
||||
_Disclaimer: The views presented in this article belong to Audrey Lobo-Pulo and are not necessarily those of the Australian Government._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/1-_mg_2552.jpg?itok=-RflZ4Wv)
|
||||
|
||||
Audrey Lobo-Pulo - Dr. Audrey Lobo-Pulo is a co-founder of Phoensight and is an advocate for open government and open source software in government modelling. A physicist, she moved to working in economic policy modelling after joining the Australian Public Service. Audrey has been involved in modelling a wide range of economic policy options and is currently interested in government open data and open policy modelling. Audrey's vision for government is to bring data science to public policy analytics.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/government-open-source-models
|
||||
|
||||
作者:[Audrey Lobo-Pulo ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/audrey-lobo-pulo
[1]:https://twitter.com/search?q=%23lca2017&src=typd
[2]:https://linux.conf.au/schedule/presentation/31/
@ -1,3 +1,5 @@
|
||||
vim-kakali translating
|
||||
|
||||
3 open source music players: Aqualung, Lollypop, and GogglesMM
|
||||
============================================================
|
||||
![3 open source music players: Aqualung, Lollypop, and GogglesMM](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/music-birds-recording-520.png?itok=wvh1g4Lw "3 open source music players: Aqualung, Lollypop, and GogglesMM")
|
||||
|
@ -1,562 +0,0 @@
|
||||
How to Install Elastic Stack on CentOS 7
|
||||
============================================================
|
||||
|
||||
### On this page
|
||||
|
||||
1. [Step 1 - Prepare the Operating System][1]
|
||||
2. [Step 2 - Install Java][2]
|
||||
3. [Step 3 - Install and Configure Elasticsearch][3]
|
||||
4. [Step 4 - Install and Configure Kibana with Nginx][4]
|
||||
5. [Step 5 - Install and Configure Logstash][5]
|
||||
6. [Step 6 - Install and Configure Filebeat on the CentOS Client][6]
|
||||
7. [Step 7 - Install and Configure Filebeat on the Ubuntu Client][7]
|
||||
8. [Step 8 - Testing][8]
|
||||
9. [Reference][9]
|
||||
|
||||
**Elasticsearch** is an open source search engine based on Lucene, developed in Java. It provides a distributed and multitenant full-text search engine with an HTTP dashboard web interface (Kibana). The data is queried, retrieved and stored in a JSON document scheme. Elasticsearch is a scalable search engine that can be used to search all kinds of text documents, including log files. Elasticsearch is the heart of the 'Elastic Stack' or ELK Stack.
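
To make the JSON document model concrete, here is a minimal sketch of storing and querying a document over HTTP; the `logs` index and its fields are made-up examples, and the header-less curl style below matches the Elasticsearch 5.x API used in this tutorial:

```bash
# Index a JSON document into a hypothetical "logs" index.
curl -XPOST 'localhost:9200/logs/syslog?pretty' -d '{
  "message": "Accepted password for root from 10.0.15.11",
  "host": "client1"
}'

# Query it back with a full-text search.
curl -XGET 'localhost:9200/logs/_search?q=message:password&pretty'
```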
**Logstash** is an open source tool for managing events and logs. It provides real-time pipelining for data collections. Logstash will collect your log data, convert the data into JSON documents, and store them in Elasticsearch.
|
||||
|
||||
**Kibana** is an open source data visualization tool for Elasticsearch. Kibana provides a pretty dashboard web interface. It allows you to manage and visualize data from Elasticsearch. It's not just beautiful, but also powerful.
|
||||
|
||||
In this tutorial, I will show you how to install and configure Elastic Stack on a CentOS 7 server for monitoring server logs. Then I'll show you how to install 'Elastic Beats' on a CentOS 7 and an Ubuntu 16 client operating system.
|
||||
**Prerequisite**
|
||||
|
||||
* CentOS 7 64 bit with 4GB of RAM - elk-master
|
||||
* CentOS 7 64 bit with 1 GB of RAM - client1
|
||||
* Ubuntu 16 64 bit with 1GB of RAM - client2
|
||||
|
||||
### Step 1 - Prepare the Operating System
|
||||
|
||||
In this tutorial, we will disable SELinux on the CentOS 7 server. Edit the SELinux configuration file.
|
||||
|
||||
vim /etc/sysconfig/selinux
|
||||
|
||||
Change SELINUX value from enforcing to disabled.
|
||||
|
||||
SELINUX=disabled
|
||||
|
||||
Then reboot the server.
|
||||
|
||||
reboot
|
||||
|
||||
Login to the server again and check the SELinux state.
|
||||
|
||||
getenforce
|
||||
|
||||
Make sure the result is disabled.
|
||||
|
||||
### Step 2 - Install Java
|
||||
|
||||
Java is required for the Elastic Stack deployment. Elasticsearch requires Java 8; it is recommended to use the Oracle JDK 1.8. I will install Java 8 from the official Oracle rpm package.
|
||||
Download Java 8 JDK with the wget command.
|
||||
|
||||
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"
|
||||
|
||||
Then install it with this rpm command:
|
||||
rpm -ivh jdk-8u77-linux-x64.rpm
|
||||
|
||||
Finally, check java JDK version to ensure that it is working properly.
|
||||
|
||||
java -version
|
||||
|
||||
You will see Java version of the server.
|
||||
|
||||
### Step 3 - Install and Configure Elasticsearch
|
||||
|
||||
In this step, we will install and configure Elasticsearch. I will install Elasticsearch from an rpm package provided by elastic.co and configure it to run on localhost (to make the setup secure and ensure that it is not reachable from the outside).
|
||||
|
||||
Before installing Elasticsearch, add the elastic.co key to the server.
|
||||
|
||||
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
|
||||
|
||||
Next, download Elasticsearch 5.1 with wget and then install it.
|
||||
|
||||
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
|
||||
rpm -ivh elasticsearch-5.1.1.rpm
|
||||
|
||||
Elasticsearch is installed. Now go to the configuration directory and edit the elasticsearch.yml configuration file.
|
||||
cd /etc/elasticsearch/
|
||||
vim elasticsearch.yml
|
||||
|
||||
Enable the memory lock for Elasticsearch by uncommenting line 40. This disables memory swapping for Elasticsearch.
|
||||
bootstrap.memory_lock: true
|
||||
|
||||
In the 'Network' block, uncomment the network.host and http.port lines.
|
||||
|
||||
network.host: localhost
|
||||
http.port: 9200
|
||||
|
||||
Save the file and exit the editor.
|
||||
|
||||
Now edit the elasticsearch.service file for the memory lock configuration.
|
||||
|
||||
vim /usr/lib/systemd/system/elasticsearch.service
|
||||
|
||||
Uncomment LimitMEMLOCK line.
|
||||
|
||||
LimitMEMLOCK=infinity
|
||||
|
||||
Save and exit.
|
||||
|
||||
Edit the sysconfig configuration file for Elasticsearch.
|
||||
|
||||
vim /etc/sysconfig/elasticsearch
|
||||
|
||||
Uncomment line 60 and make sure the value is 'unlimited'.
|
||||
|
||||
MAX_LOCKED_MEMORY=unlimited
|
||||
|
||||
Save and exit.
|
||||
|
||||
The Elasticsearch configuration is finished. Elasticsearch will run on the localhost IP address on port 9200, we disabled memory swapping for it by enabling mlockall on the CentOS server.
|
||||
|
||||
Reload systemd, enable Elasticsearch to start at boot time, then start the service.
|
||||
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl enable elasticsearch
|
||||
sudo systemctl start elasticsearch
|
||||
|
||||
Wait a moment for Elasticsearch to start, then check the open ports on the server and make sure the 'state' for port 9200 is 'LISTEN'.
|
||||
netstat -plntu
|
||||
|
||||
[
|
||||
![Check elasticsearch running on port 9200](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/1.png)
|
||||
][10]
|
||||
|
||||
Then check the memory lock to ensure that mlockall is enabled, and check that Elasticsearch is running with the commands below.
|
||||
|
||||
curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
|
||||
curl -XGET 'localhost:9200/?pretty'
|
||||
|
||||
You will see the results below.
|
||||
|
||||
[
|
||||
![Check memory lock elasticsearch and check status](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/2.png)
|
||||
][11]
|
||||
|
||||
### Step 4 - Install and Configure Kibana with Nginx
|
||||
|
||||
In this step, we will install and configure Kibana with a Nginx web server. Kibana will listen on the localhost IP address and Nginx acts as a reverse proxy for the Kibana application.
|
||||
|
||||
Download Kibana 5.1 with wget, then install it with the rpm command:
|
||||
|
||||
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
|
||||
rpm -ivh kibana-5.1.1-x86_64.rpm
|
||||
|
||||
Now edit the Kibana configuration file.
|
||||
|
||||
vim /etc/kibana/kibana.yml
|
||||
|
||||
Uncomment the configuration lines for server.port, server.host and elasticsearch.url.
|
||||
|
||||
server.port: 5601
|
||||
server.host: "localhost"
|
||||
elasticsearch.url: "http://localhost:9200"
|
||||
|
||||
Save and exit.
|
||||
|
||||
Add Kibana to run at boot and start it.
|
||||
|
||||
sudo systemctl enable kibana
|
||||
sudo systemctl start kibana
|
||||
|
||||
Kibana will run on port 5601 as node application.
|
||||
|
||||
netstat -plntu
|
||||
|
||||
[
|
||||
![Kibana running as node application on port 5601](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/3.png)
|
||||
][12]
|
||||
|
||||
The Kibana installation is finished. Now we need to install Nginx and configure it as reverse proxy to be able to access Kibana from the public IP address.
|
||||
|
||||
Nginx is available in the Epel repository, install epel-release with yum.
|
||||
|
||||
yum -y install epel-release
|
||||
|
||||
Next, install the Nginx and httpd-tools package.
|
||||
|
||||
yum -y install nginx httpd-tools
|
||||
|
||||
The httpd-tools package contains tools for the web server; we will use htpasswd basic authentication for Kibana.
|
||||
Edit the Nginx configuration file and remove the **'server { }**' block, so we can add a new virtual host configuration.
|
||||
|
||||
cd /etc/nginx/
|
||||
vim nginx.conf
|
||||
|
||||
Remove the server { } block.
|
||||
|
||||
[
|
||||
![Remove Server Block on Nginx configuration](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/4.png)
|
||||
][13]
|
||||
|
||||
Save and exit.
|
||||
|
||||
Now we need to create a new virtual host configuration file in the conf.d directory. Create the new file 'kibana.conf' with vim.
|
||||
|
||||
vim /etc/nginx/conf.d/kibana.conf
|
||||
|
||||
Paste the configuration below.
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80;
|
||||
|
||||
server_name elk-stack.co;
|
||||
|
||||
auth_basic "Restricted Access";
|
||||
auth_basic_user_file /etc/nginx/.kibana-user;
|
||||
|
||||
location / {
|
||||
proxy_pass http://localhost:5601;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection 'upgrade';
|
||||
proxy_set_header Host $host;
|
||||
proxy_cache_bypass $http_upgrade;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Save and exit.
|
||||
|
||||
Then create a new basic authentication file with the htpasswd command.
|
||||
|
||||
sudo htpasswd -c /etc/nginx/.kibana-user admin
|
||||
TYPE YOUR PASSWORD
|
||||
|
||||
Test the Nginx configuration and make sure there is no error. Then add Nginx to run at the boot time and start Nginx.
|
||||
|
||||
nginx -t
|
||||
systemctl enable nginx
|
||||
systemctl start nginx
|
||||
|
||||
[
|
||||
![Add nginx virtual host configuration for Kibana Application](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/5.png)
|
||||
][14]
|
||||
|
||||
### Step 5 - Install and Configure Logstash
|
||||
|
||||
In this step, we will install Logstash and configure it to centralize server logs from clients with Filebeat, then filter and transform the syslog data and move it into the stash (Elasticsearch).
|
||||
Download Logstash and install it with rpm.
|
||||
|
||||
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
|
||||
rpm -ivh logstash-5.1.1.rpm
|
||||
|
||||
Generate a new SSL certificate file so that the client can identify the elastic server.
|
||||
|
||||
Go to the tls directory and edit the openssl.cnf file.
|
||||
|
||||
cd /etc/pki/tls
|
||||
vim openssl.cnf
|
||||
|
||||
Add a new line in the '[ v3_ca ]' section for the server identification.
|
||||
|
||||
[ v3_ca ]
|
||||
|
||||
# Server IP Address
|
||||
subjectAltName = IP: 10.0.15.10
|
||||
|
||||
Save and exit.
|
||||
|
||||
Generate the certificate file with the openssl command.
|
||||
|
||||
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
|
||||
|
||||
The certificate files can be found in the '/etc/pki/tls/certs/' and '/etc/pki/tls/private/' directories.
|
||||
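
If you want to verify that the IP address made it into the certificate as a subjectAltName, you can inspect the file (an optional sanity check):

```bash
# Print the certificate as text and look for the SAN entry.
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text \
  | grep -A1 'Subject Alternative Name'
```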
|
||||
Next, we will create new configuration files for Logstash. We will create a new 'filebeat-input.conf' file to configure the log sources for filebeat, then a 'syslog-filter.conf' file for syslog processing and the 'output-elasticsearch.conf' file to define the Elasticsearch output.
|
||||
|
||||
Go to the logstash configuration directory and create the new configuration files in the 'conf.d' subdirectory.
|
||||
|
||||
cd /etc/logstash/
|
||||
vim conf.d/filebeat-input.conf
|
||||
|
||||
Input configuration: paste the configuration below.
|
||||
|
||||
```
|
||||
input {
|
||||
beats {
|
||||
port => 5443
|
||||
ssl => true
|
||||
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
|
||||
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Save and exit.
|
||||
|
||||
Create the syslog-filter.conf file.
|
||||
|
||||
vim conf.d/syslog-filter.conf
|
||||
|
||||
Paste the configuration below.
|
||||
|
||||
```
|
||||
filter {
|
||||
if [type] == "syslog" {
|
||||
grok {
|
||||
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
|
||||
add_field => [ "received_at", "%{@timestamp}" ]
|
||||
add_field => [ "received_from", "%{host}" ]
|
||||
}
|
||||
date {
|
||||
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
We use a filter plugin named '**grok**' to parse the syslog files.
|
||||
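
As an illustration, a syslog line like the following (a made-up example) would be broken by that grok pattern into named fields:

```
# Hypothetical input line:
Feb  1 12:34:56 client1 sshd[1234]: Failed password for invalid user admin

# Fields extracted by the grok match:
syslog_timestamp => "Feb  1 12:34:56"
syslog_hostname  => "client1"
syslog_program   => "sshd"
syslog_pid       => "1234"
syslog_message   => "Failed password for invalid user admin"
```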
|
||||
Save and exit.
|
||||
|
||||
Create the output configuration file 'output-elasticsearch.conf'.
|
||||
|
||||
vim conf.d/output-elasticsearch.conf
|
||||
|
||||
Paste the configuration below.
|
||||
|
||||
```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```
|
||||
|
||||
Save and exit.
|
||||
|
||||
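
Before starting the service, you can let Logstash validate the pipeline configuration; the binary lives under /usr/share/logstash for the rpm package, so adjust the path if your layout differs:

```bash
# Parse the pipeline configs and exit; a broken config fails loudly here
# instead of at service start.
/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  --config.test_and_exit -f /etc/logstash/conf.d/
```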
Finally add logstash to start at boot time and start the service.
|
||||
|
||||
sudo systemctl enable logstash
|
||||
sudo systemctl start logstash
|
||||
|
||||
[
|
||||
![Logstash started on port 5443 with SSL Connection](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/6.png)
|
||||
][15]
|
||||
|
||||
### Step 6 - Install and Configure Filebeat on the CentOS Client
|
||||
|
||||
Beats are data shippers, lightweight agents that can be installed on client nodes to send huge amounts of data from the client machine to the Logstash or Elasticsearch server. There are 4 beats available: 'Filebeat' for log files, 'Metricbeat' for metrics, 'Packetbeat' for network data and 'Winlogbeat' for the Windows client event log.
|
||||
In this tutorial, I will show you how to install and configure 'Filebeat' to transfer data log files to the Logstash server over an SSL connection.
|
||||
|
||||
Login to the client1 server. Then copy the certificate file from the elastic server to the client1 server.
|
||||
|
||||
ssh root@client1IP
|
||||
|
||||
Copy the certificate file with the scp command.
|
||||
|
||||
scp root@elk-serverIP:~/logstash-forwarder.crt .
|
||||
TYPE elk-server password
|
||||
|
||||
Create a new directory and move certificate file to that directory.
|
||||
|
||||
sudo mkdir -p /etc/pki/tls/certs/
|
||||
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
|
||||
|
||||
Next, import the elastic key on the client1 server.
|
||||
|
||||
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
|
||||
|
||||
Download Filebeat and install it with rpm.
|
||||
|
||||
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
|
||||
rpm -ivh filebeat-5.1.1-x86_64.rpm
|
||||
|
||||
Filebeat has been installed, go to the configuration directory and edit the file 'filebeat.yml'.
|
||||
|
||||
cd /etc/filebeat/
|
||||
vim filebeat.yml
|
||||
|
||||
In the paths section on line 21, add the new log files. We will add two files '/var/log/secure' for ssh activity and '/var/log/messages' for the server log.
|
||||
|
||||
paths:
|
||||
- /var/log/secure
|
||||
- /var/log/messages
|
||||
|
||||
Add a new configuration on line 26 to define the syslog type files.
|
||||
|
||||
document_type: syslog
|
||||
Filebeat uses Elasticsearch as the output target by default. In this tutorial, we will change it to Logstash. Disable the Elasticsearch output by commenting out lines 83 and 85.
|
||||
Disable elasticsearch output.
|
||||
|
||||
#-------------------------- Elasticsearch output ------------------------------
|
||||
#output.elasticsearch:
|
||||
# Array of hosts to connect to.
|
||||
# hosts: ["localhost:9200"]
|
||||
|
||||
Now add the new logstash output configuration. Uncomment the logstash output configuration and change all values to the configuration that is shown below.
|
||||
output.logstash:
|
||||
# The Logstash hosts
|
||||
hosts: ["10.0.15.10:5443"]
|
||||
bulk_max_size: 1024
|
||||
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
|
||||
template.name: "filebeat"
|
||||
template.path: "filebeat.template.json"
|
||||
template.overwrite: false
|
||||
|
||||
Save the file and exit vim.
|
||||
|
||||
Add Filebeat to start at boot time and start it.
|
||||
|
||||
sudo systemctl enable filebeat
|
||||
sudo systemctl start filebeat
|
||||
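
Optionally, verify the configuration and the service state; the -configtest flag matches the Filebeat 5.x CLI, so check filebeat --help if your version differs:

```bash
# Validate the Filebeat configuration file (Filebeat 5.x syntax).
/usr/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml

# Confirm the service is running.
systemctl status filebeat
```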
|
||||
### Step 7 - Install and Configure Filebeat on the Ubuntu Client
|
||||
|
||||
Connect to the server by ssh.
|
||||
|
||||
ssh root@ubuntu-clientIP
|
||||
|
||||
Copy the certificate file to the client with the scp command.
|
||||
|
||||
scp root@elk-serverIP:~/logstash-forwarder.crt .
|
||||
|
||||
Create a new directory for the certificate file and move the file to that directory.
|
||||
|
||||
sudo mkdir -p /etc/pki/tls/certs/
|
||||
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
|
||||
|
||||
Add the elastic key to the server.
|
||||
|
||||
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
|
||||
|
||||
Download the Filebeat .deb package and install it with the dpkg command.
|
||||
|
||||
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
|
||||
dpkg -i filebeat-5.1.1-amd64.deb
|
||||
|
||||
Go to the filebeat configuration directory and edit the file 'filebeat.yml' with vim.
|
||||
|
||||
cd /etc/filebeat/
|
||||
vim filebeat.yml
|
||||
|
||||
Add the new log file paths in the paths configuration section.
|
||||
|
||||
paths:
|
||||
- /var/log/auth.log
|
||||
- /var/log/syslog
|
||||
|
||||
Set the document type to syslog.
|
||||
|
||||
document_type: syslog
|
||||
Disable elasticsearch output by adding comments to the lines shown below.
|
||||
|
||||
#-------------------------- Elasticsearch output ------------------------------
|
||||
#output.elasticsearch:
|
||||
# Array of hosts to connect to.
|
||||
# hosts: ["localhost:9200"]
|
||||
|
||||
Enable logstash output, uncomment the configuration and change the values as shown below.
|
||||
|
||||
output.logstash:
|
||||
# The Logstash hosts
|
||||
hosts: ["10.0.15.10:5443"]
|
||||
bulk_max_size: 1024
|
||||
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
|
||||
template.name: "filebeat"
|
||||
template.path: "filebeat.template.json"
|
||||
template.overwrite: false
|
||||
|
||||
Save the file and exit vim.
|
||||
|
||||
Add Filebeat to start at boot time and start it.
|
||||
|
||||
sudo systemctl enable filebeat
|
||||
sudo systemctl start filebeat
|
||||
|
||||
Check the service status.
|
||||
|
||||
systemctl status filebeat
|
||||
|
||||
[
|
||||
![Filebeat is running on the client Ubuntu](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/12.png)
|
||||
][16]
|
||||
|
||||
### Step 8 - Testing
|
||||
|
||||
Open your web browser and visit the elastic stack domain that you used in the Nginx configuration, mine is 'elk-stack.co'. Log in as the admin user with your password and press Enter to access the Kibana dashboard.
|
||||
[
|
||||
![Login to the Kibana Dashboard with Basic Auth](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/7.png)
|
||||
][17]
|
||||
|
||||
Create a new default index 'filebeat-*' and click on the 'Create' button.
|
||||
|
||||
[
|
||||
![Create First index filebeat for Kibana](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/8.png)
|
||||
][18]
|
||||
|
||||
The default index has been created. If you have multiple beats on the elastic stack, you can configure the default beat with just one click on the 'star' button.
|
||||
[
|
||||
![Filebeat index as default index on Kibana Dashboard](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/9.png)
|
||||
][19]
|
||||
|
||||
Go to the '**Discover**' menu and you will see all the log files from the elk-client1 and elk-client2 servers.
|
||||
[
|
||||
![Discover all Log Files from the Servers](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/10.png)
|
||||
][20]
|
||||
|
||||
An example of JSON output from the elk-client1 server log for an invalid ssh login.
|
||||
|
||||
[
|
||||
![JSON output for Failed SSH Login](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/11.png)
|
||||
][21]
|
||||
|
||||
And there is much more that you can do with Kibana dashboard, just play around with the available options.
|
||||
|
||||
Elastic Stack has been installed on a CentOS 7 server. Filebeat has been installed on a CentOS 7 and a Ubuntu client.
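
As a final end-to-end check, you can trigger a log event on a client and confirm it reaches Elasticsearch; the index pattern below matches the filebeat-* indices created above, and a failed SSH login is just one convenient way to generate a line in /var/log/secure:

```bash
# On a client: produce a syslog entry (e.g., a failed SSH login attempt).
ssh nosuchuser@localhost

# On the elk-master: search the Filebeat indices for it.
curl -XGET 'localhost:9200/filebeat-*/_search?q=message:nosuchuser&pretty'
```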

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/

作者:[Muhammad Arul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
[1]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-nbspprepare-the-operating-system
[2]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-java
[3]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-elasticsearch
[4]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-kibana-with-nginx
[5]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-logstash
[6]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-centos-client
[7]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-ubuntu-client
[8]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-testing
[9]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#reference
[10]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/1.png
[11]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/2.png
[12]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/3.png
[13]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/4.png
[14]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/5.png
[15]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/6.png
[16]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/12.png
[17]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/7.png
[18]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/8.png
[19]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/9.png
[20]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/10.png
[21]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/11.png
@ -1,195 +0,0 @@
|
||||
ictlyh Translating
|
||||
lnav – An Advanced Console Based Log File Viewer for Linux
|
||||
============================================================
|
||||
|
||||
[LNAV][3], which stands for Log file Navigator, is an advanced console-based log file viewer for Linux. It does the same job as other file viewers such as cat, more and tail, but offers enhanced features that normal file viewers lack (in particular, it comes with color-coded, easy-to-read formatting).
|
||||
|
||||
It can decompress all compressed log files (zip, gzip, bzip) on the fly and merge them together for easy navigation. lnav merges more than one log file into a single view (Single Log View) based on message timestamps, which reduces the need for multiple open windows. The color bars on the left-hand side help to show which file a message belongs to.
|
||||
|
||||
Warnings and errors are highlighted in the display (yellow and red, respectively), so we can easily see where problems have occurred. New log lines are loaded automatically.
|
||||
|
||||
It displays log messages from all files, sorted by message timestamp. The top and bottom status bars tell you where you are in the logs. If you want to grep for a particular pattern, just type it at the search prompt and it will be highlighted instantly.
|
||||
|
||||
The built-in log message parser can automatically discover and extract detailed information from each line.
|
||||
|
||||
A server log is a log file that is created and frequently updated by a server to capture all the activity of a particular service or application. It can be very useful when you have an issue with an application or service: log files hold all the information about the issue, such as when the service started behaving abnormally, based on warning or error messages.
|
||||
|
||||
When you open a log file with a normal file viewer, it displays all the details in a plain format (to put it bluntly: plain white), which makes it very difficult to spot the warning and error messages. To overcome this and quickly find the warning and error messages while troubleshooting, lnav comes in handy.
|
||||
|
||||
Most of the common Linux log files are located at `/var/log/`.
|
||||
|
||||
**lnav automatically detect below log formats**
|
||||
|
||||
* Common Web Access Log format
|
||||
* CUPS page_log
|
||||
* Syslog
|
||||
* Glog
|
||||
* VMware ESXi/vCenter Logs
|
||||
* dpkg.log
|
||||
* uwsgi
|
||||
* “Generic” – Any message that starts with a timestamp
|
||||
* Strace
|
||||
* sudo
|
||||
* gzip & bzip2
|
||||
|
||||
**Awesome lnav features**
|
||||
|
||||
* Single Log View – All log file contents are merged into a single view based on message timestamps.
|
||||
* Automatic Log Format Detection – Most of the log format is supported by lnav
|
||||
* Filters – regular expressions based filters can be performed.
|
||||
* Timeline View
|
||||
* Pretty-Print View
|
||||
* Query Logs Using SQL
|
||||
* Automatic Data Extraction
|
||||
* “Live” Operation
|
||||
* Syntax Highlighting
|
||||
* Tab-completion
|
||||
* Session information is saved automatically and restored when you are viewing the same set of files.
|
||||
* Headless Mode
|
||||
|
||||
#### How to install lnav on Linux
|
||||
|
||||
Most distributions (Debian, Ubuntu, Mint, Fedora, SUSE, openSUSE, Arch Linux, Manjaro, Mageia, etc.) carry the lnav package in their official repositories, so we can easily install it with the distribution's package manager. For CentOS/RHEL we need to enable the **[EPEL Repository][1]**.
|
||||
|
||||
```
|
||||
[Install lnav on Debian/Ubuntu/LinuxMint]
|
||||
$ sudo apt-get install lnav
|
||||
|
||||
[Install lnav on RHEL/CentOS]
|
||||
$ sudo yum install lnav
|
||||
|
||||
[Install lnav on Fedora]
|
||||
$ sudo dnf install lnav
|
||||
|
||||
[Install lnav on openSUSE]
|
||||
$ sudo zypper install lnav
|
||||
|
||||
[Install lnav on Mageia]
|
||||
$ sudo urpmi lnav
|
||||
|
||||
[Install lnav on Arch Linux based system]
|
||||
$ yaourt -S lnav
|
||||
```
|
||||
|
||||
If your distribution doesn't have the lnav package, don't worry: the developer offers `.rpm` and `.deb` packages, so we can install it easily. Make sure to download the latest one from the [developer's GitHub page][4].
|
||||
|
||||
```
|
||||
[Install lnav on Debian/Ubuntu/LinuxMint]
|
||||
$ sudo wget https://github.com/tstack/lnav/releases/download/v0.8.1/lnav_0.8.1_amd64.deb
|
||||
$ sudo dpkg -i lnav_0.8.1_amd64.deb
|
||||
|
||||
[Install lnav on RHEL/CentOS]
|
||||
$ sudo yum install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
|
||||
|
||||
[Install lnav on Fedora]
|
||||
$ sudo dnf install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
|
||||
|
||||
[Install lnav on openSUSE]
|
||||
$ sudo zypper install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
|
||||
|
||||
[Install lnav on Mageia]
|
||||
$ sudo rpm -ivh https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
|
||||
```
|
||||
|
||||
#### Run lnav without any argument
|
||||
|
||||
By default, lnav opens the `syslog` file when run without any arguments.
|
||||
|
||||
```
|
||||
# lnav
|
||||
```
|
||||
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png)
|
||||
][5]
|
||||
|
||||
#### To view specific logs with lnav
|
||||
|
||||
To view a specific log with lnav, add the log file path after the lnav command. For example, we are going to view the `/var/log/dpkg.log` log.
|
||||
|
||||
```
|
||||
# lnav /var/log/dpkg.log
|
||||
```
|
||||
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png)
|
||||
][6]
|
||||
|
||||
#### To view multiple log files with lnav
|
||||
|
||||
To view multiple log files with lnav, add the log file paths one after another, separated by a single space, after the lnav command. For example, we are going to view the `/var/log/dpkg.log` and `/var/log/kern.log` logs.
|
||||
|
||||
The color bars on the left-hand side help to show which file a message belongs to; the top bar also shows the current log file name. Most applications open multiple windows, or split one window horizontally or vertically, to display more than one log, but lnav does it differently: it displays multiple logs in the same window, interleaved by timestamp.
|
||||
|
||||
```
|
||||
# lnav /var/log/dpkg.log /var/log/kern.log
|
||||
```
|
||||
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png)
|
||||
][7]
|
||||
|
||||
#### To view older/compressed logs with lnav
|
||||
|
||||
To view older or compressed logs (lnav decompresses zip, gzip and bzip files on the fly), add the `-r` option to the lnav command.
|
||||
|
||||
```
|
||||
# lnav -r /var/log/Xorg.0.log.old.gz
|
||||
```
|
||||
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png)
|
||||
][8]
|
||||
|
||||
#### Histogram view
|
||||
|
||||
First run `lnav`, then hit `i` to switch to/from the histogram view.
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png)
|
||||
][9]
|
||||
|
||||
#### View log parser results
|
||||
|
||||
First run `lnav`, then hit `p` to toggle the display of the log parser results.
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png)
|
||||
][10]
|
||||
|
||||
#### Syntax Highlighting
|
||||
|
||||
You can search for any given string, and it will be highlighted on screen. First run `lnav`, then hit `/` and type the string you want to find. For testing purposes, I searched for the string `Default`; see the screenshot below.
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png)
|
||||
][11]
|
||||
|
||||
#### Tab-completion
|
||||
|
||||
The command prompt supports tab-completion for almost all operations. For example, when doing a search, you can tab-complete words that are displayed on screen rather than having to copy and paste. For testing purposes, I searched for the string `/var/log/Xorg`; see the screenshot below.
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png)
|
||||
][12]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.2daygeek.com/install-and-use-advanced-log-file-viewer-navigator-lnav-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.2daygeek.com/author/magesh/
|
||||
[1]:http://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
|
||||
[2]:http://www.2daygeek.com/author/magesh/
|
||||
[3]:http://lnav.org/
|
||||
[4]:https://github.com/tstack/lnav/releases
|
||||
[5]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png
|
||||
[6]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png
|
||||
[7]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png
|
||||
[8]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png
|
||||
[9]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png
|
||||
[10]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png
|
||||
[11]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png
|
||||
[12]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png
|
@ -1,129 +0,0 @@
|
||||
translating by Flowsnow
|
||||
|
||||
# [Use tmux for a more powerful terminal][3]
|
||||
|
||||
|
||||
![](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/tmux-945x400.jpg)
|
||||
|
||||
Some Fedora users spend most or all their time at a [command line][4] terminal. The terminal gives you access to your whole system, as well as thousands of powerful utilities. However, it only shows you one command line session at a time by default. Even with a large terminal window, the entire window only shows one session. This wastes space, especially on large monitors and high resolution laptop screens. But what if you could break up that terminal into multiple sessions? This is precisely where _tmux_ is handy — some say indispensable.
|
||||
|
||||
### Install and start _tmux_
|
||||
|
||||
The _tmux_ utility gets its name from being a terminal muxer, or multiplexer. In other words, it can break your single terminal session into multiple sessions. It manages both _windows_ and _panes_ :
|
||||
|
||||
* A _window_ is a single view — that is, an assortment of things shown in your terminal.
|
||||
* A _pane_ is one part of that view, often a terminal session.
|
||||
|
||||
To get started, install the _tmux_ utility on your system. You’ll need to have _sudo_ setup for your user account ([check out this article][5] for instructions if needed).
|
||||
|
||||
```
|
||||
sudo dnf -y install tmux
|
||||
```
|
||||
|
||||
Run the utility to get started:
|
||||
|
||||
tmux
|
||||
|
||||
### The status bar
|
||||
|
||||
At first, it might seem like nothing happens, other than a status bar that appears at the bottom of the terminal:
|
||||
|
||||
![Start of tmux session](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-41.png)
|
||||
|
||||
The bottom bar shows you:
|
||||
|
||||
* _[0]_ – You're in the first session that was created by the _tmux_ server. Numbering starts with 0. The server tracks all sessions whether they're still alive or not.
* _0:username@host:~_ – Information about the first window of that session. Numbering starts with 0. The terminal in the active pane of the window is owned by _username_ at hostname _host_. The current directory is _~_ (the home directory).
* _*_ – Shows that you're currently in this window.
* _"hostname"_ – The hostname of the _tmux_ server you're using.
* Also, the date and time on that particular host is shown.
|
||||
|
||||
The information bar will change as you add more windows and panes to the session.
|
||||
|
||||
### Basics of tmux
|
||||
|
||||
Stretch your terminal window to make it much larger. Now let’s experiment with a few simple commands to create additional panes. All commands by default start with _Ctrl+b_ .
|
||||
|
||||
* Hit _Ctrl+b, “_ to split the current single pane horizontally. Now you have two command line panes in the window, one on top and one on bottom. Notice that the new bottom pane is your active pane.
|
||||
* Hit _Ctrl+b, %_ to split the current pane vertically. Now you have three command line panes in the window. The new bottom right pane is your active pane.
|
||||
|
||||
![tmux window with three panes](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-59.png)
|
||||
|
||||
Notice the highlighted border around your current pane. To navigate around panes, do any of the following:
|
||||
|
||||
* Hit _Ctrl+b _ and then an arrow key.
|
||||
* Hit _Ctrl+b, q_ . Numbers appear on the panes briefly. During this time, you can hit the number for the pane you want.
|
||||
|
||||
Now, try using the panes to run different commands. For instance, try this:
|
||||
|
||||
* Use _ls_ to show directory contents in the top pane.
|
||||
* Start _vi_ in the bottom left pane to edit a text file.
|
||||
* Run _top_ in the bottom right pane to monitor processes on your system.
|
||||
|
||||
The display will look something like this:
|
||||
|
||||
![tmux session with three panes running different commands](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-57-51.png)
|
||||
|
||||
So far, this example has only used one window with multiple panes. You can also run multiple windows in your session.
|
||||
|
||||
* To create a new window, hit _Ctrl+b, c._ Notice that the status bar now shows two windows running. (Keen readers will see this in the screenshot above.)
|
||||
* To move to the previous window, hit _Ctrl+b, p._
|
||||
* If you want to move to the next window, hit _Ctrl+b, n_ .
|
||||
* To immediately move to a specific window (0-9), hit _Ctrl+b_ followed by the window number.
|
||||
|
||||
If you’re wondering how to close a pane, simply quit that specific command line shell using _exit_ , _logout_ , or _Ctrl+d._ Once you close all panes in a window, that window disappears as well.
|
||||
|
||||
### Detaching and attaching
|
||||
|
||||
One of the most powerful features of _tmux_ is the ability to detach and reattach to a session. You can leave your windows and panes running when you detach. Moreover, you can even logout of the system entirely. Then later you can login to the same system, reattach to the _tmux_ session, and see all your windows and panes where you left them. The commands you were running stay running while you’re detached.
|
||||
|
||||
To detach from a session, hit _Ctrl+b, d._ The session disappears and you’ll be back at the standard single shell. To reattach to the session, use this command:
|
||||
|
||||
```
|
||||
tmux attach-session
|
||||
```
|
||||
|
||||
This function is also a lifesaver when your network connection to a host is shaky. If your connection fails, all the processes in the session will stay running. Once your connection is back up, you can resume your work as if nothing happened.
|
||||
|
||||
And if that weren’t enough, on top of multiple windows and panes per session, you can also run multiple sessions. You can list these and then attach to the correct one by number or name:
|
||||
|
||||
```
|
||||
tmux list-sessions
|
||||
```
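When more than one session shows up in that list, you can attach to a specific one by number or name. Named sessions are usually easier to keep track of; a short example (the session name _work_ is just an illustration):

```
# Start a named session, detach with Ctrl+b, d, then reattach by name
tmux new-session -s work
tmux attach-session -t work
```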
|
||||
|
||||
### Further reading
|
||||
|
||||
This article only scratches the surface of _tmux_'s capabilities. You can manipulate your sessions in many other ways:
|
||||
|
||||
* Swap one pane with another
|
||||
* Move a pane to another window (in the same or a different session!)
|
||||
* Set keybindings that perform your favorite commands automatically
|
||||
* Configure a _~/.tmux.conf_ file with your favorite settings by default so each new session looks the way you like (see the sketch after this list)
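As a concrete illustration of that last point, here is a minimal _~/.tmux.conf_ sketch. The option names are standard tmux settings; the specific values are just personal-preference assumptions:

```
# ~/.tmux.conf -- a small starting configuration
set -g base-index 1       # number windows from 1 instead of 0
set -g mouse on           # mouse support for selecting and resizing panes (tmux 2.1+)
set -g status-bg black    # status bar colors
set -g status-fg white
bind r source-file ~/.tmux.conf \; display "Config reloaded"   # Ctrl+b, r reloads this file
```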
|
||||
|
||||
For a full explanation of all commands, check out these references:
|
||||
|
||||
* The official [manual page][1]
|
||||
* This [eBook][2] all about _tmux_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Paul W. Frields has been a Linux user and enthusiast since 1997, and joined the Fedora Project in 2003, shortly after launch. He was a founding member of the Fedora Project Board, and has worked on documentation, website publishing, advocacy, toolchain development, and maintaining software. He joined Red Hat as Fedora Project Leader from February 2008 to July 2010, and remains with Red Hat as an engineering manager. He currently lives with his wife and two children in Virginia.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/pfrields/
|
||||
[1]: http://man.openbsd.org/OpenBSD-current/man1/tmux.1
|
||||
[2]: https://pragprog.com/book/bhtmux2/tmux-2
|
||||
[3]: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
|
||||
[4]: http://www.cryptonomicon.com/beginning.html
|
||||
[5]: https://fedoramagazine.org/howto-use-sudo/
|
@ -1,3 +1,5 @@
|
||||
translating by Flowsnow!
|
||||
|
||||
Many SQL Performance Problems Stem from “Unnecessary, Mandatory Work”
|
||||
============================================================
|
||||
|
||||
|
@ -0,0 +1,234 @@
|
||||
响应式编程vs.响应式系统
|
||||
============================================================
|
||||
|
||||
>在恒久的迷惑与过多期待的海洋中,登上一组简单响应式设计原则的小岛。
|
||||
|
||||
>
|
||||
|
||||
![Micro Fireworks](https://d3tdunqjn7n0wj.cloudfront.net/360x240/micro_fireworks-db2d0a45f22f348719b393dd98ebefa2.jpg)
|
||||
|
||||
下载 Konrad Malawski 的免费电子书[《为什么选择响应式?企业应用中的基本原则》][5],深入了解更多响应式技术的知识与好处。
|
||||
|
||||
自从2013年一起合作写了[《响应式宣言》][23]之后,我们看着响应式从一种几乎无人知晓的软件构建技术——当时只有少数几个公司的边缘项目使用了这一技术——最后成为中间件领域(middleware field)大佬们全平台战略中的一部分。本文旨在定义和澄清响应式各个方面的概念,方法是比较在_响应式编程_风格下,以及把_响应式系统_视作一个紧密整体的设计方法下写代码的不同。
|
||||
|
||||
### 响应式是一组设计原则
|
||||
响应式技术目前成功的标志之一是“响应式”成为了一个热词,并且跟一些不同的事物与人联系在了一起——常常伴随着像“流(streaming)”,“轻量级(lightweight)”和“实时(real-time)”这样的词。
|
||||
|
||||
举个例子:当我们看到一支运动队时(像棒球队或者篮球队),我们一般会把他们看成一个个单独个体的组合,但是当他们之间碰撞不出火花,无法像一个团队一样高效地协作时,他们就会输给一个“更差劲”的队伍。从这篇文章的角度来看,响应式是一组设计原则,一种关于系统架构与设计的思考方式,一种关于在一个分布式环境下,当实现技术(implementation techniques),工具和设计模式都只是一个更大系统的一部分时如何设计的思考方式。
|
||||
|
||||
这个例子展示了不经考虑地将一堆软件拼揍在一起——尽管单独来看,这些软件都很优秀——和响应式系统之间的不同。在一个响应式系统中,正是_不同组件(parts)间的相互作用_让响应式系统如此不同,它使得不同组件能够独立地运作,同时又一致协作从而达到最终想要的结果。
|
||||
|
||||
_一个响应式系统_ 是一种架构风格(architectural style),它允许许多独立的应用结合在一起成为一个单元,共同响应它们所处的环境,同时保留着对单元内其它应用的“感知”——这能够表现为它能够做到放大/缩小规模(scale up/down),负载平衡,甚至能够主动地执行这些步骤。
|
||||
|
||||
以响应式的风格(或者说,通过响应式编程)写一个软件是可能的;然而,那也不过是拼图中的一块罢了。虽然在上面的提到的各个方面似乎都足以称其为“响应式的”,但仅就其它们自身而言,还不足以让一个_系统_成为响应式的。
|
||||
|
||||
当人们在软件开发与设计的语境下谈论“响应式”时,他们的意思通常是以下三者之一:
|
||||
|
||||
* 响应式系统(架构与设计)
|
||||
* 响应式编程(基于声明的事件的)
|
||||
* 函数响应式编程(FRP)
|
||||
|
||||
我们将检查这些做法与技术的意思,特别是前两个。更明确地说,我们会在使用它们的时候讨论它们,例如它们是怎么联系在一起的,从它们身上又能到什么样的好处——特别是在为多核、云或移动架构搭建系统的情境下。
|
||||
|
||||
让我们先来说一说函数响应式编程吧,以及我们在本文后面不再讨论它的原因。
|
||||
|
||||
### 函数响应式编程(FRP)
|
||||
|
||||
_函数响应式编程_,通常被称作_FRP_,是最常被误解的。FRP在二十年前就被Conal Elliott[精确地定义][24]了。但是最近这个术语却被错误地用来描述一些像Elm,Bacon.js的技术以及其它技术中的响应式插件(RxJava, Rx.NET, RxJS)。许多的库(libraries)声称他们支持FRP,事实上他们说的并非_响应式编程_,因此我们不会再进一步讨论它们。
|
||||
|
||||
### 响应式编程
|
||||
|
||||
_响应式编程_,不要把它跟_函数响应式编程_混淆了,它是异步编程下的一个子集,也是一种范式,在这种范式下,由新信息的有效性(availability)推动逻辑的前进,而不是让一条执行线程(a thread-of-execution)去推动控制流(control flow)。
|
||||
|
||||
它能够把问题分解为多个独立的步骤,这些独立的步骤可以以异步且非阻塞(non-blocking)的方式被执行,最后再组合在一起产生一条工作流(workflow)——它的输入和输出可能是非绑定的(unbounded)。
|
||||
|
||||
[“异步地(Asynchronous)”][25]被牛津词典定义为“不在同一时刻存在或发生”,在我们的语境下,它意味着一条消息或者一个事件可发生在任何时刻,有可能是在未来。这在响应式编程中是非常重要的一项技术,因为响应式编程允许[非阻塞式(non-blocking)][26]的执行方式——执行线程在竞争一块共享资源时不会因为阻塞(blocking)而陷入等待(防止执行线程在当前的工作完成之前执行任何其它操作),而是在共享资源被占用的期间转而去做其它工作。阿姆达尔定律(Amdahl's Law)[2][9]告诉我们,竞争是可伸缩性(scalability)最大的敌人,所以一个响应式系统应当在极少数的情况下才不得不做阻塞工作。
|
||||
|
||||
响应式编程一般是_事件驱动(event-driven)_ ,相比之下,响应式系统则是_消息驱动(message-driven)_ 的——事件驱动与消息驱动之间的差别会在文章后面阐明。
|
||||
|
||||
响应式编程库的应用程序接口(API)一般是以下二者之一:
|
||||
|
||||
* 基于回调的(Callback-based)——匿名的间接作用(side-effecting)回调函数被绑定在事件源(event sources)上,当事件被放入数据流(dataflow chain)中时,回调函数被调用。
|
||||
* 声明式的(Declarative)——通过函数的组合,通常是使用一些固定的函数,像 _map_, _filter_, _fold_ 等等。
|
||||
|
||||
大部分的库会混合这两种风格,一般还带有基于流(stream-based)的操作符(operators),像windowing, counts, triggers。
|
||||
|
||||
说响应式编程跟[数据流编程(dataflow programming)][27]有关是很合理的,因为它强调的是_数据流_而不是_控制流_。
|
||||
|
||||
举几个为这种编程技术提供支持的的编程抽象概念:
|
||||
|
||||
* [Futures/Promises][10]——一个值的容器,具有读共享/写独占(many-read/single-write)的语义,即使变量尚不可用也能够添加异步的值转换操作。
|
||||
* 流(streams)-[响应式流][11]——无限制的数据处理流,支持异步,非阻塞式,支持多个源与目的的反压转换管道(back-pressured transformation pipelines)。
|
||||
* [数据流变量][12]——依赖于输入,过程(procedures)或者其它单元的单赋值变量(存储单元)(single assignment variables),它能够自动更新值的改变。其中一个应用例子是表格软件——一个单元的值的改变会像涟漪一样荡开,影响到所有依赖于它的函数,顺流而下地使它们产生新的值。
|
||||
|
||||
在JVM中,支持响应式编程的流行库有Akka Streams、Ratpack、Reactor、RxJava和Vert.x等等。这些库实现了响应式编程的规范,成为JVM上响应式编程库之间的互通标准(standard for interoperability),并且根据它自身的叙述是“……一个为如何处理非阻塞式反压异步流提供标准的倡议”
|
||||
|
||||
响应式编程的基本好处是:提高多核和多CPU硬件的计算资源利用率;根据阿姆达尔定律以及引申的Günther的通用可伸缩性定律[3][13](Günther’s Universal Scalability Law),通过减少序列化点(serialization points)来提高性能。
|
||||
|
||||
另一个好处是开发者生产效率,传统的编程范式都尽力想提供一个简单直接的可持续的方法来处理异步非阻塞式计算和I/O。在响应式编程中,因活动(active)组件之间通常不需要明确的协作,从而也就解决了其中大部分的挑战。
|
||||
|
||||
响应式编程真正的发光点在于组件的创建跟工作流的组合。为了在异步执行上取得最大的优势,把[反压(back-pressure)][28]加进来是很重要,这样能避免过度使用,或者确切地说,无限度的消耗资源。
|
||||
|
||||
尽管如此,响应式编程在搭建现代软件上仍然非常有用,为了在更高层次上理解(reason about)一个系统,那么必须要使用到另一个工具:_响应式架构_——设计响应式系统的方法。此外,要记住编程范式有很多,而响应式编程仅仅只是其中一个,所以如同其它工具一样,响应式编程并不是万金油,它不意图适用于任何情况。
|
||||
|
||||
### 事件驱动 vs. 消息驱动
|
||||
如上面提到的,响应式编程——专注于短时间的数据流链条上的计算——因此倾向于_事件驱动_,而响应式系统——关注于通过分布式系统的通信和协作所得到的弹性和韧性——则是[_消息驱动的_][29][4][14](或者称之为 _消息式(messaging)_ 的)。
|
||||
|
||||
一个拥有长期存活的可寻址(long-lived addressable)组件的消息驱动系统跟一个事件驱动的数据流驱动模型的不同在于,消息具有固定的导向,而事件则没有。消息会有明确的(一个)去向,而事件则只是一段等着被观察(observe)的信息。另外,消息式(messaging)更适用于异步,因为消息的发送与接收和发送者和接收者是分离的。
|
||||
|
||||
响应式宣言中的术语表定义了两者之间[概念上的不同][30]:
|
||||
> 一条消息就是一则被送往一个明确目的地的数据。一个事件则是达到某个给定状态的组件发出的一个信号。在一个消息驱动系统中,可寻址到的接收者等待消息的到来然后响应它,否则保持休眠状态。在一个事件驱动系统中,通知的监听者被绑定到消息源上,这样当消息被发出时它就会被调用。这意味着一个事件驱动系统专注于可寻址的事件源而消息驱动系统专注于可寻址的接收者。
|
||||
|
||||
分布式系统需要通过消息在网络上传输进行交流,以实现其沟通基础,与之相反,事件的发出则是本地的。在底层通过发送包裹着事件的消息来搭建跨网络的事件驱动系统的做法很常见。这样能够维持在分布式环境下事件驱动编程模型的相对简易性并且在某些特殊的和合理范围内的使用案例上工作得很好。
|
||||
|
||||
然而,这是有利有弊的:在编程模型的抽象性和简易性上得一分,在控制上就减一分。消息强迫我们去拥抱分布式系统的真实性和一致性——像局部错误(partial failures),错误侦测(failure detection),丢弃/复制/重排序 消息(dropped/duplicated/reordered messages),最后还有一致性,管理多个并发真实性等等——然后直面它们,去处理它们,而不是像过去无数次一样,藏在一个蹩脚的抽象面罩后——假装网络并不存在(例如EJB, [RPC][31], [CORBA][32], 和 [XA][33])。
|
||||
|
||||
这些在语义学和适用性上的不同在应用设计中有着深刻的含义,包括分布式系统的复杂性(complexity)中的 _弹性(resilience)_, _韧性(elasticity)_,_移动性(mobility)_,_位置透明性(location transparency)_ 和 _管理(management)_,这些在文章后面再进行介绍。
|
||||
|
||||
在一个响应式系统中,特别是使用了响应式编程技术的,这样的系统中就即有事件也有消息——一个是用于沟通的强大工具(消息),而另一个则呈现现实(事件)。
|
||||
|
||||
### 响应式系统和架构
|
||||
|
||||
_响应式系统_ —— 如同在《响应式宣言》中定义的那样——是一组用于搭建现代系统——已充分准备好满足如今应用程序所面对的不断增长的需求的现代系统——的架构设计原则。
|
||||
|
||||
响应式系统的原则绝对不是什么新东西,它可以追溯到 70 和 80 年代 Jim Gray 和 Pat Helland 在[串联系统(Tandem System)][34]上,以及 Joe Armstrong 和 Robert Virding 在 [Erlang][35] 上做出的重大工作。然而,这些人在当时都超越了时代,只有到了最近 5-10 年,技术行业才不得不反思当前企业系统最好的开发实践,并且学习如何将来之不易的响应式原则应用到今天这个多核、云计算和物联网的世界中。
|
||||
|
||||
响应式系统的基石是_消息传递(message-passing)_ ,消息传递为两个组件之间创建一条暂时的边界,使得他们能够在 _时间_ 上分离——实现并发性——和 _空间(space)_ ——实现分布式(distribution)与移动性(mobility)。这种分离是两个组件完全[隔离(isolation)][36]以及实现 _弹性(resilience)_ 和 _韧性(elasticity)_ 基础的必需条件。
|
||||
|
||||
### 从程序到系统
|
||||
|
||||
这个世界的连通性正在变得越来越高。我们构建 _程序_ ——为单个操作子计算某些东西的端到端逻辑——已经不如我们构建 _系统_ 来得多了。
|
||||
|
||||
系统从定义上来说是复杂的——每一部分都包含多个组件,每个组件的自身或其子组件也可以是一个系统——这意味着软件要正常工作已经越来越依赖于其它软件。
|
||||
|
||||
我们今天构建的系统会在多个计算机上被操作,小型的或大型的,数量少的或数量多的,相近的或远隔半个地球的。同时,由于人们的生活正变得越来越依赖于系统顺畅运行的有效性,用户的期望也变得越得越来越难以满足。
|
||||
|
||||
为了实现用户——和企业——能够依赖的系统,这些系统必须是 _灵敏的(responsive)_ ,这样无论是某个东西提供了一个正确的响应,还是当需要一个响应时响应无法使用,都不会有影响。为了达到这一点,我们必须保证在错误( _弹性_ )和欠载( _韧性_ )下,系统仍然能够保持灵敏性。为了实现这一点,我们把系统设计为 _消息驱动的_ ,我们称其为 _响应式系统_ 。
|
||||
|
||||
### 响应式系统的弹性
|
||||
|
||||
弹性是与 _错误下_ 的灵敏性(responsiveness)有关的,它是系统内在的功能特性,是需要被设计的东西,而不是能够被动的加入系统中的东西。弹性是大于容错性的——弹性无关于故障退化(graceful degradation)——虽然故障退化对于系统来说是很有用的一种特性——与弹性相关的是与从错误中完全恢复达到 _自愈_ 的能力。这就需要组件的隔离以及组件对错误的包容,以免错误散播到其相邻组件中去——否则,通常会导致灾难性的连锁故障。
|
||||
|
||||
因此构建一个弹性的,自愈(self-healing)系统的关键是允许错误被:容纳,具体化为消息,发送给其他的(担当监管者的(supervisors))组件,从而在错误组件之外修复出一个安全环境。在这,消息驱动是其促成因素:远离高度耦合的、脆弱的深层嵌套的同步调用链,大家长期要么学会忍受其煎熬或直接忽略。解决的想法是将调用链中的错误管理分离,将客户端从处理服务端错误的责任中解放出来。
|
||||
|
||||
### 响应式系统的韧性
|
||||
|
||||
[韧性(Elasticity)][37]是关于 _欠载下的灵敏性(responsiveness)_ 的——意味着一个系统的吞吐量在资源增加或减少时能够自动地相应增加或减少(scales up or down)(同样能够向内或外扩展(scales in or out))以满足不同的需求。这是利用云计算承诺的特性所必需的因素:使系统利用资源更加有效,成本效益更佳,对环境友好以及实现按次付费。
|
||||
|
||||
系统必须能够在不重写甚至不重新设置的情况下,适应性地——即无需介入自动伸缩——响应状态及行为,沟通负载均衡,故障转移(failover),以及升级。实现这些的就是 _位置透明性(location transparency)_ :使用同一个方法,同样的编程抽象,同样的语义,在所有向度中伸缩(scaling)系统的能力——从CPU核心到数据中心。
|
||||
|
||||
如同《响应式宣言》所述:
|
||||
|
||||
> 一个极大地简化问题的关键洞见在于意识到我们都在使用分布式计算。无论我们的操作系统是运行在一个单一结点上(拥有多个独立的CPU,并通过QPI链接进行交流),还是在一个节点集群(cluster of nodes,独立的机器,通过网络进行交流)上。拥抱这个事实意味着在垂直方向上多核的伸缩与在水平方面上集群的伸缩并无概念上的差异。在空间上的解耦 [...],是通过异步消息传送以及运行时实例与其引用解耦从而实现的,这就是我们所说的位置透明性。
|
||||
|
||||
因此,不论接收者在哪里,我们都以同样的方式与它交流。唯一能够在语义上等同实现的方式是消息传送。
|
||||
|
||||
### 响应式系统的生产效率
|
||||
|
||||
既然大多数的系统生来即是复杂的,那么最重要的一点就是保证系统架构在开发和维护组件时,对生产效率的降低最小,同时将运维的 _偶发复杂性(accidental complexity)_ 降到最低。
|
||||
|
||||
这一点很重要,因为在一个系统的生命周期中——如果系统的设计不正确——系统的维护会变得越来越困难,理解、定位和解决问题所需要花费时间和精力会不断地上涨。
|
||||
|
||||
响应式系统是我们所知的最具 _生产效率_ 的系统架构(在多核、云及移动架构的背景下):
|
||||
|
||||
* 错误的隔离为组件与组件之间裹上[舱壁][15](译者注:当船遭到损坏进水时,舱壁能够防止水从损坏的船舱流入其他船舱),防止引发连锁错误,从而限制住错误的波及范围以及严重性。
|
||||
|
||||
* 监管者的层级制度提供了多个等级的防护,并配以自我修复能力,从而免除了调查(investigate)大量瞬时故障(transient failures)所需的运维成本(cost)。
|
||||
|
||||
* 消息传送和位置透明性允许组件被卸载下线、代替或重新布线(rerouted)同时不影响终端用户的使用体验,并降低中断的代价、它们的相对紧迫性以及诊断和修正所需的资源。
|
||||
|
||||
* 复制减少了数据丢失的风险,减轻了数据检索(retrieval)和存储的有效性错误的影响。
|
||||
|
||||
* 韧性允许在使用率波动时节省资源,允许在负载很低时最小化运维开销,并且允许在负载增加时,最小化运行中断(outage)或为伸缩性进行紧急投资(urgent investment)的风险。
|
||||
|
||||
因此,响应式系统使得所构建的系统能很好地应对错误和随时间变化的负载——同时还能保持低运营成本。
|
||||
|
||||
### 响应式编程与响应式系统的关联
|
||||
|
||||
响应式编程是一种管理内部逻辑(internal logic)和数据流转换(dataflow transformation)的好技术,在本地的组件中,做为一种优化代码清晰度、性能以及资源利用率的方法。响应式系统,是一组架构上的原则,旨在强调分布式信息交流并为我们提供一种处理分布式系统弹性与韧性的工具。
|
||||
|
||||
只使用响应式编程常遇到的一个问题,是一个事件驱动的基于回调的或声明式的程序中两个计算阶段的高度耦合(tight coupling),使得 _弹性_ 难以实现,因此时它的转换链通常存活时间短,并且它的各个阶段——回调函数或组合子(combinator)——是匿名的,也就是不可寻址的。
|
||||
|
||||
这意味着,它通常在内部处理成功与错误的状态而不会向外界发送相应的信号。这种寻址能力的缺失导致单个阶段(stages)很难恢复,因为它通常并不清楚异常应该,甚至不清楚异常可以,发送到何处去。
|
||||
|
||||
另一个与响应式系统方法的不同之处在于单纯的响应式编程允许 _时间_ 上的解耦(decoupling),但不允许 _空间_ 上的(除非是如上面所述的,在底层通过网络传送消息来分发(distribute)数据流)。正如叙述的,在时间上的解耦使 _并发性_ 成为可能,但是是空间上的解耦使 _分布(distribution)_ 和 _移动性(mobility)_ (使得不仅仅静态拓扑可用,还包括了动态拓扑)成为可能的——而这些正是 _韧性_ 所必需的要素。
|
||||
|
||||
位置透明性的缺失使得很难以韧性方式对一个仅基于响应式编程技术的程序进行向外扩展,因为这样就需要附加工具,例如消息总线(message bus)、数据网格(data grid)或者在顶层的定制网络协议(bespoke network protocol)。而这点正是响应式系统的消息驱动编程的闪光之处,因为它是一个包含了其编程模型和所有伸缩向度语义的交流抽象概念,因此降低了复杂性与认知超载。
|
||||
|
||||
对于基于回调的编程,常会被提及的一个问题是写这样的程序或许相对来说会比较简单,但最终会引发一些真正的后果。
|
||||
|
||||
例如,对于基于匿名回调的系统,当你想理解它们,维护它们或最重要的是在生产供应中断(production outages)或错误行为发生时,你想知道到底发生了什么、发生在哪以及为什么发生,但此时它们只提供极少的内部信息。
|
||||
|
||||
为响应式系统设计的库与平台(例如[Akka][39]项目和[Erlang][40]平台)学到了这一点,它们依赖于那些更容易理解的长期存活的可寻址组件。当错误发生时,根据导致错误的消息可以找到唯一的组件。当可寻址的概念存在组件模型的核心中时,监控方案(monitoring solution)就有了一个 _有意义_ 的方式来呈现它收集的数据——利用传播(propagated)的身份标识。
|
||||
|
||||
一个好的编程范式的选择,一个选择实现像可寻址能力和错误管理这些东西的范式,已经被证明在生产中是无价的,因它在设计中承认了现实并非一帆风顺,_接受并拥抱错误的出现_ 而不是毫无希望地去尝试避免错误。
|
||||
|
||||
总而言之,响应式编程是一个非常有用的实现技术,可以用在响应式架构当中。但是记住这只能帮助管理一部分:异步且非阻塞执行下的数据流管理——通常只在单个结点或服务中。当有多个结点时,就需要开始认真地考虑像数据一致性(data consistency)、跨结点沟通(cross-node communication)、协调(coordination)、版本控制(versioning)、编制(orchestration)、错误管理(failure management)、关注与责任(concerns and responsibilities)的分离等等的东西——也即是:系统架构。
|
||||
|
||||
因此,要最大化响应式编程的价值,就把它作为构建响应式系统的工具来使用。构建一个响应式系统需要的不仅是在一个已存在的遗留下来的软件栈(software stack)上抽象掉特定的操作系统资源和少量的异步API和[断路器(circuit breakers)][41]。此时应该拥抱你在创建一个包含多个服务的分布式系统这一事实——这意味着所有东西都要共同合作,提供一致性与灵敏的体验,而不仅仅是如预期工作,但同时还要在发生错误和不可预料的负载下正常工作。
|
||||
|
||||
### 总结
|
||||
|
||||
企业和中间件供应商在目睹了应用响应式所带来的企业利润增长后,同样开始拥抱响应式。在本文中,我们把响应式系统做为企业最终目标进行描述——假设了多核、云和移动架构的背景——而响应式编程则从中担任重要工具的角色。
|
||||
|
||||
响应式编程在内部逻辑及数据流转换的组件层次上为开发者提高了生产率——通过性能与资源的有效利用实现。而响应式系统在构建 _原生云(cloud native)_ 和其它大型分布式系统的系统层次上为架构师及DevOps从业者提高了生产率——通过弹性与韧性。我们建议在响应式系统设计原则中结合响应式编程技术。
|
||||
|
||||
```
|
||||
1 参考Conal Elliott,FRP的发明者,见[这个演示][16][↩][17]
|
||||
2 [Amdahl 定律][18]揭示了系统理论上的加速会被一系列的子部件限制,这意味着系统在新的资源加入后会出现收益递减(diminishing returns)。 [↩][19]
|
||||
3 Neil Günter的[通用可伸缩性定律(Universal Scalability Law)][20]是理解并发与分布式系统的竞争与协作的重要工具,它揭示了当新资源加入到系统中时,保持一致性的开销会导致不好的结果。
|
||||
4 消息可以是同步的(要求发送者和接受者同时存在),也可以是异步的(允许他们在时间上解耦)。其语义上的区别超出本文的讨论范围。[↩][22]
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems
|
||||
|
||||
作者:[Jonas Bonér][a] , [Viktor Klang][b]
|
||||
译者:[XLCYun](https://github.com/XLCYun)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.oreilly.com/people/e0b57-jonas-boner
|
||||
[b]:https://www.oreilly.com/people/f96106d4-4ce6-41d9-9d2b-d24590598fcd
|
||||
[1]:https://www.flickr.com/photos/pixel_addict/2301302732
|
||||
[2]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
|
||||
[3]:https://www.oreilly.com/people/e0b57-jonas-boner
|
||||
[4]:https://www.oreilly.com/people/f96106d4-4ce6-41d9-9d2b-d24590598fcd
|
||||
[5]:http://www.oreilly.com/programming/free/why-reactive.csp?intcmp=il-webops-free-product-na_new_site_reactive_programming_vs_reactive_systems_text_cta
|
||||
[6]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
|
||||
[7]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
|
||||
[8]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-1
|
||||
[9]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-2
|
||||
[10]:https://en.wikipedia.org/wiki/Futures_and_promises
|
||||
[11]:http://reactive-streams.org/
|
||||
[12]:https://en.wikipedia.org/wiki/Oz_(programming_language)#Dataflow_variables_and_declarative_concurrency
|
||||
[13]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-3
|
||||
[14]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-4
|
||||
[15]:http://skife.org/architecture/fault-tolerance/2009/12/31/bulkheads.html
|
||||
[16]:https://begriffs.com/posts/2015-07-22-essence-of-frp.html
|
||||
[17]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-1
|
||||
[18]:https://en.wikipedia.org/wiki/Amdahl%2527s_law
|
||||
[19]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-2
|
||||
[20]:http://www.perfdynamics.com/Manifesto/USLscalability.html
|
||||
[21]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-3
|
||||
[22]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-4
|
||||
[23]:http://www.reactivemanifesto.org/
|
||||
[24]:http://conal.net/papers/icfp97/
|
||||
[25]:http://www.reactivemanifesto.org/glossary#Asynchronous
|
||||
[26]:http://www.reactivemanifesto.org/glossary#Non-Blocking
|
||||
[27]:https://en.wikipedia.org/wiki/Dataflow_programming
|
||||
[28]:http://www.reactivemanifesto.org/glossary#Back-Pressure
|
||||
[29]:http://www.reactivemanifesto.org/glossary#Message-Driven
|
||||
[30]:http://www.reactivemanifesto.org/glossary#Message-Driven
|
||||
[31]:https://christophermeiklejohn.com/pl/2016/04/12/rpc.html
|
||||
[32]:https://queue.acm.org/detail.cfm?id=1142044
|
||||
[33]:https://cs.brown.edu/courses/cs227/archives/2012/papers/weaker/cidr07p15.pdf
|
||||
[34]:http://www.hpl.hp.com/techreports/tandem/TR-86.2.pdf
|
||||
[35]:http://erlang.org/download/armstrong_thesis_2003.pdf
|
||||
[36]:http://www.reactivemanifesto.org/glossary#Isolation
|
||||
[37]:http://www.reactivemanifesto.org/glossary#Elasticity
|
||||
[38]:http://www.reactivemanifesto.org/glossary#Location-Transparency
|
||||
[39]:http://akka.io/
|
||||
[40]:https://www.erlang.org/
|
||||
[41]:http://martinfowler.com/bliki/CircuitBreaker.html
|
@ -0,0 +1,321 @@
|
||||
2016 年度开源创作工具
|
||||
============================================================
|
||||
|
||||
### 无论你是想修改图片、编辑音频,还是创作故事,这里的免费开源工具都能帮你做到。
|
||||
|
||||
![2016 年度 36 个开源创作工具](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/art-yearbook-paint-draw-create-creative.png?itok=KgEF_IN_ "Top 34 open source creative tools in 2016 ")
|
||||
|
||||
>图片来源 : opensource.com
|
||||
|
||||
几年前,我在 Red Hat 总结会上做了一个简单的演讲,给与会者展示了 [2012 年度开源创作工具][12]。开源软件在过去几年里发展迅速,现在我们来看看 2016 年的相关软件。
|
||||
|
||||
### 核心应用
|
||||
|
||||
|
||||
这六款应用是开源的设计软件中的最强王者。它们做的很棒,拥有完善的功能特征集、稳定发行版以及活跃的开发者社区,是很成熟的项目。这六款应用都是跨平台的,每一个都能在 Linux,OS X 和 Windows 上使用,不过大多数情况下 Linux 版本一般都是最先更新的。这些应用广为人知,我已经把最新特性的重要部分写进来了,如果你不是非常了解它们的开发情况,你有可能会忽视这些特性。
|
||||
|
||||
如果你想要对这些软件做更深层次的了解,或者想帮助测试这四个软件 —— GIMP、Inkscape、Scribus 以及 MyPaint —— 的最新版本,在 Linux 机器上你可以用 [Flatpak][13] 轻松地安装它们。按照 _日更绘图应用(Nightly Graphics Apps)_ 的[指令][14],每个应用的每日构建版都可以通过 Flatpak 获取。有一件事要注意:如果你要给某个应用的 Flatpak 版本安装笔刷或者其它扩展,存放扩展的目录将位于相应应用在 **~/.var/app** 下的目录中。下面给出一个安装示例。
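作为一个操作示意,下面是用 Flatpak 命令行安装其中一款应用的最小示例(这里假定使用 Flathub 仓库和 GIMP 的应用 ID `org.gimp.GIMP`;文中提到的每日构建版的仓库地址请以[指令][14]页面为准):

```
# 添加 Flathub 仓库(如果尚未添加)
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# 安装并运行 GIMP 的 Flatpak 版本
flatpak install flathub org.gimp.GIMP
flatpak run org.gimp.GIMP
```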
|
||||
|
||||
#### GIMP
|
||||
|
||||
[GIMP][15] [在 2015 年迎来了它的 20 周岁][16],使得它成为这里资历最久的开源创造型应用之一。GIMP 是一款强大的应用,可以处理图片,创作简单的绘画,以及插图。你可以通过简单的任务来尝试 GIMP,比如裁剪、缩放图片,然后循序渐进使用它的其它功能。GIMP 可以在 Linux,Mac OS X 以及 Windows 上使用,是一款跨平台的应用,而且能够打开、导出一系列格式的文件,包括在与之相似的软件 Photoshop 上广为应用的那些格式。
|
||||
|
||||
GIMP 开发团队正在忙着 2.10 发行版的工作;[2.8.18][17] 是最新的稳定版本。更振奋人心的是非稳定版,[2.9.4][18],拥有全新的用户界面,旨在节省空间的标志性图标和黑色主题,改进了颜色管理,更多的基于 GEGL 的支持分离预览的过滤器,支持 MyPaint 笔刷(如下图所示),对称绘图以及命令行批次处理。想了解更多信息,请关注 [发行版完整笔记][19]。
|
||||
|
||||
![GIMP 截图](https://opensource.com/sites/default/files/gimp_520.png "GIMP 截图")
|
||||
|
||||
#### Inkscape
|
||||
|
||||
[Inkscape][20] 是一款富有特色的矢量绘图设计软件。可以用它来创作简单的图形,图表,设计或者图标。
|
||||
|
||||
最新的稳定版是 [0.91][21] 版本;与 GIMP 相似,更多有趣的东西能在先行版 0.92pre3 版本中找到,发布于 2016 年 11 月。最新推出的先行版的突出特点是 [梯度网格特性(gradient mesh feature)][22](如下图所示);0.91 发行版里介绍的新特性包括:[动力冲程(power stroke)][23] 用于完全可配置的书法笔画(下图的 “opensource.com” 中的 “open” 用的就是动力冲程技术),画布上的测量工具,以及 [全新的符号对话框][24](如下图右侧所示)。(很多符号库可以从 GitHub 上获得;[Xaviju's inkscape-open-symbols set][25] 就很不错。)_物体_对话框是在改进版或每日构建中可用的新特性,可以为一个文档中的所有物体登记,提供工具来管理这些物体。
|
||||
|
||||
![Inkscape 截图](https://opensource.com/sites/default/files/inkscape_520.png "Inkscape 截图")
|
||||
|
||||
#### Scribus
|
||||
|
||||
|
||||
[Scribus][26] 是一款强大的桌面出版和页面设计工具。Scribus 让你能够创造精致美丽的作品,包括信封、书籍、杂志以及其它印刷品。Scribus 的颜色管理工具可以处理并输出 CMYK 颜色,使成品在印刷店能得到可靠的重现。
|
||||
|
||||
[1.4.6][27] 是 Scribus 的最新稳定版本;[1.5.x][28] 系列的发行版更令人期待,因为它们是即将到来的 1.6.0 发行版的预览。1.5.3 版本包含了 Krita 文件(*.KRA)导入工具; 1.5.x 系列中其它的改进包括了 _表格_ 工具,文本框对齐,脚注,导出可选 PDF 格式,改进的字典,可驻留的颜色板,符号工具,扩展的文件格式支持。
|
||||
|
||||
![Scribus 截图](https://opensource.com/sites/default/files/scribus_520.png "Scribus 截图")
|
||||
|
||||
#### MyPaint
|
||||
|
||||
[MyPaint][29] 是一款专注于绘图和插画的工具。它很轻巧,界面虽小,但快捷键丰富,因此你能够不用放下笔,专心于绘图。
|
||||
|
||||
[MyPaint 1.2.0][30] 是最新的稳定版本,包含了一些新特性,诸如用来跟踪铅笔绘图轨迹的[直观上墨工具][31],新的填充工具,笔刷和颜色的历史面板,用户界面的改进(包括暗色主题和一些有代表性的图标),以及可编辑的矢量层。想要尝试 MyPaint 里的最新改进,我建议安装每日更新的 Flatpak 构建,尽管自 1.2.0 版本以来没有添加重要的新特性。
|
||||
|
||||
![MyPaint 截图](https://opensource.com/sites/default/files/mypaint_520.png "MyPaint 截图")
|
||||
|
||||
#### Blender
|
||||
|
||||
[Blender][32] 最初发布于 1995 年一月,像 GIMP 一样,已经有 20 多年的历史了。Blender 是一款功能强大的开源 3D 制作套件,包含建模,雕刻,渲染,真实材质,绳索,动画,影像合成,视频编辑,游戏创作以及模拟。
|
||||
|
||||
Blender 最新的稳定版是 [2.78a][33]。2.78 版本很庞大,包含的特性有:改进的 2D _蜡笔(Grease Pencil)_ 动画工具;针对球面立体图片的 VR 渲染支持;以及新的手绘曲线的绘图工具。
|
||||
|
||||
![Blender 截图](https://opensource.com/sites/default/files/blender_520.png "Blender 截图")
|
||||
|
||||
要尝试最新的 Blender 开发工具,有很多种选择,包括:
|
||||
|
||||
* Blender 基金会让官方网址能够提供 [不稳定的每日构建版][2]。
|
||||
* 如果你在寻找包含特殊的正在开发的特性,[graphicall.org][3] 是一个适合社区的网站,能够提供特殊版本的 Blender(偶尔还有其它的创新型开源应用),让艺术家能够尝试最新的代码和试验品。
|
||||
* Mathieu Bridon 通过 Flatpak 做了 Blender 的一个 开发版本。查看它的博客以了解详情:[Flatpak 上日更的 Blender(Blender nightly in Flatpak)][4]
|
||||
|
||||
#### Krita
|
||||
|
||||
[Krita][34] 是一款拥有一系列功能的数字绘图应用。这款应用贴合插画师、概念画师以及漫画家的需求,有很多附件,比如笔刷、颜色板、图案以及模版。
|
||||
|
||||
最新的稳定版是 [Krita 3.0.1][35],于 2016 年 9 月发布。3.0.x 系列的新特性包括 2D 逐帧动画;改进的层管理器和功能;扩展的常用快捷键;改进网格,向导和图形捕捉;还有软打样。
|
||||
|
||||
![Krita 截图](https://opensource.com/sites/default/files/krita_520.png "Krita 截图")
|
||||
|
||||
### 视频处理工具
|
||||
|
||||
开源的视频编辑工具有很多很多。在这些工具之中,[Flowblade][36] 是新推出的,而 Kdenlive 则是构建完善、对新手友好、功能最全的竞争者。帮助你排除某些选项的主要标准是它们所支持的平台,其中一些只支持 Linux 平台。它们的软件上游都很活跃,最新的稳定版都于近期发布,发布时间相差不到一周。
|
||||
|
||||
#### Kdenlive
|
||||
|
||||
[Kdenlive][37],最初于 2002 年发布,是一款强大的非线性视频编辑器,有 Linux 和 OS X 版本(但是 OS X 版本已经过时了)。Kdenlive 有用户友好的、基于拖拽的用户界面,适合初学者,又有专业人员需要的深层次功能。
|
||||
|
||||
可以看看 Seth Kenlon 写的 [Kdenlive 系列教程(multi-part Kdenlive tutorial series)][38],了解如何使用 Kdenlive。
|
||||
|
||||
* 最新稳定版: 16.08.2 (2016 年 10 月)
|
||||
|
||||
![](https://opensource.com/sites/default/files/images/life-uploads/kdenlive_6_leader.png)
|
||||
|
||||
#### Flowblade
|
||||
|
||||
[Flowblade][39] 发布于 2012 年,是一款只有 Linux 版本的视频编辑器,是个相当不错的后起之秀。
|
||||
|
||||
* 最新稳定版: 1.8 (2016 年 9 月)
|
||||
|
||||
#### Pitivi
|
||||
|
||||
[Pitivi][40] 是用户友好型的免费开源视频编辑器。Pitivi 是用 [Python][41] 编写的(“Pitivi” 中的 “Pi”),使用了 [GStreamer][42] 多媒体框架,社区活跃。
|
||||
|
||||
* 最新稳定版: 0.97 (2016 年 8 月)
|
||||
* 通过 Flatpak 获取 [最新版本][5]
|
||||
|
||||
#### Shotcut
|
||||
|
||||
[Shotcut][43] 是一款免费开源跨平台的视频编辑器,[早在 2004 年][44]就发布了,之后由现在的主要开发者 [Dan Dennedy][45] 重写。
|
||||
|
||||
* 最新稳定版: 16.11 (2016 年 11 月)
|
||||
* 支持 4K 分辨率
|
||||
* 以 tarball 打包的二进制文件形式发布
|
||||
|
||||
|
||||
|
||||
#### OpenShot Video Editor
|
||||
|
||||
始于 2008 年,[OpenShot Video Editor][46] 是一款免费、开源、易于使用、跨平台的视频编辑器。
|
||||
|
||||
* 最新稳定版: [2.1][6] (2016 年 8 月)
|
||||
|
||||
|
||||
### 其它工具
|
||||
|
||||
#### SwatchBooker
|
||||
|
||||
[SwatchBooker][47] 是一款很方便的工具,尽管它近几年都没有更新了,但还是很有用。SwatchBooker 能帮助用户从各大制造商那里合法地获取颜色样本,你可以用其它免费开源的工具处理它导出的格式,包括 Scribus。
|
||||
|
||||
#### GNOME Color Manager
|
||||
|
||||
[GNOME Color Manager][48] 是 GNOME 桌面环境内建的颜色管理器,而 GNOME 是 Linux 中某些发行版的默认桌面。这个工具让你能够用颜色标尺为自己的显示设备创建属性文件,还可以为这些设备加载/管理 ICC 颜色属性文件。
|
||||
|
||||
#### GNOME Wacom Control
|
||||
|
||||
[The GNOME Wacom controls][49] 允许你在 GNOME 桌面环境中配置自己的手写板;你可以修改手写板交互的很多选项,包括自定义手写板灵敏度,以及手写板映射到哪块屏幕上。
|
||||
|
||||
#### Xournal
|
||||
|
||||
[Xournal][50] 是一款简单但可靠的应用,你能够用手写板进行手写或者在笔记上涂鸦。Xournal 是一款有用的签名工具,也可以用来注解 PDF 文档。
|
||||
|
||||
#### PDF Mod
|
||||
|
||||
[PDF Mod][51] 是一款编辑 PDF 文件很方便的工具。PDF Mod 让用户可以移除页面,添加页面,将多个 PDF 文档合并成一个单独的 PDF 文件,重新排列页面,旋转页面等。
|
||||
|
||||
#### SparkleShare
|
||||
|
||||
[SparkleShare][52] 是一款基于 git 的文件分享工具,供艺术家协作和分享资源。把它连接到 GitLab 仓库上,你就能获得一个不错的开源资源管理架构。SparkleShare 的前端在 git 之上提供了一个类似 Dropbox 的界面,消除了 git 的不可预测性。
|
||||
|
||||
### 摄影
|
||||
|
||||
#### Darktable
|
||||
|
||||
[Darktable][53] 是一款能让你开发原始数字文件的应用,有一系列工具,可以管理工作流,无损编辑图片。Darktable 支持许多流行的相机和滤镜。
|
||||
|
||||
![改变颜色平衡度的图片](https://opensource.com/sites/default/files/dt_colour.jpg "改变颜色平衡度的图片")
|
||||
|
||||
#### Entangle
|
||||
|
||||
[Entangle][54] 允许你将数字相机连接到电脑上,让你能从电脑上完全控制相机。
|
||||
|
||||
#### Hugin
|
||||
|
||||
[Hugin][55] 是一款工具,让你可以拼接照片,从而制作全景照片。
|
||||
|
||||
### 2D 动画
|
||||
|
||||
#### Synfig Studio
|
||||
|
||||
[Synfig Studio][56] 是基于矢量的二维动画套件,支持位图原图,在平板上用起来方便。
|
||||
|
||||
#### Blender Grease Pencil
|
||||
|
||||
我在前面讲过了 Blender,但值得注意的是,最近的发行版里的 [重构的蜡笔特性(a refactored grease pencil feature)][57],添加了创作二维动画的功能。
|
||||
|
||||
#### Krita
|
||||
|
||||
[Krita][58] 现在同样提供了二维动画功能
|
||||
|
||||
|
||||
### 音频编辑
|
||||
|
||||
#### Audacity
|
||||
|
||||
[Audacity][59] 在编辑音频文件,记录声音方面很有名,是用户友好型的工具。
|
||||
|
||||
#### Ardour
|
||||
|
||||
[Ardour][60] 是一款数字音频工作软件,界面中间是录音,编辑和混合工作流。使用上它比 Audacity 要稍微难一点,但它允许自动操作,并且更高端。(有 Linux,Mac OS X 和 Windows 版本)
|
||||
|
||||
#### Hydrogen
|
||||
|
||||
[Hydrogen][61] 是一款开源的电子鼓,界面直观。它可以用合成的乐器创作、整理各种乐谱。
|
||||
|
||||
#### Mixxx
|
||||
|
||||
[Mixxx][62] 是一款四轨(deck)DJ 套件,让你能够用强大的控制功能混合 DJ 素材和歌曲,包含节拍循环、时间伸缩、音高变换,还可以用 DJ 硬件控制器现场混音。
|
||||
|
||||
#### Rosegarden
|
||||
|
||||
[Rosegarden][63] 是一款作曲软件,有乐谱编写和音乐作曲或编辑的软件,提供音频和 MIDI 音序器。(译注:MIDI 即 Musical Instrument Digital Interface 乐器数字接口)
|
||||
|
||||
#### MuseScore
|
||||
|
||||
[MuseScore][64] 是乐谱创作,记谱和编辑的软件,它还有个乐谱贡献者社区。
|
||||
|
||||
### 其它具有创造力的工具
|
||||
|
||||
#### MakeHuman
|
||||
|
||||
[MakeHuman][65] 是一款三维绘图工具,可以创造人型的真实模型。
|
||||
|
||||
[MakeHuman 视频介绍(YouTube)](https://www.youtube.com/watch?v=WiEDGbRnXdE)
|
||||
|
||||
#### Natron
|
||||
|
||||
[Natron][66] 是基于节点的合成工具,用于视频后期制作,动态图象和设计特效。
|
||||
|
||||
#### FontForge
|
||||
|
||||
[FontForge][67] 是创作和编辑字体的工具。允许你编辑某个字体中的字符形态,也能够为这个设计生成字体。
|
||||
|
||||
#### Valentina
|
||||
|
||||
[Valentina][68] 是一款用来制作服装打版图样的应用。
|
||||
|
||||
#### Calligra Flow
|
||||
|
||||
[Calligra Flow][69] 是一款插画工具,类似 Visio(有 Linux,Mac OS X 和 Windows 版本)。
|
||||
|
||||
#### 相关资源
|
||||
|
||||
这里有很多小玩意和彩蛋值得尝试。需要一点灵感来探索?这些网站和论坛有很多教程和精美的成品能够激发你开始创作:
|
||||
|
||||
1. [pixls.us][7]: 摄影师 Pat David 管理的博客,他专注于专业摄影师使用的免费开源的软件和工作流。
|
||||
2. [David Revoy's Blog][8] David Revoy 的博客,热爱免费开源,非常有天赋的插画师,概念派画师和开源倡议者,对 Blender 基金会电影有很大贡献。
|
||||
3. [The Open Source Creative Podcast][9]: 由 Opensource.com 社区版主和专栏作家 [Jason van Gumster][10] 管理,他是 Blender 和 GIMP 的专家,也是 [《Blender for Dummies》][1] 的作者。这个播客正好是面向我们这些热爱开源创作工具以及这些工具周边文化的人。
|
||||
4. [Libre Graphics Meeting][11]: 免费开源创作软件的开发者和使用这些软件的创作者的年度会议。这是个好地方,你可以通过它找到你喜爱的开源创作软件将会推出哪些有意思的特性,还可以了解到这些软件的用户用它们在做什么。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-343-8e0fb148b105b450634e30acd8f5b22b.png?itok=oxzTm70z)
|
||||
|
||||
Máirín Duffy - Máirín 是 Red Hat 的首席交互设计师。她热衷于自由免费软件和开源工具,尤其是在创作领域:她最喜欢的应用是 [Inkscape](http://inkscape.org)。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/16/12/yearbook-top-open-source-creative-tools-2016
|
||||
|
||||
作者:[Máirín Duffy][a]
|
||||
译者:[GitFuture](https://github.com/GitFuture)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mairin
|
||||
[1]:http://www.blenderbasics.com/
|
||||
[2]:https://builder.blender.org/download/
|
||||
[3]:http://graphicall.org/
|
||||
[4]:https://mathieu.daitauha.fr/blog/2016/09/23/blender-nightly-in-flatpak/
|
||||
[5]:https://pitivi.wordpress.com/2016/07/18/get-pitivi-directly-from-us-with-flatpak/
|
||||
[6]:http://www.openshotvideo.com/2016/08/openshot-21-released.html
|
||||
[7]:http://pixls.us/
|
||||
[8]:http://davidrevoy.com/
|
||||
[9]:http://monsterjavaguns.com/podcast/
|
||||
[10]:https://opensource.com/users/jason-van-gumster
|
||||
[11]:http://libregraphicsmeeting.org/2016/
|
||||
[12]:https://opensource.com/life/12/9/tour-through-open-source-creative-tools
|
||||
[13]:https://opensource.com/business/16/8/flatpak
|
||||
[14]:http://flatpak.org/apps.html
|
||||
[15]:https://opensource.com/tags/gimp
|
||||
[16]:https://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/
|
||||
[17]:https://www.gimp.org/news/2016/07/14/gimp-2-8-18-released/
|
||||
[18]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
|
||||
[19]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
|
||||
[20]:https://opensource.com/tags/inkscape
|
||||
[21]:http://wiki.inkscape.org/wiki/index.php/Release_notes/0.91
|
||||
[22]:http://wiki.inkscape.org/wiki/index.php/Mesh_Gradients
|
||||
[23]:https://www.youtube.com/watch?v=IztyV-Dy4CE
|
||||
[24]:https://inkscape.org/cs/~doctormo/%E2%98%85symbols-dialog
|
||||
[25]:https://github.com/Xaviju/inkscape-open-symbols
|
||||
[26]:https://opensource.com/tags/scribus
|
||||
[27]:https://www.scribus.net/scribus-1-4-6-released/
|
||||
[28]:https://www.scribus.net/scribus-1-5-2-released/
|
||||
[29]:http://mypaint.org/
|
||||
[30]:http://mypaint.org/blog/2016/01/15/mypaint-1.2.0-released/
|
||||
[31]:https://github.com/mypaint/mypaint/wiki/v1.2-Inking-Tool
|
||||
[32]:https://opensource.com/tags/blender
|
||||
[33]:http://www.blender.org/features/2-78/
|
||||
[34]:https://opensource.com/tags/krita
|
||||
[35]:https://krita.org/en/item/krita-3-0-1-update-brings-numerous-fixes/
|
||||
[36]:https://opensource.com/life/16/9/10-reasons-flowblade-linux-video-editor
|
||||
[37]:https://opensource.com/tags/kdenlive
|
||||
[38]:https://opensource.com/life/11/11/introduction-kdenlive
|
||||
[39]:http://jliljebl.github.io/flowblade/
|
||||
[40]:http://pitivi.org/
|
||||
[41]:http://wiki.pitivi.org/wiki/Why_Python%3F
|
||||
[42]:https://gstreamer.freedesktop.org/
|
||||
[43]:http://shotcut.org/
|
||||
[44]:http://permalink.gmane.org/gmane.comp.lib.fltk.general/2397
|
||||
[45]:http://www.dennedy.org/
|
||||
[46]:http://openshot.org/
|
||||
[47]:http://www.selapa.net/swatchbooker/
|
||||
[48]:https://help.gnome.org/users/gnome-help/stable/color.html.en
|
||||
[49]:https://help.gnome.org/users/gnome-help/stable/wacom.html.en
|
||||
[50]:http://xournal.sourceforge.net/
|
||||
[51]:https://wiki.gnome.org/Apps/PdfMod
|
||||
[52]:https://www.sparkleshare.org/
|
||||
[53]:https://opensource.com/life/16/4/how-use-darktable-digital-darkroom
|
||||
[54]:https://entangle-photo.org/
|
||||
[55]:http://hugin.sourceforge.net/
|
||||
[56]:https://opensource.com/article/16/12/synfig-studio-animation-software-tutorial
|
||||
[57]:https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.78/GPencil
|
||||
[58]:https://opensource.com/tags/krita
|
||||
[59]:https://opensource.com/tags/audacity
|
||||
[60]:https://ardour.org/
|
||||
[61]:http://www.hydrogen-music.org/
|
||||
[62]:http://mixxx.org/
|
||||
[63]:http://www.rosegardenmusic.com/
|
||||
[64]:https://opensource.com/life/16/03/musescore-tutorial
|
||||
[65]:http://makehuman.org/
|
||||
[66]:https://natron.fr/
|
||||
[67]:http://fontforge.github.io/en-US/
|
||||
[68]:http://valentina-project.org/
|
||||
[69]:https://www.calligra.org/flow/
|
@ -0,0 +1,69 @@
|
||||
为何我们需要一个开放模型来设计评估公共政策
|
||||
============================================================
|
||||
|
||||
### 想象一下,有一个应用可以让市民试用拟议中的政策。
|
||||
|
||||
|
||||
![Why we need an open model to design and evaluate public policy](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/GOV_citizen_participation.jpg?itok=eeLWQgev "Why we need an open model to design and evaluate public policy")
|
||||
图片提供:
|
||||
|
||||
opensource.com
|
||||
|
||||
在政治选举之前的几个月中,公众辩论会加剧,并且公民面临大量的政策选择信息。在数据驱动的社会中,新的见解一直在为决策提供信息,对这些信息的深入了解从未如此重要,但公众仍然没有意识到公共政策建模的全部潜力。
|
||||
|
||||
在“开放政府”的概念不断演变以跟上新技术进步的时代,政府的政策模型和分析可能是新一代的开放知识。
|
||||
|
||||
政府开源模型 (GOSM) 是指政府开发的模型,其目的是设计和评估政策,免费提供给所有人使用、分发、不受限制地修改。社区可以提高政策建模的质量、可靠性和准确性,创造有利于公众的新的数据驱动程序。
|
||||
|
||||
今天的这一代人与技术的互动如同第二天性,默认地吸收着大量的信息。如果我们可以在使用 GOSM 的虚拟、沉浸式环境中与不同的公共政策进行互动,那会如何?
|
||||
|
||||
想象一下,有一个允许公民试用拟议政策、以确定他们想要生活在何种未来的程序。他们会本能地了解到关键的驱动因素和所需的条件。不久之后,公众将更深入地了解公共政策的影响,并更加精明地参与有争议的公众辩论。
|
||||
|
||||
为什么我们以前没有更好的使用这些模型?原因在于公共政策建模的神秘面纱。
|
||||
|
||||
在我们所生活的这样复杂的社会中,量化政策影响是一项艰巨的任务,并被描述为一种“美好艺术”。此外,大多数政府政策模型都是基于行政数据和其他私人持有的数据。然而,政策分析师为了给政策设计提供依据而不懈努力,并多次凭借数字的力量赢得政治斗争。
|
||||
|
||||
数字是很有说服力的。它们构建可信度并常常被用作引入新政策的理由。公共政策模型的发展赋予政治家和官僚权力,这些政治家和官僚们可能不愿意破坏现状。给予这一点可能并不容易,但 GOSM 为前所未有的公共政策改革提供了机会。
|
||||
|
||||
GOSM 为所有人创造了公平的竞争环境:政治家、媒体、游说团体、利益相关者和公众。通过向社区敞开政策评估的大门,政府可以利用新的、尚未发现的能力,在公共领域进行创造、创新并提高效率。但在公共政策设计中,利益相关者和政府之间的战略互动有哪些实际影响?
|
||||
|
||||
GOSM 是独一无二的,因为它们主要是设计公共政策的工具,而不一定需要重新分配私人收益。利益相关者和游说团体可能会将 GOSM 与其私人信息一起使用,以获得对经济参与者私人利益的政策环境运作的新见解。
|
||||
|
||||
GOSM 可以成为利益相关者在公共辩论中保持权力平衡的武器,并为战略争取最佳利益么?
|
||||
|
||||
作为一个可变的公共资源,GOSM 在概念上由纳税人资助,并属于国家。私有实体在不向社会带来利益的情况下从 GOSM 中获得资源是合乎道德的吗?与可能用于更有效的服务提供的程序不同,替代政策建议更有可能由咨询机构使用,并有助于公众辩论。
|
||||
|
||||
开源社区经常使用“copyleft 许可证”来确保代码和基于此许可证的任何衍生作品对所有人开放。当产品的价值就是代码本身、需要通过重新分发来获得最大利益时,这种做法很有效。但是,如果重新分发代码(或 GOSM)只是附带的,而主要产品是对现有政策环境的新战略洞察,那又会怎样?
|
||||
|
||||
在私人收集的数据变得越来越多的时候,GOSM 背后的真正价值可能是底层数据,它可以用来改进模型本身。最终,政府是唯一有权实施政策的消费者,利益相关者可以选择在谈判中分享修改后的 GOSM。
|
||||
|
||||
政府在公开发布政策模型时面临的巨大挑战是提高透明度的同时保护隐私。理想情况下,发布 GOSM 将需要以保护建模关键特征的方式保护封闭数据。
|
||||
|
||||
公开发布 GOSM 通过促进市民对民主的更多了解和参与,使公民获得权力,从而改善政策成果和提高公众满意度。在开放的政府乌托邦中,开放的公共政策发展将是政府和社区之间的合作性努力,这里知识、数据和分析可供大家免费使用。
|
||||
|
||||
_在霍巴特举行的 linux.conf.au 2017([#lca2017][1])了解更多 Audrey Lobo-Pulo 的讲话:[公开发布的政府模型][2]。_
|
||||
|
||||
_声明:本文中提出的观点属于 Audrey Lobo-Pulo,不一定是澳大利亚政府的观点。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/1-_mg_2552.jpg?itok=-RflZ4Wv)
|
||||
|
||||
Audrey Lobo-Pulo - Audrey Lobo-Pulo 博士是 Phoensight 的联合创始人,并且开放政府以及政府建模开源软件的倡导者。一位物理学家,在加入澳大利亚公共服务部后,她转而从事经济政策建模工作。Audrey 参与了各种经济政策选择的建模,目前对政府开放数据和开放式政策建模感兴趣。 Audrey 对政府的愿景是将数据科学纳入公共政策分析。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/government-open-source-models
|
||||
|
||||
作者:[Audrey Lobo-Pulo ][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/audrey-lobo-pulo
|
||||
[1]:https://twitter.com/search?q=%23lca2017&src=typd
|
||||
[2]:https://linux.conf.au/schedule/presentation/31/
|
||||
[3]:https://opensource.com/article/17/1/government-open-source-models?rate=p9P_dJ3xMrvye9a6xiz6K_Hc8pdKmRvMypzCNgYthA0
|
@ -0,0 +1,644 @@
|
||||
如何在 CentOS 7 上安装 Elastic Stack
|
||||
============================================================
|
||||
|
||||
### 本页
|
||||
|
||||
1. [步骤1 - 准备操作系统][1]
|
||||
2. [步骤2 - 安装 Java][2]
|
||||
3. [步骤3 - 安装和配置 Elasticsearch][3]
|
||||
4. [步骤4 - 安装和配置 Kibana 和 Nginx][4]
|
||||
5. [步骤5 - 安装和配置 Logstash][5]
|
||||
6. [步骤6 - 在 CentOS 客户端上安装并配置 Filebeat][6]
|
||||
7. [步骤7 - 在 Ubuntu 客户端上安装并配置 Filebeat][7]
|
||||
8. [步骤8 - 测试][8]
|
||||
9. [参考][9]
|
||||
|
||||
**Elasticsearch** 是基于 Lucene、由 Java 开发的开源搜索引擎。它提供了一个分布式、多租户(译者注:多租户是指多租户技术,是一种软件架构技术,用来探讨与实现如何在多用户的环境下共用相同的系统或程序组件,并且仍可确保各用户间数据的隔离性。)的全文搜索引擎,并带有 HTTP 仪表盘的 Web 界面(Kibana)。数据会被 Elasticsearch 查询、检索,并且使用 JSON 文档方案存储。Elasticsearch 是一个可扩展的搜索引擎,可用于搜索所有类型的文本文档,包括日志文件。Elasticsearch 是 “Elastic Stack” 的核心,“Elastic Stack” 也被称为 “ELK Stack”。
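为了直观地理解“以 JSON 文档存储、通过 HTTP 查询”的含义,下面是一个最简单的示意(假定 Elasticsearch 运行在 localhost:9200;索引名 `myindex` 和文档内容只是示例):

```
# 索引(写入)一个 JSON 文档
curl -XPOST 'localhost:9200/myindex/logs?pretty' -H 'Content-Type: application/json' -d '{"message": "hello elastic"}'

# 按关键字搜索刚才写入的文档
curl -XGET 'localhost:9200/myindex/_search?q=message:hello&pretty'
```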
|
||||
|
||||
**Logstash** 是用于管理事件和日志的开源工具。它为数据收集提供实时传递途径。 Logstash将收集您的日志数据,将数据转换为JSON文档,并将其存储在Elasticsearch中。
|
||||
|
||||
**Kibana** 是Elasticsearch的开源数据可视化工具。Kibana提供了一个漂亮的仪表盘Web界面。 你可以用它来管理和可视化来自Elasticsearch的数据。 它不仅美丽,而且强大。
|
||||
|
||||
在本教程中,我将向您展示如何在CentOS 7服务器上安装和配置 Elastic Stack以监视服务器日志。 然后,我将向您展示如何在操作系统为 CentOS 7和Ubuntu 16的客户端上安装“Elastic beats”。
|
||||
|
||||
**前提条件**
|
||||
|
||||
* 64位的CentOS 7,4GB 内存 - elk 主控机
|
||||
* 64位的CentOS 7 ,1 GB 内存 - 客户端1
|
||||
* 64位的Ubuntu 16 ,1GB 内存 - 客户端2
|
||||
|
||||
### 步骤1 - 准备操作系统
|
||||
|
||||
在本教程中,我们将禁用CentOS 7服务器上的SELinux。 编辑SELinux配置文件。
|
||||
|
||||
```
|
||||
vim /etc/sysconfig/selinux
|
||||
```
|
||||
|
||||
将 SELINUX 的值从 enforcing 改成 disabled 。
|
||||
|
||||
```
|
||||
SELINUX=disabled
|
||||
```
|
||||
|
||||
然后重启服务器。
|
||||
|
||||
```
|
||||
reboot
|
||||
```
|
||||
|
||||
再次登录服务器并检查SELinux状态。
|
||||
|
||||
```
|
||||
getenforce
|
||||
```
|
||||
|
||||
确保结果是 disabled。
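如果暂时不想重启服务器,也可以先临时将 SELinux 切换到宽容(permissive)模式。下面是一个假定 SELinux 当前处于启用状态的示例(注意:`setenforce 0` 只在本次开机期间有效,此时 `getenforce` 会显示 Permissive,重启后才会按配置文件显示 Disabled):

```
# 临时关闭 SELinux 强制模式(仅本次开机内有效)
sudo setenforce 0
getenforce
```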
|
||||
|
||||
### 步骤2 - 安装 Java
|
||||
|
||||
部署Elastic stack依赖于Java,Elasticsearch 需要Java 8 版本,推荐使用Oracle JDK 1.8 。我将从官方的Oracle rpm包安装Java 8。
|
||||
|
||||
使用wget命令下载Java 8 的JDK。
|
||||
|
||||
```
|
||||
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"
|
||||
```
|
||||
|
||||
然后使用rpm命令安装
|
||||
|
||||
```
|
||||
rpm -ivh jdk-8u77-linux-x64.rpm
|
||||
```
|
||||
|
||||
最后,检查java JDK版本,确保它正常工作。
|
||||
|
||||
```
|
||||
java -version
|
||||
```
|
||||
|
||||
您将看到服务器的Java版本。
|
||||
|
||||
### 步骤3 - 安装和配置 Elasticsearch
|
||||
|
||||
在此步骤中,我们将安装和配置Elasticsearch。 从elastic.co网站提供的rpm包安装Elasticsearch,并将其配置在本地主机上运行(确保安装程序安全,而且不能从外部访问)。
|
||||
|
||||
在安装 Elasticsearch 之前,先将 elastic.co 的 GPG 密钥添加到服务器。
|
||||
|
||||
```
|
||||
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
|
||||
```
|
||||
|
||||
接下来,使用wget下载Elasticsearch 5.1,然后安装它。
|
||||
|
||||
```
|
||||
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
|
||||
rpm -ivh elasticsearch-5.1.1.rpm
|
||||
```
|
||||
|
||||
Elasticsearch 已经安装好了。 现在进入配置目录编辑elasticsaerch.yml 配置文件。
|
||||
|
||||
```
|
||||
cd /etc/elasticsearch/
|
||||
vim elasticsearch.yml
|
||||
```
|
||||
|
||||
去掉第40行的注释,启用Elasticsearch 的内存锁。
|
||||
|
||||
```
|
||||
bootstrap.memory_lock: true
|
||||
```
|
||||
|
||||
在“Network”块中,取消注释network.host和http.port行。
|
||||
|
||||
```
|
||||
network.host: localhost
|
||||
http.port: 9200
|
||||
```
|
||||
|
||||
保存文件并退出编辑器。
|
||||
|
||||
现在编辑elasticsearch.service文件获取内存锁配置。
|
||||
|
||||
```
|
||||
vim /usr/lib/systemd/system/elasticsearch.service
|
||||
```
|
||||
|
||||
去掉第60行的注释,确保该值为“unlimited”。
|
||||
|
||||
```
|
||||
MAX_LOCKED_MEMORY=unlimited
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
Elasticsearch 的配置到此结束。Elasticsearch 将在本机的 9200 端口运行,我们通过在 CentOS 服务器上启用 mlockall 来禁用内存交换。重新加载 systemd,将 Elasticsearch 设置为开机启动,然后启动服务。
|
||||
|
||||
```
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl enable elasticsearch
|
||||
sudo systemctl start elasticsearch
|
||||
```
|
||||
|
||||
等待 Elasticsearch 启动成功,然后检查服务器上打开的端口,确保 9200 端口的状态是 “LISTEN”。
|
||||
|
||||
```
|
||||
netstat -plntu
|
||||
```
|
||||
|
||||
![Check elasticsearch running on port 9200][10]
|
||||
|
||||
然后检查内存锁以确保启用mlockall,并使用以下命令检查Elasticsearch是否正在运行。
|
||||
|
||||
```
|
||||
curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
|
||||
curl -XGET 'localhost:9200/?pretty'
|
||||
```
|
||||
|
||||
会看到如下结果。
|
||||
|
||||
![Check memory lock elasticsearch and check status][11]
|
||||
|
||||
### 步骤4 - 安装和配置 Kibana 和 Nginx
|
||||
|
||||
在此步骤中,我们将安装和配置 Kibana,并以 Nginx 作为 Web 服务器。Kibana 将监听 localhost IP 地址,而 Nginx 作为 Kibana 应用的反向代理。
|
||||
|
||||
使用 wget 下载 Kibana 5.1,然后使用 rpm 命令安装:
|
||||
|
||||
```
|
||||
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
|
||||
rpm -ivh kibana-5.1.1-x86_64.rpm
|
||||
```
|
||||
|
||||
编辑 Kibana 配置文件。
|
||||
|
||||
```
|
||||
vim /etc/kibana/kibana.yml
|
||||
```
|
||||
|
||||
去掉配置文件中 server.port, server.host 和 elasticsearch.url 这三行的注释。
|
||||
|
||||
```
|
||||
server.port: 5601
|
||||
server.host: "localhost"
|
||||
elasticsearch.url: "http://localhost:9200"
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
将 Kibana 设为开机启动,并且启动Kibana 。
|
||||
|
||||
```
|
||||
sudo systemctl enable kibana
|
||||
sudo systemctl start kibana
|
||||
```
|
||||
|
||||
Kibana将作为节点应用程序运行在端口5601上。
|
||||
|
||||
```
|
||||
netstat -plntu
|
||||
```
|
||||
|
||||
![Kibana running as node application on port 5601][12]
|
||||
|
||||
Kibana 安装到此结束。 现在我们需要安装Nginx并将其配置为反向代理,以便能够从公共IP地址访问Kibana。
|
||||
|
||||
Nginx在Epel资源库中可以找到,用yum安装epel-release。
|
||||
|
||||
```
|
||||
yum -y install epel-release
|
||||
```
|
||||
|
||||
然后安装 Nginx 和 httpd-tools 这两个包。
|
||||
|
||||
```
|
||||
yum -y install nginx httpd-tools
|
||||
```
|
||||
|
||||
httpd-tools软件包包含Web服务器的工具,可以为Kibana添加htpasswd基础认证。
|
||||
|
||||
编辑Nginx配置文件并删除'server {}'块,这样我们可以添加一个新的虚拟主机配置。
|
||||
|
||||
```
|
||||
cd /etc/nginx/
|
||||
vim nginx.conf
|
||||
```
|
||||
|
||||
删除server { }块。
|
||||
|
||||
![Remove Server Block on Nginx configuration][13]
|
||||
|
||||
保存并退出。
|
||||
|
||||
现在我们需要在conf.d目录中创建一个新的虚拟主机配置文件。 用vim创建新文件'kibana.conf'。
|
||||
|
||||
```
|
||||
vim /etc/nginx/conf.d/kibana.conf
|
||||
```
|
||||
|
||||
复制下面的配置。
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80;
|
||||
|
||||
server_name elk-stack.co;
|
||||
|
||||
auth_basic "Restricted Access";
|
||||
auth_basic_user_file /etc/nginx/.kibana-user;
|
||||
|
||||
location / {
|
||||
proxy_pass http://localhost:5601;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection 'upgrade';
|
||||
proxy_set_header Host $host;
|
||||
proxy_cache_bypass $http_upgrade;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
然后使用htpasswd命令创建一个新的基本认证文件。
|
||||
|
||||
```
|
||||
sudo htpasswd -c /etc/nginx/.kibana-user admin
|
||||
TYPE YOUR PASSWORD
|
||||
```
|
||||
|
||||
测试Nginx配置,确保没有错误。 然后设定Nginx开机启动并启动Nginx。
|
||||
|
||||
```
|
||||
nginx -t
|
||||
systemctl enable nginx
|
||||
systemctl start nginx
|
||||
```
|
||||
|
||||
![Add nginx virtual host configuration for Kibana Application][14]
|
||||
|
||||
### 步骤5 - 安装和配置 Logstash
|
||||
|
||||
在此步骤中,我们将安装 Logstash,并将其配置为:从配置了 filebeat 的客户端集中收集服务器日志,然后过滤和转换这些 Syslog 数据,并将其移动到存储中心(Elasticsearch)中。
|
||||
|
||||
下载Logstash并使用rpm进行安装。
|
||||
|
||||
```
|
||||
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
|
||||
rpm -ivh logstash-5.1.1.rpm
|
||||
```
|
||||
|
||||
生成新的SSL证书文件,以便客户端可以识别 elastic 服务端。
|
||||
|
||||
进入tls目录并编辑openssl.cnf文件。
|
||||
|
||||
```
|
||||
cd /etc/pki/tls
|
||||
vim openssl.cnf
|
||||
```
|
||||
|
||||
在 “[ v3_ca ]” 部分添加新的一行,用于标识服务器。
|
||||
|
||||
```
|
||||
[ v3_ca ]
|
||||
|
||||
# Server IP Address
|
||||
subjectAltName = IP: 10.0.15.10
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
使用openssl命令生成证书文件。
|
||||
|
||||
```
|
||||
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
|
||||
```
|
||||
|
||||
证书文件可以在'/etc/pki/tls/certs/'和'/etc/pki/tls/private/' 目录中找到。
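在继续之前,可以用 openssl 检查一下刚刚生成的证书(一个可选的验证步骤,`x509` 子命令是 openssl 的标准用法):

```
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates
```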
|
||||
|
||||
接下来,我们会为Logstash创建新的配置文件。创建一个新的“filebeat-input.conf”文件来配置filebeat的日志源,然后创建一个“syslog-filter.conf”配置文件来处理syslog,再创建一个“output-elasticsearch.conf”文件来定义输出日志数据到Elasticsearch。
|
||||
|
||||
转到logstash配置目录,并在”conf.d“子目录中创建新的配置文件。
|
||||
|
||||
```
|
||||
cd /etc/logstash/
|
||||
vim conf.d/filebeat-input.conf
|
||||
```
|
||||
|
||||
输入配置:粘贴以下配置。
|
||||
|
||||
```
|
||||
input {
|
||||
beats {
|
||||
port => 5443
|
||||
ssl => true
|
||||
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
|
||||
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
保存并退出。
|
||||
|
||||
创建 syslog-filter.conf 文件。
|
||||
|
||||
```
|
||||
vim conf.d/syslog-filter.conf
|
||||
```
|
||||
|
||||
粘贴以下配置
|
||||
|
||||
```
|
||||
filter {
|
||||
if [type] == "syslog" {
|
||||
grok {
|
||||
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
|
||||
add_field => [ "received_at", "%{@timestamp}" ]
|
||||
add_field => [ "received_from", "%{host}" ]
|
||||
}
|
||||
date {
|
||||
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
我们使用名为“grok”的过滤器插件来解析syslog文件。
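举例来说,对于一行典型的 syslog 消息,上面的 grok 模式大致会提取出如下字段(示例日志行是假设的,仅作示意):

```
# 输入的日志行:
Feb  1 12:34:56 elk-client1 sshd[1234]: Failed password for invalid user admin from 10.0.15.21 port 22 ssh2

# 提取出的字段(示意):
syslog_timestamp => "Feb  1 12:34:56"
syslog_hostname  => "elk-client1"
syslog_program   => "sshd"
syslog_pid       => "1234"
syslog_message   => "Failed password for invalid user admin from 10.0.15.21 port 22 ssh2"
```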
|
||||
|
||||
保存并退出。
|
||||
|
||||
创建输出配置文件 “output-elasticsearch.conf”。
|
||||
|
||||
```
|
||||
vim conf.d/output-elasticsearch.conf
|
||||
```
|
||||
|
||||
粘贴以下配置。
|
||||
|
||||
```
|
||||
output {
|
||||
elasticsearch {
|
||||
hosts => "localhost:9200"
|
||||
manage_template => false
|
||||
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
|
||||
document_type => "%{[@metadata][type]}"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
保存并退出。
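在启动服务之前,还可以先让 Logstash 检查一遍配置文件的语法。下面的命令是一个可选的检查方式,其中的可执行文件路径假设为 rpm 包的默认安装位置,请按你的实际环境调整:

```
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
```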
|
||||
|
||||
最后,将logstash设定为开机启动并且启动服务。
|
||||
|
||||
```
|
||||
sudo systemctl enable logstash
|
||||
sudo systemctl start logstash
|
||||
```
|
||||
|
||||
![Logstash started on port 5443 with SSL Connection][15]
|
||||
|
||||
### 步骤6 - 在 CentOS 客户端上安装并配置 Filebeat
|
||||
|
||||
Beat 扮演数据发送者的角色,是一种可以安装在客户端节点上的轻量级代理,将大量数据从客户机发送到 Logstash 或 Elasticsearch 服务器。有 4 种 beat:“Filebeat” 用于发送 “日志文件”,“Metricbeat” 用于发送 “指标”,“Packetbeat” 用于发送 “网络数据”,“Winlogbeat” 用于发送 Windows 客户端的 “事件日志”。
|
||||
|
||||
在本教程中,我将向您展示如何安装和配置“Filebeat”,通过SSL连接将数据日志文件传输到Logstash服务器。
|
||||
|
||||
登录到客户端1的服务器上。 然后将证书文件从elastic 服务器复制到客户端1的服务器上。
|
||||
|
||||
```
|
||||
ssh root@client1IP
|
||||
```
|
||||
|
||||
使用scp命令拷贝证书文件。
|
||||
|
||||
```
|
||||
scp root@elk-serverIP:~/logstash-forwarder.crt .
|
||||
TYPE elk-server password
|
||||
```
|
||||
|
||||
创建一个新的目录,将证书移动到这个目录中。
|
||||
|
||||
```
|
||||
sudo mkdir -p /etc/pki/tls/certs/
|
||||
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
|
||||
```
|
||||
|
||||
接下来,在客户端1服务器上导入 elastic 密钥。
|
||||
|
||||
```
|
||||
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
|
||||
```
|
||||
|
||||
下载 Filebeat 并且用rpm命令安装。
|
||||
|
||||
```
|
||||
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
|
||||
rpm -ivh filebeat-5.1.1-x86_64.rpm
|
||||
```
|
||||
|
||||
Filebeat已经安装好了,请转到配置目录并编辑“filebeat.yml”文件。
|
||||
|
||||
```
|
||||
cd /etc/filebeat/
|
||||
vim filebeat.yml
|
||||
```
|
||||
|
||||
在第 21 行的 paths 部分,添加新的日志文件。我们将添加两个文件:用于记录 ssh 活动的 “/var/log/secure”,以及记录服务器日志的 “/var/log/messages”。
|
||||
|
||||
```
|
||||
paths:
|
||||
- /var/log/secure
|
||||
- /var/log/messages
|
||||
```
|
||||
|
||||
在第26行添加一个新配置来定义syslog类型的文件。
|
||||
|
||||
```
|
||||
document_type: syslog
|
||||
```
|
||||
|
||||
Filebeat 默认使用 Elasticsearch 作为输出目标。在本教程中,我们将其更改为 Logstash。注释掉第 83 行和第 85 行,禁用 Elasticsearch 输出。
|
||||
|
||||
禁用 Elasticsearch 输出。
|
||||
|
||||
```
|
||||
#-------------------------- Elasticsearch output ------------------------------
|
||||
#output.elasticsearch:
|
||||
# Array of hosts to connect to.
|
||||
# hosts: ["localhost:9200"]
|
||||
```
|
||||
|
||||
现在添加新的logstash输出配置。 去掉logstash输出配置的注释,并将所有值更改为下面配置中的值。
|
||||
|
||||
```
|
||||
output.logstash:
|
||||
# The Logstash hosts
|
||||
hosts: ["10.0.15.10:5443"]
|
||||
bulk_max_size: 1024
|
||||
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
|
||||
template.name: "filebeat"
|
||||
template.path: "filebeat.template.json"
|
||||
template.overwrite: false
|
||||
```
|
||||
|
||||
保存文件并退出vim。
|
||||
|
||||
将 Filebeat 设定为开机启动并启动。
|
||||
|
||||
```
|
||||
sudo systemctl enable filebeat
|
||||
sudo systemctl start filebeat
|
||||
```
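如果想确认 Filebeat 已经成功连上 Logstash,可以查看它自己的日志(下面的日志路径是 rpm 安装的默认值,这里属于假设,请以你的系统为准):

```
tail -f /var/log/filebeat/filebeat
```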
|
||||
|
||||
### 步骤7 - 在 Ubuntu 客户端上安装并配置 Filebeat
|
||||
|
||||
使用ssh连接到服务器。
|
||||
|
||||
```
|
||||
ssh root@ubuntu-clientIP
|
||||
```
|
||||
|
||||
使用scp命令拷贝证书文件。
|
||||
|
||||
```
|
||||
scp root@elk-serverIP:~/logstash-forwarder.crt .
|
||||
```
|
||||
|
||||
创建一个新的目录,将证书移动到这个目录中。
|
||||
|
||||
```
|
||||
sudo mkdir -p /etc/pki/tls/certs/
|
||||
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
|
||||
```
|
||||
|
||||
在服务器上导入 elastic 密钥。
|
||||
|
||||
```
|
||||
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
|
||||
```
|
||||
|
||||
下载 Filebeat .deb 包并且使用dpkg命令进行安装。
|
||||
|
||||
```
|
||||
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
|
||||
dpkg -i filebeat-5.1.1-amd64.deb
|
||||
```
|
||||
|
||||
转到配置目录并编辑“filebeat.yml”文件。
|
||||
|
||||
```
|
||||
cd /etc/filebeat/
|
||||
vim filebeat.yml
|
||||
```
|
||||
|
||||
在路径配置部分添加新的日志文件路径。
|
||||
|
||||
```
|
||||
paths:
|
||||
- /var/log/auth.log
|
||||
- /var/log/syslog
|
||||
```
|
||||
|
||||
将 document_type 配置设定为 syslog。
|
||||
|
||||
```
|
||||
document_type: syslog
|
||||
```
|
||||
|
||||
将下列几行注释掉,禁用输出到 Elasticsearch。
|
||||
|
||||
```
|
||||
#-------------------------- Elasticsearch output ------------------------------
|
||||
#output.elasticsearch:
|
||||
# Array of hosts to connect to.
|
||||
# hosts: ["localhost:9200"]
|
||||
```
|
||||
|
||||
启用logstash输出,去掉以下配置的注释并且按照如下所示更改值。
|
||||
|
||||
```
|
||||
output.logstash:
|
||||
# The Logstash hosts
|
||||
hosts: ["10.0.15.10:5443"]
|
||||
bulk_max_size: 1024
|
||||
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
|
||||
template.name: "filebeat"
|
||||
template.path: "filebeat.template.json"
|
||||
template.overwrite: false
|
||||
```
|
||||
|
||||
保存并退出vim。
|
||||
|
||||
将 Filebeat 设定为开机启动并启动。
|
||||
|
||||
```
|
||||
sudo systemctl enable filebeat
|
||||
sudo systemctl start filebeat
|
||||
```
|
||||
|
||||
检查服务状态。
|
||||
|
||||
```
|
||||
systemctl status filebeat
|
||||
```
|
||||
|
||||
![Filebeat is running on the client Ubuntu][16]
|
||||
|
||||
### 步骤8 - 测试
|
||||
|
||||
打开您的网络浏览器,并访问您在Nginx中配置的elastic stack域,我的是“elk-stack.co”。 使用管理员密码登录,然后按Enter键登录Kibana仪表盘。
|
||||
|
||||
![Login to the Kibana Dashboard with Basic Auth][17]
|
||||
|
||||
创建一个新的默认索引 “filebeat-*”,然后点击 “创建” 按钮。
|
||||
|
||||
![Create First index filebeat for Kibana][18]
|
||||
|
||||
默认索引已创建。如果 elastic stack 上有多个 beat,你只需点击 “星形” 按钮即可配置默认的 beat。
|
||||
|
||||
![Filebeat index as default index on Kibana Dashboard][19]
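如果你想在命令行上再次确认 Filebeat 的索引确实已经写入,也可以在 elk 服务器上用 curl 查询 Elasticsearch(假设它仍然监听在 localhost:9200):

```
curl -XGET 'localhost:9200/_cat/indices?v'
```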
|
||||
|
||||
转到 “**Discover**” 菜单,您就可以看到elk-client1和elk-client2服务器上的所有日志文件。
|
||||
|
||||
![Discover all Log Files from the Servers][20]
|
||||
|
||||
来自elk-client1服务器日志中的无效ssh登录的JSON输出示例。
|
||||
|
||||
![JSON output for Failed SSH Login][21]
|
||||
|
||||
通过其它的选项,你还可以使用 Kibana 仪表盘做更多的事情。
|
||||
|
||||
Elastic Stack已安装在CentOS 7服务器上。 Filebeat已安装在CentOS 7和Ubuntu客户端上。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
|
||||
|
||||
作者:[Muhammad Arul][a]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
|
||||
[1]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-nbspprepare-the-operating-system
|
||||
[2]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-java
|
||||
[3]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-elasticsearch
|
||||
[4]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-kibana-with-nginx
|
||||
[5]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-logstash
|
||||
[6]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-centos-client
|
||||
[7]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-ubuntu-client
|
||||
[8]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-testing
|
||||
[9]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#reference
|
||||
[10]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/1.png
|
||||
[11]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/2.png
|
||||
[12]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/3.png
|
||||
[13]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/4.png
|
||||
[14]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/5.png
|
||||
[15]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/6.png
|
||||
[16]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/12.png
|
||||
[17]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/7.png
|
||||
[18]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/8.png
|
||||
[19]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/9.png
|
||||
[20]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/10.png
|
||||
[21]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/11.png
|
@ -0,0 +1,195 @@
|
||||
lnav - Linux 下一个基于控制台的高级日志文件查看器
|
||||
============================================================
|
||||
|
||||
[LNAV][3](Log file Navigator)是 Linux 下一个基于控制台的高级日志文件查看器。它和其它文件查看器,例如 cat、more、tail 等,完成相同的任务,但有很多普通文件查看器没有的增强功能(尤其是它自带很多颜色和易于阅读的格式)。
|
||||
|
||||
它能在解压所有压缩日志文件(zip、gzip、bzip)的同时把它们合并到一起进行导航。基于消息的时间戳,lnav 能把多个日志文件合并到一个视图(Single Log View),从而避免打开多个窗口。左边的颜色栏帮助显示消息所属的文件。
|
||||
|
||||
警告和错误的数目会被(黄色和红色)高亮显示,因此我们能够很轻易地看到问题出现在哪里。它会自动加载新的日志行。
|
||||
|
||||
它按照消息时间戳排序显示所有文件的日志消息。顶部和底部的状态栏会告诉你在哪个日志文件。如果你想查找特定的模式,只需要在搜索弹窗中输入就会即时显示。
|
||||
|
||||
内建的日志消息解析器会自动从每一行中发现和提取详细信息。
|
||||
|
||||
服务器日志是一个由服务器创建并经常更新、用于抓取特定服务和应用的所有活动信息的日志文件。当你的应用或者服务出现问题时这个文件就会非常有用。从日志文件中你可以获取所有关于问题的信息,例如基于警告或者错误信息它什么时候开始表现不正常。
|
||||
|
||||
当你用一个普通文件查看器打开一个日志文件时,它会用纯文本格式显示所有信息(说得更直白一点:就是一片白花花的文字),这样很难发现和理解哪里有警告或错误信息。为了克服这种情况、快速找到警告和错误信息来解决问题,lnav 是一个上手即用的更好的解决方案。
|
||||
|
||||
大部分普通 Linux 日志文件都放在 `/var/log/`。
|
||||
|
||||
**lnav 自动检测以下日志格式**
|
||||
|
||||
* Common Web Access Log format(普通 web 访问日志格式)
|
||||
* CUPS page_log
|
||||
* Syslog
|
||||
* Glog
|
||||
* VMware ESXi/vCenter Logs
|
||||
* dpkg.log
|
||||
* uwsgi
|
||||
* “Generic” – 以时间戳开始的消息
|
||||
* Strace
|
||||
* sudo
|
||||
* gzib & bizp
|
||||
|
||||
**lnav 高级功能**
|
||||
|
||||
* 单一日志视图 - 基于消息时间戳,所有日志文件内容都会被合并到一个单一视图。
|
||||
* 自动日志格式检测 - lnav 支持大部分日志格式
|
||||
* 过滤器 - 能进行基于正则表达式的过滤
|
||||
* 时间线视图
|
||||
* Pretty-Print 视图
|
||||
* 使用 SQL 查询日志(见本列表后面的示例)
|
||||
* 自动数据抽取
|
||||
* 实时操作
|
||||
* 语法高亮
|
||||
* Tab 补全
|
||||
* 当你查看相同文件集时自动保存和恢复会话信息。
|
||||
* Headless 模式
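以上面提到的 SQL 查询功能为例:在 lnav 中按 `;` 键即可进入 SQL 提示符,对自动生成的日志表执行查询。下面这条查询统计 web 访问日志中每个来源 IP 的请求数(表名 access_log 和字段 c_ip 取自 lnav 文档中的示例,这里仅作示意):

```
;SELECT c_ip, count(*) AS hits FROM access_log GROUP BY c_ip ORDER BY hits DESC
```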
|
||||
|
||||
|
||||
#### 如何在 Linux 中安装 lnav
|
||||
|
||||
大部分发行版(Debian、Ubuntu、Mint、Fedora、suse、openSUSE、Arch Linux、Manjaro、Mageia 等等)默认都有 lvan 软件包,在软件包管理器的帮助下,我们可以很轻易地从发行版官方仓库中安装它。对于 CentOS/RHEL 我们需要启用 **[EPEL 仓库][1]**。
|
||||
|
||||
```
|
||||
[在 Debian/Ubuntu/LinuxMint 上安装 lnav]
|
||||
$ sudo apt-get install lnav
|
||||
|
||||
[在 RHEL/CentOS 上安装 lnav]
|
||||
$ sudo yum install lnav
|
||||
|
||||
[在 Fedora 上安装 lnav]
|
||||
$ sudo dnf install lnav
|
||||
|
||||
[在 openSUSE 上安装 lnav]
|
||||
$ sudo zypper install lnav
|
||||
|
||||
[在 Mageia 上安装 lnav]
|
||||
$ sudo urpmi lnav
|
||||
|
||||
[在基于 Arch Linux 的系统上安装 lnav]
|
||||
$ yaourt -S lnav
|
||||
```
|
||||
|
||||
如果你的发行版没有 lnav 软件包,别担心,开发者提供了 `.rpm` 和 `.deb` 安装包,因此我们可以毫无困难地安装。确保你从 [开发者 github 页面][4] 下载最新版本的安装包。
|
||||
|
||||
```
|
||||
[在 Debian/Ubuntu/LinuxMint 上安装 lnav]
|
||||
$ sudo wget https://github.com/tstack/lnav/releases/download/v0.8.1/lnav_0.8.1_amd64.deb
|
||||
$ sudo dpkg -i lnav_0.8.1_amd64.deb
|
||||
|
||||
[在 RHEL/CentOS 上安装 lnav]
|
||||
$ sudo yum install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
|
||||
|
||||
[在 Fedora 上安装 lnav]
|
||||
$ sudo dnf install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
|
||||
|
||||
[在 openSUSE 上安装 lnav]
|
||||
$ sudo zypper install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
|
||||
|
||||
[在 Mageia 上安装 lnav]
|
||||
$ sudo rpm -ivh https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
|
||||
```
|
||||
|
||||
#### 不带参数运行 lnav
|
||||
|
||||
默认情况下你不带参数运行 lnav 时它会打开 `syslog` 文件。
|
||||
|
||||
```
|
||||
# lnav
|
||||
```
|
||||
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png)
|
||||
][5]
|
||||
|
||||
#### 使用 lnav 查看特定日志文件
|
||||
|
||||
要用 lnav 查看特定的日志文件,在 lnav 命令后面添加日志文件路径。例如我们想看 `/var/log/dpkg.log` 日志文件。
|
||||
|
||||
```
|
||||
# lnav /var/log/dpkg.log
|
||||
```
|
||||
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png)
|
||||
][6]
|
||||
|
||||
#### 用 lnav 查看多个日志文件
|
||||
|
||||
要用 lnav 查看多个日志文件,在 lnav 命令后面逐个添加日志文件路径,用一个空格隔开。例如我们想查看 `/var/log/dpkg.log` 和 `/var/log/kern.log` 日志文件。
|
||||
|
||||
左边的颜色栏帮助显示消息所属的文件。另外顶部状态栏还会显示当前日志文件的名称。为了显示多个日志文件,大部分应用习惯打开多个窗口、或者在窗口中水平或竖直切分,但 lnav 使用不同的方式(它基于日期组合在同一个窗口显示多个日志文件)。
|
||||
|
||||
```
|
||||
# lnav /var/log/dpkg.log /var/log/kern.log
|
||||
```
|
||||
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png)
|
||||
][7]
|
||||
|
||||
#### 使用 lnav 查看压缩的日志文件
|
||||
|
||||
要查看并同时解压被压缩的日志文件(zip、gzip、bzip),在 lnav 命令后面添加 `-r` 选项。
|
||||
|
||||
```
|
||||
# lnav -r /var/log/Xorg.0.log.old.gz
|
||||
```
|
||||
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png)
|
||||
][8]
|
||||
|
||||
#### 直方图视图
|
||||
|
||||
首先运行 `lnav`,然后按 `i` 键即可进入或退出直方图视图。
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png)
|
||||
][9]
|
||||
|
||||
#### 查看日志解析器结果
|
||||
|
||||
首先运行 `lnav`,然后按 `p` 键即可显示日志解析器的结果。
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png)
|
||||
][10]
|
||||
|
||||
#### 语法高亮
|
||||
|
||||
你可以搜索任何给定的字符串,它会在屏幕上高亮显示。首先运行 `lnav` 然后按 `/` 键并输入你想查找的字符串。为了测试,我搜索字符串 `Default`,看下面的截图。
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png)
|
||||
][11]
|
||||
|
||||
#### Tab 补全
|
||||
|
||||
命令窗口支持大部分操作的 tab 补全。例如,在进行搜索时,你可以使用 tab 补全屏幕上显示的单词,而不需要复制粘贴。为了测试,我搜索字符串 `/var/log/Xorg`,看下面的截图。
|
||||
[
|
||||
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png)
|
||||
][12]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.2daygeek.com/install-and-use-advanced-log-file-viewer-navigator-lnav-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.2daygeek.com/author/magesh/
|
||||
[1]:http://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
|
||||
[2]:http://www.2daygeek.com/author/magesh/
|
||||
[3]:http://lnav.org/
|
||||
[4]:https://github.com/tstack/lnav/releases
|
||||
[5]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png
|
||||
[6]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png
|
||||
[7]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png
|
||||
[8]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png
|
||||
[9]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png
|
||||
[10]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png
|
||||
[11]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png
|
||||
[12]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png
|
@ -0,0 +1,127 @@
|
||||
# [使用tmux打造更强大的终端][3]
|
||||
|
||||
|
||||
![](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/tmux-945x400.jpg)
|
||||
|
||||
一些 Fedora 用户把大部分甚至是所有时间都花费在了[命令行][4]终端上。终端可让您访问整个系统,以及数以千计的强大的实用程序。但是,它默认情况下一次只显示一个命令行会话。即使有一个大的终端窗口,整个窗口也只会显示一个会话。这浪费了空间,特别是在大型显示器和高分辨率的笔记本电脑屏幕上。但是,如果你可以将终端分成多个会话呢?这正是 tmux 大显身手的地方,甚至可以说是不可或缺的。
|
||||
|
||||
### 安装并启动 _tmux_
|
||||
|
||||
_tmux_ 应用程序的名称来源于终端复用器(muxer)或多路复用器(multiplexer)。 换句话说,它可以将您的单终端会话分成多个会话。 它管理窗口和窗格:
|
||||
|
||||
- _窗口_ 是一个单一的视图 - 也就是终端中显示的东西的一个分类。
|
||||
- _窗格_ 是该视图的一部分,通常是一个终端会话。
|
||||
|
||||
开始前,请在系统上安装_tmux_应用程序。 你需要为您的用户帐户设置_sudo_权限(如果需要,请[查看本文][5]获取相关说明)。
|
||||
|
||||
```
|
||||
sudo dnf -y install tmux
|
||||
```
|
||||
|
||||
运行tmux程序:
|
||||
|
||||
`tmux`
|
||||
|
||||
### 状态栏
|
||||
|
||||
首先,它似乎什么也没有发生,除了出现在终端的底部的状态栏:
|
||||
|
||||
![Start of tmux session](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-41.png)
|
||||
|
||||
底部栏显示:
|
||||
|
||||
* _[0]_ – 这是_tmux_服务创建的第一个会话。编号从0开始。tmux服务会跟踪所有的会话确认其是否存活。
|
||||
* _0:username@host:~_ – 有关该会话的第一个窗口的信息。编号从 0 开始。窗口的活动窗格中的终端归主机 host 上的 username 用户所有。当前目录是 _~_(家目录)。
|
||||
* _*_ – 显示你目前在此窗口中。
|
||||
* _“hostname”_ – 你正在使用的_tmux_服务器的主机名。
|
||||
* 此外,还会显示该特定主机上的日期和时间
|
||||
|
||||
当你向会话中添加更多窗口和窗格时,信息栏将随之改变。
|
||||
|
||||
### tmux基础知识
|
||||
|
||||
把你的终端窗口拉伸到最大。现在让我们尝试一些简单的命令来创建更多的窗格。默认情况下,所有的命令都以 _Ctrl+b_ 开头。
|
||||
|
||||
* 敲 _Ctrl+b, “_ 水平分割当前单个窗格。现在窗口中有两个命令行窗格,一个在顶部,一个在底部。请注意,底部的新窗格是活动窗格。
|
||||
* 敲 _Ctrl+b, %_ 垂直分割当前单个窗格。现在你的窗口中有三个命令行窗格,右下角的窗格是活动窗格。
|
||||
|
||||
![tmux window with three panes](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-59.png)
|
||||
|
||||
注意当前窗格周围高亮显示的边框。为了浏览所有的窗格,请做以下操作:
|
||||
|
||||
* 敲 _Ctrl+b_ 然后点箭头键
|
||||
* 敲 _Ctrl+b, q_。数字会短暂地出现在各窗格上。在这期间,你可以敲下想要切换到的窗格所对应的数字。
|
||||
|
||||
现在,尝试使用不同的窗格运行不同的命令。例如以下这样的:
|
||||
|
||||
* 在顶部窗格中使用 _ls_ 命令显示目录内容。
|
||||
* 在左下角的窗格中使用 _vi_ 命令 编辑一个文本文件。
|
||||
* 在右下角的窗格中运行 _top_ 命令监控系统进程。
|
||||
|
||||
屏幕将会如下显示:
|
||||
|
||||
![tmux session with three panes running different commands](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-57-51.png)
|
||||
|
||||
到目前为止,这个示例中只是用了一个带多个窗格的窗口。你也可以在会话中运行多个窗口。
|
||||
|
||||
* 为了创建一个新的窗口,请敲 _Ctrl+b, c_。请注意,状态栏显示当前有两个窗口正在运行。(敏锐的读者会在上面的截图中看到这一点。)
|
||||
* 要移动到上一个窗口,请敲 _Ctrl+b, p_ 。
|
||||
* 要移动到下一个窗口,请敲 _Ctrl+b, n_ 。
|
||||
* 要立即移动到特定的窗口,请敲 _Ctrl+b_ 然后跟上窗口编号。
|
||||
|
||||
如果你想知道如何关闭窗格,只需要使用 _exit_ , _logout_ , 或者 _Ctrl+d_ 来退出特定的命令行shell。一旦你关闭了窗口中的所有窗格,那么该窗口也会消失。
|
||||
|
||||
### 脱离和附加
|
||||
|
||||
tmux最强大的功能之一是能够脱离和重新附加到会话。 当你脱离的时候,你可以离开你的窗口和窗格独立运行。 此外,您甚至可以完全注销系统。 然后,您可以登录到同一个系统,重新附加到 _tmux_ 会话,查看您离开时的所有窗口和窗格。 脱离的时候你运行的命令一直保持运行状态。
|
||||
|
||||
为了脱离一个会话,请敲 _Ctrl+b, d_。然后会话消失,你会重新返回到一个标准的单一 shell。如果要重新附加到会话中,使用以下命令:
|
||||
|
||||
```
|
||||
tmux attach-session
|
||||
```
|
||||
|
||||
当你连接到主机的网络不稳定时,这个功能就像救生员一样有用。如果连接失败,会话中的所有进程都会继续运行。只要连接恢复了,你就可以恢复正常,就好像什么事情也没有发生一样。
|
||||
|
||||
如果这些功能还不够用,除了每个会话中的窗口和窗格之外,你还可以同时运行多个会话。你可以列出所有会话,然后通过编号或者名称附加到正确的那个会话:
|
||||
|
||||
```
|
||||
tmux list-sessions
|
||||
```
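列出会话之后,重新附加时可以用 `-t` 指定目标会话的编号或名称。下面是一个最小示例,其中的 `0` 假设为 `list-sessions` 输出中的某个会话编号:

```
tmux attach-session -t 0
```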
|
||||
|
||||
### 延伸阅读
|
||||
|
||||
本文只触及了 _tmux_ 的表面功能。你还可以通过其他方式操作会话:
|
||||
|
||||
* 将一个窗格和另一个窗格交换
|
||||
* 将窗格移动到另一个窗口中(目标窗口可以在同一个会话中,也可以在不同的会话中)
|
||||
* 设定快捷键自动执行你喜欢的命令
|
||||
* 在 _~/.tmux.conf_ 文件中配置你最喜欢的配置项,这样每一个会话都会按照你喜欢的方式呈现(本列表后面有一个简单的示例)
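下面是一个极简的 _~/.tmux.conf_ 示例,仅用于说明这种配置方式;其中用到的都是 tmux 的常见选项,具体取值请按自己的喜好调整:

```
# ~/.tmux.conf 的一个极简示例
# 让窗口和窗格的编号从 1 开始(默认从 0 开始)
set -g base-index 1
setw -g pane-base-index 1
# 增大回滚缓冲区的行数
set -g history-limit 10000
```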
|
||||
|
||||
有关所有命令的完整说明,请查看以下参考:
|
||||
|
||||
* 官方[手册页][1]
|
||||
* _tmux_ [电子书][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Paul W. Frields自1997年以来一直是Linux用户和爱好者,并于2003年加入Fedora项目,这个项目刚推出不久。他是Fedora项目委员会的创始成员,在文档,网站发布,宣传,工具链开发和维护软件方面都有贡献。他于2008年2月至2010年7月加入Red Hat,担任Fedora项目负责人,并担任Red Hat的工程经理。目前他和妻子以及两个孩子居住在弗吉尼亚。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/pfrields/
|
||||
[1]: http://man.openbsd.org/OpenBSD-current/man1/tmux.1
|
||||
[2]: https://pragprog.com/book/bhtmux2/tmux-2
|
||||
[3]: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
|
||||
[4]: http://www.cryptonomicon.com/beginning.html
|
||||
[5]: https://fedoramagazine.org/howto-use-sudo/
|
@ -1,31 +1,33 @@
|
||||
vim-kakali translating
|
||||
|
||||
Inxi – A Powerful Feature-Rich Commandline System Information Tool for Linux
|
||||
Inxi —— 一个功能强大的获取 Linux 系统信息的命令行工具
|
||||
============================================================
|
||||
|
||||
Inxi 最初是为控制台和 IRC(网络中继聊天)开发的一个功能强大且优秀的命令行系统信息工具。可以使用它即时获取硬件和系统配置信息,它也能用作调试工具和社区技术支持工具。
|
||||
|
||||
Inxi is a powerful and remarkable command line-system information script designed for both console and IRC (Internet Relay Chat). It can be employed to instantly deduce user system configuration and hardware information, and also functions as a debugging, and forum technical support tool.
|
||||
使用 Inxi 可以很容易地获取所有的硬件信息:硬盘、声卡、显卡、网卡、CPU 和 RAM 等。同时也能够获取大量的操作系统信息,比如硬件驱动、Xorg、桌面环境、内核、GCC 版本、进程、开机时间和内存等信息。
|
||||
|
||||
It displays handy information concerning system hardware (hard disk, sound cards, graphic card, network cards, CPU, RAM, and more), together with system information about drivers, Xorg, desktop environment, kernel, GCC version(s), processes, uptime, memory, and a wide array of other useful information.
|
||||
|
||||
However, it’s output slightly differs between the command line and IRC, with a few default filters and color options applicable to IRC usage. The supported IRC clients include: BitchX, Gaim/Pidgin, ircII, Irssi, Konversation, Kopete, KSirc, KVIrc, Weechat, and Xchat plus any others that are capable of showing either built in or external Inxi output.
|
||||
命令行和 IRC 上的 Inxi 输出略有不同,IRC 上会默认应用一些过滤器和颜色选项。支持的 IRC 客户端包括:BitchX、Gaim/Pidgin、ircII、Irssi、Konversation、Kopete、KSirc、KVIrc、Weechat 和 Xchat,以及其它任何能够显示内置或外部 Inxi 输出的客户端。
|
||||
|
||||
### How to Install Inxi in Linux System
|
||||
|
||||
Inix is available in most mainstream Linux distribution repositories, and runs on BSDs as well.
|
||||
|
||||
### 在 Linux 系统上安装 Inxi
|
||||
|
||||
大多数主流 Linux 发行版的仓库中都有 Inxi,它同样可以运行在 BSD 系统上。
|
||||
|
||||
|
||||
```
|
||||
$ sudo apt-get install inxi [On Debian/Ubuntu/Linux Mint]
|
||||
$ sudo yum install inxi [On CentOs/RHEL/Fedora]
|
||||
$ sudo dnf install inxi [On Fedora 22+]
|
||||
```
|
||||
在使用 Inxi 之前,可以用下面的命令检查它运行所需的全部依赖和推荐程序,以及各种目录,并显示为支持某个功能还需要安装哪些软件包。
|
||||
|
||||
Before we start using it, we can run the command that follows to check all application dependencies plus recommends, and various directories, and display what package(s) we need to install to add support for a given feature.
|
||||
|
||||
```
|
||||
$ inxi --recommends
|
||||
```
|
||||
Inxi Checking
|
||||
Inxi 的输出:
|
||||
```
|
||||
inxi will now begin checking for the programs it needs to operate. First a check of the main languages and tools
|
||||
inxi uses. Python is only for debugging data collection.
|
||||
@ -120,22 +122,22 @@ File: /var/run/dmesg.boot
|
||||
All tests completed.
|
||||
```
|
||||
|
||||
### Basic Usage of Inxi Tool in Linux
|
||||
|
||||
Below are some basic Inxi options we can use to collect machine plus system information.
|
||||
用下面的操作获取系统和硬件的详细信息。
|
||||
|
||||
#### Show Linux System Information
|
||||
#### 获取系统信息
|
||||
Inxi 不加任何选项就能输出下面的信息:CPU、内核、开机时长、内存大小、硬盘大小、进程数、登录终端以及 Inxi 版本。
|
||||
|
||||
When run without any flags, Inxi will produce output to do with system CPU, kernel, uptime, memory size, hard disk size, number of processes, client used and inxi version:
|
||||
|
||||
```
|
||||
$ inxi
|
||||
CPU~Dual core Intel Core i5-4210U (-HT-MCP-) speed/max~2164/2700 MHz Kernel~4.4.0-21-generic x86_64 Up~3:15 Mem~3122.0/7879.9MB HDD~1000.2GB(20.0% used) Procs~234 Client~Shell inxi~2.2.35
|
||||
```
|
||||
|
||||
#### Show Linux Kernel and Distribution Info
|
||||
#### 获取内核和发行版本信息
|
||||
|
||||
使用 Inxi 的 `-S` 选项查看本机系统信息:
|
||||
|
||||
The command below will show sample system info (hostname, kernel info, desktop environment and disto) using the `-S` flag:
|
||||
|
||||
```
|
||||
$ inxi -S
|
||||
@ -143,9 +145,10 @@ System: Host: TecMint Kernel: 4.4.0-21-generic x86_64 (64 bit) Desktop: Cinnamon
|
||||
Distro: Linux Mint 18 Sarah
|
||||
```
|
||||
|
||||
#### Find Linux Laptop or PC Model Information
|
||||
|
||||
To print machine data-same as product details (system, product id, version, Mobo, model, BIOS etc), we can use the option `-M` as follows:
|
||||
#### 获取电脑机型
|
||||
使用 `-M` 选项查看机型(笔记本/台式机)、产品 ID 、机器版本、主板、制造商和 BIOS 等信息:
|
||||
|
||||
|
||||
```
|
||||
$ inxi -M
|
||||
@ -153,9 +156,9 @@ Machine: System: LENOVO (portable) product: 20354 v: Lenovo Z50-70
|
||||
Mobo: LENOVO model: Lancer 5A5 v: 31900059WIN Bios: LENOVO v: 9BCN26WW date: 07/31/2014
|
||||
```
|
||||
|
||||
#### Find Linux CPU and CPU Speed Information
|
||||
#### 获取 CPU 及主频信息
|
||||
使用 `-C` 选项查看完整的 CPU 信息,包括每核 CPU 的频率及可用的最大主频。
|
||||
|
||||
We can display complete CPU information, including per CPU clock-speed and CPU max speed (if available) with the `-C` flag as follows:
|
||||
|
||||
```
|
||||
$ inxi -C
|
||||
@ -163,9 +166,10 @@ CPU: Dual core Intel Core i5-4210U (-HT-MCP-) cache: 3072 KB
|
||||
clock speeds: max: 2700 MHz 1: 1942 MHz 2: 1968 MHz 3: 1734 MHz 4: 1710 MHz
|
||||
```
|
||||
|
||||
#### Find Graphic Card Information in Linux
|
||||
|
||||
The option `-G` can be used to show graphics card info (card type, display server, resolution, GLX renderer and GLX version) like so:
|
||||
#### 获取显卡信息
|
||||
使用 `-G` 选项查看显卡信息,包括显卡类型、图形服务器、系统分辨率、GLX 渲染器(译者注: GLX 是一个 X 窗口系统的 OpenGL 扩展)和 GLX 版本。
|
||||
|
||||
|
||||
```
|
||||
$ inxi -G
|
||||
@ -175,9 +179,10 @@ Display Server: X.Org 1.18.4 drivers: intel (unloaded: fbdev,vesa) Resolution: 1
|
||||
GLX Renderer: Mesa DRI Intel Haswell Mobile GLX Version: 3.0 Mesa 11.2.0
|
||||
```
|
||||
|
||||
#### Find Audio/Sound Card Information in Linux
|
||||
|
||||
To get info about system audio/sound card, we use the `-A` flag:
|
||||
#### 获取声卡信息
|
||||
使用 `-A` 选项查看声卡信息:
|
||||
|
||||
|
||||
```
|
||||
$ inxi -A
|
||||
@ -185,9 +190,9 @@ Audio: Card-1 Intel 8 Series HD Audio Controller driver: snd_hda_intel Sound
|
||||
Card-2 Intel Haswell-ULT HD Audio Controller driver: snd_hda_intel
|
||||
```
|
||||
|
||||
#### Find Linux Network Card Information
|
||||
|
||||
To display network card info, we can make use of `-N` flag:
|
||||
#### 获取网卡信息
|
||||
使用 `-N` 选项查看网卡信息:
|
||||
|
||||
```
|
||||
$ inxi -N
|
||||
@ -195,19 +200,19 @@ Network: Card-1: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet Contro
|
||||
Card-2: Realtek RTL8723BE PCIe Wireless Network Adapter driver: rtl8723be
|
||||
```
|
||||
|
||||
#### Find Linux Hard Disk Information
|
||||
#### 获取硬盘信息
|
||||
使用 `-D` 选项查看硬盘信息(大小、ID、型号):
|
||||
|
||||
To view full hard disk information,(size, id, model) we can use the flag `-D`:
|
||||
|
||||
```
|
||||
$ inxi -D
|
||||
Drives: HDD Total Size: 1000.2GB (20.0% used) ID-1: /dev/sda model: ST1000LM024_HN size: 1000.2GB
|
||||
```
|
||||
|
||||
#### Summarize Full Linux System Information Together
|
||||
|
||||
To show a summarized system information; combining all the information above, we need to use the `-b` flag as below:
|
||||
#### 获取简要的系统信息
|
||||
|
||||
使用 `-b` 选项显示简要系统信息:
|
||||
```
|
||||
$ inxi -b
|
||||
System: Host: TecMint Kernel: 4.4.0-21-generic x86_64 (64 bit) Desktop: Cinnamon 3.0.7
|
||||
@ -225,9 +230,9 @@ Drives: HDD Total Size: 1000.2GB (20.0% used)
|
||||
Info: Processes: 233 Uptime: 3:23 Memory: 3137.5/7879.9MB Client: Shell (bash) inxi: 2.2.35
|
||||
```
|
||||
|
||||
#### Find Linux Hard Disk Partition Details
|
||||
#### 获取硬盘分区信息
|
||||
使用 `-p` 选项输出完整的硬盘分区信息,包括每个分区的分区大小、已用空间、可用空间、文件系统以及文件系统类型。
|
||||
|
||||
The next command will enable us view complete list of hard disk partitions in relation to size, used and available space, filesystem as well as filesystem type on each partition with the `-p` flag:
|
||||
|
||||
```
|
||||
$ inxi -p
|
||||
@ -235,9 +240,9 @@ Partition: ID-1: / size: 324G used: 183G (60%) fs: ext4 dev: /dev/sda10
|
||||
ID-2: swap-1 size: 4.00GB used: 0.00GB (0%) fs: swap dev: /dev/sda9
|
||||
```
|
||||
|
||||
#### Shows Full Linux System Information
|
||||
|
||||
In order to show complete Inxi output, we use the `-F` flag as below (note that certain data is filtered for security reasons such as WAN IP):
|
||||
#### 获取完整的 Linux 系统信息
|
||||
使用 `-F` 选项可以查看完整的 Inxi 输出(出于安全考虑,像 WAN IP 这样的信息会被过滤掉不予显示;下面的示例只显示部分输出信息):
|
||||
|
||||
```
|
||||
$ inxi -F
|
||||
@ -266,22 +271,21 @@ Fan Speeds (in rpm): cpu: N/A
|
||||
Info: Processes: 234 Uptime: 3:26 Memory: 3188.9/7879.9MB Client: Shell (bash) inxi: 2.2.35
|
||||
```
|
||||
|
||||
### Linux System Monitoring with Inxi Tool
|
||||
### 使用 Inxi 工具监控 Linux 系统
|
||||
|
||||
Following are few options used to monitor Linux system processes, uptime, memory etc.
|
||||
下面是监控 Linux 系统进程、开机时间和内存的几个选项的使用方法。
|
||||
|
||||
#### Monitor Linux Processes Memory Usage
|
||||
|
||||
Get summarized system info in relation to total number of processes, uptime and memory usage:
|
||||
#### 监控 Linux 进程的内存使用
|
||||
|
||||
使用下面的命令查看进程数、开机时间和内存使用情况:
|
||||
```
|
||||
$ inxi -I
|
||||
Info: Processes: 232 Uptime: 3:35 Memory: 3256.3/7879.9MB Client: Shell (bash) inxi: 2.2.35
|
||||
```
|
||||
|
||||
#### Monitoring Processes by CPU and Memory Usage
|
||||
|
||||
By default, it can help us determine the [top 5 processes consuming CPU or memory][1]. The `-t` option used together with `c` (CPU) and/or `m` (memory) options lists the top 5 most active processes eating up CPU and/or memory as shown below:
|
||||
#### 监控进程占用的 CPU 和内存资源
|
||||
Inxi 默认显示 [前 5 个最消耗 CPU 和内存的进程][1]。将 `-t` 选项和 `c` 一起使用,可以查看前 5 个最消耗 CPU 资源的进程;和 `m` 一起使用,可以查看前 5 个最消耗内存的进程;和 `cm` 一起使用,则同时显示前 5 个最消耗 CPU 和内存资源的进程。
|
||||
|
||||
```
|
||||
----------------- Linux CPU Usage -----------------
|
||||
@ -321,7 +325,7 @@ Memory: MB / % used - Used/Total: 3223.6/7879.9MB - top 5 active
|
||||
4: mem: 244.45MB (3.1%) command: chrome pid: 7405
|
||||
5: mem: 211.68MB (2.6%) command: chrome pid: 6146
|
||||
```
|
||||
|
||||
可以在选项 `cm` 后跟一个整数(在 1-20 之间)设置显示多少个进程,下面的命令显示了前 10 个最消耗 CPU 和内存的进程:
|
||||
We can use `cm` number (number can be 1-20) to specify a number other than 5, the command below will show us the [top 10 most active processes][2] eating up CPU and memory.
|
||||
|
||||
```
|
||||
@ -350,9 +354,9 @@ Memory: MB / % used - Used/Total: 3163.1/7879.9MB - top 10 active
|
||||
10: mem: 151.83MB (1.9%) command: mysqld pid: 1259
|
||||
```
|
||||
|
||||
#### Monitor Linux Network Interfaces
|
||||
#### 监控网络设备
|
||||
下面的命令会列出网卡信息,包括接口信息、网络频率、mac 地址、网卡状态和网络 IP 等信息。
|
||||
|
||||
The command that follows will show us advanced network card information including interface, speed, mac id, state, IPs, etc:
|
||||
|
||||
```
|
||||
$ inxi -Nni
|
||||
@ -364,9 +368,9 @@ WAN IP: 111.91.115.195 IF: wlp2s0 ip-v4: N/A
|
||||
IF: enp1s0 ip-v4: 192.168.0.103
|
||||
```
|
||||
|
||||
#### Monitor Linux CPU Temperature and Fan Speed
|
||||
#### 监控 CPU 温度和电脑风扇转速
|
||||
在 [配置了传感器的机器][2] 上,可以使用 `-s` 选项监控 CPU 温度和风扇转速:
|
||||
|
||||
We can keep track of the [hardware installed/configured sensors][3] output by using the -s option:
|
||||
|
||||
```
|
||||
$ inxi -s
|
||||
@ -374,9 +378,9 @@ Sensors: System Temperatures: cpu: 53.0C mobo: N/A
|
||||
Fan Speeds (in rpm): cpu: N/A
|
||||
```
|
||||
|
||||
#### Find Weather Report in Linux
|
||||
#### 用 Linux 查看天气预报
|
||||
使用 `-w` 选项查看本地区的天气情况(虽然使用的 API 可能不是很可靠);使用 `-W <different_location>` 可以查看其它地区的天气。
|
||||
|
||||
We can also view whether info (though API used is unreliable) for the current location with the `-w` or `-W``<different_location>` to set a different location.
|
||||
|
||||
```
|
||||
$ inxi -w
|
||||
@ -387,9 +391,9 @@ $ inxi -W Nairobi,Kenya
|
||||
Weather: Conditions: 70 F (21 C) - Mostly Cloudy Time: February 20, 11:08 AM EAT
|
||||
```
|
||||
|
||||
#### Find All Linux Repsitory Information
|
||||
#### 查看所有的 Linux 仓库信息
|
||||
另外,可以使用 `-r` 选项查看一个 Linux 发行版的仓库信息:
|
||||
|
||||
We can additionally view a distro repository data with the `-r` flag:
|
||||
|
||||
```
|
||||
$ inxi -r
|
||||
@ -422,34 +426,35 @@ Active apt sources in file: /etc/apt/sources.list.d/ubuntu-mozilla-security-ppa-
|
||||
deb http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu xenial main
|
||||
deb-src http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu xenial main
|
||||
```
|
||||
下面是查看 Inxi 的安装版本、快速帮助和打开 man 主页的方法,以及更多的 Inxi 使用细节。
|
||||
|
||||
To view it’s current installed version, a quick help, and open the man page for a full list of options and detailed usage info plus lots more, type:
|
||||
|
||||
```
|
||||
$ inxi -v #show version
|
||||
$ inxi -h #quick help
|
||||
$ man inxi #open man page
|
||||
$ inxi -v #显示版本信息
|
||||
$ inxi -h #快速帮助
|
||||
$ man inxi #打开 man 主页
|
||||
```
|
||||
|
||||
浏览 Inxi 的官方 GitHub 主页 [https://github.com/smxi/inxi][4] 查看更多的信息。
|
||||
For more information, visit official GitHub Repository: [https://github.com/smxi/inxi][4]
|
||||
|
||||
That’s all for now! In this article, we reviewed Inxi, a full featured and remarkable command line tool for collecting machine hardware and system info. This is one of the best CLI based [hardware/system information collection tools][5] for Linux, I have ever used.
|
||||
Inxi 是一个功能强大的获取硬件和系统信息的命令行工具。这也是我使用过的最好的 [获取硬件和系统信息的命令行工具][5] 之一。
|
||||
|
||||
写下你的评论。如果你知道其他的像 Inxi 这样的工具,我们很高兴和你一起讨论。
|
||||
|
||||
To share your thoughts about it, use the comment form below. Lastly, in case you know of other, such useful tools as Inxi out there, you can inform us and we will be delighted to review them as well.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
作者简介:
|
||||
Aaron Kili 是一个 Linux 和 F.O.S.S(自由和开源软件)的狂热爱好者,一名即将上任的 Linux 系统管理员、web 开发者,目前是 TecMint 网站的内容创作者,他喜欢使用计算机工作,并且坚信知识分享。
|
||||
|
||||
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/inxi-command-to-find-linux-system-information/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[vim-kakali](https://github.com/vim-kakali)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,4 +1,4 @@
|
||||
python 是慢,但是爷就喜欢它
|
||||
python 是慢,但我并不关心
|
||||
=====================================
|
||||
|
||||
### 对追求生产率而牺牲性能的怒吼
|
||||
@ -11,11 +11,11 @@ python 是慢,但是爷就喜欢它
|
||||
|
||||
过去的情形是,程序需要花费很长的时间来运行,CPU 比较贵,内存也很贵。程序的运行时间是一个很重要的指标。计算机非常的昂贵,计算机运行所需要的电也是相当贵的。对这些资源进行优化是因为一个永恒的商业法则:
|
||||
|
||||
> <ruby>优化你最贵的资源<rt>Optimize your most expensive resource</rt></ruby>。
|
||||
> 优化你最贵的资源。
|
||||
|
||||
在过去,最贵的资源是计算机的运行时间。这就是导致计算机科学致力于研究不同算法的效率的原因。然而,这已经不再是正确的,因为现在硅芯片很便宜,确实很便宜。运行时间不再是你最贵的资源。公司最贵的资源现在是它的员工的时间。或者换句话说,就是你。把事情做完比快速地做事更加重要。实际上,这是相当的重要,我将把它再次放在这里,仿佛它是一个引用一样(对于那些只是粗略浏览的人):
|
||||
|
||||
> <ruby>把事情做完比快速地做事更加重要<rt>It’s more important to get stuff done than to make it go fast</rt></ruby>。
|
||||
> 把事情做完比快速地做事更加重要。
|
||||
|
||||
你可能会说:“我的公司在意速度,我开发一个 web 应用程序,那么所有的响应时间必须少于 x 毫秒。”或者,“我们失去了客户,因为他们认为我们的 app 运行太慢了。”我并不是想说速度一点也不重要,我只是想说速度不再是最重要的东西;它不再是你最贵的资源。
|
||||
|
||||
@ -25,7 +25,7 @@ python 是慢,但是爷就喜欢它
|
||||
|
||||
当你在编程的背景下说 _速度_ 时,你通常意味着性能,也就是 CPU 周期。当你的 CEO 在编程的背景下说 _速度_ 时,他指的是业务速度,最重要的指标是产品上市的时间。基本上,你的产品/web 程序是多么的快并不重要。它是用什么语言写的也不重要。甚至它需要花费多少钱也不重要。在一天结束时,让你的公司存活下来或者死去的唯一事物就是产品上市时间。我不只是说创业公司的想法 -- 你开始赚钱需要花费多久,更多的是“从想法到客户手中”的时间期限。企业能够存活下来的唯一方法就是比你的竞争对手更快地创新。如果在你的产品上市之前,你的竞争对手已经提前上市了,那么你想出了多少好的主意也将不再重要。你必须第一个上市,或者至少能跟上。一但你放慢了脚步,你就输了。
|
||||
|
||||
> <ruby>企业能够存活下来的唯一方法就是比你的竞争对手更快地创新<rt>The only way to survive in business is to innovate faster than your competitors</rt></ruby>。
|
||||
> 企业能够存活下来的唯一方法就是比你的竞争对手更快地创新。
|
||||
|
||||
#### 一个微服务的案例
|
||||
|
||||
@ -46,7 +46,7 @@ python 是慢,但是爷就喜欢它
|
||||
> 在高吞吐量的环境中使用解释性语言似乎是矛盾的,但是我们已经发现 CPU 时间几乎不是限制因素;语言的表达性是指,大多数程序是源程序,同时花费它们的大多数时间在 I/O 读写和本机运行时代码。而且,解释性语言无论是在语言层面的轻松实验还是在允许我们在很多机器上探索分布计算的方法都是很有帮助的,
|
||||
|
||||
再次强调:
|
||||
> <ruby>CPU 时间几乎不是限制因素<rt>the CPU time is rarely the limiting factor</rt></ruby>。
|
||||
> CPU 时间几乎不是限制因素。
|
||||
|
||||
### 如果 CPU 时间是一个问题怎么办?
|
||||
|
||||
@ -79,12 +79,76 @@ python 是慢,但是爷就喜欢它
|
||||
|
||||
* * *
|
||||
|
||||
### 但是如何速度真的重要怎么办呢?
|
||||
### 但是如果速度真的重要呢?
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/600/0*bg31_URKZ7xzWy5I.jpg)
|
||||
|
||||
上述论点的语气可能会让人觉得优化与速度一点也不重要。但事实是,很多时候运行时性能真的很重要。一个例子是,你有一个web应用程序,其中有一个特定的端点需要用很长的时间来响应。你知道这个程序需要多快,并且知道程序需要改进多少。
|
||||
|
||||
在我们的例子中,发生了两件事:
|
||||
|
||||
1. 我们注意到有一个端点执行缓慢。
|
||||
2. 我们承认它是缓慢的,因为我们有一个衡量是否足够快的标准,而它没有达到那个标准。
|
||||
|
||||
我们不必对应用程序中的所有内容做微观优化,只需要让其中每一项都 “足够快”。如果一个端点花费了几秒钟来响应,你的用户可能会注意到;但是,他们并不会注意到你把响应时间从 35 毫秒降到 25 毫秒。“足够好” 就是你需要做到的全部。_免责声明:我应该说,有一些应用程序,如实时竞价程序,确实需要微观优化,每一毫秒都相当重要。但那只是例外,而不是规则。_
|
||||
|
||||
为了明白如何优化这个端点,你的第一步是对代码进行性能分析(profile),并尝试找出瓶颈在哪。毕竟:
|
||||
|
||||
> 任何除了瓶颈之外的改进都是错觉。 --Gene Kim
|
||||
|
||||
如果你的优化没有触及到瓶颈,你只是浪费你的时间,并没有解决实际问题。在你优化瓶颈之前,你不会得到任何重要的改进。如果你在不知道瓶颈是什么前就尝试优化,那么你最终只是在瞎折腾部分代码。在测量和确定瓶颈之前优化代码被称为 “过早优化”。这句话经常被认为出自 Donald Knuth,但他声称是从别人那里借来的:
|
||||
|
||||
> 过早优化是万恶之源。
|
||||
|
||||
在谈到维护代码库时,来自Donald Knuth的更完整的引用是:
|
||||
|
||||
> 在 97% 的时间里,我们应该忘记微不足道的效率:过早的优化是万恶之源。然而在关键的 3%,我们不应该错过优化的机会。 ——Donald Knuth
|
||||
|
||||
|
||||
换句话说,他所说的是,在大多数时间你应该忘记对你的代码进行优化。它几乎总是足够好。在不够好的情况下,我们通常也只需要触及 3% 的代码路径。比如,因为使用了 if 语句而不是函数,你的端点快了几纳秒,但这并不会让你赢得任何奖项。
|
||||
|
||||
过早的优化包括调用某些更快的函数,或者甚至使用特定的数据结构,因为它通常更快。计算机科学认为,如果一个方法或者算法与另一个具有相同的渐近增长(或者Big-O),那么它们是等价的,即使在实践中要慢两倍。计算机是如此之快,算法随着数据/使用增加而造成的计算增长远远超过实际速度本身。换句话说,如果你有两个O(log n)的函数,但是一个要慢两倍,这实际上并不重要。随着数据规模的增大,它们都以同样的速度"慢下来"。这就是过早优化是万恶之源的原因;它浪费了我们的时间,几乎从来没有真正有助于我们的性能改进。
|
||||
|
||||
就 Big-O 而言,你可以认为你的程序在所有的语言里都是 O(n),其中 n 是代码或者指令的行数。对于同样的指令,它们以同样的速率增长。就渐近增长而言,一种语言本身的快慢并不重要,所有语言都是相同的。在这个逻辑下,你可以说,仅仅因为 “快速” 而为你的应用程序选择一种语言,是过早优化的最终形式:你没有经过测量、也不了解瓶颈会在哪里,就选择了据说快速的东西。
|
||||
|
||||
> 为您的应用选择语言只是因为它的“快速”是过早优化的最终形式。
|
||||
|
||||
* * *
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1000/0*6WaZOtaXLIo1Vy5H.png)
|
||||
|
||||
### 优化Python
|
||||
|
||||
我最喜欢 Python 的一点是,它可以让你一次优化一点点代码。假设你有一个 Python 的方法,你发现它是你的瓶颈。你对它优化过几次,可能遵循[这里][14]和[那里][15]的一些指导,现在你很肯定 Python 本身就是瓶颈。Python 有调用 C 代码的能力,这意味着,你可以用 C 重写这个方法来减少性能问题。你可以一次重写一个这样的方法。这个过程允许你用任何可以编译为 C 兼容汇编的语言,编写优化良好的瓶颈方法。这让你能够在大多数时间使用 Python 编写,只在必要的时候才用较低级的语言来写代码。
|
||||
|
||||
|
||||
有一种叫做 Cython 的编程语言,它是 Python 的超集,几乎是 Python 和 C 的合并,是一种渐进类型的语言。任何 Python 代码都是有效的 Cython 代码,而 Cython 代码可以编译成 C 代码。使用 Cython,你可以编写一个模块或者一个方法,并逐步加入越来越多的 C 类型来获得更高的性能,还可以把 C 类型和 Python 的鸭子类型混在一起。使用 Cython,你可以获得只在瓶颈处优化、而在其它所有地方保留 Python 之美的完美组合。
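下面是一个极简的 Cython 示意,用来说明这种 “逐步加上 C 类型” 的写法。文件名 fib.pyx 和函数都是为了举例而虚构的,并不是原文作者的代码:

```
# fib.pyx(假设的文件名):一个极简的 Cython 示例
# cpdef 定义的函数既能被 Python 调用,也能被 C 高效调用;
# 给变量加上 cdef 类型之后,下面的循环会被编译成纯 C 循环。
cpdef long fib(int n):
    cdef long a = 0, b = 1
    cdef int i
    for i in range(n):
        a, b = b, a + b
    return a
```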
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/600/0*LStEb38q3d2sOffq.jpg)
|
||||
|
||||
星战前夜的一幅截图:一个用 Python 编写的太空 MMO 游戏。
|
||||
|
||||
当你最终撞上 Python 的性能之墙时,你不需要把整个代码库用另一种语言重写,只需要用 Cython 重写几个函数,几乎就能得到你所需要的性能。这就是[星战前夜][16]采取的策略。这是一个大型多人电脑游戏,在整个技术栈中使用 Python 和 Cython,它们通过优化 C/Cython 中的瓶颈来实现游戏级别的性能。如果这个策略对他们有用,那么它应该对任何人都有帮助。或者,还有其它方法来优化你的 Python。例如,[PyPy][17] 是一个 Python 的 JIT 实现,通过用 PyPy 替换 CPython(默认实现),可以为长时间运行的应用程序(如 web 服务器)带来显著的运行时改进。
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1000/0*mPc5j1btWBFz6YK7.jpg)
|
||||
|
||||
让我们回顾一下要点:
|
||||
|
||||
* 优化你最贵的资源。那就是你,而不是计算机。
|
||||
* 选择一种语言/框架/架构来帮助你快速开发(比如 Python)。不要仅仅因为某些技术快而选择它们。
|
||||
* 当你遇到性能问题时,请找到瓶颈所在。
|
||||
* 你的瓶颈很可能不是CPU或者Python本身。
|
||||
* 如果 Python 成为你的瓶颈(前提是你已经优化过算法),那么可以转向热门的 Cython 或者 C。
|
||||
* 尽情享受可以快速做完事情的乐趣。
|
||||
|
||||
我希望你喜欢阅读这篇文章就像我喜欢写这篇文章一样。如果你想说谢谢,请为我点下赞。另外,如果某个时候你想和我讨论Python,你可以在twitter上艾特我(@nhumrich),或者你可以在[Python slack channel][18]找到我。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:坚持采用持续交付的方法,并为之编写了很多工具。同时还是一名 Python 黑客与技术狂热者,目前是一名 devops 工程师。
|
||||
|
||||
|
||||
via: https://hackernoon.com/yes-python-is-slow-and-i-dont-care-13763980b5a1
|
||||
via: https://medium.com/hacker-daily/yes-python-is-slow-and-i-dont-care-13763980b5a1
|
||||
|
||||
作者:[Nick Humrich ][a]
|
||||
译者:[zhousiyu325](https://github.com/zhousiyu325)
|
||||
|
@ -1,16 +1,15 @@
|
||||
pyDash — 一个基于 web 的 Linux 性能监测工具
|
||||
============================================================
|
||||
|
||||
pyDash 是一个轻量且[基于 web 的 Linux 性能监测工具][1],它是用 Python 和 [Django][2] 加上 Chart.js 来写的。经测试,在下面这些主流 Linux 发行版上可运行:CentOS、Fedora、Ubuntu、Debian、Raspbian 以及 Pidora 。
|
||||
**pyDash** 是一个轻量且[基于 web 的 Linux 性能监测工具][1],它是用 **Python** 和 [Django][2] 加上 **Chart.js** 来写的。经测试,在下面这些主流 Linux 发行版上可运行:CentOS、Fedora、Ubuntu、Debian、Raspbian 以及 Pidora 。
|
||||
|
||||
你可以使用这个工具来监视你的 Linux 个人电脑/服务器资源,比如 CPU、内存
|
||||
、网络统计,包括在线用户以及更多的进程。仪表盘是完全使用主要的 Python 版本提供的 Python 库开发的,因此它的依赖关系很少,你不需要安装许多包或库来运行它。
|
||||
你可以使用这个工具来监视你的 Linux 个人电脑/服务器资源,比如 CPU、内存、网络统计,包括在线用户的进程以及更多。仪表盘是完全使用主要的 Python 版本提供的 Python 库开发的,因此它的依赖关系很少,你不需要安装许多包或库来运行它。
|
||||
|
||||
在这篇文章中,我将展示如果安装 pyDash 来监测 Linux 服务器性能。
|
||||
在这篇文章中,我将展示如果安装 **pyDash** 来监测 Linux 服务器性能。
|
||||
|
||||
#### 如何在 Linux 系统下安装 pyDash
|
||||
|
||||
1、首先,像下面这样安装需要的软件包 git 和 Python pip:
|
||||
1、首先,像下面这样安装需要的软件包 **git** 和 **Python pip**:
|
||||
|
||||
```
|
||||
-------------- 在 Debian/Ubuntu 上 --------------
|
||||
@ -22,7 +21,7 @@ $ sudo apt-get install git python-pip
|
||||
# dnf install git python-pip
|
||||
```
|
||||
|
||||
2、如果安装好了 git 和 Python pip,那么接下来,像下面这样安装 virtualenv,它有助于处理针对 Python 项目的依赖关系:
|
||||
2、如果安装好了 git 和 Python pip,那么接下来,像下面这样安装 **virtualenv**,它有助于处理针对 Python 项目的依赖关系:
|
||||
|
||||
```
|
||||
# pip install virtualenv
|
||||
@ -37,7 +36,7 @@ $ sudo pip install virtualenv
|
||||
# cd pydash
|
||||
```
|
||||
|
||||
4、下一步,使用下面的 virtualenv 命令为项目创建一个叫做 pydashtest 虚拟环境:
|
||||
4、下一步,使用下面的 **virtualenv** 命令为项目创建一个叫做 **pydashtest** 虚拟环境:
|
||||
|
||||
```
|
||||
$ virtualenv pydashtest #give a name for your virtual environment like pydashtest
|
||||
@ -48,9 +47,9 @@ $ virtualenv pydashtest #give a name for your virtual environment like pydashtes
|
||||
|
||||
*创建虚拟环境*
|
||||
|
||||
重点:请注意,上面的屏幕截图中,虚拟环境的 bin 目录被高亮显示,你的可能和这不一样,取决于你把 pyDash 目录克隆到什么位置。
|
||||
重要:请注意,上面的屏幕截图中,虚拟环境的 bin 目录被高亮显示,你的可能和这不一样,取决于你把 pyDash 目录克隆到什么位置。
|
||||
|
||||
5、创建好虚拟环境(pydashtest)以后,你需要在使用前像下面这样激活它:
|
||||
5、创建好虚拟环境(**pydashtest**)以后,你需要在使用前像下面这样激活它:
|
||||
|
||||
```
|
||||
$ source /home/aaronkilik/pydash/pydashtest/bin/activate
|
||||
@ -61,9 +60,9 @@ $ source /home/aaronkilik/pydash/pydashtest/bin/activate
|
||||
|
||||
*激活虚拟环境*
|
||||
|
||||
从上面的屏幕截图中,你可以注意到,提示字符串 1(PS1)已经发生改变,这表明虚拟环境已经被激活,而且可以开始使用。
|
||||
从上面的屏幕截图中,你可以注意到,提示字符串 1(**PS1**)已经发生改变,这表明虚拟环境已经被激活,而且可以开始使用。
|
||||
|
||||
6、现在,安装 pydash 项目 requirements;如何你是一个细心的人,那么可以使用 [cat 命令][5]查看 requirements.txt 的内容,然后像下面展示这样进行安装:
|
||||
6、现在,安装 pydash 项目的 requirements;如果你好奇的话,可以使用 [cat 命令][5]查看 **requirements.txt** 的内容,然后像下面所示那样进行安装:
|
||||
|
||||
```
|
||||
$ cat requirements.txt
|
||||
@ -110,7 +109,7 @@ Password (again): ############
|
||||
$ python manage.py runserver
|
||||
```
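顺带一提,Django 自带的 runserver 也支持绑定到指定的地址和端口。如果你想从局域网内的其它机器访问仪表盘,可以像下面这样启动(这是 Django 的标准用法,0.0.0.0 表示监听所有网卡):

```
$ python manage.py runserver 0.0.0.0:8000
```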
|
||||
|
||||
10、接下来,打开你的 web 浏览器,输入网址:http://127.0.0.1:8000/ 进入 web 控制台登录界面,输入你在第 8 步中创建数据库和安装 Django 身份验证系统时创建的超级用户名和密码,然后点击登录。
|
||||
10、接下来,打开你的 web 浏览器,输入网址:**http://127.0.0.1:8000/** 进入 web 控制台登录界面,输入你在第 8 步中创建数据库和安装 Django 身份验证系统时创建的超级用户名和密码,然后点击登录。
|
||||
|
||||
[
|
||||
![pyDash Login Interface](http://www.tecmint.com/wp-content/uploads/2017/03/pyDash-web-login-interface.png)
|
||||
@ -118,7 +117,7 @@ $ python manage.py runserver
|
||||
|
||||
*pyDash 登录界面*
|
||||
|
||||
11、登录到 pydash 主页面以后,你将会得到一段监测系统的基本信息,包括 CPU、内存和硬盘使用量以及系统平均负载。
|
||||
11、登录到 pydash 主页面以后,你将会可以看到监测系统的基本信息,包括 CPU、内存和硬盘使用量以及系统平均负载。
|
||||
|
||||
向下滚动便可查看更多部分的信息。
|
||||
|
||||
@ -154,7 +153,7 @@ $ python manage.py runserver
|
||||
|
||||
作者简介:
|
||||
|
||||
我叫 Ravi Saive,是 TecMint 的创建者,是一个喜欢在网上分享技巧和知识的计算机极客和 Linux Guru 。我的大多数服务器都运行在叫做 Linux 的开源平台上。请关注我:[Twitter][10]、[Facebook][01] 以及 [Google+][02] 。
|
||||
我叫 Ravi Saive,是 TecMint 的原创作者,是一个喜欢在网上分享技巧和知识的计算机极客和 Linux Guru。我的大多数服务器都运行在 Linux 开源平台上。请关注我:[Twitter][10]、[Facebook][01] 以及 [Google+][02] 。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -163,7 +162,7 @@ via: http://www.tecmint.com/pydash-a-web-based-linux-performance-monitoring-tool
|
||||
|
||||
作者:[Ravi Saive ][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[jasminepeng](https://github.com/jasminepeng)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
|
@ -1,380 +0,0 @@
|
||||
Translating by sanfusu
|
||||
<!--[Data-Oriented Hash Table][1]-->
|
||||
[面向数据的哈希表][1]
|
||||
============================================================
|
||||
|
||||
<!--In recent years, there’s been a lot of discussion and interest in “data-oriented design”—a programming style that emphasizes thinking about how your data is laid out in memory, how you access it and how many cache misses it’s going to incur. -->
|
||||
最近几年中,面向数据的设计已经受到了很多的关注 —— 一种强调内存中数据布局的编程风格,包括如何访问以及将会引发多少的 cache 缺失。
|
||||
<!--With memory reads taking orders of magnitude longer for cache misses than hits, the number of misses is often the key metric to optimize.-->
|
||||
由于缓存缺失时内存读取的耗时要比命中时高出几个数量级,所以缺失的数量通常是优化的关键指标。
|
||||
<!--It’s not just about performance-sensitive code—data structures designed without sufficient attention to memory effects may be a big contributor to the general slowness and bloatiness of software.
|
||||
-->
|
||||
这不仅仅关乎性能敏感的代码:如果设计数据结构时不充分考虑内存影响,也往往会成为软件普遍缓慢和臃肿的一大因素。
|
||||
|
||||
<!--The central tenet of cache-efficient data structures is to keep things flat and linear. For example, under most circumstances, to store a sequence of items you should prefer a flat array over a linked list—every pointer you have to chase to find your data adds a likely cache miss, while flat arrays can be prefetched and enable the memory system to operate at peak efficiency.
|
||||
-->
|
||||
|
||||
高效缓存数据结构的中心原则是将事情变得平滑和线性。
|
||||
比如,在大部分情况下,存储一个序列元素更倾向于使用平滑的数组而不是链表 —— 每一次通过指针来查找数据都会为 cache 缺失增加一份风险;而平滑的数组可以预先获取,并使得内存系统以最大的效率运行。
|
||||
|
||||
<!--This is pretty obvious if you know a little about how the memory hierarchy works—but it’s still a good idea to test things sometimes, even if they’re “obvious”! [Baptiste Wicht tested `std::vector` vs `std::list` vs `std::deque`][4] (the latter of which is commonly implemented as a chunked array, i.e. an array of arrays) a couple of years ago. The results are mostly in line with what you’d expect, but there are a few counterintuitive findings. For instance, inserting or removing values in the middle of the sequence—something lists are supposed to be good at—is actually faster with an array, if the elements are a POD type and no bigger than 64 bytes (i.e. one cache line) or so! It turns out to actually be faster to shift around the array elements on insertion/removal than to first traverse the list to find the right position and then patch a few pointers to insert/remove one element. That’s because of the many cache misses in the list traversal, compared to relatively few for the array shift. (For larger element sizes, non-POD types, or if you already have a pointer into the list, the list wins, as you’d expect.)
|
||||
-->
|
||||
|
||||
如果你知道一点内存层级如何运作的知识,下面的内容会是想当然的结果 —— 但是有时候即便它们相当明显,测试一下仍不失为一个好主意!
|
||||
[Baptiste Wicht 测试过了 `std::vector` vs `std::list` vs `std::deque`][4]
|
||||
(几年前,后者通常使用分块数组来实现,比如:一个数组的数组)
|
||||
结果大部分会和你预期的保持一致,但是会存在一些违反直觉的东西。
|
||||
举个例子:在序列中间位置做插入或者移除操作(这本该是链表擅长的事情),如果元素是 POD 类型,
|
||||
并且不大于 64 字节或者在 64 字节左右(即一个缓存行以内),那么实际上用数组反而更快!
|
||||
事实证明,在插入/删除时对数组元素进行移位,要比先遍历链表找到正确位置、再修改几个指针来插入/删除一个元素更快。
|
||||
这是由于在遍历链表以及通过指针插入/删除元素的时候可能会导致不少的 cache 缺失,相对而言,数组移位则很少会发生。
|
||||
(对于更大的元素、非 POD 类型,或者你已经有了指向链表元素的指针的情况,此时和预期的一样,链表胜出)
|
||||
|
||||
|
||||
|
||||
<!--Thanks to data like Baptiste’s, we know a good deal about how memory layout affects sequence containers.-->
|
||||
多亏了 Baptiste 的数据,我们知道了内存布局如何影响序列容器。
|
||||
<!--But what about associative containers, i.e. hash tables?-->
|
||||
但是关联容器,比如 hash 表会怎么样呢?
|
||||
<!--There have been some expert recommendations: -->
|
||||
已经有了些权威推荐:
|
||||
<!--[Chandler Carruth tells us to use open addressing with local probing][5] -->
|
||||
[Chandler Carruth 推荐的带局部探测的开放寻址][5]
|
||||
<!--so that we don’t have to chase pointers,-->
|
||||
(此时,我们没必要追踪指针)
|
||||
<!--and [Mike Acton suggests segregating keys from values][6]
|
||||
in memory so that we get more keys per cache line,
|
||||
improving locality when we have to look at multiple keys. -->
|
||||
以及 [Mike Acton 推荐的在内存中将 value 和 key 分离][6](这样每个缓存行可以容纳更多的 key),这可以在我们不得不查找多个 key 时提高局部性。
|
||||
<!-- These ideas make good sense, but again, it’s a good idea to test things, -->
|
||||
<!-- and I couldn’t find any data. So I had to collect some of my own! -->
|
||||
这些想法很有意义,但再一次的说明:测试永远是好习惯,但由于我找不到任何数据,所以只好自己收集了。
|
||||
|
||||
<!--### [][7]The Tests-->
|
||||
### [][7]测试
|
||||
|
||||
<!--
|
||||
I tested four different quick-and-dirty hash table implementations, as well as `std::unordered_map`. All five used the same hash function, Bob Jenkins’ [SpookyHash][8] with 64-bit hash values. (I didn’t test different hash functions, as that wasn’t the point here; I’m also not looking at total memory consumption in my analysis.) The implementations are identified by short codes in the results tables:
|
||||
-->
|
||||
我测试了四个不同的 quick-and-dirty 哈希表实现,另外还包括 `std::unordered_map` 。
|
||||
这五个哈希表都使用了同一个哈希函数 —— Bob Jenkins' [SpookyHash][8](64 位哈希值)。
|
||||
(由于哈希函数在这里不是重点,所以我没有测试不同的哈希函数;我同样也没有检测我的分析中的总内存消耗。)
|
||||
实现会通过短代码在测试结果表中标注出来。
|
||||
|
||||
* **UM**: `std::unordered_map` 。<!--In both VS2012 and libstdc++-v3 (used by both gcc and clang), UM is implemented as a linked list containing all the elements, and an array of buckets that store iterators into the list. In VS2012, it’s a doubly-linked list and each bucket stores both begin and end iterators; in libstdc++, it’s a singly-linked list and each bucket stores just a begin iterator. In both cases, the list nodes are individually allocated and freed. Max load factor is 1.-->
|
||||
在 VS2012 和 libstdc++-v3 (libstdc++-v3: gcc 和 clang 都会用到这东西)中,
|
||||
UM 是以链接表的形式实现,所有的元素都在链表中,buckets 数组中存储了链表的迭代器。
|
||||
VS2012 中,则是一个双链表,每一个 bucket 存储了起始迭代器和结束迭代器;
|
||||
libstdc++ 中,是一个单链表,每一个 bucket 只存储了一个起始迭代器。
|
||||
这两种情况里,链表节点是独立申请和释放的。最大负载因子是 1 。
|
||||
|
||||
* **Ch**: <!--separate chaining—each bucket points to a singly-linked list of element nodes. The element nodes are stored in a flat array pool, to avoid allocating each node individually. Unused nodes are kept on a free list. Max load factor is 1.-->
|
||||
分离链接法(separate chaining):每个 bucket 指向一个由元素节点组成的单链表。
|
||||
为了避免分开申请每一个节点,元素节点存储在平滑的数组池中。
|
||||
未使用的节点保存在一个空闲链表中。
|
||||
最大负载因子是 1。
|
||||
|
||||
* **OL**:<!--open addressing with linear probing—each bucket stores a 62-bit hash,
|
||||
a 2-bit state (empty, filled, or removed), key, and value. Max load factor is 2/3.-->
|
||||
开地址线性探测 —— 每一个 bucket 存储一个 62 bit 的 hash 值,一个 2 bit 的状态值(包括 empty、filled、removed 三个状态),key 和 value。最大负载因子是 2/3。
|
||||
* **DO1**:<!--“data-oriented 1”—like OL, but the hashes and states are segregated from the keys and values, in two separate flat arrays.-->
|
||||
data-oriented 1 —— 和 OL 相似,但是哈希值、状态值和 key、values 分离在两个隔离的平滑数组中。
|
||||
|
||||
* **DO2**:<!--“data-oriented 2”—like OL, but the hashes/states, keys, and values are segregated in three separate flat arrays.-->
|
||||
"data-oriented 2" —— 与 OL 类似,但是哈希/状态,keys 和 values 被分离在 3 个相隔离的平滑数组中。
|
||||
|
||||
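
To make the layout difference concrete, here is a minimal C++ sketch of the three open-addressing variants. This is my own illustration, not the author's test code; the type names and the exact packing of the 62-bit hash and 2-bit state are assumptions.

```cpp
#include <cstdint>
#include <vector>

// OL: array-of-structs. Every probe drags the key and value into cache
// alongside the hash, whether or not they're needed.
template <typename K, typename V>
struct BucketOL {
    uint64_t hash  : 62;  // 62-bit hash
    uint64_t state : 2;   // empty, filled, or removed
    K        key;
    V        value;
};
template <typename K, typename V>
using TableOL = std::vector<BucketOL<K, V>>;

// DO1: hashes/states in one flat array, keys/values in a parallel array.
// A probe sequence touches only hashAndState, so each cache line holds
// many more probe slots when the payload is large.
template <typename K, typename V>
struct TableDO1 {
    struct KV { K key; V value; };
    std::vector<uint64_t> hashAndState;  // high 62 bits: hash; low 2: state
    std::vector<KV>       kv;            // same indices as hashAndState
};

// DO2: three parallel arrays, so even key comparisons avoid value traffic.
template <typename K, typename V>
struct TableDO2 {
    std::vector<uint64_t> hashAndState;
    std::vector<K>        keys;
    std::vector<V>        values;
};
```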

All my implementations, as well as VS2012's UM, use power-of-2 sizes by default, growing by 2x upon exceeding their max load factor. In libstdc++, UM uses prime-number sizes by default and grows to the next prime upon exceeding its max load factor. However, I don't think these details are very important for performance. The prime-number thing is a hedge against poor hash functions that don't have enough entropy in their lower bits, but we're using a good hash function.
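
As a quick illustration of that tradeoff (again my sketch, not the test code): a power-of-two capacity turns the bucket index into a single mask, while a prime capacity needs a modulo, whose main benefit is mixing in all the hash bits when the low bits are weak.

```cpp
#include <cstddef>
#include <cstdint>

// Power-of-two capacity: the index is a cheap bitwise AND.
inline size_t bucketIndexPow2(uint64_t hash, size_t capacity) {
    return hash & (capacity - 1);  // capacity must be a power of two
}

// Prime capacity: a slower modulo, but it involves all the hash bits,
// which papers over hash functions with low-entropy low bits.
inline size_t bucketIndexPrime(uint64_t hash, size_t capacity) {
    return hash % capacity;
}
```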

The OL, DO1 and DO2 implementations will collectively be referred to as OA (open addressing), since we'll find later that their performance characteristics are often pretty similar.

For each of these implementations, I timed several different operations, at element counts from 100K to 1M and for payload sizes (i.e. total key+value size) from 8 to 4K bytes. For my purposes, keys and values were always POD types and keys were always 8 bytes (except for the 8-byte payload, in which key and value were 4 bytes each). I kept the keys to a consistent size because my purpose here was to test memory effects, not hash function performance. Each test was repeated 5 times and the minimum timing was taken.
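
A minimal sketch of that timing discipline; the harness shape and names are mine, only the repeat-five-times-take-the-minimum rule comes from the text:

```cpp
#include <algorithm>
#include <chrono>

// Run a test five times and keep the minimum time in milliseconds; the
// minimum filters out noise from cold caches and OS scheduling.
template <typename F>
double timeBestOfFive(F&& test) {
    double best = 1e300;
    for (int run = 0; run < 5; ++run) {
        auto t0 = std::chrono::steady_clock::now();
        test();
        auto t1 = std::chrono::steady_clock::now();
        std::chrono::duration<double, std::milli> elapsed = t1 - t0;
        best = std::min(best, elapsed.count());
    }
    return best;
}
```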

The operations tested were:

* **Fill**: insert a randomly shuffled sequence of unique keys into the table.

* **Presized fill**: like Fill, but first reserve enough memory for all the keys we'll insert, to prevent rehashing and reallocating during the fill process.

* **Lookup**: perform 100K lookups of random keys, all of which are in the table.

* **Failed lookup**: perform 100K lookups of random keys, none of which are in the table.

* **Remove**: remove a randomly chosen half of the elements from a table.

* **Destruct**: destroy a table and free its memory.

You can [download my test code here][9]. It builds for Windows or Linux, in 64-bit only. There are some flags near the top of `main()` that you can toggle to turn different tests on or off; with all of them on, it will likely take an hour or two to run. The results I gathered are also included, in an Excel spreadsheet in that archive. (Beware that the Windows and Linux results were run on different CPUs, so timings aren't directly comparable.) The code also runs unit tests to verify that all the hash table implementations are behaving correctly.

Incidentally, I also tried two additional implementations: separate chaining with the first node stored in the bucket instead of the pool, and open addressing with quadratic probing. Neither of these was good enough to include in the final data, but the code for them is still there.

### [][10]The Results

There's a ton of data here. In this section I'll discuss the results in some detail, but if your eyes are glazing over in this part, feel free to skip down to the conclusions in the next section.

### [][11]Windows

Here are the graphed results of all the tests, compiled with Visual Studio 2012, and run on Windows 8.1 on a Core i7-4710HQ machine. (Click to zoom.)

[
![Results for VS 2012, Windows 8.1, Core i7-4710HQ](http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png "Results for VS 2012, Windows 8.1, Core i7-4710HQ")
][12]

From left to right are different payload sizes, from top to bottom are the various operations, and each graph plots time in milliseconds versus hash table element count for each of the five implementations. (Note that not all the Y-axes have the same scale!) I'll summarize the main trends for each operation.

**Fill**: Among my hash tables, chaining is a bit better than any of the OA variants, with the gap widening at larger payloads and table sizes. I guess this is because chaining only has to pull an element off the free list and stick it on the front of its bucket, while OA may have to search a few buckets to find an empty one. The OA variants perform very similarly to each other, but DO1 appears to have a slight advantage.
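
To illustrate that fast path, here is a sketch of a pooled, free-listed chaining insert (assumed names; it ignores rehashing and duplicate keys, and the real test code may differ):

```cpp
#include <cstdint>
#include <vector>

template <typename K, typename V>
struct TableCh {
    static constexpr uint32_t kNil = 0xFFFFFFFFu;
    struct Node { uint64_t hash; K key; V value; uint32_t next; };
    std::vector<uint32_t> buckets;           // head node index per bucket, or kNil
    std::vector<Node>     pool;              // all nodes live in one flat array
    uint32_t              freeHead = kNil;   // chain of recycled nodes

    void insert(uint64_t hash, const K& key, const V& value) {
        uint32_t n;
        if (freeHead != kNil) {              // pop a recycled node off the free list...
            n = freeHead;
            freeHead = pool[n].next;
        } else {                             // ...or take a fresh one from the pool
            n = static_cast<uint32_t>(pool.size());
            pool.push_back(Node{});
        }
        size_t b = hash & (buckets.size() - 1);        // power-of-two bucket count
        pool[n] = Node{hash, key, value, buckets[b]};
        buckets[b] = n;                      // push onto the front of the chain
    }
};
```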

All of my hash tables beat UM by quite a bit at small payloads, where UM pays a heavy price for doing a memory allocation on every insert. But they're about equal at 128 bytes, and UM wins by quite a bit at large payloads: there, all of my implementations are hamstrung by the need to resize their element pool and spend a lot of time moving the large elements into the new pool, while UM never needs to move elements once they're allocated. Notice the extreme "steppy" look of the graphs for my implementations at large payloads, which confirms that the problem comes with resizing. In contrast, UM is quite linear: it only has to resize its bucket array, which is cheap enough not to make much of a bump.

**Presized fill**: Generally similar to Fill, but the graphs are more linear, not steppy (since there's no rehashing), and there's less difference between all the implementations. UM is still slightly faster than chaining at large payloads, but only slightly, which again confirms that the problem with Fill was the resizing. Chaining is still consistently faster than the OA variants, but DO1 has a slight advantage over the other OAs.

**Lookup**: All the implementations are closely clustered, with UM and DO2 the front-runners, except at the smallest payload, where it seems like DO1 and OL may be faster. It's impressive how well UM is doing here, actually; it's holding its own against the data-oriented variants despite needing to traverse a linked list.

Incidentally, it's interesting to see that the lookup time depends weakly on table size. Hash table lookup is expected constant-time, so from the asymptotic view it shouldn't depend on table size at all. But that's ignoring cache effects! When we do 100K lookups on a 10K-entry table, for instance, we'll get a speedup because most of the table will be in L3 after the first 10K-20K lookups.

**Failed lookup**: There's a bit more spread here than for the successful lookups. DO1 and DO2 are the front-runners, with UM not far behind, and OL a good deal worse than the rest. My guess is that this is a case of OL having longer searches on average, especially in the case of a failed lookup; with the hash values spaced out in memory between keys and values, that hurts. DO1 and DO2 have equally long searches, but they have all the hash values packed together in memory, and that turns things around.
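
Continuing the `TableDO1` sketch from the Tests section (with an assumed state encoding: 0 = empty, 1 = filled, 2 = removed), the probe loop below shows why: until a stored hash actually matches, a search, failed or not, reads only the packed `hashAndState` array.

```cpp
// Linear-probing lookup over the TableDO1 layout sketched earlier.
// Returns a pointer to the value, or nullptr on a failed lookup.
// Assumes a power-of-two capacity and a load factor below 1, so an
// empty slot always terminates the probe sequence.
template <typename K, typename V>
const V* find(const TableDO1<K, V>& t, const K& key, uint64_t hash) {
    const size_t mask = t.hashAndState.size() - 1;
    for (size_t i = hash & mask; ; i = (i + 1) & mask) {
        uint64_t hs = t.hashAndState[i];
        if ((hs & 3) == 0)                             // empty slot: not present
            return nullptr;
        if ((hs & 3) == 1 && (hs >> 2) == (hash >> 2)  // filled, hashes match
            && t.kv[i].key == key)                     // only now touch the key
            return &t.kv[i].value;
        // removed slot or hash mismatch: keep probing hashAndState only
    }
}
```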

**Remove**: DO2 is the clear winner, with DO1 not far behind, chaining further behind, and UM in a distant last place due to the need to free memory on every remove; the gap widens at larger payloads. The remove operation is the only one that doesn't touch the value data, only the hashes and keys, which explains why DO1 and DO2 are differentiated from each other here but pretty much equal in all the other tests. (If your value type were non-POD and needed to run a destructor, that difference would presumably disappear.)
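
That observation falls out directly from the layout. With the same assumed state encoding as above, removal is just rewriting two state bits in `hashAndState`; a real implementation would also update its element count:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Mark slot i as a tombstone (state 2 = removed). The value array is never
// read or written; the search that precedes this touches only hashes and
// keys, which is why Remove is where DO1 and DO2 separate from each other.
inline void markRemoved(std::vector<uint64_t>& hashAndState, size_t i) {
    hashAndState[i] = (hashAndState[i] & ~uint64_t(3)) | 2;
}
```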

**Destruct**: Chaining is the fastest except at the smallest payload, where it's about equal to the OA variants. All the OA variants are essentially equal. Note that for my hash tables, all they're doing on destruction is freeing a handful of memory buffers, but [on Windows, freeing memory has a cost proportional to the amount allocated][13]. (And it's a significant cost: an allocation of ~1 GB takes ~100 ms to free!) UM is the slowest to destruct, by an order of magnitude at small payloads, and only slightly slower at large payloads. The need to free each individual element instead of just freeing a couple of arrays really hurts here.

### [][14]Linux

I also ran tests with gcc 4.8 and clang 3.5, on Linux Mint 17.1 on a Core i5-4570S machine. The gcc and clang results were very similar, so I'll only show the gcc ones; the full set of results is in the code download archive, linked above. (Click to zoom.)

[
![Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S](http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png "Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S")
][15]

Most of the results are quite similar to those on Windows, so I'll just highlight a few interesting differences.

**Lookup**: Here, DO1 is the front-runner, where DO2 was a bit faster on Windows. Also, UM and chaining are way behind all the other implementations, which is actually what I expected to see on Windows as well, given that they have to do a lot of pointer chasing while the OA variants just stride linearly through memory. It's not clear to me why the Windows and Linux results differ so much here. UM is also a good deal slower than chaining, especially at large payloads, which is odd; I'd expect the two of them to be about equal.

**Failed lookup**: Again, UM is way behind all the others, even slower than OL. And again, it puzzles me why this is so much slower than chaining, and why the results differ so much between Linux and Windows.

**Destruct**: For my implementations, the destruct cost was too small to measure at small payloads; at large payloads, it grows quite linearly with table size, perhaps proportional to the number of virtual memory pages touched rather than the number allocated? It's also orders of magnitude faster than the destruct cost on Windows. However, this isn't really anything to do with hash tables; we're seeing the behavior of the respective OSes' and runtimes' memory systems here. It seems that Linux frees large blocks of memory a lot faster than Windows (or it hides the cost better, perhaps deferring work to process exit, or pushing it off to another thread or process).

UM, with its per-element frees, is now orders of magnitude slower than all the others, across all payload sizes. In fact, I cut it from the graphs because it was screwing up the Y-axis scale for all the others.

### [][16]Conclusions

Well, after staring at all that data and the conflicting results for all the different cases, what can we conclude? I'd love to be able to tell you unequivocally that one of these hash table variants beats out the others, but of course it's not that simple. Still, there is some wisdom we can take away.

First, in many cases it's _easy_ to do better than `std::unordered_map`. All of the implementations I built for these tests (and they're not sophisticated; it only took me a couple of hours to write all of them) either matched or improved upon `unordered_map`, except for insertion performance at large payload sizes (over 128 bytes), where `unordered_map`'s separately-allocated per-node storage becomes advantageous. (Though I didn't test it, I also expect `unordered_map` to win with non-POD payloads that are expensive to move.) The moral here is that if you care about performance at all, don't assume the data structures in your standard library are highly optimized. They may be optimized for C++ standard conformance, not performance. :P

Second, you could do a lot worse than to just use DO1 (open addressing, linear probing, with the hashes/states segregated from the keys/values in separate flat arrays) whenever you have small, inexpensive payloads. It's not the fastest for insertion, but it's not bad either (still way better than `unordered_map`), and it's very fast for lookup, removal, and destruction. What do you know: "data-oriented design" works!

Note that my test code for these hash tables is far from production-ready: they only support POD types, don't have copy constructors and such, don't check for duplicate keys, etc. I'll probably build some more realistic hash tables for my utility library soon, though. To cover the bases, I think I'll want two variants: one based on DO1, for small, cheap-to-move payloads, and another that uses chaining and avoids ever reallocating and moving elements (like `unordered_map`) for large or expensive-to-move payloads. That should give me the best of both worlds.

In the meantime, I hope this has been illuminating. And remember, if Chandler Carruth and Mike Acton give you advice about data structures, listen to them. 😉

--------------------------------------------------------------------------------

About the author:

I'm a graphics programmer, currently freelancing in Seattle. Previously I worked at NVIDIA on the DevTech software team, and at Sucker Punch Productions developing rendering technology for the Infamous series of games for PS3 and PS4.

I've been interested in graphics since about 2002 and have worked on a variety of assignments, including fog, atmospheric haze, volumetric lighting, water, visual effects, particle systems, skin and hair shading, postprocessing, specular models, linear-space rendering, and GPU performance measurement and optimization.

You can read about what I'm up to on my blog. In addition to graphics, I'm interested in theoretical physics and in programming language design.

You can contact me at nathaniel dot reed at gmail dot com, or follow me on Twitter (@Reedbeta) or Google+. I can also often be found answering questions at Computer Graphics StackExchange.

--------------

via: http://reedbeta.com/blog/data-oriented-hash-table/

Author: [Nathan Reed][a]
Translator: [sanfusu](https://github.com/sanfusu)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:http://reedbeta.com/about/
[1]:http://reedbeta.com/blog/data-oriented-hash-table/
[2]:http://reedbeta.com/blog/category/coding/
[3]:http://reedbeta.com/blog/data-oriented-hash-table/#comments
[4]:http://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vector-list-deque.html
[5]:https://www.youtube.com/watch?v=fHNmRkzxHWs
[6]:https://www.youtube.com/watch?v=rX0ItVEVjHc
[7]:http://reedbeta.com/blog/data-oriented-hash-table/#the-tests
[8]:http://burtleburtle.net/bob/hash/spooky.html
[9]:http://reedbeta.com/blog/data-oriented-hash-table/hash-table-tests.zip
[10]:http://reedbeta.com/blog/data-oriented-hash-table/#the-results
[11]:http://reedbeta.com/blog/data-oriented-hash-table/#windows
[12]:http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png
[13]:https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
[14]:http://reedbeta.com/blog/data-oriented-hash-table/#linux
[15]:http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png
[16]:http://reedbeta.com/blog/data-oriented-hash-table/#conclusions